[Gluster-users] trashcan on dist. repl. volume with geo-replication
Dietmar Putz
dietmar.putz at 3qsdn.com
Tue Mar 13 15:06:26 UTC 2018
Hi Kotresh,
...another test. This time the trashcan was enabled on the master only. As
in the test before, it is GlusterFS 3.12.6 on Ubuntu 16.04.4.
The geo-replication error appeared again, and disabling the trashcan does
not change anything.
As in the former test, the error appears when I try to list files in the
trashcan.
The gfid shown below belongs to a directory in the trashcan with just one
file in it, like in the former test.
[2018-03-13 11:08:30.777489] E [master(/brick1/mvol1):784:log_failures]
_GMaster: ENTRY FAILED data=({'uid': 0, 'gfid':
'71379ee0-c40a-49db-b3ed-9f3145ed409a', 'gid': 0, 'mode': 16877,
'entry': '.gfid/4f59c068-6c77-40f2-b556-aa761834caf1/dir1', 'op':
'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})
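
For reference, the gfid from the log entry can be cross-checked directly on
a master brick via the trusted.gfid xattr or the .glusterfs backend (a
minimal sketch, assuming getfattr from the attr package is installed on the
brick nodes):

# print the gfid stored on the brick for the trashcan directory
getfattr -n trusted.gfid -e hex /brick1/mvol1/.trashcan/test/dir1
# or resolve the gfid the other way round; for directories the .glusterfs
# entry is a symlink pointing to <parent-gfid>/<dirname>
ls -l /brick1/mvol1/.glusterfs/71/37/71379ee0-c40a-49db-b3ed-9f3145ed409a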
Below are the setup, further information and all activities.
Is there anything else I could test or check...?
A general question: is there a recommendation for using the trashcan
feature in geo-replication environments...?
For my use case it is not necessary to activate it on the slave... but is
it required to activate it on both master and slave?
best regards
Dietmar
Master volume:
root at gl-node1:~# gluster volume info mvol1
Volume Name: mvol1
Type: Distributed-Replicate
Volume ID: 7590b6a0-520b-4c51-ad63-3ba5be0ed0df
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gl-node1-int:/brick1/mvol1
Brick2: gl-node2-int:/brick1/mvol1
Brick3: gl-node3-int:/brick1/mvol1
Brick4: gl-node4-int:/brick1/mvol1
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.trash-max-filesize: 2GB
features.trash: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
root at gl-node1:~#
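
For completeness, the trash-related options above are set with the standard
volume-set interface; disabling the trashcan again (as mentioned at the top)
is the same command with 'off':

gluster volume set mvol1 features.trash on
gluster volume set mvol1 features.trash-max-filesize 2GB
# disable again:
gluster volume set mvol1 features.trash off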
Slave volume:
root at gl-node5:~# gluster volume info mvol1
Volume Name: mvol1
Type: Distributed-Replicate
Volume ID: aba4e057-7374-4a62-bcd7-c1c6f71e691b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gl-node5-int:/brick1/mvol1
Brick2: gl-node6-int:/brick1/mvol1
Brick3: gl-node7-int:/brick1/mvol1
Brick4: gl-node8-int:/brick1/mvol1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
root at gl-node5:~#
root at gl-node1:~# gluster volume geo-replication mvol1
gl-node5-int::mvol1 config
special_sync_mode: partial
state_socket_unencoded:
/var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.socket
gluster_log_file:
/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no
-i /var/lib/glusterd/geo-replication/secret.pem
ignore_deletes: false
change_detector: changelog
gluster_command_dir: /usr/sbin/
state_file:
/var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.status
remote_gsyncd: /nonexistent/gsyncd
log_file:
/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.log
changelog_log_file:
/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-changes.log
socketdir: /var/run/gluster
working_dir:
/var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1
state_detail_file:
/var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-detail.status
use_meta_volume: true
ssh_command_tar: ssh -oPasswordAuthentication=no
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
pid_file:
/var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.pid
georep_session_working_dir:
/var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/
access_mount: true
gluster_params: aux-gfid-mount acl
root at gl-node1:~#
root at gl-node1:~# gluster volume geo-replication mvol1
gl-node5-int::mvol1 status
MASTER NODE     MASTER VOL    MASTER BRICK     SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
gl-node1-int    mvol1         /brick1/mvol1    root          gl-node5-int::mvol1    gl-node5-int    Active     Changelog Crawl    2018-03-13 09:43:46
gl-node4-int    mvol1         /brick1/mvol1    root          gl-node5-int::mvol1    gl-node8-int    Active     Changelog Crawl    2018-03-13 09:43:47
gl-node2-int    mvol1         /brick1/mvol1    root          gl-node5-int::mvol1    gl-node6-int    Passive    N/A                N/A
gl-node3-int    mvol1         /brick1/mvol1    root          gl-node5-int::mvol1    gl-node7-int    Passive    N/A                N/A
root at gl-node1:~#
The volumes are locally mounted as:
gl-node1:/mvol1 20G 65M 20G 1% /m_vol
gl-node5:/mvol1 20G 65M 20G 1% /s_vol
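
Both are plain GlusterFS FUSE mounts; for reference, the mount commands look
roughly like this (a sketch, the exact mount options are not shown above):

mount -t glusterfs gl-node1:/mvol1 /m_vol
mount -t glusterfs gl-node5:/mvol1 /s_vol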
Prepare some directories and files. Important from my point of view: there
is a directory which contains just one file (in this case 'dir1'):
root at gl-node1:~/tmp/test# mkdir dir1
root at gl-node1:~/tmp/test# mkdir dir5
root at gl-node1:~/tmp/test# mkdir dir10
root at gl-node1:~/tmp/test# cd dir10
root at gl-node1:~/tmp/test/dir10# for i in {1..10}
> do
> touch file$i
> done
root at gl-node1:~/tmp/test/dir10#
root at gl-node1:~/tmp/test/dir10# cp file[1-5] ../dir5
root at gl-node1:~/tmp/test/dir10# cp file1 ../dir1
root at gl-node1:~/tmp/test# ls dir10
file1 file10 file2 file3 file4 file5 file6 file7 file8 file9
root at gl-node1:~/tmp/test# ls dir5
file1 file2 file3 file4 file5
root at gl-node1:~/tmp/test# ls dir1
file1
root at gl-node1:~/tmp/test#
Copy the structure to the master volume:
root at gl-node1:~/tmp/test# mkdir /m_vol/test
root at gl-node1:~/tmp/test# cp -p -r * /m_vol/test/
Collection of gfids and distribution of the files over the bricks on the
master:
tron at dp-server:~/central$ ./mycommand.sh -H master -c "cat
/root/tmp/get_file_gfid.out"
Host : gl-node1
brick1/mvol1/test/dir10/file1 0x934c4202114849ff87f68eda2ca79c53
brick1/mvol1/test/dir10/file2 0xbba2bf22a6034a388f60bd8af447fade
brick1/mvol1/test/dir10/file5 0x1d78d8e5609e4485a8faeef0172f703d
brick1/mvol1/test/dir10/file6 0xff325e1fbed84297be9f0634de3db8b9
brick1/mvol1/test/dir10/file8 0x019b04bdac824eab8747923cbdf1c155
brick1/mvol1/test/dir5/file3 0x34168e08a8cb47b4919e9aa90b7cadaf
brick1/mvol1/test/dir5/file4 0xc1c22afb583c40c3b2700beea652693b
-----------------------------------------------------
Host : gl-node2
brick1/mvol1/test/dir10/file1 0x934c4202114849ff87f68eda2ca79c53
brick1/mvol1/test/dir10/file2 0xbba2bf22a6034a388f60bd8af447fade
brick1/mvol1/test/dir10/file5 0x1d78d8e5609e4485a8faeef0172f703d
brick1/mvol1/test/dir10/file6 0xff325e1fbed84297be9f0634de3db8b9
brick1/mvol1/test/dir10/file8 0x019b04bdac824eab8747923cbdf1c155
brick1/mvol1/test/dir5/file3 0x34168e08a8cb47b4919e9aa90b7cadaf
brick1/mvol1/test/dir5/file4 0xc1c22afb583c40c3b2700beea652693b
-----------------------------------------------------
Host : gl-node3
brick1/mvol1/test/dir1/file1 0x463499f572c140c99688f31a74b46dce
brick1/mvol1/test/dir10/file3 0xcae961daacff44949833052b732bd9d3
brick1/mvol1/test/dir10/file4 0xde0e1862f4a3477f8544396fc06d45aa
brick1/mvol1/test/dir10/file7 0xf3009c09491b44bea7a9528bda459bfb
brick1/mvol1/test/dir10/file9 0xaf6947b1f40f4bcf923d14156475c48b
brick1/mvol1/test/dir10/file10 0x954f604ff9c24e2a98d4b6b732e8dd5a
brick1/mvol1/test/dir5/file1 0x395c43b8eb474b0bbaaa8adc6d684cc1
brick1/mvol1/test/dir5/file2 0xc2f0d4913a664b8494c1a4102230d35e
brick1/mvol1/test/dir5/file5 0x5225783836304b949777a241a5199988
-----------------------------------------------------
Host : gl-node4
brick1/mvol1/test/dir1/file1 0x463499f572c140c99688f31a74b46dce
brick1/mvol1/test/dir10/file3 0xcae961daacff44949833052b732bd9d3
brick1/mvol1/test/dir10/file4 0xde0e1862f4a3477f8544396fc06d45aa
brick1/mvol1/test/dir10/file7 0xf3009c09491b44bea7a9528bda459bfb
brick1/mvol1/test/dir10/file9 0xaf6947b1f40f4bcf923d14156475c48b
brick1/mvol1/test/dir10/file10 0x954f604ff9c24e2a98d4b6b732e8dd5a
brick1/mvol1/test/dir5/file1 0x395c43b8eb474b0bbaaa8adc6d684cc1
brick1/mvol1/test/dir5/file2 0xc2f0d4913a664b8494c1a4102230d35e
brick1/mvol1/test/dir5/file5 0x5225783836304b949777a241a5199988
-----------------------------------------------------
tron at dp-server:~/central$ ./mycommand.sh -H master -c "cat
/root/tmp/get_dir_gfid.out"
Host : gl-node1
brick1/mvol1 0x00000000000000000000000000000001
brick1/mvol1/.trashcan 0x00000000000000000000000000000005
brick1/mvol1/test 0x4f1156d6daec4f55916f01d67b6fc4ee
brick1/mvol1/test/dir1 0x3cd90325735b4ae39cb04c3c3b74eead
brick1/mvol1/test/dir10 0x7225082118cc4148866e4996e7fa1add
brick1/mvol1/test/dir5 0x4b89aa7ee6624c15babf62f17e6f52d6
-----------------------------------------------------
Host : gl-node2
brick1/mvol1 0x00000000000000000000000000000001
brick1/mvol1/.trashcan 0x00000000000000000000000000000005
brick1/mvol1/test 0x4f1156d6daec4f55916f01d67b6fc4ee
brick1/mvol1/test/dir1 0x3cd90325735b4ae39cb04c3c3b74eead
brick1/mvol1/test/dir10 0x7225082118cc4148866e4996e7fa1add
brick1/mvol1/test/dir5 0x4b89aa7ee6624c15babf62f17e6f52d6
-----------------------------------------------------
Host : gl-node3
brick1/mvol1 0x00000000000000000000000000000001
brick1/mvol1/.trashcan 0x00000000000000000000000000000005
brick1/mvol1/test 0x4f1156d6daec4f55916f01d67b6fc4ee
brick1/mvol1/test/dir1 0x3cd90325735b4ae39cb04c3c3b74eead
brick1/mvol1/test/dir10 0x7225082118cc4148866e4996e7fa1add
brick1/mvol1/test/dir5 0x4b89aa7ee6624c15babf62f17e6f52d6
-----------------------------------------------------
Host : gl-node4
brick1/mvol1 0x00000000000000000000000000000001
brick1/mvol1/.trashcan 0x00000000000000000000000000000005
brick1/mvol1/test 0x4f1156d6daec4f55916f01d67b6fc4ee
brick1/mvol1/test/dir1 0x3cd90325735b4ae39cb04c3c3b74eead
brick1/mvol1/test/dir10 0x7225082118cc4148866e4996e7fa1add
brick1/mvol1/test/dir5 0x4b89aa7ee6624c15babf62f17e6f52d6
-----------------------------------------------------
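
The get_file_gfid.out / get_dir_gfid.out files above were collected on each
brick node beforehand; roughly, such a listing can be produced with find and
getfattr (a sketch, skipping the .glusterfs backend; use -type d instead of
-type f for the directory variant):

cd / || exit 1
find brick1/mvol1 -path brick1/mvol1/.glusterfs -prune -o -type f -print |
while read -r f; do
    gfid=$(getfattr -n trusted.gfid -e hex "$f" 2>/dev/null | awk -F= '/^trusted.gfid=/{print $2}')
    echo "$f $gfid"
done > /root/tmp/get_file_gfid.out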
Remove some files and list the trashcan:
root at gl-node1:/m_vol/test# ls
dir1 dir10 dir5
root at gl-node1:/m_vol/test# rm -rf dir5/
root at gl-node1:/m_vol/test#
root at gl-node1:/m_vol/test# ls -la /m_vol/.trashcan/test/
total 12
drwxr-xr-x 3 root root 4096 Mar 13 10:59 .
drwxr-xr-x 3 root root 4096 Mar 13 10:59 ..
drwxr-xr-x 2 root root 4096 Mar 13 10:59 dir5
root at gl-node1:/m_vol/test#
root at gl-node1:/m_vol/test# ls -la /m_vol/.trashcan/test/dir5/
total 8
drwxr-xr-x 2 root root 4096 Mar 13 10:59 .
drwxr-xr-x 3 root root 4096 Mar 13 10:59 ..
-rw-r--r-- 1 root root 0 Mar 13 10:32 file1_2018-03-13_105918
-rw-r--r-- 1 root root 0 Mar 13 10:32 file2_2018-03-13_105918
-rw-r--r-- 1 root root 0 Mar 13 10:32 file3_2018-03-13_105918
-rw-r--r-- 1 root root 0 Mar 13 10:32 file4_2018-03-13_105918
-rw-r--r-- 1 root root 0 Mar 13 10:32 file5_2018-03-13_105918
root at gl-node1:/m_vol/test#
root at gl-node1:/m_vol/test# rm -rf dir1
root at gl-node1:/m_vol/test#
Both directories, dir5 and dir1, have been removed on master and slave:
root at gl-node1:/# ls -l /m_vol/test/
total 4
drwxr-xr-x 2 root root 4096 Mar 13 10:32 dir10
root at gl-node1:/# ls -l /s_vol/test/
total 4
drwxr-xr-x 2 root root 4096 Mar 13 10:32 dir10
root at gl-node1:/#
Check the trashcan; dir1 is not listed:
root at gl-node1:/m_vol/test# ls -la /m_vol/.trashcan/test/   ### deleted dir1 is not shown
total 12
drwxr-xr-x 4 root root 4096 Mar 13 11:03 .
drwxr-xr-x 3 root root 4096 Mar 13 10:59 ..
drwxr-xr-x 2 root root 4096 Mar 13 10:59 dir5
root at gl-node1:/m_vol/test#
Check the trashcan on the bricks; the deleted 'dir1' exists only on the
nodes which stored the single file 'file1' of that directory (a pathinfo
check, sketched after the listing, is one way to verify such placement):
tron at dp-server:~/central$ ./mycommand.sh -H master -c "ls -la
/brick1/mvol1/.trashcan/test/"
Host : gl-node1
total 0
drwxr-xr-x 3 root root 18 Mar 13 10:59 .
drwxr-xr-x 3 root root 18 Mar 13 10:59 ..
drwxr-xr-x 2 root root 68 Mar 13 10:59 dir5
-----------------------------------------------------
Host : gl-node2
total 0
drwxr-xr-x 3 root root 18 Mar 13 10:59 .
drwxr-xr-x 3 root root 18 Mar 13 10:59 ..
drwxr-xr-x 2 root root 68 Mar 13 10:59 dir5
-----------------------------------------------------
Host : gl-node3
total 0
drwxr-xr-x 4 root root 30 Mar 13 11:03 .
drwxr-xr-x 3 root root 18 Mar 13 10:59 ..
drwxr-xr-x 2 root root 37 Mar 13 11:03 dir1
drwxr-xr-x 2 root root 99 Mar 13 10:59 dir5
-----------------------------------------------------
Host : gl-node4
total 0
drwxr-xr-x 4 root root 30 Mar 13 11:03 .
drwxr-xr-x 3 root root 18 Mar 13 10:59 ..
drwxr-xr-x 2 root root 37 Mar 13 11:03 dir1
drwxr-xr-x 2 root root 99 Mar 13 10:59 dir5
-----------------------------------------------------
tron at dp-server:~/central$
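
To verify from the client side which replica pair holds a given file, the
pathinfo virtual xattr on the FUSE mount can be queried (a sketch; shown for
a still existing file, the already deleted dir1/file1 can of course no
longer be queried this way):

# prints the backend brick paths that store the file
getfattr -n trusted.glusterfs.pathinfo -e text /m_vol/test/dir10/file1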
Up to this point the geo-replication is working fine.
root at gl-node1:/m_vol/test# ls -la /m_vol/.trashcan/test/dir1
total 8
drwxr-xr-x 2 root root 4096 Mar 13 11:03 .
drwxr-xr-x 3 root root 4096 Mar 13 11:03 ..
-rw-r--r-- 1 root root 0 Mar 13 10:33 file1_2018-03-13_110343
root at gl-node1:/m_vol/test#
Directly after the last command the geo-replication is partially faulty;
this message appears on gl-node1 and gl-node2:
[2018-03-13 11:08:30.777489] E [master(/brick1/mvol1):784:log_failures]
_GMaster: ENTRY FAILED data=({'uid': 0, 'gfid':
'71379ee0-c40a-49db-b3ed-9f3145ed409a', 'gid': 0, 'mode': 16877,
'entry': '.gfid/4f59c068-6c77-40f2-b556-aa761834caf1/dir1', 'op':
'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})
[2018-03-13 11:08:30.777816] E
[syncdutils(/brick1/mvol1):299:log_raise_exception] <top>: The above
directory failed to sync. Please fix it to proceed further.
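
As in the earlier test, simply restarting the session with the standard
gluster CLI does not recover it as long as the entry is not fixed; for
reference:

gluster volume geo-replication mvol1 gl-node5-int::mvol1 stop
gluster volume geo-replication mvol1 gl-node5-int::mvol1 start
gluster volume geo-replication mvol1 gl-node5-int::mvol1 status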
Check on the bricks: after 'ls -la /m_vol/.trashcan/test/dir1' the directory
'dir1' appears on all master bricks:
tron at dp-server:~/central$ ./mycommand.sh -H master -c "ls -la
/brick1/mvol1/.trashcan/test/"
Host : gl-node1
total 0
drwxr-xr-x 4 root root 30 Mar 13 11:08 .
drwxr-xr-x 3 root root 18 Mar 13 10:59 ..
drwxr-xr-x 2 root root 6 Mar 13 11:03 dir1
drwxr-xr-x 2 root root 68 Mar 13 10:59 dir5
-----------------------------------------------------
Host : gl-node2
total 0
drwxr-xr-x 4 root root 30 Mar 13 11:08 .
drwxr-xr-x 3 root root 18 Mar 13 10:59 ..
drwxr-xr-x 2 root root 6 Mar 13 11:03 dir1
drwxr-xr-x 2 root root 68 Mar 13 10:59 dir5
-----------------------------------------------------
Host : gl-node3
total 0
drwxr-xr-x 4 root root 30 Mar 13 11:03 .
drwxr-xr-x 3 root root 18 Mar 13 10:59 ..
drwxr-xr-x 2 root root 37 Mar 13 11:03 dir1
drwxr-xr-x 2 root root 99 Mar 13 10:59 dir5
-----------------------------------------------------
Host : gl-node4
total 0
drwxr-xr-x 4 root root 30 Mar 13 11:03 .
drwxr-xr-x 3 root root 18 Mar 13 10:59 ..
drwxr-xr-x 2 root root 37 Mar 13 11:03 dir1
drwxr-xr-x 2 root root 99 Mar 13 10:59 dir5
-----------------------------------------------------
tron at dp-server:~/central$
New collection of gfids, looking for the mentioned gfids on all master
nodes:
tron at dp-server:~/central$ ./mycommand.sh -H master -c "cat
/root/tmp/get_dir_gfid.out | grep 9f3145ed409a"
Host : gl-node1
brick1/mvol1/.trashcan/test/dir1 0x71379ee0c40a49dbb3ed9f3145ed409a
-----------------------------------------------------
Host : gl-node2
brick1/mvol1/.trashcan/test/dir1 0x71379ee0c40a49dbb3ed9f3145ed409a
-----------------------------------------------------
Host : gl-node3
brick1/mvol1/.trashcan/test/dir1 0x71379ee0c40a49dbb3ed9f3145ed409a
-----------------------------------------------------
Host : gl-node4
brick1/mvol1/.trashcan/test/dir1 0x71379ee0c40a49dbb3ed9f3145ed409a
-----------------------------------------------------
tron at dp-server:~/central$ ./mycommand.sh -H master -c "cat
/root/tmp/get_dir_gfid.out | grep aa761834caf1"
Host : gl-node1
brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1
-----------------------------------------------------
Host : gl-node2
brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1
-----------------------------------------------------
Host : gl-node3
brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1
-----------------------------------------------------
Host : gl-node4
brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1
-----------------------------------------------------
tron at dp-server:~/central$
On 13.03.2018 at 10:13, Dietmar Putz wrote:
>
> Hi Kotresh,
>
> thanks for your response...
> answers inline...
>
> best regards
> Dietmar
>
>
> On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote:
>> Hi Dietmar,
>>
>> I am trying to understand the problem and have few questions.
>>
>> 1. Is trashcan enabled only on master volume?
> no, the trashcan is also enabled on the slave. The settings are the same
> as on the master, but the trashcan on the slave is completely empty.
> root at gl-node5:~# gluster volume get mvol1 all | grep -i trash
> features.trash on
> features.trash-dir .trashcan
> features.trash-eliminate-path (null)
> features.trash-max-filesize 2GB
> features.trash-internal-op off
> root at gl-node5:~#
>
>> 2. Was the 'rm -rf' done on the master volume synced to the slave?
> yes, the entire content of ~/test1/b1/* on the slave has been removed.
>> 3. If trashcan is disabled, the issue goes away?
>
> after disabling features.trash on master and slave the issue
> remains... stopping and restarting the master/slave volumes and the
> geo-replication has no effect.
> root at gl-node1:~# gluster volume geo-replication mvol1
> gl-node5-int::mvol1 status
>
> MASTER NODE     MASTER VOL    MASTER BRICK     SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
> ----------------------------------------------------------------------------------------------------------------------------------------------------
> gl-node1-int    mvol1         /brick1/mvol1    root          gl-node5-int::mvol1    N/A             Faulty     N/A                N/A
> gl-node3-int    mvol1         /brick1/mvol1    root          gl-node5-int::mvol1    gl-node7-int    Passive    N/A                N/A
> gl-node2-int    mvol1         /brick1/mvol1    root          gl-node5-int::mvol1    N/A             Faulty     N/A                N/A
> gl-node4-int    mvol1         /brick1/mvol1    root          gl-node5-int::mvol1    gl-node8-int    Active     Changelog Crawl    2018-03-12 13:56:28
> root at gl-node1:~#
>>
>> The geo-rep error just says that it failed to create the directory
>> "Oracle_VM_VirtualBox_Extension" on the slave.
>> Usually this would be because of a gfid mismatch, but I don't see that
>> in your case. So I am a little more interested
>> in the present state of the geo-rep. Is it still throwing the same errors
>> and the same failure to sync the same directory? If
>> so, does the parent 'test1/b1' exist on the slave?
> it is still throwing the same error as shown below.
> the directory 'test1/b1' is empty as expected and exists on master and
> slave.
>
>
>>
>> And doing an ls on the trashcan should not affect geo-rep. Is there an
>> easy reproducer for this?
> I have made several tests on 3.10.11 and 3.12.6 and I'm pretty sure
> there was one without activation of the trashcan feature on the
> slave... with the same / similar problems.
> I will come back with a more comprehensive and reproducible
> description of that issue...
>
>>
>>
>> Thanks,
>> Kotresh HR
>>
>> On Mon, Mar 12, 2018 at 10:13 PM, Dietmar Putz
>> <dietmar.putz at 3qsdn.com> wrote:
>>
>> Hello,
>>
>> in regard to
>> https://bugzilla.redhat.com/show_bug.cgi?id=1434066
>> I have been faced with another issue when using the trashcan
>> feature on a dist. repl. volume running a geo-replication (gfs
>> 3.12.6 on ubuntu 16.04.4),
>> e.g. when removing an entire directory with subfolders:
>> tron at gl-node1:/myvol-1/test1/b1$ rm -rf *
>>
>> afterwards listing files in the trashcan :
>> tron at gl-node1:/myvol-1/test1$ ls -la /myvol-1/.trashcan/test1/b1/
>>
>> leads to an outage of the geo-replication.
>> error on master-01 and master-02 :
>>
>> [2018-03-12 13:37:14.827204] I [master(/brick1/mvol1):1385:crawl]
>> _GMaster: slave's time stime=(1520861818, 0)
>> [2018-03-12 13:37:14.835535] E
>> [master(/brick1/mvol1):784:log_failures] _GMaster: ENTRY
>> FAILED data=({'uid': 0, 'gfid':
>> 'c38f75e3-194a-4d22-9094-50ac8f8756e7', 'gid': 0, 'mode': 16877,
>> 'entry':
>> '.gfid/5531bd64-ac50-462b-943e-c0bf1c52f52c/Oracle_VM_VirtualBox_Extension',
>> 'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})
>> [2018-03-12 13:37:14.835911] E
>> [syncdutils(/brick1/mvol1):299:log_raise_exception] <top>: The
>> above directory failed to sync. Please fix it to proceed further.
>>
>>
>> Both gfids of the directories as shown in the log:
>> brick1/mvol1/.trashcan/test1/b1 0x5531bd64ac50462b943ec0bf1c52f52c
>> brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension
>> 0xc38f75e3194a4d22909450ac8f8756e7
>>
>> The shown directory contains just one file, which is stored on
>> gl-node3 and gl-node4, while node1 and node2 are in geo-replication error.
>> Since the filesize limitation of the trashcan is obsolete, I'm
>> really interested in using the trashcan feature, but I'm concerned
>> it will interrupt the geo-replication entirely.
>> Has anybody else been faced with this situation... any
>> hints, workarounds...?
>>
>> best regards
>> Dietmar Putz
>>
>>
>> root at gl-node1:~/tmp# gluster volume info mvol1
>>
>> Volume Name: mvol1
>> Type: Distributed-Replicate
>> Volume ID: a1c74931-568c-4f40-8573-dd344553e557
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: gl-node1-int:/brick1/mvol1
>> Brick2: gl-node2-int:/brick1/mvol1
>> Brick3: gl-node3-int:/brick1/mvol1
>> Brick4: gl-node4-int:/brick1/mvol1
>> Options Reconfigured:
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexing: on
>> features.trash-max-filesize: 2GB
>> features.trash: on
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>>
>> root at gl-node1:/myvol-1/test1# gluster volume geo-replication
>> mvol1 gl-node5-int::mvol1 config
>> special_sync_mode: partial
>> gluster_log_file:
>> /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.gluster.log
>> ssh_command: ssh -oPasswordAuthentication=no
>> -oStrictHostKeyChecking=no -i
>> /var/lib/glusterd/geo-replication/secret.pem
>> change_detector: changelog
>> use_meta_volume: true
>> session_owner: a1c74931-568c-4f40-8573-dd344553e557
>> state_file:
>> /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.status
>> gluster_params: aux-gfid-mount acl
>> remote_gsyncd: /nonexistent/gsyncd
>> working_dir:
>> /var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1
>> state_detail_file:
>> /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-detail.status
>> gluster_command_dir: /usr/sbin/
>> pid_file:
>> /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.pid
>> georep_session_working_dir:
>> /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/
>> ssh_command_tar: ssh -oPasswordAuthentication=no
>> -oStrictHostKeyChecking=no -i
>> /var/lib/glusterd/geo-replication/tar_ssh.pem
>> master.stime_xattr_name:
>> trusted.glusterfs.a1c74931-568c-4f40-8573-dd344553e557.d62bda3a-1396-492a-ad99-7c6238d93c6a.stime
>> changelog_log_file:
>> /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-changes.log
>> socketdir: /var/run/gluster
>> volume_id: a1c74931-568c-4f40-8573-dd344553e557
>> ignore_deletes: false
>> state_socket_unencoded:
>> /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.socket
>> log_file:
>> /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.log
>> access_mount: true
>> root at gl-node1:/myvol-1/test1#
>>
>> --
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>>
>> --
>> Thanks and Regards,
>> Kotresh H R
>
> --
> Dietmar Putz
> 3Q GmbH
> Kurfürstendamm 102
> D-10711 Berlin
>
> Mobile: +49 171 / 90 160 39
> Mail: dietmar.putz at 3qsdn.com
--
Dietmar Putz
3Q GmbH
Kurfürstendamm 102
D-10711 Berlin
Mobile: +49 171 / 90 160 39
Mail: dietmar.putz at 3qsdn.com