<div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div>Hi Dietmar,<br><br></div>I am trying to understand the problem and have few questions.<br><br></div>1. Is trashcan enabled only on master volume?<br></div>2. Does the &#39;rm -rf&#39; done on master volume synced to slave ?<br></div>3. If trashcan is disabled, the issue goes away?<br></div><br></div>The geo-rep error just says the it failed to create the directory &quot;Oracle_VM_Virtua<wbr>lBox_Extension&quot; on slave.<br>Usually this would be because of gfid mismatch but I don&#39;t see that in your case. So I am little more interested<br></div>in present state of the geo-rep. Is it still throwing same errors and same failure to sync the same directory. If<br></div>so does the parent &#39;test1/b<wbr>1&#39; exists on slave?<br><br></div>And doing ls on trashcan should not affect geo-rep. Is there a easy reproducer for this ?<br><br><br></div>Thanks,<br></div>Kotresh HR<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 12, 2018 at 10:13 PM, Dietmar Putz <span dir="ltr">&lt;<a href="mailto:dietmar.putz@3qsdn.com" target="_blank">dietmar.putz@3qsdn.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>

In regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have run into another issue when using the trashcan feature on a distributed replicated volume running a geo-replication (GlusterFS 3.12.6 on Ubuntu 16.04.4).
For example, removing an entire directory with subfolders:
tron@gl-node1:/myvol-1/test1/b1$ rm -rf *

Afterwards, listing the files in the trashcan:
tron@gl-node1:/myvol-1/test1$ ls -la /myvol-1/.trashcan/test1/b1/

leads to an outage of the geo-replication.
Error on master-01 and master-02:

[2018-03-12 13:37:14.827204] I [master(/brick1/mvol1):1385:crawl] _GMaster: slave's time stime=(1520861818, 0)
[2018-03-12 13:37:14.835535] E [master(/brick1/mvol1):784:log_failures] _GMaster: ENTRY FAILED    data=({'uid': 0, 'gfid': 'c38f75e3-194a-4d22-9094-50ac8f8756e7', 'gid': 0, 'mode': 16877, 'entry': '.gfid/5531bd64-ac50-462b-943e-c0bf1c52f52c/Oracle_VM_VirtualBox_Extension', 'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})
[2018-03-12 13:37:14.835911] E [syncdutils(/brick1/mvol1):299:log_raise_exception] <top>: The above directory failed to sync. Please fix it to proceed further.
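
Whether the session has actually gone Faulty, and whether the failure repeats, can be checked on the master with something like the following (a sketch; volume and slave names are the ones used elsewhere in this mail, and the log directory is an assumption based on the geo-replication config shown further below):

# check whether the session is Faulty on the affected master nodes
gluster volume geo-replication mvol1 gl-node5-int::mvol1 status detail

# look for further ENTRY FAILED / failed-to-sync records in the gsyncd logs
grep -E "ENTRY FAILED|failed to sync" /var/log/glusterfs/geo-replication/mvol1/*.log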

Both gfids of the directories, as shown in the log above:
brick1/mvol1/.trashcan/test1/b1 0x5531bd64ac50462b943ec0bf1c52f52c
brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension 0xc38f75e3194a4d22909450ac8f8756e7
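
Presumably these gfids come from the trusted.gfid xattr on the bricks; a minimal sketch of how to read and compare them (run as root directly on a brick host; the slave brick path is a placeholder, since the slave layout is not shown in this thread):

# on a master brick host, e.g. gl-node1
getfattr -n trusted.gfid -e hex /brick1/mvol1/.trashcan/test1/b1
getfattr -n trusted.gfid -e hex /brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension

# on the slave (gl-node5-int): check whether the parent exists and whether its gfid matches
# <slave-brick-path> is a placeholder, not taken from this thread
getfattr -n trusted.gfid -e hex /<slave-brick-path>/.trashcan/test1/b1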

The shown directory contains just one file, which is stored on gl-node3 and gl-node4, while node1 and node2 are in geo-replication error.
Since the file-size limitation of the trashcan is obsolete, I am really interested in using the trashcan feature, but I am concerned it will interrupt the geo-replication entirely.
Has anybody else been faced with this situation... any hints, workarounds...?

Best regards
Dietmar Putz


root@gl-node1:~/tmp# gluster volume info mvol1

Volume Name: mvol1
Type: Distributed-Replicate
Volume ID: a1c74931-568c-4f40-8573-dd344553e557
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gl-node1-int:/brick1/mvol1
Brick2: gl-node2-int:/brick1/mvol1
Brick3: gl-node3-int:/brick1/mvol1
Brick4: gl-node4-int:/brick1/mvol1
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.trash-max-filesize: 2GB
features.trash: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
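
For comparison, the trashcan-related options on the slave volume can be checked the same way (a sketch; it assumes the slave volume is also named mvol1, per the session gl-node5-int::mvol1):

# on the master cluster
gluster volume get mvol1 features.trash
gluster volume get mvol1 features.trash-max-filesize

# on the slave node gl-node5-int (slave volume name assumed to be mvol1)
gluster volume get mvol1 features.trash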

root@gl-node1:/myvol-1/test1# gluster volume geo-replication mvol1 gl-node5-int::mvol1 config
special_sync_mode: partial
gluster_log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
change_detector: changelog
use_meta_volume: true
session_owner: a1c74931-568c-4f40-8573-dd344553e557
state_file: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.status
gluster_params: aux-gfid-mount acl
remote_gsyncd: /nonexistent/gsyncd
working_dir: /var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1
state_detail_file: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-detail.status
gluster_command_dir: /usr/sbin/
pid_file: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.pid
georep_session_working_dir: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
master.stime_xattr_name: trusted.glusterfs.a1c74931-568c-4f40-8573-dd344553e557.d62bda3a-1396-492a-ad99-7c6238d93c6a.stime
changelog_log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-changes.log
socketdir: /var/run/gluster
volume_id: a1c74931-568c-4f40-8573-dd344553e557
ignore_deletes: false
state_socket_unencoded: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.socket
log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.log
access_mount: true
root@gl-node1:/myvol-1/test1#

--

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


--
Thanks and Regards,
Kotresh H R