<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Hi Kotresh,</p>
    ...another test, this time with the trashcan enabled on the master only.
    As in the previous test it is GlusterFS 3.12.6 on Ubuntu 16.04.4.<br>
    The geo-replication error appeared again, and disabling the trashcan does
    not change anything.<br>
    As in the previous test, the error appears when I try to list files in
    the trashcan.<br>
    The gfid shown in the log belongs to a directory in the trashcan that
    contains just one file...like in the previous test.<br>
    <br>
    <tt>[2018-03-13 11:08:30.777489] E
      [master(/brick1/mvol1):784:log_failures] _GMaster: ENTRY FAILED  
       data=({'uid': 0, 'gfid': '71379ee0-c40a-49db-b3ed-9f3145ed409a',
      'gid': 0, 'mode': 16877, 'entry':
      '.gfid/4f59c068-6c77-40f2-b556-aa761834caf1/dir1', 'op': 'MKDIR'},
      2, {'gfid_mismatch': False, 'dst': False})<br>
      <br>
    </tt>Below are the setup, further information and all activities.<br>
    Is there anything else I could test or check...?<br>
    <br>
    A general question: is there a recommendation for using the
    trashcan feature in geo-replication environments...?<br>
    For my use case it is not necessary to activate it on the slave...but
    does it need to be activated on both master and slave?<br>
    <br>
    best regards<br>
    <br>
    Dietmar<br>
    <p><br>
    </p>
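    <p>For reference, the trashcan feature is toggled per volume with the
      usual volume-set commands. These are not part of the captured session
      below, just the commands I use, shown as an illustration :</p>
    <p><tt>
        # enable the trashcan and cap the size of files moved into it<br>
        gluster volume set mvol1 features.trash on<br>
        gluster volume set mvol1 features.trash-max-filesize 2GB<br>
        <br>
        # disable it again (what I did when testing whether the error goes away)<br>
        gluster volume set mvol1 features.trash off<br>
        <br>
        # verify the current trash related settings<br>
        gluster volume get mvol1 all | grep -i trash<br>
      </tt></p>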
    <p>master volume :<tt><br>
        root@gl-node1:~# gluster volume info mvol1<br>
         <br>
        Volume Name: mvol1<br>
        Type: Distributed-Replicate<br>
        Volume ID: 7590b6a0-520b-4c51-ad63-3ba5be0ed0df<br>
        Status: Started<br>
        Snapshot Count: 0<br>
        Number of Bricks: 2 x 2 = 4<br>
        Transport-type: tcp<br>
        Bricks:<br>
        Brick1: gl-node1-int:/brick1/mvol1<br>
        Brick2: gl-node2-int:/brick1/mvol1<br>
        Brick3: gl-node3-int:/brick1/mvol1<br>
        Brick4: gl-node4-int:/brick1/mvol1<br>
        Options Reconfigured:<br>
        changelog.changelog: on<br>
        geo-replication.ignore-pid-check: on<br>
        geo-replication.indexing: on<br>
        features.trash-max-filesize: 2GB<br>
        features.trash: on<br>
        transport.address-family: inet<br>
        nfs.disable: on<br>
        performance.client-io-threads: off<br>
        root@gl-node1:~# <br>
        <br>
        <br>
      </tt>slave volume :<tt><br>
        root@gl-node5:~# gluster volume info mvol1<br>
         <br>
        Volume Name: mvol1<br>
        Type: Distributed-Replicate<br>
        Volume ID: aba4e057-7374-4a62-bcd7-c1c6f71e691b<br>
        Status: Started<br>
        Snapshot Count: 0<br>
        Number of Bricks: 2 x 2 = 4<br>
        Transport-type: tcp<br>
        Bricks:<br>
        Brick1: gl-node5-int:/brick1/mvol1<br>
        Brick2: gl-node6-int:/brick1/mvol1<br>
        Brick3: gl-node7-int:/brick1/mvol1<br>
        Brick4: gl-node8-int:/brick1/mvol1<br>
        Options Reconfigured:<br>
        transport.address-family: inet<br>
        nfs.disable: on<br>
        performance.client-io-threads: off<br>
        root@gl-node5:~#<br>
        <br>
        root@gl-node1:~# gluster volume geo-replication mvol1
        gl-node5-int::mvol1 config<br>
        special_sync_mode: partial<br>
        state_socket_unencoded:
/var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.socket<br>
        gluster_log_file:
/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.gluster.log<br>
        ssh_command: ssh -oPasswordAuthentication=no
        -oStrictHostKeyChecking=no -i
        /var/lib/glusterd/geo-replication/secret.pem<br>
        ignore_deletes: false<br>
        change_detector: changelog<br>
        gluster_command_dir: /usr/sbin/<br>
        state_file:
/var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.status<br>
        remote_gsyncd: /nonexistent/gsyncd<br>
        log_file:
/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.log<br>
        changelog_log_file:
/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-changes.log<br>
        socketdir: /var/run/gluster<br>
        working_dir:
/var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1<br>
        state_detail_file:
/var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-detail.status<br>
        use_meta_volume: true<br>
        ssh_command_tar: ssh -oPasswordAuthentication=no
        -oStrictHostKeyChecking=no -i
        /var/lib/glusterd/geo-replication/tar_ssh.pem<br>
        pid_file:
        /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.pid<br>
        georep_session_working_dir:
        /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/<br>
        access_mount: true<br>
        gluster_params: aux-gfid-mount acl<br>
        root@gl-node1:~#<br>
        <br>
        root@gl-node1:~# gluster volume geo-replication mvol1
        gl-node5-int::mvol1 status<br>
         <br>
        MASTER NODE     MASTER VOL    MASTER BRICK     SLAVE USER   
        SLAVE                  SLAVE NODE      STATUS     CRAWL
        STATUS       LAST_SYNCED                  <br>
----------------------------------------------------------------------------------------------------------------------------------------------------<br>
        gl-node1-int    mvol1         /brick1/mvol1    root         
        gl-node5-int::mvol1    gl-node5-int    Active     Changelog
        Crawl    2018-03-13 09:43:46          <br>
        gl-node4-int    mvol1         /brick1/mvol1    root         
        gl-node5-int::mvol1    gl-node8-int    Active     Changelog
        Crawl    2018-03-13 09:43:47          <br>
        gl-node2-int    mvol1         /brick1/mvol1    root         
        gl-node5-int::mvol1    gl-node6-int    Passive   
        N/A                N/A                          <br>
        gl-node3-int    mvol1         /brick1/mvol1    root         
        gl-node5-int::mvol1    gl-node7-int    Passive   
        N/A                N/A                          <br>
        root@gl-node1:~#<br>
        <br>
      </tt>Both volumes are mounted locally; the df output below shows the
      mounts :</p>
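    <p>(They are plain GlusterFS FUSE mounts, created roughly with the
      following commands; this is an illustration, not part of the captured
      output :)</p>
    <p><tt>
        mount -t glusterfs gl-node1:/mvol1 /m_vol<br>
        mount -t glusterfs gl-node5:/mvol1 /s_vol<br>
      </tt></p>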
    <p><tt>
      </tt><tt><tt>gl-node1:/mvol1                     20G   65M   20G  
          1% /m_vol<br>
          gl-node5:/mvol1                     20G   65M   20G   1%
          /s_vol<br>
        </tt><br>
        <br>
      </tt>Prepare some directories and files. The important point, from my
      point of view, is that there is a directory containing just one file (in
      this case 'dir1') :<tt><br>
        <br>
        root@gl-node1:~/tmp/test# mkdir dir1<br>
        root@gl-node1:~/tmp/test# mkdir dir5<br>
        root@gl-node1:~/tmp/test# mkdir dir10<br>
        root@gl-node1:~/tmp/test# cd dir10<br>
        root@gl-node1:~/tmp/test/dir10# for i in {1..10}<br>
        &gt; do<br>
        &gt; touch file$i<br>
        &gt; done<br>
        root@gl-node1:~/tmp/test/dir10#<br>
        root@gl-node1:~/tmp/test/dir10# cp file[1-5] ../dir5<br>
        root@gl-node1:~/tmp/test/dir10# cp file1 ../dir1<br>
        root@gl-node1:~/tmp/test# ls dir10<br>
        file1  file10  file2  file3  file4  file5  file6  file7  file8 
        file9<br>
        root@gl-node1:~/tmp/test# ls dir5<br>
        file1  file2  file3  file4  file5<br>
        root@gl-node1:~/tmp/test# ls dir1<br>
        file1<br>
      </tt><tt><tt>root@gl-node1:~/tmp/test#<br>
          <br>
        </tt></tt>Copy the structure to the master volume :</p>
    <p><tt><tt> </tt>root@gl-node1:~/tmp/test# mkdir /m_vol/test<br>
        root@gl-node1:~/tmp/test#</tt><tt><tt> cp -p -r * /m_vol/test/</tt><br>
        <br>
        <br>
      </tt>Collection of the gfids and the distribution of the files across the
      bricks on the master :<tt><br>
        <br>
        tron@dp-server:~/central$ ./mycommand.sh -H master -c "cat
        /root/tmp/get_file_gfid.out"<br>
        <br>
        Host : gl-node1<br>
        brick1/mvol1/test/dir10/file1 0x934c4202114849ff87f68eda2ca79c53<br>
        brick1/mvol1/test/dir10/file2 0xbba2bf22a6034a388f60bd8af447fade<br>
        brick1/mvol1/test/dir10/file5 0x1d78d8e5609e4485a8faeef0172f703d<br>
        brick1/mvol1/test/dir10/file6 0xff325e1fbed84297be9f0634de3db8b9<br>
        brick1/mvol1/test/dir10/file8 0x019b04bdac824eab8747923cbdf1c155<br>
        brick1/mvol1/test/dir5/file3 0x34168e08a8cb47b4919e9aa90b7cadaf<br>
        brick1/mvol1/test/dir5/file4 0xc1c22afb583c40c3b2700beea652693b<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node2<br>
        brick1/mvol1/test/dir10/file1 0x934c4202114849ff87f68eda2ca79c53<br>
        brick1/mvol1/test/dir10/file2 0xbba2bf22a6034a388f60bd8af447fade<br>
        brick1/mvol1/test/dir10/file5 0x1d78d8e5609e4485a8faeef0172f703d<br>
        brick1/mvol1/test/dir10/file6 0xff325e1fbed84297be9f0634de3db8b9<br>
        brick1/mvol1/test/dir10/file8 0x019b04bdac824eab8747923cbdf1c155<br>
        brick1/mvol1/test/dir5/file3 0x34168e08a8cb47b4919e9aa90b7cadaf<br>
        brick1/mvol1/test/dir5/file4 0xc1c22afb583c40c3b2700beea652693b<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node3<br>
        brick1/mvol1/test/dir1/file1 0x463499f572c140c99688f31a74b46dce<br>
        brick1/mvol1/test/dir10/file3 0xcae961daacff44949833052b732bd9d3<br>
        brick1/mvol1/test/dir10/file4 0xde0e1862f4a3477f8544396fc06d45aa<br>
        brick1/mvol1/test/dir10/file7 0xf3009c09491b44bea7a9528bda459bfb<br>
        brick1/mvol1/test/dir10/file9 0xaf6947b1f40f4bcf923d14156475c48b<br>
        brick1/mvol1/test/dir10/file10
        0x954f604ff9c24e2a98d4b6b732e8dd5a<br>
        brick1/mvol1/test/dir5/file1 0x395c43b8eb474b0bbaaa8adc6d684cc1<br>
        brick1/mvol1/test/dir5/file2 0xc2f0d4913a664b8494c1a4102230d35e<br>
        brick1/mvol1/test/dir5/file5 0x5225783836304b949777a241a5199988<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node4<br>
        brick1/mvol1/test/dir1/file1 0x463499f572c140c99688f31a74b46dce<br>
        brick1/mvol1/test/dir10/file3 0xcae961daacff44949833052b732bd9d3<br>
        brick1/mvol1/test/dir10/file4 0xde0e1862f4a3477f8544396fc06d45aa<br>
        brick1/mvol1/test/dir10/file7 0xf3009c09491b44bea7a9528bda459bfb<br>
        brick1/mvol1/test/dir10/file9 0xaf6947b1f40f4bcf923d14156475c48b<br>
        brick1/mvol1/test/dir10/file10
        0x954f604ff9c24e2a98d4b6b732e8dd5a<br>
        brick1/mvol1/test/dir5/file1 0x395c43b8eb474b0bbaaa8adc6d684cc1<br>
        brick1/mvol1/test/dir5/file2 0xc2f0d4913a664b8494c1a4102230d35e<br>
        brick1/mvol1/test/dir5/file5 0x5225783836304b949777a241a5199988<br>
        <br>
        -----------------------------------------------------<br>
        tron@dp-server:~/central$ ./mycommand.sh -H master -c "cat
        /root/tmp/get_dir_gfid.out"<br>
        <br>
        Host : gl-node1<br>
        <br>
        brick1/mvol1 0x00000000000000000000000000000001<br>
        <br>
        brick1/mvol1/.trashcan 0x00000000000000000000000000000005<br>
        brick1/mvol1/test 0x4f1156d6daec4f55916f01d67b6fc4ee<br>
        brick1/mvol1/test/dir1 0x3cd90325735b4ae39cb04c3c3b74eead<br>
        brick1/mvol1/test/dir10 0x7225082118cc4148866e4996e7fa1add<br>
        brick1/mvol1/test/dir5 0x4b89aa7ee6624c15babf62f17e6f52d6<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node2<br>
        <br>
        brick1/mvol1 0x00000000000000000000000000000001<br>
        <br>
        brick1/mvol1/.trashcan 0x00000000000000000000000000000005<br>
        brick1/mvol1/test 0x4f1156d6daec4f55916f01d67b6fc4ee<br>
        brick1/mvol1/test/dir1 0x3cd90325735b4ae39cb04c3c3b74eead<br>
        brick1/mvol1/test/dir10 0x7225082118cc4148866e4996e7fa1add<br>
        brick1/mvol1/test/dir5 0x4b89aa7ee6624c15babf62f17e6f52d6<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node3<br>
        <br>
        brick1/mvol1 0x00000000000000000000000000000001<br>
        <br>
        brick1/mvol1/.trashcan 0x00000000000000000000000000000005<br>
        brick1/mvol1/test 0x4f1156d6daec4f55916f01d67b6fc4ee<br>
        brick1/mvol1/test/dir1 0x3cd90325735b4ae39cb04c3c3b74eead<br>
        brick1/mvol1/test/dir10 0x7225082118cc4148866e4996e7fa1add<br>
        brick1/mvol1/test/dir5 0x4b89aa7ee6624c15babf62f17e6f52d6<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node4<br>
        <br>
        brick1/mvol1 0x00000000000000000000000000000001<br>
        <br>
        brick1/mvol1/.trashcan 0x00000000000000000000000000000005<br>
        brick1/mvol1/test 0x4f1156d6daec4f55916f01d67b6fc4ee<br>
        brick1/mvol1/test/dir1 0x3cd90325735b4ae39cb04c3c3b74eead<br>
        brick1/mvol1/test/dir10 0x7225082118cc4148866e4996e7fa1add<br>
        brick1/mvol1/test/dir5 0x4b89aa7ee6624c15babf62f17e6f52d6<br>
        <br>
        -----------------------------------------------------<br>
        <br>
      </tt></p>
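    <p>mycommand.sh is just a small wrapper that runs the given command on
      every master node; the get_file_gfid.out / get_dir_gfid.out files were
      generated beforehand on each node. On a brick the gfid of a file or
      directory can be read with getfattr, e.g. :</p>
    <p><tt>
        # gfid of a single entry on the brick<br>
        getfattr -n trusted.gfid -e hex /brick1/mvol1/test/dir1<br>
        <br>
        # gfids of all files below the test directory<br>
        find /brick1/mvol1/test -type f -exec getfattr -n trusted.gfid -e hex {} \;<br>
      </tt></p>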
    <p><tt><br>
      </tt></p>
    <p>Remove some files and list the trashcan :</p>
    <p><tt> root@gl-node1:/m_vol/test# ls<br>
        dir1  dir10  dir5<br>
        root@gl-node1:/m_vol/test# rm -rf dir5/<br>
        root@gl-node1:/m_vol/test#<br>
        root@gl-node1:/m_vol/test# ls -la /m_vol/.trashcan/test/<br>
        total 12<br>
        drwxr-xr-x 3 root root 4096 Mar 13 10:59 .<br>
        drwxr-xr-x 3 root root 4096 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root 4096 Mar 13 10:59 dir5<br>
        root@gl-node1:/m_vol/test#<br>
        root@gl-node1:/m_vol/test# ls -la /m_vol/.trashcan/test/dir5/<br>
        total 8<br>
        drwxr-xr-x 2 root root 4096 Mar 13 10:59 .<br>
        drwxr-xr-x 3 root root 4096 Mar 13 10:59 ..<br>
        -rw-r--r-- 1 root root    0 Mar 13 10:32 file1_2018-03-13_105918<br>
        -rw-r--r-- 1 root root    0 Mar 13 10:32 file2_2018-03-13_105918<br>
        -rw-r--r-- 1 root root    0 Mar 13 10:32 file3_2018-03-13_105918<br>
        -rw-r--r-- 1 root root    0 Mar 13 10:32 file4_2018-03-13_105918<br>
        -rw-r--r-- 1 root root    0 Mar 13 10:32 file5_2018-03-13_105918<br>
        root@gl-node1:/m_vol/test#<br>
        root@gl-node1:/m_vol/test# rm -rf dir1<br>
        root@gl-node1:/m_vol/test#<br>
        <br>
      </tt>Both directories, dir5 and dir1, have been removed on master
      and slave : <tt><br>
        <br>
        root@gl-node1:/# ls -l /m_vol/test/<br>
        total 4<br>
        drwxr-xr-x 2 root root 4096 Mar 13 10:32 dir10<br>
        root@gl-node1:/# ls -l /s_vol/test/<br>
        total 4<br>
        drwxr-xr-x 2 root root 4096 Mar 13 10:32 dir10<br>
        root@gl-node1:/#<br>
        <br>
        <br>
      </tt>Check the trashcan; dir1 is not listed :<tt><br>
        <br>
        root@gl-node1:/m_vol/test# ls -la /m_vol/.trashcan/test/       
        ### deleted dir1 is not shown<br>
        total 12<br>
        drwxr-xr-x 4 root root 4096 Mar 13 11:03 .<br>
        drwxr-xr-x 3 root root 4096 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root 4096 Mar 13 10:59 dir5<br>
        root@gl-node1:/m_vol/test#<br>
        <br>
        <br>
      </tt>Check the trashcan on the bricks; the deleted 'dir1' exists only on
      the nodes that stored its single file 'file1' :</p>
    <p><tt>
        tron@dp-server:~/central$ ./mycommand.sh -H master -c "ls -la
        /brick1/mvol1/.trashcan/test/"<br>
        <br>
        Host : gl-node1<br>
        total 0<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 .<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root 68 Mar 13 10:59 dir5<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node2<br>
        total 0<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 .<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root 68 Mar 13 10:59 dir5<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node3<br>
        total 0<br>
        drwxr-xr-x 4 root root 30 Mar 13 11:03 .<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root 37 Mar 13 11:03 dir1<br>
        drwxr-xr-x 2 root root 99 Mar 13 10:59 dir5<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node4<br>
        total 0<br>
        drwxr-xr-x 4 root root 30 Mar 13 11:03 .<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root 37 Mar 13 11:03 dir1<br>
        drwxr-xr-x 2 root root 99 Mar 13 10:59 dir5<br>
        <br>
        -----------------------------------------------------<br>
        tron@dp-server:~/central$<br>
        <br>
        <br>
      </tt>Up to this point the geo-replication is working fine.<tt><br>
        <br>
        root@gl-node1:/m_vol/test# ls -la /m_vol/.trashcan/test/dir1<br>
        total 8<br>
        drwxr-xr-x 2 root root 4096 Mar 13 11:03 .<br>
        drwxr-xr-x 3 root root 4096 Mar 13 11:03 ..<br>
        -rw-r--r-- 1 root root    0 Mar 13 10:33 file1_2018-03-13_110343<br>
        root@gl-node1:/m_vol/test#<br>
        <br>
      </tt>Directly after the last command the geo-replication becomes
      partially faulty; this message appears on gl-node1 and gl-node2 :<tt>
        <br>
        <br>
        [2018-03-13 11:08:30.777489] E
        [master(/brick1/mvol1):784:log_failures] _GMaster: ENTRY
        FAILED    data=({'uid': 0, 'gfid':
        '71379ee0-c40a-49db-b3ed-9f3145ed409a', 'gid': 0, 'mode': 16877,
        'entry': '.gfid/4f59c068-6c77-40f2-b556-aa761834caf1/dir1',
        'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})<br>
        [2018-03-13 11:08:30.777816] E
        [syncdutils(/brick1/mvol1):299:log_raise_exception] &lt;top&gt;:
        The above directory failed to sync. Please fix it to proceed
        further.<br>
        <br>
        <br>
        <br>
        <br>
      </tt>Check on the bricks: after the 'ls -la /m_vol/.trashcan/test/dir1'
      the directory 'dir1' appears on all master bricks :<tt><tt><br>
          <br>
        </tt>tron@dp-server:~/central$ ./mycommand.sh -H master -c "ls
        -la /brick1/mvol1/.trashcan/test/"<br>
        <br>
        Host : gl-node1<br>
        total 0<br>
        drwxr-xr-x 4 root root 30 Mar 13 11:08 .<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root  6 Mar 13 11:03 dir1<br>
        drwxr-xr-x 2 root root 68 Mar 13 10:59 dir5<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node2<br>
        total 0<br>
        drwxr-xr-x 4 root root 30 Mar 13 11:08 .<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root  6 Mar 13 11:03 dir1<br>
        drwxr-xr-x 2 root root 68 Mar 13 10:59 dir5<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node3<br>
        total 0<br>
        drwxr-xr-x 4 root root 30 Mar 13 11:03 .<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root 37 Mar 13 11:03 dir1<br>
        drwxr-xr-x 2 root root 99 Mar 13 10:59 dir5<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node4<br>
        total 0<br>
        drwxr-xr-x 4 root root 30 Mar 13 11:03 .<br>
        drwxr-xr-x 3 root root 18 Mar 13 10:59 ..<br>
        drwxr-xr-x 2 root root 37 Mar 13 11:03 dir1<br>
        drwxr-xr-x 2 root root 99 Mar 13 10:59 dir5<br>
        <br>
        -----------------------------------------------------<br>
        tron@dp-server:~/central$<br>
        <br>
        <br>
      </tt>New collection of gfids, looking for the gfids mentioned in the
      error on all master nodes :<tt><br>
        <br>
        tron@dp-server:~/central$ ./mycommand.sh -H master -c "cat
        /root/tmp/get_dir_gfid.out | grep 9f3145ed409a"<br>
        <br>
        Host : gl-node1<br>
        brick1/mvol1/.trashcan/test/dir1
        0x71379ee0c40a49dbb3ed9f3145ed409a<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node2<br>
        brick1/mvol1/.trashcan/test/dir1
        0x71379ee0c40a49dbb3ed9f3145ed409a<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node3<br>
        brick1/mvol1/.trashcan/test/dir1
        0x71379ee0c40a49dbb3ed9f3145ed409a<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node4<br>
        brick1/mvol1/.trashcan/test/dir1
        0x71379ee0c40a49dbb3ed9f3145ed409a<br>
        <br>
        -----------------------------------------------------<br>
        tron@dp-server:~/central$ ./mycommand.sh -H master -c "cat
        /root/tmp/get_dir_gfid.out | grep aa761834caf1"<br>
        <br>
        Host : gl-node1<br>
        brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node2<br>
        brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node3<br>
        brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1<br>
        <br>
        -----------------------------------------------------<br>
        Host : gl-node4<br>
        brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1<br>
        <br>
        -----------------------------------------------------<br>
        tron@dp-server:~/central$</tt></p>
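    <p>After this failure the geo-replication on gl-node1 and gl-node2 goes
      faulty. For completeness, these are the commands I use to check and
      restart the session (as described in my mail below, a stop/start did not
      clear the error) :</p>
    <p><tt>
        gluster volume geo-replication mvol1 gl-node5-int::mvol1 status<br>
        gluster volume geo-replication mvol1 gl-node5-int::mvol1 stop<br>
        gluster volume geo-replication mvol1 gl-node5-int::mvol1 start<br>
      </tt></p>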
    <p><br>
    </p>
    <br>
    <div class="moz-cite-prefix">Am 13.03.2018 um 10:13 schrieb Dietmar
      Putz:<br>
    </div>
    <blockquote type="cite"
      cite="mid:1e13d30d-0998-074d-0c28-a2a8bb6ce60b@3qsdn.com">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <p>Hi Kotresh,</p>
      Thanks for your response...<br>
      Answers inline...<br>
      <br>
      best regards<br>
      Dietmar<br>
      <br>
      <br>
      <div class="moz-cite-prefix">Am 13.03.2018 um 06:38 schrieb
        Kotresh Hiremath Ravishankar:<br>
      </div>
      <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
        <div dir="ltr">
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div>
                      <div>
                        <div>
                          <div>
                            <div>
                              <div>Hi Dietmar,<br>
                                <br>
                              </div>
                              I am trying to understand the problem and
                              have a few questions.<br>
                              <br>
                            </div>
                            1. Is the trashcan enabled only on the master
                            volume?<br>
                          </div>
                        </div>
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </blockquote>
      No, the trashcan is also enabled on the slave. The settings are the same
      as on the master, but the trashcan on the slave is completely empty.<br>
      <tt>root@gl-node5:~# gluster volume get mvol1 all | grep -i trash</tt><tt><br>
      </tt><tt>features.trash                         
        on                                      </tt><tt><br>
      </tt><tt>features.trash-dir                     
        .trashcan                               </tt><tt><br>
      </tt><tt>features.trash-eliminate-path          
        (null)                                  </tt><tt><br>
      </tt><tt>features.trash-max-filesize            
        2GB                                     </tt><tt><br>
      </tt><tt>features.trash-internal-op             
        off                                     </tt><tt><br>
      </tt><tt>root@gl-node5:~# </tt><br>
      <br>
      <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
        <div dir="ltr">
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div>
                      <div>
                        <div>2. Was the 'rm -rf' done on the master volume
                          synced to the slave ?<br>
                        </div>
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </blockquote>
      Yes, the entire content of ~/test1/b1/* has been removed on the slave.<br>
      <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
        <div dir="ltr">
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div>
                      <div>3. If the trashcan is disabled, does the issue go
                        away?<br>
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </blockquote>
      <br>
      After disabling features.trash on master and slave the issue
      remains...stopping and restarting the master/slave volumes and the
      geo-replication has no effect.<br>
      <tt>root@gl-node1:~# gluster volume geo-replication mvol1
        gl-node5-int::mvol1 status</tt><tt><br>
      </tt><tt> </tt><tt><br>
      </tt><tt>MASTER NODE     MASTER VOL    MASTER BRICK     SLAVE
        USER    SLAVE                  SLAVE NODE      STATUS     CRAWL
        STATUS       LAST_SYNCED                  </tt><tt><br>
      </tt><tt>----------------------------------------------------------------------------------------------------------------------------------------------------</tt><tt><br>
      </tt><tt>gl-node1-int    mvol1         /brick1/mvol1   
        root          gl-node5-int::mvol1    N/A             Faulty    
        N/A                N/A                          </tt><tt><br>
      </tt><tt>gl-node3-int    mvol1         /brick1/mvol1   
        root          gl-node5-int::mvol1    gl-node7-int    Passive   
        N/A                N/A                          </tt><tt><br>
      </tt><tt>gl-node2-int    mvol1         /brick1/mvol1   
        root          gl-node5-int::mvol1    N/A             Faulty    
        N/A                N/A                          </tt><tt><br>
      </tt><tt>gl-node4-int    mvol1         /brick1/mvol1   
        root          gl-node5-int::mvol1    gl-node8-int    Active    
        Changelog Crawl    2018-03-12 13:56:28          </tt><tt><br>
      </tt><tt>root@gl-node1:~#</tt><br>
      <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
        <div dir="ltr">
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div><br>
                    </div>
                    The geo-rep error just says that it failed to create
                    the directory "Oracle_VM_VirtualBox_Extension"
                    on the slave.<br>
                    Usually this would be because of a gfid mismatch, but I
                    don't see that in your case. So I am a little more
                    interested<br>
                  </div>
                  in the present state of the geo-rep. Is it still throwing the
                  same errors and the same failure to sync the same
                  directory? If<br>
                </div>
                so, does the parent 'test1/b1' exist on the slave?<br>
              </div>
            </div>
          </div>
        </div>
      </blockquote>
      It is still throwing the same error as shown below.<br>
      The directory 'test1/b1' is empty as expected and exists on master
      and slave.<br>
      <br>
      <br>
      <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
        <div dir="ltr">
          <div>
            <div>
              <div><br>
              </div>
              And doing an ls on the trashcan should not affect geo-rep. Is
              there an easy reproducer for this ?<br>
            </div>
          </div>
        </div>
      </blockquote>
      I have run several tests on 3.10.11 and 3.12.6 and I'm pretty
      sure there was one without the trashcan feature activated on the
      slave...with the same / similar problems.<br>
      I will come back with a more comprehensive and reproducible
      description of the issue...<br>
      <br>
      <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
        <div dir="ltr">
          <div>
            <div><br>
              <br>
            </div>
            Thanks,<br>
          </div>
          Kotresh HR<br>
        </div>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Mon, Mar 12, 2018 at 10:13 PM,
            Dietmar Putz <span dir="ltr">&lt;<a
                href="mailto:dietmar.putz@3qsdn.com" target="_blank"
                moz-do-not-send="true">dietmar.putz@3qsdn.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
              <br>
              In regard to<br>
              <a
                href="https://bugzilla.redhat.com/show_bug.cgi?id=1434066"
                rel="noreferrer" target="_blank" moz-do-not-send="true">https://bugzilla.redhat.com/show_bug.cgi?id=1434066</a><br>
              I have run into another issue when using the trashcan
              feature on a distributed-replicated volume running geo-replication
              (gfs 3.12.6 on ubuntu 16.04.4),<br>
              e.g. when removing an entire directory with subfolders :<br>
              tron@gl-node1:/myvol-1/test1/b1$ rm -rf *<br>
              <br>
              afterwards listing files in the trashcan :<br>
              tron@gl-node1:/myvol-1/test1$ ls -la
              /myvol-1/.trashcan/test1/b1/<br>
              <br>
              leads to an outage of the geo-replication.<br>
              error on master-01 and master-02 :<br>
              <br>
              [2018-03-12 13:37:14.827204] I
              [master(/brick1/mvol1):1385:crawl] _GMaster: slave's
              time stime=(1520861818, 0)<br>
              [2018-03-12 13:37:14.835535] E
              [master(/brick1/mvol1):784:log_failures] _GMaster:
              ENTRY FAILED    data=({'uid': 0, 'gfid':
              'c38f75e3-194a-4d22-9094-50ac8f8756e7', 'gid': 0,
              'mode': 16877, 'entry': '.gfid/5531bd64-ac50-462b-943e-c0bf1c52f52c/Oracle_VM_VirtualBox_Extension',
              'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})<br>
              [2018-03-12 13:37:14.835911] E
              [syncdutils(/brick1/mvol1):299:log_raise_exception]
              &lt;top&gt;: The above directory failed to sync. Please
              fix it to proceed further.<br>
              <br>
              <br>
              both gfids of the directories as shown in the log :<br>
              brick1/mvol1/.trashcan/test1/b1
              0x5531bd64ac50462b943ec0bf1c52f52c<br>
              brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension
              0xc38f75e3194a4d22909450ac8f8756e7<br>
              <br>
              the directory shown contains just one file, which is stored
              on gl-node3 and gl-node4, while node1 and node2 are in
              geo-replication error.<br>
              since the file-size limitation of the trashcan is obsolete,
              I'm really interested in using the trashcan feature, but I'm
              concerned it will interrupt the geo-replication entirely.<br>
              has anybody else been faced with this
              situation...any hints, workarounds...?<br>
              <br>
              best regards<br>
              Dietmar Putz<br>
              <br>
              <br>
              root@gl-node1:~/tmp# gluster volume info mvol1<br>
              <br>
              Volume Name: mvol1<br>
              Type: Distributed-Replicate<br>
              Volume ID: a1c74931-568c-4f40-8573-dd3445<wbr>53e557<br>
              Status: Started<br>
              Snapshot Count: 0<br>
              Number of Bricks: 2 x 2 = 4<br>
              Transport-type: tcp<br>
              Bricks:<br>
              Brick1: gl-node1-int:/brick1/mvol1<br>
              Brick2: gl-node2-int:/brick1/mvol1<br>
              Brick3: gl-node3-int:/brick1/mvol1<br>
              Brick4: gl-node4-int:/brick1/mvol1<br>
              Options Reconfigured:<br>
              changelog.changelog: on<br>
              geo-replication.ignore-pid-che<wbr>ck: on<br>
              geo-replication.indexing: on<br>
              features.trash-max-filesize: 2GB<br>
              features.trash: on<br>
              transport.address-family: inet<br>
              nfs.disable: on<br>
              performance.client-io-threads: off<br>
              <br>
              root@gl-node1:/myvol-1/test1# gluster volume
              geo-replication mvol1 gl-node5-int::mvol1 config<br>
              special_sync_mode: partial<br>
              gluster_log_file: /var/log/glusterfs/geo-replica<wbr>tion/mvol1/ssh%3A%2F%2Froot%<wbr>40192.168.178.65%3Agluster%3A%<wbr>2F%2F127.0.0.1%3Amvol1.<wbr>gluster.log<br>
              ssh_command: ssh -oPasswordAuthentication=no
              -oStrictHostKeyChecking=no -i
              /var/lib/glusterd/geo-replicat<wbr>ion/secret.pem<br>
              change_detector: changelog<br>
              use_meta_volume: true<br>
              session_owner: a1c74931-568c-4f40-8573-dd3445<wbr>53e557<br>
              state_file: /var/lib/glusterd/geo-replicat<wbr>ion/mvol1_gl-node5-int_mvol1/<wbr>monitor.status<br>
              gluster_params: aux-gfid-mount acl<br>
              remote_gsyncd: /nonexistent/gsyncd<br>
              working_dir: /var/lib/misc/glusterfsd/mvol1<wbr>/ssh%3A%2F%2Froot%40192.168.<wbr>178.65%3Agluster%3A%2F%2F127.<wbr>0.0.1%3Amvol1<br>
              state_detail_file: /var/lib/glusterd/geo-replicat<wbr>ion/mvol1_gl-node5-int_mvol1/<wbr>ssh%3A%2F%2Froot%40192.168.<wbr>178.65%3Agluster%3A%2F%2F127.<wbr>0.0.1%3Amvol1-detail.status<br>
              gluster_command_dir: /usr/sbin/<br>
              pid_file: /var/lib/glusterd/geo-replicat<wbr>ion/mvol1_gl-node5-int_mvol1/<wbr>monitor.pid<br>
              georep_session_working_dir: /var/lib/glusterd/geo-replicat<wbr>ion/mvol1_gl-node5-int_mvol1/<br>
              ssh_command_tar: ssh -oPasswordAuthentication=no
              -oStrictHostKeyChecking=no -i
              /var/lib/glusterd/geo-replicat<wbr>ion/tar_ssh.pem<br>
              master.stime_xattr_name: trusted.glusterfs.a1c74931-568<wbr>c-4f40-8573-dd344553e557.d62bd<wbr>a3a-1396-492a-ad99-7c6238d93c6<wbr>a.stime<br>
              changelog_log_file: /var/log/glusterfs/geo-replica<wbr>tion/mvol1/ssh%3A%2F%2Froot%<wbr>40192.168.178.65%3Agluster%3A%<wbr>2F%2F127.0.0.1%3Amvol1-<wbr>changes.log<br>
              socketdir: /var/run/gluster<br>
              volume_id: a1c74931-568c-4f40-8573-dd3445<wbr>53e557<br>
              ignore_deletes: false<br>
              state_socket_unencoded: /var/lib/glusterd/geo-replicat<wbr>ion/mvol1_gl-node5-int_mvol1/<wbr>ssh%3A%2F%2Froot%40192.168.<wbr>178.65%3Agluster%3A%2F%2F127.<wbr>0.0.1%3Amvol1.socket<br>
              log_file: /var/log/glusterfs/geo-replica<wbr>tion/mvol1/ssh%3A%2F%2Froot%<wbr>40192.168.178.65%3Agluster%3A%<wbr>2F%2F127.0.0.1%3Amvol1.log<br>
              access_mount: true<br>
              root@gl-node1:/myvol-1/test1#<span class="HOEnZb"><font
                  color="#888888"><br>
                  <br>
                  -- <br>
                  <br>
                  ______________________________<wbr>_________________<br>
                  Gluster-users mailing list<br>
                  <a href="mailto:Gluster-users@gluster.org"
                    target="_blank" moz-do-not-send="true">Gluster-users@gluster.org</a><br>
                  <a
                    href="http://lists.gluster.org/mailman/listinfo/gluster-users"
                    rel="noreferrer" target="_blank"
                    moz-do-not-send="true">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a></font></span></blockquote>
          </div>
          <br>
          <br clear="all">
          <br>
          -- <br>
          <div class="gmail_signature" data-smartmail="gmail_signature">
            <div dir="ltr">
              <div>Thanks and Regards,<br>
              </div>
              Kotresh H R<br>
            </div>
          </div>
        </div>
      </blockquote>
      <br>
      <pre class="moz-signature" cols="72">-- 
Dietmar Putz
3Q GmbH
Kurfürstendamm 102
D-10711 Berlin
 
Mobile:   +49 171 / 90 160 39
Mail:     <a class="moz-txt-link-abbreviated" href="mailto:dietmar.putz@3qsdn.com" moz-do-not-send="true">dietmar.putz@3qsdn.com</a></pre>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
Dietmar Putz
3Q GmbH
Kurfürstendamm 102
D-10711 Berlin
 
Mobile:   +49 171 / 90 160 39
Mail:     <a class="moz-txt-link-abbreviated" href="mailto:dietmar.putz@3qsdn.com">dietmar.putz@3qsdn.com</a></pre>
  </body>
</html>