<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Hi Kotresh,</p>
    thanks for your response...<br>
    answers inline...<br>
    <br>
    best regards<br>
    Dietmar<br>
    <br>
    <br>
    <div class="moz-cite-prefix">Am 13.03.2018 um 06:38 schrieb Kotresh
      Hiremath Ravishankar:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
      <div dir="ltr">
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div>
                      <div>
                        <div>
                          <div>
                            <div>Hi Dietmar,<br>
                              <br>
                            </div>
                            I am trying to understand the problem and
                            have a few questions.<br>
                            <br>
                          </div>
                          1. Is the trashcan enabled only on the master volume?<br>
                        </div>
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    No, the trashcan is also enabled on the slave. The settings are the
    same as on the master, but the trashcan on the slave is completely
    empty.<br>
    <tt>root@gl-node5:~# gluster volume get mvol1 all | grep -i trash</tt><tt><br>
    </tt><tt>features.trash                         
      on                                      </tt><tt><br>
    </tt><tt>features.trash-dir                     
      .trashcan                               </tt><tt><br>
    </tt><tt>features.trash-eliminate-path          
      (null)                                  </tt><tt><br>
    </tt><tt>features.trash-max-filesize            
      2GB                                     </tt><tt><br>
    </tt><tt>features.trash-internal-op             
      off                                     </tt><tt><br>
    </tt><tt>root@gl-node5:~# </tt><br>
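    <br>
    For reference, a minimal sketch of how the empty trashcan on the slave
    can be checked directly on a slave brick (the brick path /brick1/mvol1
    is an assumption taken from the master's layout, not from the slave's
    volume info):<br>
    <tt># on a slave node, e.g. gl-node5 (brick path assumed)</tt><br>
    <tt>find /brick1/mvol1/.trashcan -mindepth 1 | head</tt><br>
    <tt># no output means the trashcan holds no entries on that brick</tt><br>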
    <br>
    <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
      <div dir="ltr">
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div>
                      <div>2. Did the 'rm -rf' done on the master volume
                        get synced to the slave?<br>
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    Yes, the entire content of ~/test1/b1/* on the slave has been removed.<br>
    <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
      <div dir="ltr">
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div>3. If the trashcan is disabled, does the issue
                      go away?<br>
                    </div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    After disabling features.trash on master and slave, the issue
    remains... stopping and restarting the master/slave volumes and the
    geo-replication session has no effect.<br>
    <tt>root@gl-node1:~# gluster volume geo-replication mvol1
      gl-node5-int::mvol1 status</tt><tt><br>
    </tt><tt> </tt><tt><br>
    </tt><tt>MASTER NODE     MASTER VOL    MASTER BRICK     SLAVE
      USER    SLAVE                  SLAVE NODE      STATUS     CRAWL
      STATUS       LAST_SYNCED                  </tt><tt><br>
    </tt><tt>----------------------------------------------------------------------------------------------------------------------------------------------------</tt><tt><br>
    </tt><tt>gl-node1-int    mvol1         /brick1/mvol1   
      root          gl-node5-int::mvol1    N/A             Faulty    
      N/A                N/A                          </tt><tt><br>
    </tt><tt>gl-node3-int    mvol1         /brick1/mvol1   
      root          gl-node5-int::mvol1    gl-node7-int    Passive   
      N/A                N/A                          </tt><tt><br>
    </tt><tt>gl-node2-int    mvol1         /brick1/mvol1   
      root          gl-node5-int::mvol1    N/A             Faulty    
      N/A                N/A                          </tt><tt><br>
    </tt><tt>gl-node4-int    mvol1         /brick1/mvol1   
      root          gl-node5-int::mvol1    gl-node8-int    Active    
      Changelog Crawl    2018-03-12 13:56:28          </tt><tt><br>
    </tt><tt>root@gl-node1:~#</tt><br>
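    <br>
    For completeness, a rough sketch of the disable/restart sequence that
    was tried (standard gluster CLI commands, not a verbatim transcript of
    my session):<br>
    <tt># disable the trashcan on the master volume and on the slave volume</tt><br>
    <tt>gluster volume set mvol1 features.trash off    # on a master node</tt><br>
    <tt>gluster volume set mvol1 features.trash off    # on a slave node</tt><br>
    <tt># (stopping/starting the mvol1 volume on master and slave was tried as well)</tt><br>
    <tt># restart the geo-replication session</tt><br>
    <tt>gluster volume geo-replication mvol1 gl-node5-int::mvol1 stop</tt><br>
    <tt>gluster volume geo-replication mvol1 gl-node5-int::mvol1 start</tt><br>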
    <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
      <div dir="ltr">
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div><br>
                  </div>
                  The geo-rep error just says that it failed to create
                  the directory "Oracle_VM_VirtualBox_Extension" on the
                  slave.<br>
                  Usually this would be because of a gfid mismatch, but I
                  don't see that in your case. So I am a little more
                  interested<br>
                </div>
                in the present state of the geo-rep. Is it still throwing
                the same errors and the same failure to sync the same
                directory? If<br>
              </div>
              so, does the parent 'test1/b1' exist on the slave?<br>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    It is still throwing the same error as shown below.<br>
    The directory 'test1/b1' is empty as expected and exists on both
    master and slave.<br>
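    <br>
    To rule out a gfid mismatch on the parent, a small sketch of the check
    I would run on the bricks (the slave brick path is an assumption, taken
    from the master's layout):<br>
    <tt># on a master brick</tt><br>
    <tt>getfattr -n trusted.gfid -e hex /brick1/mvol1/test1/b1</tt><br>
    <tt># on a slave brick (path assumed to mirror the master layout)</tt><br>
    <tt>getfattr -n trusted.gfid -e hex /brick1/mvol1/test1/b1</tt><br>
    <tt># matching hex values mean the parent gfids agree on master and slave</tt><br>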
    <br>
    <br>
    <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
      <div dir="ltr">
        <div>
          <div>
            <div><br>
            </div>
            And doing an ls on the trashcan should not affect geo-rep. Is
            there an easy reproducer for this?<br>
          </div>
        </div>
      </div>
    </blockquote>
    I have made several tests on 3.10.11 and 3.12.6, and I'm pretty sure
    there was one without the trashcan feature activated on the
    slave... with the same / similar problems.<br>
    I will come back with a more comprehensive and reproducible
    description of that issue...<br>
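    <br>
    For now, the reproducer as I understand it boils down to the following
    outline (paths from my test setup; treat it as a sketch until I have
    verified it step by step):<br>
    <tt># 1. remove a directory tree on the master FUSE mount</tt><br>
    <tt>cd /myvol-1/test1/b1 &amp;&amp; rm -rf *</tt><br>
    <tt># 2. list the removed entries in the trashcan</tt><br>
    <tt>ls -la /myvol-1/.trashcan/test1/b1/</tt><br>
    <tt># 3. watch the geo-replication session go Faulty on the affected masters</tt><br>
    <tt>gluster volume geo-replication mvol1 gl-node5-int::mvol1 status</tt><br>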
    <br>
    <blockquote type="cite"
cite="mid:CAPgWtC6hBXEd+LcggS_yYB9JLoOSidjyMyunb-TWJJebUcTsAQ@mail.gmail.com">
      <div dir="ltr">
        <div>
          <div><br>
            <br>
          </div>
          Thanks,<br>
        </div>
        Kotresh HR<br>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Mon, Mar 12, 2018 at 10:13 PM,
          Dietmar Putz <span dir="ltr">&lt;<a
              href="mailto:dietmar.putz@3qsdn.com" target="_blank"
              moz-do-not-send="true">dietmar.putz@3qsdn.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
            <br>
            In regard to<br>
            <a
              href="https://bugzilla.redhat.com/show_bug.cgi?id=1434066"
              rel="noreferrer" target="_blank" moz-do-not-send="true">https://bugzilla.redhat.com/sh<wbr>ow_bug.cgi?id=1434066</a><br>
            I have been faced with another issue when using the trashcan
            feature on a dist. repl. volume running geo-replication
            (gfs 3.12.6 on ubuntu 16.04.4).<br>
            E.g. removing an entire directory with subfolders:<br>
            tron@gl-node1:/myvol-1/test1/b1$ rm -rf *<br>
            <br>
            Afterwards, listing the files in the trashcan:<br>
            tron@gl-node1:/myvol-1/test1$ ls -la
            /myvol-1/.trashcan/test1/b1/<br>
            <br>
            leads to an outage of the geo-replication.<br>
            Error on master-01 and master-02:<br>
            <br>
            [2018-03-12 13:37:14.827204] I
            [master(/brick1/mvol1):1385:crawl] _GMaster: slave's
            time stime=(1520861818, 0)<br>
            [2018-03-12 13:37:14.835535] E
            [master(/brick1/mvol1):784:log_failures] _GMaster:
            ENTRY FAILED    data=({'uid': 0, 'gfid':
            'c38f75e3-194a-4d22-9094-50ac8f8756e7', 'gid': 0,
            'mode': 16877, 'entry': '.gfid/5531bd64-ac50-462b-943e-c0bf1c52f52c/Oracle_VM_VirtualBox_Extension',
            'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})<br>
            [2018-03-12 13:37:14.835911] E
            [syncdutils(/brick1/mvol1):299:log_raise_exception]
            &lt;top&gt;: The above directory failed to sync. Please fix
            it to proceed further.<br>
            <br>
            <br>
            Both gfids of the directories as shown in the log:<br>
            brick1/mvol1/.trashcan/test1/b1
            0x5531bd64ac50462b943ec0bf1c52f52c<br>
            brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension
            0xc38f75e3194a4d22909450ac8f8756e7<br>
            <br>
            The directory shown contains just one file, which is stored
            on gl-node3 and gl-node4, while node1 and node2 are in
            geo-replication error.<br>
            Since the filesize limitation of the trashcan is obsolete,
            I'm really interested in using the trashcan feature, but I'm
            concerned it will interrupt the geo-replication entirely.<br>
            Has anybody else been faced with this situation... any
            hints, workarounds?<br>
            <br>
            best regards<br>
            Dietmar Putz<br>
            <br>
            <br>
            root@gl-node1:~/tmp# gluster volume info mvol1<br>
            <br>
            Volume Name: mvol1<br>
            Type: Distributed-Replicate<br>
            Volume ID: a1c74931-568c-4f40-8573-dd344553e557<br>
            Status: Started<br>
            Snapshot Count: 0<br>
            Number of Bricks: 2 x 2 = 4<br>
            Transport-type: tcp<br>
            Bricks:<br>
            Brick1: gl-node1-int:/brick1/mvol1<br>
            Brick2: gl-node2-int:/brick1/mvol1<br>
            Brick3: gl-node3-int:/brick1/mvol1<br>
            Brick4: gl-node4-int:/brick1/mvol1<br>
            Options Reconfigured:<br>
            changelog.changelog: on<br>
            geo-replication.ignore-pid-check: on<br>
            geo-replication.indexing: on<br>
            features.trash-max-filesize: 2GB<br>
            features.trash: on<br>
            transport.address-family: inet<br>
            nfs.disable: on<br>
            performance.client-io-threads: off<br>
            <br>
            root@gl-node1:/myvol-1/test1# gluster volume geo-replication
            mvol1 gl-node5-int::mvol1 config<br>
            special_sync_mode: partial<br>
            gluster_log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.gluster.log<br>
            ssh_command: ssh -oPasswordAuthentication=no
            -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem<br>
            change_detector: changelog<br>
            use_meta_volume: true<br>
            session_owner: a1c74931-568c-4f40-8573-dd344553e557<br>
            state_file: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.status<br>
            gluster_params: aux-gfid-mount acl<br>
            remote_gsyncd: /nonexistent/gsyncd<br>
            working_dir: /var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1<br>
            state_detail_file: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-detail.status<br>
            gluster_command_dir: /usr/sbin/<br>
            pid_file: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.pid<br>
            georep_session_working_dir: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/<br>
            ssh_command_tar: ssh -oPasswordAuthentication=no
            -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem<br>
            master.stime_xattr_name: trusted.glusterfs.a1c74931-568c-4f40-8573-dd344553e557.d62bda3a-1396-492a-ad99-7c6238d93c6a.stime<br>
            changelog_log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-changes.log<br>
            socketdir: /var/run/gluster<br>
            volume_id: a1c74931-568c-4f40-8573-dd344553e557<br>
            ignore_deletes: false<br>
            state_socket_unencoded: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.socket<br>
            log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.log<br>
            access_mount: true<br>
            root@gl-node1:/myvol-1/test1#</blockquote>
        </div>
        <br>
        <br clear="all">
        <br>
        -- <br>
        <div class="gmail_signature" data-smartmail="gmail_signature">
          <div dir="ltr">
            <div>Thanks and Regards,<br>
            </div>
            Kotresh H R<br>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
Dietmar Putz
3Q GmbH
Kurfürstendamm 102
D-10711 Berlin
 
Mobile:   +49 171 / 90 160 39
Mail:     <a class="moz-txt-link-abbreviated" href="mailto:dietmar.putz@3qsdn.com">dietmar.putz@3qsdn.com</a></pre>
  </body>
</html>