<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">On 06/30/2017 02:03 AM, Alastair Neil
      wrote:<br>
    </div>
    <blockquote
cite="mid:CA+SarwrLYcR+E38xEoe8qdDv41rEU=YeG8nXbMgR60Ox3+_Luw@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div>
          <div>Gluster 3.10.2<br>
            <br>
            I have a replica 3 (2+1) volume and I have just seen both
            data bricks go down (the arbiter stayed up). I had to disable
            the trash feature to get the bricks to start. I had a quick
            look in Bugzilla but did not see anything that looked
            similar. I just wanted to check that I was not hitting
            some known issue and/or doing something stupid, before I open
            a bug. This is from the brick log:<br>
          </div>
        </div>
      </div>
    </blockquote>
    I don't think we have any known issues. Do you have a core file?
    Attach it to the BZ along with the brick and client logs, and the
    steps for a reproducer if you have one. <br>
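    In case it helps, a minimal sketch of collecting that information
    before attaching it to the BZ (the binary and core paths below are
    assumptions; check /proc/sys/kernel/core_pattern or coredumpctl on
    your distribution to find where cores actually land):<br>

```shell
# The workaround mentioned above: turn the trash translator off on the
# affected volume ("homes" here, per the volume info below).
gluster volume set homes features.trash off

# Dump a full backtrace of every thread from the core file for the BZ.
# /usr/sbin/glusterfsd and /path/to/core are assumptions; substitute the
# brick process binary and the actual core location on your system.
gdb /usr/sbin/glusterfsd /path/to/core \
    -batch -ex 'thread apply all bt full' > backtrace.txt
```

    Attaching backtrace.txt alongside the raw core is useful because the
    core alone needs matching debuginfo packages to be readable.<br>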
    -Ravi<br>
    <blockquote
cite="mid:CA+SarwrLYcR+E38xEoe8qdDv41rEU=YeG8nXbMgR60Ox3+_Luw@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div>
          <div><br>
          </div>
          <blockquote class="gmail_quote" style="margin:0px 0px 0px
            0.8ex;border-left:1px solid
            rgb(204,204,204);padding-left:1ex">[2017-06-28
            17:38:43.565378] E [posix.c:3327:_fill_writev_xdata]
            (--&gt;/usr/lib64/glusterfs/3.10.2/xlator/features/trash.so(+0x2bd3)
            [0x7ff81964ebd3]
            --&gt;/usr/lib64/glusterfs/3.10.2/xlator/storage/posix.so(+0x1e546)
            [0x7ff819e96546]
            --&gt;/usr/lib64/glusterfs/3.10.2/xlator/storage/posix.so(+0x1e2ff)
            [0x7ff819e962ff] <br>
            ) 0-homes-posix: fd: 0x7ff7b4121bf0 inode:
            0x7ff7b41222b0gfid:00000000-0000-0000-0000-000000000000
            [Invalid argument]<br>
            pending frames:<br>
            frame : type(0) op(24)<br>
            patchset: git://<a moz-do-not-send="true"
              href="http://git.gluster.org/glusterfs.git">git.gluster.org/glusterfs.git</a><br>
            signal received: 11<br>
            time of crash: <br>
            2017-06-28 17:38:49<br>
            configuration details:<br>
            argp 1<br>
            backtrace 1<br>
            dlfcn 1 </blockquote>
          <blockquote class="gmail_quote" style="margin:0px 0px 0px
            0.8ex;border-left:1px solid
            rgb(204,204,204);padding-left:1ex">libpthread 1<br>
            llistxattr 1<br>
            setfsid 1<br>
            spinlock 1<br>
            epoll.h 1<br>
            xattr.h 1<br>
            st_atim.tv_nsec 1<br>
            package-string: glusterfs 3.10.2<br>
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xa0)[0x7ff8274ed4d0]<br>
/lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7ff8274f6dd4]<br>
            /lib64/libc.so.6(+0x35250)[0x7ff825bd1250]<br>
            /lib64/libc.so.6(+0x163ea1)[0x7ff825cffea1]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/trash.so(+0x11c29)[0x7ff81965dc29]<br>
/usr/lib64/glusterfs/3.10.2/xlator/storage/posix.so(+0x7d5a)[0x7ff819e7fd5a]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/trash.so(+0x13676)[0x7ff81965f676]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/changetimerecorder.so(+0x810d)[0x7ff81943510d]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/changelog.so(+0xbf40)[0x7ff818d4ff40]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/bitrot-stub.so(+0xeafd)[0x7ff818924afd]<br>
/lib64/libglusterfs.so.0(default_ftruncate+0xc8)[0x7ff827568ec8]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/locks.so(+0x182a5)[0x7ff8184ea2a5]<br>
/usr/lib64/glusterfs/3.10.2/xlator/storage/posix.so(+0x7d5a)[0x7ff819e7fd5a]<br>
            /lib64/libglusterfs.so.0(default_fstat+0xbe)[0x7ff82756848e]<br>
            /lib64/libglusterfs.so.0(default_fstat+0xbe)[0x7ff82756848e]<br>
            /lib64/libglusterfs.so.0(default_fstat+0xbe)[0x7ff82756848e]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/bitrot-stub.so(+0x9f4f)[0x7ff81891ff4f]<br>
            /lib64/libglusterfs.so.0(default_fstat+0xbe)[0x7ff82756848e]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/locks.so(+0x7d8a)[0x7ff8184d9d8a]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/worm.so(+0x898e)[0x7ff8182cc98e]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/read-only.so(+0x2ca3)[0x7ff8180beca3]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/leases.so(+0xad5f)[0x7ff813df5d5f]<br>
/usr/lib64/glusterfs/3.10.2/xlator/features/upcall.so(+0x13209)[0x7ff813be3209]<br>
/lib64/libglusterfs.so.0(default_ftruncate_resume+0x1b7)[0x7ff827585d77]<br>
            /lib64/libglusterfs.so.0(call_resume+0x75)[0x7ff827511115]<br>
/usr/lib64/glusterfs/3.10.2/xlator/performance/io-threads.so(+0x4dd4)[0x7ff8139c9dd4]<br>
            /lib64/libpthread.so.0(+0x7dc5)[0x7ff82634edc5]<br>
            /lib64/libc.so.6(clone+0x6d)[0x7ff825c9376d]<br>
          </blockquote>
          <br>
        </div>
        <div>output from gluster volume info | sort :<br>
          <br>
        </div>
        <div>
          <blockquote class="gmail_quote" style="margin:0px 0px 0px
            0.8ex;border-left:1px solid
            rgb(204,204,204);padding-left:1ex">auth.allow: 192.168.0.*<br>
            auto-delete: enable<br>
            Brick1: gluster2:/export/brick2/home<br>
            Brick2: gluster1:/export/brick2/home<br>
            Brick3: gluster0:/export/brick9/homes-arbiter (arbiter)<br>
            Bricks:<br>
            client.event-threads: 4<br>
            cluster.background-self-heal-count: 8<br>
            cluster.consistent-metadata: no<br>
            cluster.data-self-heal-algorithm: diff<br>
            cluster.data-self-heal: off<br>
            cluster.eager-lock: on<br>
            cluster.enable-shared-storage: enable<br>
            cluster.entry-self-heal: off<br>
            cluster.heal-timeout: 180<br>
            cluster.lookup-optimize: off<br>
            cluster.metadata-self-heal: off<br>
            cluster.min-free-disk: 5%<br>
            cluster.quorum-type: auto<br>
            cluster.readdir-optimize: on<br>
            cluster.read-hash-mode: 2<br>
            cluster.rebalance-stats: on<br>
            cluster.self-heal-daemon: on<br>
            cluster.self-heal-readdir-size: 64KB<br>
            cluster.self-heal-window-size: 4<br>
            cluster.server-quorum-ratio: 51%<br>
            diagnostics.brick-log-level: WARNING<br>
            diagnostics.client-log-level: ERROR<br>
            diagnostics.count-fop-hits: on<br>
            diagnostics.latency-measurement: off<br>
            features.barrier: disable<br>
            features.quota: off<br>
            features.show-snapshot-directory: enable<br>
            features.trash-internal-op: off<br>
            features.trash-max-filesize: 1GB<br>
            features.trash: off<br>
            features.uss: off<br>
            network.ping-timeout: 20<br>
            nfs.disable: on<br>
            nfs.export-dirs: on<br>
            nfs.export-volumes: on<br>
            nfs.rpc-auth-allow: 192.168.0.*<br>
            Number of Bricks: 1 x (2 + 1) = 3<br>
            Options Reconfigured:<br>
            performance.cache-size: 256MB<br>
            performance.client-io-threads: on<br>
            performance.io-thread-count: 16<br>
            performance.strict-write-ordering: off<br>
            performance.write-behind: off<br>
            server.allow-insecure: on<br>
            server.event-threads: 8<br>
            server.root-squash: off<br>
            server.statedump-path: /tmp<br>
            snap-activate-on-create: enable<br>
            Snapshot Count: 0<br>
            Status: Started<br>
            storage.linux-aio: off<br>
            transport.address-family: inet<br>
            Transport-type: tcp<br>
            Type: Replicate<br>
            user.cifs: disable<br>
            Volume ID: c1fbadcf-94bd-46d8-8186-f0dc4a197fb5<br>
            Volume Name: homes<br>
          </blockquote>
          <br>
          <br>
          <br>
        </div>
        -Regards,  Alastair<br>
        <br>
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users">http://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
    </blockquote>
  </body>
</html>