    <div class="moz-cite-prefix">28.08.2018 10:43, Amar Tumballi пишет:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAHxyDdNPvBW-tkK0y-M63Tv6PRViw1Qk1_Kh31V=WCPjByetEw@mail.gmail.com">
      <div dir="ltr"><br>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Tue, Aug 28, 2018 at 11:24 AM,
            Dmitry Melekhov <span dir="ltr">&lt;<a
                href="mailto:dm@belkam.com" target="_blank"
                moz-do-not-send="true">dm@belkam.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote">Hello!<br>
              <br>
              <br>
              Yesterday we hit something like this on 4.1.2<br>
              <br>
              Centos 7.5.<br>
              <br>
              <br>
              Volume is replicated - two bricks and one arbiter.<br>
              <br>
              <br>
              We rebooted arbiter, waited for heal end,  and tried to
              live migrate VM to another node ( we run VMs on gluster
              nodes ):<br>
              <br>
              <br>
>> [2018-08-27 09:56:22.085411] I [MSGID: 115029] [server-handshake.c:763:server_setvolume] 0-pool-server: accepted client from CTX_ID:b55f4a90-e241-48ce-bd4d-268c8a956f4a-GRAPH_ID:0-PID:8887-HOST:son-PC_NAME:pool-client-6-RECON_NO:-0 (version: 4.1.2)
>> [2018-08-27 09:56:22.107609] I [MSGID: 115036] [server.c:483:server_rpc_notify] 0-pool-server: disconnecting connection from CTX_ID:b55f4a90-e241-48ce-bd4d-268c8a956f4a-GRAPH_ID:0-PID:8887-HOST:son-PC_NAME:pool-client-6-RECON_NO:-0
>> [2018-08-27 09:56:22.107747] I [MSGID: 101055] [client_t.c:444:gf_client_unref] 0-pool-server: Shutting down connection CTX_ID:b55f4a90-e241-48ce-bd4d-268c8a956f4a-GRAPH_ID:0-PID:8887-HOST:son-PC_NAME:pool-client-6-RECON_NO:-0
>> [2018-08-27 09:58:37.905829] I [MSGID: 115036] [server.c:483:server_rpc_notify] 0-pool-server: disconnecting connection from CTX_ID:c3eb6cfc-2ef9-470a-89d1-a87170d00da5-GRAPH_ID:0-PID:30292-HOST:father-PC_NAME:pool-client-6-RECON_NO:-0
>> [2018-08-27 09:58:37.905926] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28c831d8bc550000}
>> [2018-08-27 09:58:37.905959] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2870a7d6bc550000}
>> [2018-08-27 09:58:37.905979] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2880a7d6bc550000}
>> [2018-08-27 09:58:37.905997] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28f031d8bc550000}
>> [2018-08-27 09:58:37.906016] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28b07dd5bc550000}
>> [2018-08-27 09:58:37.906034] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28e0a7d6bc550000}
>> [2018-08-27 09:58:37.906056] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28b845d8bc550000}
>> [2018-08-27 09:58:37.906079] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2858a7d8bc550000}
>> [2018-08-27 09:58:37.906098] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2868a8d7bc550000}
>> [2018-08-27 09:58:37.906121] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28f80bd7bc550000}
>> ...
>> [2018-08-27 09:58:37.907375] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28a8cdd6bc550000}
>> [2018-08-27 09:58:37.907393] W [inodelk.c:610:pl_inodelk_log_cleanup] 0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2880cdd6bc550000}
>> [2018-08-27 09:58:37.907476] I [socket.c:3837:socket_submit_reply] 0-tcp.pool-server: not connected (priv->connected = -1)
>> [2018-08-27 09:58:37.907520] E [rpcsvc.c:1378:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0xcb88cb, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 30) to rpc-transport (tcp.pool-server)
>> [2018-08-27 09:58:37.910727] E [server.c:137:server_submit_reply] (-->/usr/lib64/glusterfs/4.1.2/xlator/debug/io-stats.so(+0x20084) [0x7ffb64379084] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0x605ba) [0x7ffb5fddf5ba] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0xafce) [0x7ffb5fd89fce] ) 0-: Reply submission failed
>> [2018-08-27 09:58:37.910814] E [rpcsvc.c:1378:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0xcb88ce, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 30) to rpc-transport (tcp.pool-server)
>> [2018-08-27 09:58:37.910861] E [server.c:137:server_submit_reply] (-->/usr/lib64/glusterfs/4.1.2/xlator/debug/io-stats.so(+0x20084) [0x7ffb64379084] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0x605ba) [0x7ffb5fddf5ba] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0xafce) [0x7ffb5fd89fce] ) 0-: Reply submission failed
>> [2018-08-27 09:58:37.910904] E [rpcsvc.c:1378:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0xcb88cf, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 30) to rpc-transport (tcp.pool-server)
>> [2018-08-27 09:58:37.910940] E [server.c:137:server_submit_reply] (-->/usr/lib64/glusterfs/4.1.2/xlator/debug/io-stats.so(+0x20084) [0x7ffb64379084] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0x605ba) [0x7ffb5fddf5ba] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0xafce) [0x7ffb5fd89fce] ) 0-: Reply submission failed
>> [2018-08-27 09:58:37.910979] E [rpcsvc.c:1378:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0xcb88d1, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 30) to rpc-transport (tcp.pool-server)
>> [2018-08-27 09:58:37.911012] E [server.c:137:server_submit_reply] (-->/usr/lib64/glusterfs/4.1.2/xlator/debug/io-stats.so(+0x20084) [0x7ffb64379084] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0x605ba) [0x7ffb5fddf5ba] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0xafce) [0x7ffb5fd89fce] ) 0-: Reply submission failed
>> [2018-08-27 09:58:37.911050] E [rpcsvc.c:1378:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0xcb88d8, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 30) to rpc-transport (tcp.pool-server)
>> [2018-08-27 09:58:37.911083] E [server.c:137:server_submit_reply] (-->/usr/lib64/glusterfs/4.1.2/xlator/debug/io-stats.so(+0x20084) [0x7ffb64379084] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0x605ba) [0x7ffb5fddf5ba] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0xafce) [0x7ffb5fd89fce] ) 0-: Reply submission failed
>> [2018-08-27 09:58:37.916217] E [server.c:137:server_submit_reply] (-->/usr/lib64/glusterfs/4.1.2/xlator/debug/io-stats.so(+0x20084) [0x7ffb64379084] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0x605ba) [0x7ffb5fddf5ba] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0xafce) [0x7ffb5fd89fce] ) 0-: Reply submission failed
>> [2018-08-27 09:58:37.916520] I [MSGID: 115013] [server-helpers.c:286:do_fd_cleanup] 0-pool-server: fd cleanup on /balamak.img
>>
>> After this, I/O on /balamak.img was blocked.
>>
>> The only solution we found was to reboot all 3 nodes.
>>
>> Is there a bug report in Bugzilla we can add the logs to?
>
> Not aware of such bugs!
>
            <blockquote class="gmail_quote">
              Is it possible to turn of these locks?<br>
              <br>
            </blockquote>
>
> Not sure, will get back on this one!

btw, found this link:
https://docs.gluster.org/en/v3/Troubleshooting/troubleshooting-filelocks/

tried on another (test) cluster:

[root@marduk ~]# gluster volume statedump pool
Segmentation fault (core dumped)

4.1.2 too...

something is wrong here.
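
If the CLI statedump keeps segfaulting, my understanding is that the same dump can also be triggered by sending SIGUSR1 straight to the brick process (untested here, so treat this as an assumption):

  # find the PID of the brick process for the volume
  gluster volume status pool

  # ask that brick to write a statedump (goes to server.statedump-path,
  # /var/run/gluster by default)
  kill -USR1 <brick-pid>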
    <blockquote type="cite"
cite="mid:CAHxyDdNPvBW-tkK0y-M63Tv6PRViw1Qk1_Kh31V=WCPjByetEw@mail.gmail.com">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div> </div>
            <blockquote class="gmail_quote">
              Thank you!<br>
              <br>
              <br>
              <br>
              <br>
>
> --
> Amar Tumballi (amarts)