<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>
      Hi,<br>
      <br>
      We have a cluster whose common storage is a gluster volume
      consisting of 5 bricks residing on 3 servers.<br>
    </p>
    <ul>
      <li>Gluster volume machines</li>
      <ul>
        <li>mseas-data2:  CentOS release 6.8 (Final)</li>
        <li>mseas-data3:  CentOS release 6.10 (Final)</li>
        <li>mseas-data4:  CentOS Linux release 7.9.2009 (Core)</li>
      </ul>
      <li>Client machines</li>
      <ul>
        <li>CentOS Linux release 7.9.2009 (Core)</li>
      </ul>
    </ul>
    <p>More details on the gluster volume are included below.<br>
      <br>
      We were recently trying to gunzip a file on the gluster volume and
      got a "Transport endpoint is not connected" error, even though
      every test we try shows that gluster is fully up and running fine.
      We traced the file to brick 3 on the server mseas-data3. Below the
      gluster information, we have included the relevant portions of the
      various log files from the client (mseas), where we were running
      the gunzip command, and from the server hosting the file
      (mseas-data3).<br>
      <br>
      What can you suggest we do to further debug and/or solve this
      issue?<br>
      <br>
      Thanks,<br>
      Pat<br>
      <br>
      <font face="monospace">============================================================<br>
        Gluster volume information<br>
        ============================================================<br>
        <br>
        ---------------------------------------------------<br>
        gluster volume info<br>
        -----------------------------------------<br>
         <br>
        Volume Name: data-volume<br>
        Type: Distribute<br>
        Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18<br>
        Status: Started<br>
        Number of Bricks: 5<br>
        Transport-type: tcp<br>
        Bricks:<br>
        Brick1: mseas-data2:/mnt/brick1<br>
        Brick2: mseas-data2:/mnt/brick2<br>
        Brick3: mseas-data3:/export/sda/brick3<br>
        Brick4: mseas-data3:/export/sdc/brick4<br>
        Brick5: mseas-data4:/export/brick5<br>
        Options Reconfigured:<br>
        diagnostics.client-log-level: ERROR<br>
        network.inode-lru-limit: 50000<br>
        performance.md-cache-timeout: 60<br>
        performance.open-behind: off<br>
        disperse.eager-lock: off<br>
        auth.allow: *<br>
        server.allow-insecure: on<br>
        nfs.exports-auth-enable: on<br>
        diagnostics.brick-sys-log-level: WARNING<br>
        performance.readdir-ahead: on<br>
        nfs.disable: on<br>
        nfs.export-volumes: off<br>
        cluster.min-free-disk: 1%<br>
        <br>
        ---------------------------------------------------<br>
        gluster volume status<br>
        --------------------------------------------<br>
         <br>
        Status of volume: data-volume<br>
        Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
        ------------------------------------------------------------------------------<br>
        Brick mseas-data2:/mnt/brick1               49154     0          Y       15978<br>
        Brick mseas-data2:/mnt/brick2               49155     0          Y       15997<br>
        Brick mseas-data3:/export/sda/brick3        49153     0          Y       14221<br>
        Brick mseas-data3:/export/sdc/brick4        49154     0          Y       14240<br>
        Brick mseas-data4:/export/brick5            49152     0          Y       50569<br>
        <br>
        <br>
        ---------------------------------------------------<br>
        gluster peer status<br>
        -----------------------------------------<br>
         <br>
        Number of Peers: 2<br>
        <br>
        Hostname: mseas-data3<br>
        Uuid: b39d4deb-c291-437e-8013-09050c1fa9e3<br>
        State: Peer in Cluster (Connected)<br>
        <br>
        Hostname: mseas-data4<br>
        Uuid: 5c4d06eb-df89-4e5c-92e4-441fb401a9ef<br>
        State: Peer in Cluster (Connected)<br>
        <br>
        ---------------------------------------------------<br>
        glusterfs --version<br>
        --------------------------------------------<br>
         <br>
        glusterfs 3.7.11 built on Apr 18 2016 13:20:46<br>
        Repository revision: git://git.gluster.com/glusterfs.git<br>
        Copyright (c) 2006-2013 Red Hat, Inc.
        <a class="moz-txt-link-rfc2396E" href="http://www.redhat.com/"><http://www.redhat.com/></a><br>
        GlusterFS comes with ABSOLUTELY NO WARRANTY.<br>
        It is licensed to you under your choice of the GNU Lesser<br>
        General Public License, version 3 or any later version (LGPLv3<br>
        or later), or the GNU General Public License, version 2 (GPLv2),<br>
        in all cases as published by the Free Software Foundation.<br>
        <br>
        ============================================================<br>
        Relevant sections from log files<br>
        ============================================================<br>
        <br>
        ---------------------------------------------------<br>
        mseas: gdata.log<br>
        -----------------------------------------<br>
        <br>
        [2022-06-15 14:51:17.263858] C
        [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired]
        0-data-volume-client-2: server 172.16.1.113:49153 has not
        responded in the last 42 seconds, disconnecting.<br>
        [2022-06-15 14:51:17.264522] E
        [rpc-clnt.c:362:saved_frames_unwind] (-->
/usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x172)[0x7f84886a0202]
        (-->
        /usr/local/lib/libgfrpc.so.0(saved_frames_unwind+0x1c2)[0x7f848846c3e2]
        (-->
        /usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f848846c4de]
        (-->
/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7f848846dd2a]
        (-->
        /usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7f848846e538]
        ))))) 0-data-volume-client-2: forced unwinding frame
        type(GlusterFS 3.3) op(READ(12)) called at 2022-06-15
        14:49:52.113795 (xid=0xb4f49b)<br>
        [2022-06-15 14:51:17.264859] E
        [rpc-clnt.c:362:saved_frames_unwind] (-->
/usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x172)[0x7f84886a0202]
        (-->
        /usr/local/lib/libgfrpc.so.0(saved_frames_unwind+0x1c2)[0x7f848846c3e2]
        (-->
        /usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f848846c4de]
        (-->
/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7f848846dd2a]
        (-->
        /usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7f848846e538]
        ))))) 0-data-volume-client-2: forced unwinding frame
        type(GF-DUMP) op(NULL(2)) called at 2022-06-15 14:49:53.251903
        (xid=0xb4f49c)<br>
        [2022-06-15 14:51:17.265111] E
        [rpc-clnt.c:362:saved_frames_unwind] (-->
/usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x172)[0x7f84886a0202]
        (-->
        /usr/local/lib/libgfrpc.so.0(saved_frames_unwind+0x1c2)[0x7f848846c3e2]
        (-->
        /usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f848846c4de]
        (-->
/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7f848846dd2a]
        (-->
        /usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7f848846e538]
        ))))) 0-data-volume-client-2: forced unwinding frame
        type(GlusterFS 3.3) op(FSTAT(25)) called at 2022-06-15
        14:50:00.103768 (xid=0xb4f49d)<br>
        <br>
        ---------------------------------------------------<br>
        mseas-data3:  cli.log<br>
        -----------------------------------------<br>
        <br>
        [2022-06-15 14:27:12.982510] I [cli.c:721:main] 0-cli: Started
        running gluster with version 3.7.11<br>
        [2022-06-15 14:27:13.206046] I [MSGID: 101190]
        [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
        thread with index 1<br>
        [2022-06-15 14:27:13.206152] I
        [socket.c:2356:socket_event_handler] 0-transport: disconnecting
        now<br>
        [2022-06-15 14:27:13.208711] I [input.c:36:cli_batch] 0-:
        Exiting with: 0<br>
        [2022-06-15 14:27:23.579669] I [cli.c:721:main] 0-cli: Started
        running gluster with version 3.7.11<br>
        [2022-06-15 14:27:23.711445] I [MSGID: 101190]
        [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
        thread with index 1<br>
        [2022-06-15 14:27:23.711551] I
        [socket.c:2356:socket_event_handler] 0-transport: disconnecting
        now<br>
        [2022-06-15 14:27:23.735073] I [input.c:36:cli_batch] 0-:
        Exiting with: 0<br>
        <br>
        ---------------------------------------------------<br>
        mseas-data3:  usr-local-etc-glusterfs-glusterd.vol.log<br>
        -----------------------------------------<br>
        <br>
        [2022-06-15 14:27:13.208084] I [MSGID: 106487]
        [glusterd-handler.c:1472:__glusterd_handle_cli_list_friends]
        0-glusterd: Received cli list req<br>
        [2022-06-15 14:27:23.721724] I [MSGID: 106499]
        [glusterd-handler.c:4331:__glusterd_handle_status_volume]
        0-management: Received status volume req for volume data-volume<br>
        [2022-06-15 14:27:23.732286] W [MSGID: 106217]
        [glusterd-op-sm.c:4630:glusterd_op_modify_op_ctx] 0-management:
        Failed uuid to hostname conversion<br>
        [2022-06-15 14:27:23.732328] W [MSGID: 106387]
        [glusterd-op-sm.c:4734:glusterd_op_modify_op_ctx] 0-management:
        op_ctx modification failed<br>
        <br>
        ---------------------------------------------------<br>
        mseas-data3:  bricks/export-sda-brick3.log<br>
        -----------------------------------------<br>
        [2022-06-15 14:50:42.588143] I [MSGID: 115036]
        [server.c:552:server_rpc_notify] 0-data-volume-server:
        disconnecting connection from
mseas.mit.edu-155483-2022/05/13-03:24:14:618694-data-volume-client-2-0-28<br>
        [2022-06-15 14:50:42.588220] I [MSGID: 115013]
        [server-helpers.c:294:do_fd_cleanup] 0-data-volume-server: fd
        cleanup on
/projects/posydon/Acoustics_ASA/MSEAS-ParEq-DO/Save/2D/Test_Cases/RI/DO_NAPE_JASA_Paper/Uncertain_Pekeris_Waveguide_DO_MC<br>
        [2022-06-15 14:50:42.588259] I [MSGID: 115013]
        [server-helpers.c:294:do_fd_cleanup] 0-data-volume-server: fd
        cleanup on
        /projects/dri_calypso/PE/2019/Apr09/Ens3R200deg001/pe_out.nc.gz<br>
        [2022-06-15 14:50:42.588288] I [MSGID: 101055]
        [client_t.c:420:gf_client_unref] 0-data-volume-server: Shutting
        down connection
mseas.mit.edu-155483-2022/05/13-03:24:14:618694-data-volume-client-2-0-28<br>
        [2022-06-15 14:50:53.605215] I [MSGID: 115029]
        [server-handshake.c:690:server_setvolume] 0-data-volume-server:
        accepted client from
mseas.mit.edu-155483-2022/05/13-03:24:14:618694-data-volume-client-2-0-29
        (version: 3.7.11)<br>
        [2022-06-15 14:50:42.588247] I [MSGID: 115013]
        [server-helpers.c:294:do_fd_cleanup] 0-data-volume-server: fd
        cleanup on
/projects/posydon/Acoustics_ASA/MSEAS-ParEq-DO/Save/2D/Test_Cases/RI/DO_NAPE_JASA_Paper/Uncertain_Pekeris_Waveguide_DO_MC<br>
      </font><br>
    </p>
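    <p>As a quick consistency check on the timeline (a sketch using only
      the timestamps quoted in the log excerpts above; the 42-second
      figure is the ping timeout reported by the client log):</p>

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

def ts(s):
    """Parse a gluster log timestamp."""
    return datetime.strptime(s, FMT)

# Timestamps copied from the log excerpts above
pending_read   = ts("2022-06-15 14:49:52.113795")  # client: READ frame issued (gdata.log)
srv_disconnect = ts("2022-06-15 14:50:42.588143")  # mseas-data3 brick: drops the client
srv_reconnect  = ts("2022-06-15 14:50:53.605215")  # mseas-data3 brick: accepts the client again
cli_timeout    = ts("2022-06-15 14:51:17.263858")  # client: 42s ping timer expires

# The READ had been pending for roughly double the 42s ping timeout
print((cli_timeout - pending_read).total_seconds())      # 85.150063
# The brick-side disconnect/reconnect window was only about 11s
print((srv_reconnect - srv_disconnect).total_seconds())  # 11.017072
```

    <p>If we are reading this correctly, the brick dropped and
      re-accepted the client within about 11 seconds, yet the client's
      pending READ was never answered and its ping timer expired some
      24 seconds after the reconnect, which would fit the mount showing
      "Transport endpoint is not connected" even while
      <font face="monospace">gluster volume status</font> reports every
      brick online.</p>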
    <pre class="moz-signature" cols="72">-- 

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  <a class="moz-txt-link-abbreviated" href="mailto:phaley@mit.edu">phaley@mit.edu</a>
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    <a class="moz-txt-link-freetext" href="http://web.mit.edu/phaley/www/">http://web.mit.edu/phaley/www/</a>
77 Massachusetts Avenue
Cambridge, MA  02139-4301
</pre>
  </body>
</html>