<div dir="ltr">One of the host ( <a href="http://134.21.57.122:24007/" rel="noreferrer" target="_blank">134.21.57.122</a>) is not reachable from your network. Also checking at the IP, it would have gotten resolved to something else than expected. Can you check if &#39;diufnas22&#39; is properly resolved?<div><br></div><div>-Amar</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Oct 14, 2019 at 3:44 PM DUCARROZ Birgit &lt;<a href="mailto:birgit.ducarroz@unifr.ch">birgit.ducarroz@unifr.ch</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Thank you.<br>
I checked the logs, but the information was not clear to me.

I am attaching the logs of two different crashes. I will upgrade to GlusterFS 6 in a few weeks; right now I cannot interrupt user activity on these servers, since we are in the middle of the university semester.

If these log files reveal anything interesting to you, it would be nice to get a hint.
<br>
<br>
ol-data-client-2. Client process will keep trying to connect to glusterd <br>
until brick's port is available
[2019-09-16 19:05:34.028164] E [rpc-clnt.c:348:saved_frames_unwind] (-->
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7ff167753ddb]
(--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xc021)[0x7ff167523021]
(--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xc14e)[0x7ff16752314e]
(-->
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8e)[0x7ff1675246be]
(--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xe268)[0x7ff167525268]
))))) 0-vol-data-client-2: forced unwinding frame type(GlusterFS 4.x v1) <br>
op(FSTAT(25)) called at 2019-09-16 19:05:28.736873 (xid=0x113aecf)<br>
[2019-09-16 19:05:34.028206] W [MSGID: 114031] <br>
[client-rpc-fops_v2.c:1260:client4_0_fstat_cbk] 0-vol-data-client-2: <br>
remote operation failed [Transport endpoint is not connected]<br>
[2019-09-16 19:05:44.970828] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-09-16 19:05:44.971030] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-09-16 19:05:44.971165] E [MSGID: 114058] <br>
[client-handshake.c:1442:client_query_portmap_cbk] 0-vol-data-client-2: <br>
failed to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2019-09-16 19:05:47.971375] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
<br>
[2019-09-16 19:05:44.971200] I [MSGID: 114018] <br>
[client.c:2254:client_rpc_notify] 0-vol-data-client-2: disconnected from <br>
vol-data-client-2. Client process will keep trying to connect to <br>
glusterd until brick's port is available
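(Regarding the 'failed to get the port number for remote subvolume' messages above: assuming the volume is actually named vol-data, as the volfile id suggests, a quick check on one of the servers would be:)

  gluster volume status vol-data      # per-brick PID and TCP port, or N/A if a brick is down
  gluster peer status                 # all peers should show State: Peer in Cluster (Connected)
  gluster volume heal vol-data info   # entries still pending self-heal after a brick outage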
<br>
<br>
<br>
[2019-09-17 07:43:44.807182] E [MSGID: 114058] <br>
[client-handshake.c:1442:client_query_portmap_cbk] 0-vol-data-client-0: <br>
failed to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2019-09-17 07:43:44.807217] I [MSGID: 114018] <br>
[client.c:2254:client_rpc_notify] 0-vol-data-client-0: disconnected from <br>
vol-data-client-0. Client process will keep trying to connect to <br>
glusterd until brick's port is available
[2019-09-17 07:43:44.807228] E [MSGID: 108006] <br>
[afr-common.c:5413:__afr_handle_child_down_event] <br>
0-vol-data-replicate-0: All subvolumes are down. Going offline until <br>
atleast one of them comes back up.<br>
Final graph:<br>
+------------------------------------------------------------------------------+<br>
   1: volume vol-data-client-0<br>
   2:     type protocol/client<br>
   3:     option ping-timeout 42<br>
   4:     option remote-host diufnas20<br>
   5:     option remote-subvolume /bigdisk/brick1/vol-data<br>
   6:     option transport-type socket<br>
   7:     option transport.address-family inet<br>
   8:     option username a14ffa1b-b64e-410c-894d-435c18e81b2d<br>
   9:     option password 37ba4281-166d-40fd-9ef0-08a187d1107b<br>
  10:     option transport.tcp-user-timeout 0<br>
  11:     option transport.socket.keepalive-time 20<br>
  12:     option transport.socket.keepalive-interval 2<br>
  13:     option transport.socket.keepalive-count 9<br>
  14:     option send-gids true<br>
  15: end-volume<br>
  16:<br>
  17: volume vol-data-client-1<br>
  18:     type protocol/client<br>
  19:     option ping-timeout 42<br>
  20:     option remote-host diufnas21<br>
  21:     option remote-subvolume /bigdisk/brick2/vol-data<br>
  22:     option transport-type socket<br>
  23:     option transport.address-family inet<br>
  24:     option username a14ffa1b-b64e-410c-894d-435c18e81b2d<br>
  25:     option password 37ba4281-166d-40fd-9ef0-08a187d1107b<br>
  26:     option transport.tcp-user-timeout 0<br>
  27:     option transport.socket.keepalive-time 20<br>
  28:     option transport.socket.keepalive-interval 2
  29:     option transport.socket.keepalive-count 9
  30:     option send-gids true<br>
  31: end-volume<br>
  32:<br>
  33: volume vol-data-client-2<br>
  34:     type protocol/client<br>
  35:     option ping-timeout 42<br>
  36:     option remote-host diufnas22<br>
  37:     option remote-subvolume /bigdisk/brick3/vol-data<br>
  38:     option transport-type socket<br>
  39:     option transport.address-family inet<br>
  40:     option username a14ffa1b-b64e-410c-894d-435c18e81b2d<br>
  41:     option password 37ba4281-166d-40fd-9ef0-08a187d1107b<br>
  42:     option transport.tcp-user-timeout 0<br>
  43:     option transport.socket.keepalive-time 20<br>
  44:     option transport.socket.keepalive-interval 2<br>
  45:     option transport.socket.keepalive-count 9<br>
  46:     option send-gids true<br>
  47: end-volume<br>
  48:<br>
  49: volume vol-data-replicate-0
  50:     type cluster/replicate<br>
  51:     option afr-pending-xattr <br>
vol-data-client-0,vol-data-client-1,vol-data-client-2<br>
  52:     option arbiter-count 1<br>
  53:     option use-compound-fops off<br>
  54:     subvolumes vol-data-client-0 vol-data-client-1 vol-data-client-2<br>
  55: end-volume<br>
  56:<br>
  57: volume vol-data-dht<br>
  58:     type cluster/distribute<br>
  59:     option min-free-disk 10%<br>
  60:     option lock-migration off<br>
  61:     option force-migration off<br>
  62:     subvolumes vol-data-replicate-0<br>
  63: end-volume<br>
  64:<br>
  65: volume vol-data-write-behind<br>
  66:     type performance/write-behind<br>
  67:     subvolumes vol-data-dht<br>
  68: end-volume<br>
  69:<br>
  70: volume vol-data-read-ahead<br>
  71:     type performance/read-ahead<br>
  72:     subvolumes vol-data-write-behind<br>
  73: end-volume<br>
  74:<br>
  75: volume vol-data-readdir-ahead<br>
  76:     type performance/readdir-ahead<br>
  77:     option parallel-readdir off<br>
  78:     option rda-request-size 131072<br>
  79:     option rda-cache-limit 10MB<br>
  80:     subvolumes vol-data-read-ahead<br>
  81: end-volume<br>
  82:<br>
  83: volume vol-data-io-cache<br>
  84:     type performance/io-cache<br>
  85:     option max-file-size 256MB<br>
  86:     option cache-size 28GB<br>
  87:     subvolumes vol-data-readdir-ahead<br>
  88: end-volume<br>
  89:<br>
  90: volume vol-data-quick-read<br>
  91:     type performance/quick-read<br>
  92:     option cache-size 28GB<br>
  93:     subvolumes vol-data-io-cache<br>
  94: end-volume<br>
  95:<br>
  96: volume vol-data-open-behind<br>
  97:     type performance/open-behind<br>
  98:     subvolumes vol-data-quick-read<br>
  99: end-volume<br>
100:<br>
101: volume vol-data-md-cache<br>
102:     type performance/md-cache<br>
103:     subvolumes vol-data-open-behind<br>
104: end-volume<br>
105:<br>
106: volume vol-data-io-threads<br>
107:     type performance/io-threads<br>
108:     subvolumes vol-data-md-cache<br>
109: end-volume<br>
110:<br>
111: volume vol-data<br>
112:     type debug/io-stats<br>
113:     option log-level INFO<br>
114:     option latency-measurement off<br>
115:     option count-fop-hits off<br>
116:     subvolumes vol-data-io-threads<br>
117: end-volume<br>
118:<br>
119: volume meta-autoload<br>
120:     type meta<br>
121:     subvolumes vol-data<br>
122: end-volume<br>
123:<br>
+------------------------------------------------------------------------------+<br>
[2019-09-17 07:43:47.249546] E [socket.c:2524:socket_connect_finish] <br>
0-vol-data-client-2: connection to 134.21.57.122:24007 failed (No route
to host); disconnecting socket<br>
[2019-09-17 07:43:48.801700] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
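(The 'No route to host' errors for 134.21.57.122:24007 above point at a network or firewall problem between this client and diufnas22 rather than at GlusterFS itself. Assuming nc (netcat) is available, the management port and a brick port can be probed directly; the brick port number is taken from the 'changing port to ...' lines further below:)

  nc -vz 134.21.57.122 24007   # glusterd management port on diufnas22
  nc -vz 134.21.57.122 49158   # brick port reported for vol-data-client-2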
<br>
<br>
<br>
root@nas20:/var/log/glusterfs# dmesg |grep error<br>
[    2.463658] i8042: probe of i8042 failed with error -5<br>
[    8.180404] EXT4-fs (sdb1): re-mounted. Opts: errors=remount-ro<br>
[   10.024111] EXT4-fs (sda): mounted filesystem with ordered data mode. <br>
Opts: errors=remount-ro<br>
[   64.432042] ureadahead[1478]: segfault at 7f4b99d3d2c0 ip <br>
00005629096fe2d1 sp 00007fff9dc98250 error 6 in <br>
ureadahead[5629096fa000+8000]<br>
<br>
<br>
root@nas20:/var/log/glusterfs# cat export-users.log | grep "2019-10-08 20"
[2019-10-08 20:10:33.695082] I [MSGID: 100030] [glusterfsd.c:2741:main] <br>
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.1.8 <br>
(args: /usr/sbin/glusterfs --process-name fuse <br>
--volfile-server=localhost --volfile-id=/vol-users /export/users)<br>
[2019-10-08 20:10:33.712430] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 1<br>
[2019-10-08 20:10:33.816594] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 2<br>
[2019-10-08 20:10:33.820975] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-users-client-0: parent translators are ready, attempting connect <br>
on transport<br>
[2019-10-08 20:10:33.821257] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-users-client-1: parent translators are ready, attempting connect <br>
on transport<br>
[2019-10-08 20:10:33.821466] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-users-client-2: parent translators are ready, attempting connect <br>
on transport<br>
[2019-10-08 20:10:33.822271] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:33.822425] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:33.822484] E [MSGID: 114058] <br>
[client-handshake.c:1442:client_query_portmap_cbk] 0-vol-users-client-0: <br>
failed to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2019-10-08 20:10:33.822518] I [MSGID: 114018] <br>
[client.c:2254:client_rpc_notify] 0-vol-users-client-0: disconnected <br>
from vol-users-client-0. Client process will keep trying to connect to <br>
glusterd until brick's port is available
[2019-10-08 20:10:33.822528] E [MSGID: 108006] <br>
[afr-common.c:5413:__afr_handle_child_down_event] <br>
0-vol-users-replicate-0: All subvolumes are down. Going offline until <br>
atleast one of them comes back up.<br>
[2019-10-08 20:10:36.387074] E [socket.c:2524:socket_connect_finish] <br>
0-vol-users-client-2: connection to 134.21.57.122:24007 failed (No route
to host); disconnecting socket<br>
[2019-10-08 20:10:36.387120] E [socket.c:2524:socket_connect_finish] <br>
0-vol-users-client-1: connection to 192.168.1.121:24007 failed (No route
to host); disconnecting socket<br>
[2019-10-08 20:10:36.388236] I [fuse-bridge.c:4294:fuse_init] <br>
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 <br>
kernel 7.23<br>
[2019-10-08 20:10:36.388254] I [fuse-bridge.c:4927:fuse_graph_sync] <br>
0-fuse: switched to graph 0<br>
The message "E [MSGID: 108006]
[afr-common.c:5413:__afr_handle_child_down_event]
0-vol-users-replicate-0: All subvolumes are down. Going offline until
atleast one of them comes back up." repeated 2 times between [2019-10-08
20:10:33.822528] and [2019-10-08 20:10:36.387272]<br>
[2019-10-08 20:10:36.388596] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-users-replicate-0: no subvolumes up<br>
[2019-10-08 20:10:36.388667] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-users-dht: dict is null<br>
[2019-10-08 20:10:36.388724] E [fuse-bridge.c:4362:fuse_first_lookup] <br>
0-fuse: first lookup on root failed (Transport endpoint is not connected)<br>
[2019-10-08 20:10:36.388847] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-users-replicate-0: no subvolumes up<br>
[2019-10-08 20:10:36.388864] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-users-dht: dict is null<br>
[2019-10-08 20:10:36.388883] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-08 20:10:36.388893] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-08 20:10:36.391191] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-users-replicate-0: no subvolumes up<br>
[2019-10-08 20:10:36.391218] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-users-dht: dict is null<br>
[2019-10-08 20:10:36.391241] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-08 20:10:36.391250] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-08 20:10:36.391317] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-users-replicate-0: no subvolumes up<br>
[2019-10-08 20:10:36.391333] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-users-dht: dict is null<br>
[2019-10-08 20:10:36.391352] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-08 20:10:36.391360] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 4: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-08 20:10:36.406967] I [fuse-bridge.c:5199:fuse_thread_proc] <br>
0-fuse: initating unmount of /export/users<br>
[2019-10-08 20:10:36.407298] W [glusterfsd.c:1514:cleanup_and_exit] <br>
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f88cc59b6ba]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x55c01427f70d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55c01427f524] ) 0-:
received signum (15), shutting down<br>
[2019-10-08 20:10:36.407318] I [fuse-bridge.c:5981:fini] 0-fuse: <br>
Unmounting '/export/users'.
[2019-10-08 20:10:36.407326] I [fuse-bridge.c:5986:fini] 0-fuse: Closing <br>
fuse connection to '/export/users'.
[2019-10-08 20:10:43.925719] I [MSGID: 100030] [glusterfsd.c:2741:main] <br>
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.1.8 <br>
(args: /usr/sbin/glusterfs --process-name fuse <br>
--volfile-server=localhost --volfile-id=/vol-users /export/users)<br>
[2019-10-08 20:10:43.929529] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 1<br>
[2019-10-08 20:10:43.933210] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 2<br>
[2019-10-08 20:10:43.933789] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-users-client-0: parent translators are ready, attempting connect <br>
on transport<br>
[2019-10-08 20:10:43.934151] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-users-client-1: parent translators are ready, attempting connect <br>
on transport<br>
[2019-10-08 20:10:43.934174] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.934269] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.934331] E [MSGID: 114058] <br>
[client-handshake.c:1442:client_query_portmap_cbk] 0-vol-users-client-0: <br>
failed to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2019-10-08 20:10:43.934369] I [MSGID: 114018] <br>
[client.c:2254:client_rpc_notify] 0-vol-users-client-0: disconnected <br>
from vol-users-client-0. Client process will keep trying to connect to <br>
glusterd until brick's port is available
[2019-10-08 20:10:43.934379] E [MSGID: 108006] <br>
[afr-common.c:5413:__afr_handle_child_down_event] <br>
0-vol-users-replicate-0: All subvolumes are down. Going offline until <br>
atleast one of them comes back up.<br>
[2019-10-08 20:10:43.934434] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-users-client-2: parent translators are ready, attempting connect <br>
on transport<br>
[2019-10-08 20:10:43.934574] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.934782] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.934859] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.934931] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-users-client-1: changing port to 49154 (from 0)<br>
[2019-10-08 20:10:43.935152] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.935286] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-users-client-2: changing port to 49154 (from 0)<br>
[2019-10-08 20:10:43.935314] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.935515] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.935711] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.935919] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:43.936354] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-users-client-1: <br>
Connected to vol-users-client-1, attached to remote volume <br>
'/bigdisk/brick2/vol-users'.
[2019-10-08 20:10:43.936375] I [MSGID: 108005] <br>
[afr-common.c:5336:__afr_handle_child_up_event] 0-vol-users-replicate-0: <br>
Subvolume 'vol-users-client-1' came back up; going online.
[2019-10-08 20:10:43.936728] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-users-client-2: <br>
Connected to vol-users-client-2, attached to remote volume <br>
'/bigdisk/brick3/vol-users'.
[2019-10-08 20:10:43.936742] I [MSGID: 108002] <br>
[afr-common.c:5611:afr_notify] 0-vol-users-replicate-0: Client-quorum is met<br>
[2019-10-08 20:10:43.937579] I [fuse-bridge.c:4294:fuse_init] <br>
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 <br>
kernel 7.23<br>
[2019-10-08 20:10:43.937595] I [fuse-bridge.c:4927:fuse_graph_sync] <br>
0-fuse: switched to graph 0<br>
[2019-10-08 20:10:43.939789] I [MSGID: 109005] <br>
[dht-selfheal.c:2342:dht_selfheal_directory] 0-vol-users-dht: Directory <br>
selfheal failed: Unable to form layout for directory /<br>
[2019-10-08 20:10:47.927439] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.927555] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.927627] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-users-client-0: changing port to 49152 (from 0)<br>
[2019-10-08 20:10:47.928087] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.928201] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-users-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.928717] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-users-client-0: <br>
Connected to vol-users-client-0, attached to remote volume <br>
'/bigdisk/brick1/vol-users'.
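(For what it is worth, the pattern above (mount started at 20:10:33, all subvolumes down, 'received signum (15), shutting down' at 20:10:36, then a clean remount at 20:10:43 once the bricks answered) looks like the first mount attempt being torn down before glusterd and the bricks were reachable, e.g. during boot. Assuming the volumes are mounted through /etc/fstab on the servers, it may be worth checking that the entries wait for the network; the fstab line shown is only an assumed example, not the real entry:)

  grep glusterfs /etc/fstab
  # localhost:/vol-users  /export/users  glusterfs  defaults,_netdev,x-systemd.automount  0 0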
root@nas20:/var/log/glusterfs# cat export-users.log | grep "2019-10-08 22"
root@nas20:/var/log/glusterfs# cat export-users.log | grep "2019-10-08 21"
root@nas20:/var/log/glusterfs# cat export-users.log | grep "2019-10-08 23"
root@nas20:/var/log/glusterfs# cat export-data.log.log | grep "2019-10-08 23"
cat: export-data.log.log: No such file or directory
root@nas20:/var/log/glusterfs# cat export-data.log | grep "2019-10-08 15"
root@nas20:/var/log/glusterfs# cat export-data.log | grep "2019-10-08 16"
root@nas20:/var/log/glusterfs# cat export-data.log | grep "2019-10-08 17"
root@nas20:/var/log/glusterfs# cat export-data.log | grep "2019-10-08 19"
root@nas20:/var/log/glusterfs# cat export-data.log | grep "2019-10-08 1"
root@nas20:/var/log/glusterfs# cat export-data.log | grep "2019-10-08 20"
[2019-10-08 20:10:33.695000] I [MSGID: 100030] [glusterfsd.c:2741:main] <br>
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.1.8 <br>
(args: /usr/sbin/glusterfs --process-name fuse <br>
--volfile-server=localhost --volfile-id=/vol-data /export/data)<br>
[2019-10-08 20:10:33.737302] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 1<br>
[2019-10-08 20:10:33.816578] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 2<br>
[2019-10-08 20:10:33.820946] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-0: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-08 20:10:33.821255] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-1: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-08 20:10:33.821467] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-2: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-08 20:10:33.822144] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:33.822243] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:33.822374] E [MSGID: 114058] <br>
[client-handshake.c:1442:client_query_portmap_cbk] 0-vol-data-client-0: <br>
failed to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2019-10-08 20:10:33.822412] I [MSGID: 114018] <br>
[client.c:2254:client_rpc_notify] 0-vol-data-client-0: disconnected from <br>
vol-data-client-0. Client process will keep trying to connect to <br>
glusterd until brick's port is available
[2019-10-08 20:10:33.822423] E [MSGID: 108006] <br>
[afr-common.c:5413:__afr_handle_child_down_event] <br>
0-vol-data-replicate-0: All subvolumes are down. Going offline until <br>
atleast one of them comes back up.<br>
[2019-10-08 20:10:36.387062] E [socket.c:2524:socket_connect_finish] <br>
0-vol-data-client-2: connection to 134.21.57.122:24007 failed (No route
to host); disconnecting socket<br>
[2019-10-08 20:10:36.387091] E [socket.c:2524:socket_connect_finish] <br>
0-vol-data-client-1: connection to 192.168.1.121:24007 failed (No route
to host); disconnecting socket<br>
[2019-10-08 20:10:36.388218] I [fuse-bridge.c:4294:fuse_init] <br>
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 <br>
kernel 7.23<br>
[2019-10-08 20:10:36.388237] I [fuse-bridge.c:4927:fuse_graph_sync] <br>
0-fuse: switched to graph 0<br>
The message "E [MSGID: 108006]
[afr-common.c:5413:__afr_handle_child_down_event]
0-vol-data-replicate-0: All subvolumes are down. Going offline until
atleast one of them comes back up." repeated 2 times between [2019-10-08
20:10:33.822423] and [2019-10-08 20:10:36.387268]<br>
[2019-10-08 20:10:36.388590] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-data-replicate-0: no subvolumes up<br>
[2019-10-08 20:10:36.388630] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-data-dht: dict is null<br>
[2019-10-08 20:10:36.388723] E [fuse-bridge.c:4362:fuse_first_lookup] <br>
0-fuse: first lookup on root failed (Transport endpoint is not connected)<br>
[2019-10-08 20:10:36.388855] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-data-replicate-0: no subvolumes up<br>
[2019-10-08 20:10:36.388871] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-data-dht: dict is null<br>
[2019-10-08 20:10:36.388892] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-08 20:10:36.388902] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-08 20:10:36.390447] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-data-replicate-0: no subvolumes up<br>
[2019-10-08 20:10:36.390480] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-data-dht: dict is null<br>
[2019-10-08 20:10:36.390503] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-08 20:10:36.390513] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-08 20:10:36.390580] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-data-replicate-0: no subvolumes up<br>
[2019-10-08 20:10:36.390595] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-data-dht: dict is null<br>
[2019-10-08 20:10:36.390614] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-08 20:10:36.390622] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 4: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-08 20:10:36.410905] I [fuse-bridge.c:5199:fuse_thread_proc] <br>
0-fuse: initating unmount of /export/data<br>
[2019-10-08 20:10:36.411091] W [glusterfsd.c:1514:cleanup_and_exit] <br>
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7ff189f586ba]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x55946f24b70d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55946f24b524] ) 0-:
received signum (15), shutting down<br>
[2019-10-08 20:10:36.411113] I [fuse-bridge.c:5981:fini] 0-fuse: <br>
Unmounting '/export/data'.
[2019-10-08 20:10:36.411122] I [fuse-bridge.c:5986:fini] 0-fuse: Closing <br>
fuse connection to '/export/data'.
[2019-10-08 20:10:36.845106] I [MSGID: 100030] [glusterfsd.c:2741:main] <br>
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.1.8 <br>
(args: /usr/sbin/glusterfs --process-name fuse <br>
--volfile-server=localhost --volfile-id=/vol-data /export/data)<br>
[2019-10-08 20:10:36.848865] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 1<br>
[2019-10-08 20:10:36.852064] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 2<br>
[2019-10-08 20:10:36.852477] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-0: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-08 20:10:36.852694] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-1: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-08 20:10:36.852773] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:36.852877] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:36.852917] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-2: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-08 20:10:36.852947] E [MSGID: 114058] <br>
[client-handshake.c:1442:client_query_portmap_cbk] 0-vol-data-client-0: <br>
failed to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2019-10-08 20:10:36.852980] I [MSGID: 114018] <br>
[client.c:2254:client_rpc_notify] 0-vol-data-client-0: disconnected from <br>
vol-data-client-0. Client process will keep trying to connect to <br>
glusterd until brick's port is available
[2019-10-08 20:10:36.852990] E [MSGID: 108006] <br>
[afr-common.c:5413:__afr_handle_child_down_event] <br>
0-vol-data-replicate-0: All subvolumes are down. Going offline until <br>
atleast one of them comes back up.<br>
[2019-10-08 20:10:37.387355] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:37.387579] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:37.387706] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-data-client-1: changing port to 49156 (from 0)<br>
[2019-10-08 20:10:37.388065] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:37.388253] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:37.389087] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-data-client-1: <br>
Connected to vol-data-client-1, attached to remote volume <br>
'/bigdisk/brick2/vol-data'.
[2019-10-08 20:10:37.389102] I [MSGID: 108005] <br>
[afr-common.c:5336:__afr_handle_child_up_event] 0-vol-data-replicate-0: <br>
Subvolume 'vol-data-client-1' came back up; going online.
[2019-10-08 20:10:39.387062] E [socket.c:2524:socket_connect_finish] <br>
0-vol-data-client-2: connection to 134.21.57.122:24007 failed (No route
to host); disconnecting socket<br>
[2019-10-08 20:10:39.389703] I [fuse-bridge.c:4294:fuse_init] <br>
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 <br>
kernel 7.23<br>
[2019-10-08 20:10:39.389740] I [fuse-bridge.c:4927:fuse_graph_sync] <br>
0-fuse: switched to graph 0<br>
[2019-10-08 20:10:39.411859] I [glusterfsd-mgmt.c:53:mgmt_cbk_spec] <br>
0-mgmt: Volume file changed<br>
[2019-10-08 20:10:40.832633] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-data-dht: dict is null<br>
[2019-10-08 20:10:40.832712] E [fuse-bridge.c:4362:fuse_first_lookup] <br>
0-fuse: first lookup on root failed (Transport endpoint is not connected)<br>
[2019-10-08 20:10:40.834248] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-08 20:10:40.834281] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-08 20:10:40.837624] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-08 20:10:40.837659] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-08 20:10:40.839468] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-08 20:10:40.839503] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 4: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-08 20:10:40.847013] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:40.847219] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:40.847368] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-data-client-2: changing port to 49158 (from 0)<br>
[2019-10-08 20:10:40.847725] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:40.847906] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
The message "E [MSGID: 101046] [dht-common.c:1502:dht_lookup_dir_cbk]
0-vol-data-dht: dict is null" repeated 3 times between [2019-10-08
20:10:40.832633] and [2019-10-08 20:10:40.839454]<br>
[2019-10-08 20:10:40.848759] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-data-client-2: <br>
Connected to vol-data-client-2, attached to remote volume <br>
'/bigdisk/brick3/vol-data'.
[2019-10-08 20:10:40.848785] I [MSGID: 108002] <br>
[afr-common.c:5611:afr_notify] 0-vol-data-replicate-0: Client-quorum is met<br>
[2019-10-08 20:10:40.874884] I [fuse-bridge.c:5199:fuse_thread_proc] <br>
0-fuse: initating unmount of /export/data<br>
[2019-10-08 20:10:40.875054] W [glusterfsd.c:1514:cleanup_and_exit] <br>
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7fdc50b646ba]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x563108ee670d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x563108ee6524] ) 0-:
received signum (15), shutting down<br>
[2019-10-08 20:10:40.875079] I [fuse-bridge.c:5981:fini] 0-fuse: <br>
Unmounting '/export/data'.
[2019-10-08 20:10:40.875087] I [fuse-bridge.c:5986:fini] 0-fuse: Closing <br>
fuse connection to '/export/data'.
[2019-10-08 20:10:47.464875] I [MSGID: 100030] [glusterfsd.c:2741:main] <br>
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.1.8 <br>
(args: /usr/sbin/glusterfs --process-name fuse <br>
--volfile-server=localhost --volfile-id=/vol-data /export/data)<br>
[2019-10-08 20:10:47.468743] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 1<br>
[2019-10-08 20:10:47.472050] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 2<br>
[2019-10-08 20:10:47.472465] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-0: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-08 20:10:47.472803] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-1: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-08 20:10:47.472865] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.472968] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.473036] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-data-client-0: changing port to 49156 (from 0)<br>
[2019-10-08 20:10:47.473121] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-2: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-08 20:10:47.473466] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.473511] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.473681] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.473850] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.473928] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.474019] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-data-client-1: changing port to 49156 (from 0)<br>
[2019-10-08 20:10:47.474072] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.474309] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-data-client-2: changing port to 49158 (from 0)<br>
[2019-10-08 20:10:47.474621] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-data-client-0: <br>
Connected to vol-data-client-0, attached to remote volume <br>
'/bigdisk/brick1/vol-data'.
[2019-10-08 20:10:47.474638] I [MSGID: 108005] <br>
[afr-common.c:5336:__afr_handle_child_up_event] 0-vol-data-replicate-0: <br>
Subvolume 'vol-data-client-0' came back up; going online.
[2019-10-08 20:10:47.474750] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.474927] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.474958] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.475216] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-08 20:10:47.476030] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-data-client-1: <br>
Connected to vol-data-client-1, attached to remote volume <br>
'/bigdisk/brick2/vol-data'.
[2019-10-08 20:10:47.476052] I [MSGID: 108002] <br>
[afr-common.c:5611:afr_notify] 0-vol-data-replicate-0: Client-quorum is met<br>
[2019-10-08 20:10:47.476152] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-data-client-2: <br>
Connected to vol-data-client-2, attached to remote volume <br>
'/bigdisk/brick3/vol-data'.
[2019-10-08 20:10:47.477159] I [fuse-bridge.c:4294:fuse_init] <br>
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 <br>
kernel 7.23<br>
[2019-10-08 20:10:47.477210] I [fuse-bridge.c:4927:fuse_graph_sync] <br>
0-fuse: switched to graph 0<br>
[2019-10-08 20:10:47.478960] I [MSGID: 108031] <br>
[afr-common.c:2597:afr_local_discovery_cbk] 0-vol-data-replicate-0: <br>
selecting local read_child vol-data-client-0<br>
[2019-10-08 20:10:47.479971] I [MSGID: 108031] <br>
[afr-common.c:2597:afr_local_discovery_cbk] 0-vol-data-replicate-0: <br>
selecting local read_child vol-data-client-0<br>
[2019-10-08 20:10:47.480094] I [MSGID: 109005] <br>
[dht-selfheal.c:2342:dht_selfheal_directory] 0-vol-data-dht: Directory <br>
selfheal failed: Unable to form layout for directory /<br>
root@nas20:/var/log/glusterfs# cat export-data.log | grep "2019-10-09 1"
root@nas20:/var/log/glusterfs# cat export-data.log | grep "2019-10-09 7"
root@nas20:/var/log/glusterfs# cat export-data.log | grep "2019-10-09 0"
[2019-10-09 04:25:02.165330] I [MSGID: 100011] <br>
[glusterfsd.c:1599:reincarnate] 0-glusterfsd: Fetching the volume file <br>
from server...<br>
[2019-10-09 04:25:02.191948] I [glusterfsd-mgmt.c:1953:mgmt_getspec_cbk] <br>
0-glusterfs: No change in volfile,continuing<br>
[2019-10-09 07:12:03.955619] I [MSGID: 100030] [glusterfsd.c:2741:main] <br>
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.1.8 <br>
(args: /usr/sbin/glusterfs --process-name fuse <br>
--volfile-server=localhost --volfile-id=/vol-data /export/data)<br>
[2019-10-09 07:12:03.981652] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 1<br>
[2019-10-09 07:12:04.002485] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 2<br>
[2019-10-09 07:12:04.003899] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-0: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-09 07:12:04.004147] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-1: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-09 07:12:04.004366] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-2: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-09 07:12:04.004628] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:04.004923] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:04.005244] E [MSGID: 114058] <br>
[client-handshake.c:1442:client_query_portmap_cbk] 0-vol-data-client-0: <br>
failed to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2019-10-09 07:12:04.005286] I [MSGID: 114018] <br>
[client.c:2254:client_rpc_notify] 0-vol-data-client-0: disconnected from <br>
vol-data-client-0. Client process will keep trying to connect to <br>
glusterd until brick's port is available
[2019-10-09 07:12:04.005297] E [MSGID: 108006] <br>
[afr-common.c:5413:__afr_handle_child_down_event] <br>
0-vol-data-replicate-0: All subvolumes are down. Going offline until <br>
atleast one of them comes back up.<br>
[2019-10-09 07:12:06.690631] E [socket.c:2524:socket_connect_finish] <br>
0-vol-data-client-2: connection to 134.21.57.122:24007 failed (No route
to host); disconnecting socket<br>
[2019-10-09 07:12:06.690792] E [socket.c:2524:socket_connect_finish] <br>
0-vol-data-client-1: connection to 192.168.1.121:24007 failed (No route
to host); disconnecting socket<br>
[2019-10-09 07:12:06.691746] I [fuse-bridge.c:4294:fuse_init] <br>
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 <br>
kernel 7.23<br>
[2019-10-09 07:12:06.691771] I [fuse-bridge.c:4927:fuse_graph_sync] <br>
0-fuse: switched to graph 0<br>
The message "E [MSGID: 108006]
[afr-common.c:5413:__afr_handle_child_down_event]
0-vol-data-replicate-0: All subvolumes are down. Going offline until
atleast one of them comes back up." repeated 2 times between [2019-10-09
07:12:04.005297] and [2019-10-09 07:12:06.690811]<br>
[2019-10-09 07:12:06.692647] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-data-replicate-0: no subvolumes up<br>
[2019-10-09 07:12:06.692695] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-data-dht: dict is null<br>
[2019-10-09 07:12:06.692807] E [fuse-bridge.c:4362:fuse_first_lookup] <br>
0-fuse: first lookup on root failed (Transport endpoint is not connected)<br>
[2019-10-09 07:12:06.692955] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-data-replicate-0: no subvolumes up<br>
[2019-10-09 07:12:06.692980] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-data-dht: dict is null<br>
[2019-10-09 07:12:06.693003] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-09 07:12:06.693013] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-09 07:12:06.695503] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-data-replicate-0: no subvolumes up<br>
[2019-10-09 07:12:06.695526] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-data-dht: dict is null<br>
[2019-10-09 07:12:06.695547] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-09 07:12:06.695556] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-09 07:12:06.695619] I [MSGID: 108006] <br>
[afr-common.c:5677:afr_local_init] 0-vol-data-replicate-0: no subvolumes up<br>
[2019-10-09 07:12:06.695633] E [MSGID: 101046] <br>
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol-data-dht: dict is null<br>
[2019-10-09 07:12:06.695650] W <br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: <br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport <br>
endpoint is not connected)<br>
[2019-10-09 07:12:06.695658] E [fuse-bridge.c:928:fuse_getattr_resume] <br>
0-glusterfs-fuse: 4: GETATTR 1 (00000000-0000-0000-0000-000000000001) <br>
resolution failed<br>
[2019-10-09 07:12:06.714499] I [fuse-bridge.c:5199:fuse_thread_proc] <br>
0-fuse: initating unmount of /export/data<br>
[2019-10-09 07:12:06.714753] W [glusterfsd.c:1514:cleanup_and_exit] <br>
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f133ffef6ba]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x562b2312c70d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x562b2312c524] ) 0-:
received signum (15), shutting down<br>
[2019-10-09 07:12:06.714773] I [fuse-bridge.c:5981:fini] 0-fuse: <br>
Unmounting '/export/data'.
[2019-10-09 07:12:06.714779] I [fuse-bridge.c:5986:fini] 0-fuse: Closing <br>
fuse connection to '/export/data'.
[2019-10-09 07:12:07.109206] I [MSGID: 100030] [glusterfsd.c:2741:main] <br>
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.1.8 <br>
(args: /usr/sbin/glusterfs --process-name fuse <br>
--volfile-server=localhost --volfile-id=/vol-data /export/data)<br>
[2019-10-09 07:12:07.112870] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 1<br>
[2019-10-09 07:12:07.116011] I [MSGID: 101190] <br>
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread <br>
with index 2<br>
[2019-10-09 07:12:07.116421] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-0: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-09 07:12:07.116655] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-1: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-09 07:12:07.116676] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:07.116767] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:07.116833] E [MSGID: 114058] <br>
[client-handshake.c:1442:client_query_portmap_cbk] 0-vol-data-client-0: <br>
failed to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2019-10-09 07:12:07.116835] I [MSGID: 114020] [client.c:2328:notify] <br>
0-vol-data-client-2: parent translators are ready, attempting connect on <br>
transport<br>
[2019-10-09 07:12:07.116887] I [MSGID: 114018] <br>
[client.c:2254:client_rpc_notify] 0-vol-data-client-0: disconnected from <br>
vol-data-client-0. Client process will keep trying to connect to <br>
glusterd until brick's port is available
[2019-10-09 07:12:07.116898] E [MSGID: 108006] <br>
[afr-common.c:5413:__afr_handle_child_down_event] <br>
0-vol-data-replicate-0: All subvolumes are down. Going offline until <br>
atleast one of them comes back up.<br>
[2019-10-09 07:12:07.691005] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:09.690613] E [socket.c:2524:socket_connect_finish] <br>
0-vol-data-client-2: connection to 134.21.57.122:24007 failed (No route
to host); disconnecting socket<br>
[2019-10-09 07:12:11.111975] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:11.112083] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:11.112200] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:11.112397] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:11.112518] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-data-client-2: changing port to 49158 (from 0)<br>
[2019-10-09 07:12:11.112820] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:11.113013] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-2: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:09.690664] E [MSGID: 108006] <br>
[afr-common.c:5413:__afr_handle_child_down_event] <br>
0-vol-data-replicate-0: All subvolumes are down. Going offline until <br>
atleast one of them comes back up.<br>
[2019-10-09 07:12:11.114003] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-data-client-2: <br>
Connected to vol-data-client-2, attached to remote volume <br>
'/bigdisk/brick3/vol-data'.
[2019-10-09 07:12:11.114045] I [MSGID: 108005] <br>
[afr-common.c:5336:__afr_handle_child_up_event] 0-vol-data-replicate-0: <br>
Subvolume 'vol-data-client-2' came back up; going online.
[2019-10-09 07:12:11.290914] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:11.291239] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-data-client-1: changing port to 49156 (from 0)<br>
[2019-10-09 07:12:11.291676] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:11.291919] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-1: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:12:11.293266] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-data-client-1: <br>
Connected to vol-data-client-1, attached to remote volume <br>
'/bigdisk/brick2/vol-data'.
[2019-10-09 07:12:11.293306] I [MSGID: 108002] <br>
[afr-common.c:5611:afr_notify] 0-vol-data-replicate-0: Client-quorum is met<br>
[2019-10-09 07:12:11.295955] I [fuse-bridge.c:4294:fuse_init] <br>
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 <br>
kernel 7.23<br>
[2019-10-09 07:12:11.296014] I [fuse-bridge.c:4927:fuse_graph_sync] <br>
0-fuse: switched to graph 0<br>
[2019-10-09 07:12:11.299181] I [MSGID: 109005] <br>
[dht-selfheal.c:2342:dht_selfheal_directory] 0-vol-data-dht: Directory <br>
selfheal failed: Unable to form layout for directory /<br>
[2019-10-09 07:12:14.112691] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[... the same pair of rpc_clnt_submit warnings for 0-vol-data-client-0 <br>
repeats roughly every 3 seconds until 2019-10-09 07:14:11 ...]<br>
[2019-10-09 07:14:11.426846] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:14:16.310279] I [glusterfsd-mgmt.c:53:mgmt_cbk_spec] <br>
0-mgmt: Volume file changed<br>
[2019-10-09 07:14:19.393266] I [glusterfsd-mgmt.c:53:mgmt_cbk_spec] <br>
0-mgmt: Volume file changed<br>
[2019-10-09 07:14:19.465709] I [glusterfsd-mgmt.c:1953:mgmt_getspec_cbk] <br>
0-glusterfs: No change in volfile,continuing<br>
[2019-10-09 07:14:19.467466] I [glusterfsd-mgmt.c:1953:mgmt_getspec_cbk] <br>
0-glusterfs: No change in volfile,continuing<br>
[2019-10-09 07:14:29.457122] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:14:29.457312] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:14:29.457431] I [rpc-clnt.c:2105:rpc_clnt_reconfig] <br>
0-vol-data-client-0: changing port to 49157 (from 0)<br>
[2019-10-09 07:14:29.458078] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:14:29.458264] W [rpc-clnt.c:1753:rpc_clnt_submit] <br>
0-vol-data-client-0: error returned while attempting to connect to <br>
host:(null), port:0<br>
[2019-10-09 07:14:29.459212] I [MSGID: 114046] <br>
[client-handshake.c:1095:client_setvolume_cbk] 0-vol-data-client-0: <br>
Connected to vol-data-client-0, attached to remote volume <br>
&#39;/bigdisk/brick1/vol-data&#39;.<br>
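<br>
(For reference, the reconnect above only succeeds once glusterd advertises the <br>
brick&#39;s port, 49157 here. A quick way to cross-check on the server side that <br>
each brick process is up and listening on its advertised port:)<br>
<br>
# per-brick PID, TCP port and Online status, as seen by glusterd<br>
gluster volume status vol-data<br>
<br>
# on the brick host itself: confirm the brick process is listening<br>
ss -tlnp | grep glusterfsd<br>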
<br>
Regards,<br>
Birgit<br>
<br>
On 13/10/19 08:13, Amar Tumballi wrote:<br>
&gt; &#39;Transport endpoint is not connected&#39; (i.e., ENOTCONN) comes when the network <br>
&gt; connection is not established between the client and the server. I recommend <br>
&gt; checking the logs for the particular reason; the brick (server-side) logs <br>
&gt; in particular will have some hints on this.<br>
&gt; <br>
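&gt; For instance, assuming the default log location, the brick logs live under<br>
&gt; /var/log/glusterfs/bricks/ on each server, named after the brick path, and the<br>
&gt; error-level entries around a disconnect can be pulled out with e.g.:<br>
&gt; <br>
&gt; # brick path /bigdisk/brick1/vol-data -&gt; log file bigdisk-brick1-vol-data.log<br>
&gt; grep -F &#39; E [&#39; /var/log/glusterfs/bricks/bigdisk-brick1-vol-data.log | tail -n 50<br>
&gt; <br>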
&gt; About the crash: we treat it as a bug. Since no specific backtrace or logs <br>
&gt; were shared with the email, it is hard to tell whether it is already fixed <br>
&gt; in a newer version or not.<br>
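&gt; <br>
&gt; (If the client process crashed on a signal, its mount log normally ends with a<br>
&gt; &quot;signal received&quot; line followed by a backtrace; assuming the volume is mounted<br>
&gt; at /mnt/vol-data, something like this should find it:)<br>
&gt; <br>
&gt; # the mount log name is derived from the mount point; adjust to match yours<br>
&gt; grep -A 30 &#39;signal received&#39; /var/log/glusterfs/mnt-vol-data.log<br>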
&gt; <br>
&gt; Considering you are on version 4.1.8, and many releases have been made <br>
&gt; since then, upgrading can also be an option.<br>
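&gt; <br>
&gt; Before and after an upgrade it is worth recording the versions and the cluster<br>
&gt; op-version (the two &#39;volume get&#39; calls assume a glusterd recent enough to<br>
&gt; support them, which 4.1 is):<br>
&gt; <br>
&gt; glusterfs --version                              # client side<br>
&gt; glusterfsd --version                             # brick/server side<br>
&gt; gluster volume get all cluster.op-version        # op-version in effect<br>
&gt; gluster volume get all cluster.max-op-version    # highest supported value<br>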
&gt; <br>
&gt; Regards,<br>
&gt; Amar<br>
&gt; <br>
&gt; <br>
&gt; On Fri, Oct 11, 2019 at 4:13 PM DUCARROZ Birgit <br>
&gt; &lt;<a href="mailto:birgit.ducarroz@unifr.ch" target="_blank">birgit.ducarroz@unifr.ch</a> &lt;mailto:<a href="mailto:birgit.ducarroz@unifr.ch" target="_blank">birgit.ducarroz@unifr.ch</a>&gt;&gt; wrote:<br>
&gt; <br>
&gt;     Hi list,<br>
&gt; <br>
&gt;     Does anyone know what I can do to avoid &quot;Transport endpoint not<br>
&gt;     connected&quot; (and the server then becoming blocked) when a lot of<br>
&gt;     small files are written to a volume?<br>
&gt; <br>
&gt;     I&#39;m running glusterfs 4.1.8 on 6 servers. With 3 of the servers I never<br>
&gt;     have problems, but the other 3 servers act as HA storage for people who<br>
&gt;     sometimes write thousands of small files. This seems to provoke a<br>
&gt;     crash of the gluster daemon.<br>
&gt; <br>
&gt;     I have 3 bricks, where the 3rd brick acts as the arbiter.<br>
&gt; <br>
&gt; <br>
&gt;     # Location of the bricks:<br>
&gt;     #-------$HOST1-------  -------$HOST3-------<br>
&gt;     # brick1            |  | brick3           | brick3 = arbiter<br>
&gt;     #                   |  |                  |<br>
&gt;     #-------$HOST2-------  --------------------<br>
&gt;     # brick2            |<br>
&gt;     #--------------------<br>
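&gt; <br>
&gt;     (For reference, a replica 3 arbiter 1 volume with this layout is created<br>
&gt;     with something along these lines; the hostnames are the placeholders from<br>
&gt;     the sketch above and the arbiter&#39;s brick path is assumed:)<br>
&gt; <br>
&gt;     gluster volume create vol-data replica 3 arbiter 1 \<br>
&gt;         $HOST1:/bigdisk/brick1/vol-data \<br>
&gt;         $HOST2:/bigdisk/brick2/vol-data \<br>
&gt;         $HOST3:/bigdisk/brick3/vol-data<br>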
&gt; <br>
&gt;     Checked:<br>
&gt;     The underlying ext4 filesystem and the HDs seem to be without errors.<br>
&gt;     The firewall ports should not be the problem, since the issue also<br>
&gt;     occurs when the firewall is disabled.<br>
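&gt; <br>
&gt;     (To rule out the network path itself: glusterd listens on 24007/tcp and<br>
&gt;     the brick processes on ports from 49152 upwards, so reachability from a<br>
&gt;     client can be probed with e.g.:)<br>
&gt; <br>
&gt;     nc -zv $HOST1 24007    # management daemon (glusterd)<br>
&gt;     nc -zv $HOST1 49157    # a brick port, as reported by &#39;gluster volume status&#39;<br>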
&gt; <br>
&gt;     Any help appreciated!<br>
&gt;     Kind regards,<br>
&gt;     Birgit<br>
&gt;     ________<br>
&gt; <br>
&gt;     Community Meeting Calendar:<br>
&gt; <br>
&gt;     APAC Schedule -<br>
&gt;     Every 2nd and 4th Tuesday at 11:30 AM IST<br>
&gt;     Bridge: <a href="https://bluejeans.com/118564314" rel="noreferrer" target="_blank">https://bluejeans.com/118564314</a><br>
&gt; <br>
&gt;     NA/EMEA Schedule -<br>
&gt;     Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
&gt;     Bridge: <a href="https://bluejeans.com/118564314" rel="noreferrer" target="_blank">https://bluejeans.com/118564314</a><br>
&gt; <br>
&gt;     Gluster-users mailing list<br>
&gt;     <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a> &lt;mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>&gt;<br>
&gt;     <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
&gt; <br>
</blockquote></div>