<div dir="ltr">I've been seeing the same thing happen, and in our case, it's because of running a script that checks gluster from time to time (<a href="https://github.com/jtopjian/scripts/blob/master/gluster/gluster-status.sh">https://github.com/jtopjian/scripts/blob/master/gluster/gluster-status.sh</a> in our case).<div><br></div><div>Do you have a job that runs and
periodically
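If you're not sure, something along these lines might help track it down (the cron/systemd paths below are just the typical defaults, so adjust for your distro):

  # typical cron locations; adjust for your distro
  grep -ri gluster /etc/crontab /etc/cron.d /etc/cron.hourly /var/spool/cron 2>/dev/null

  # any systemd timers doing the same?
  systemctl list-timers --all | grep -i gluster

  # catch the health-check process in the act
  ps aux | grep '[g]lfsheal'

Note that the "Final graph" in your log shows "option process-name gfapi.glfsheal": glfsheal is the helper that "gluster volume heal <vol> info" spawns, and it builds a fresh gfapi client graph on every invocation, which is exactly the connect/graph-switch burst you're seeing. If it does turn out to be harmless monitoring noise, something like "gluster volume set ssd_storage diagnostics.client-log-level WARNING" may quiet the INFO spam, though I'd test that on one volume first.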
Sincerely,
Artem

--
Founder, Android Police (http://www.androidpolice.com), APK Mirror (http://www.apkmirror.com/), Illogical Robot LLC
beerpla.net | @ArtemR


On Fri, Feb 14, 2020 at 3:10 AM Christian Reiss <email@christian-reiss.de> wrote:

Hey folks,

my logs are constantly being swamped (every few seconds, continuously) with:

[2020-02-14 11:05:20.258542] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-0: Connected to ssd_storage-client-0, attached to remote volume '/gluster_bricks/node01.company.com/gluster'.
[2020-02-14 11:05:20.258559] I [MSGID: 108005] [afr-common.c:5280:__afr_handle_child_up_event] 0-ssd_storage-replicate-0: Subvolume 'ssd_storage-client-0' came back up; going online.
[2020-02-14 11:05:20.258920] I [rpc-clnt.c:1963:rpc_clnt_reconfig] 0-ssd_storage-client-2: changing port to 49152 (from 0)
[2020-02-14 11:05:20.259132] I [socket.c:864:__socket_shutdown] 0-ssd_storage-client-2: intentional socket shutdown(11)
[2020-02-14 11:05:20.260010] I [MSGID: 114057] [client-handshake.c:1376:select_server_supported_programs] 0-ssd_storage-client-1: Using Program GlusterFS 4.x v1, Num (1298437), Version (400)
[2020-02-14 11:05:20.261077] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-1: Connected to ssd_storage-client-1, attached to remote volume '/gluster_bricks/node02.company.com/gluster'.
[2020-02-14 11:05:20.261089] I [MSGID: 108002] [afr-common.c:5647:afr_notify] 0-ssd_storage-replicate-0: Client-quorum is met
[2020-02-14 11:05:20.262005] I [MSGID: 114057] [client-handshake.c:1376:select_server_supported_programs] 0-ssd_storage-client-2: Using Program GlusterFS 4.x v1, Num (1298437), Version (400)
[2020-02-14 11:05:20.262685] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-2: Connected to ssd_storage-client-2, attached to remote volume '/gluster_bricks/node03.company.com/gluster'.
[2020-02-14 11:05:20.263909] I [MSGID: 108031] [afr-common.c:2580:afr_local_discovery_cbk] 0-ssd_storage-replicate-0: selecting local read_child ssd_storage-client-0
[2020-02-14 11:05:20.264124] I [MSGID: 104041] [glfs-resolve.c:954:__glfs_active_subvol] 0-ssd_storage: switched to graph 6e6f6465-3031-2e64-632d-6475732e6461 (0)
[2020-02-14 11:05:22.407851] I [MSGID: 114007] [client.c:2478:client_check_remote_host] 0-ssd_storage-snapd-client: Remote host is not set. Assuming the volfile server as remote host [Invalid argument]
[2020-02-14 11:05:22.409711] I [MSGID: 104045] [glfs-master.c:80:notify] 0-gfapi: New graph 6e6f6465-3031-2e64-632d-6475732e6461 (0) coming up
[2020-02-14 11:05:22.409738] I [MSGID: 114020] [client.c:2436:notify] 0-ssd_storage-client-0: parent translators are ready, attempting connect on transport
[2020-02-14 11:05:22.412949] I [MSGID: 114020] [client.c:2436:notify] 0-ssd_storage-client-1: parent translators are ready, attempting connect on transport
[2020-02-14 11:05:22.413130] I [rpc-clnt.c:1963:rpc_clnt_reconfig] 0-ssd_storage-client-0: changing port to 49152 (from 0)
[2020-02-14 11:05:22.413154] I [socket.c:864:__socket_shutdown] 0-ssd_storage-client-0: intentional socket shutdown(10)
[2020-02-14 11:05:22.415534] I [MSGID: 114020] [client.c:2436:notify] 0-ssd_storage-client-2: parent translators are ready, attempting connect on transport
[2020-02-14 11:05:22.417836] I [MSGID: 114057] [client-handshake.c:1376:select_server_supported_programs] 0-ssd_storage-client-0: Using Program GlusterFS 4.x v1, Num (1298437), Version (400)
[2020-02-14 11:05:22.418036] I [rpc-clnt.c:1963:rpc_clnt_reconfig] 0-ssd_storage-client-1: changing port to 49152 (from 0)
[2020-02-14 11:05:22.418095] I [socket.c:864:__socket_shutdown] 0-ssd_storage-client-1: intentional socket shutdown(12)
[2020-02-14 11:05:22.420029] I [MSGID: 114020] [client.c:2436:notify] 0-ssd_storage-snapd-client: parent translators are ready, attempting connect on transport
[2020-02-14 11:05:22.420533] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)
[2020-02-14 11:05:22.420545] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-ssd_storage-snapd-client: DNS resolution failed on host /var/run/glusterd.socket
Final graph:
+------------------------------------------------------------------------------+
  1: volume ssd_storage-client-0
  2:   type protocol/client
  3:   option opversion 70000
  4:   option clnt-lk-version 1
  5:   option volfile-checksum 0
  6:   option volfile-key ssd_storage
  7:   option client-version 7.0
  8:   option process-name gfapi.glfsheal
  9:   option process-uuid CTX_ID:50cec79e-6028-4e6f-b8ed-dda9db36b2d0-GRAPH_ID:0-PID:24926-HOST:node01.company.com-PC_NAME:ssd_storage-client-0-RECON_NO:-0
 10:   option fops-version 1298437
 11:   option ping-timeout 42
 12:   option remote-host node01.company.com
 13:   option remote-subvolume /gluster_bricks/node01.company.com/gluster
 14:   option transport-type socket
 15:   option transport.address-family inet
 16:   option username 96bcf4d4-932f-4654-86c3-470a081d5021
 17:   option password 069e7ee9-b17d-4228-a612-b0f33588a9ec
 18:   option transport.socket.ssl-enabled off
 19:   option transport.tcp-user-timeout 0
 20:   option transport.socket.keepalive-time 20
 21:   option transport.socket.keepalive-interval 2
 22:   option transport.socket.keepalive-count 9
 23:   option send-gids true
 24: end-volume
 25:
 26: volume ssd_storage-client-1
 27:   type protocol/client
 28:   option ping-timeout 42
 29:   option remote-host node02.company.com
 30:   option remote-subvolume /gluster_bricks/node02.company.com/gluster
 31:   option transport-type socket
 32:   option transport.address-family inet
 33:   option username 96bcf4d4-932f-4654-86c3-470a081d5021
 34:   option password 069e7ee9-b17d-4228-a612-b0f33588a9ec
 35:   option transport.socket.ssl-enabled off
 36:   option transport.tcp-user-timeout 0
 37:   option transport.socket.keepalive-time 20
 38:   option transport.socket.keepalive-interval 2
 39:   option transport.socket.keepalive-count 9
 40:   option send-gids true
 41: end-volume
 42:
 43: volume ssd_storage-client-2
 44:   type protocol/client
 45:   option ping-timeout 42
 46:   option remote-host node03.company.com
 47:   option remote-subvolume /gluster_bricks/node03.company.com/gluster
 48:   option transport-type socket
 49:   option transport.address-family inet
 50:   option username 96bcf4d4-932f-4654-86c3-470a081d5021
 51:   option password 069e7ee9-b17d-4228-a612-b0f33588a9ec
 52:   option transport.socket.ssl-enabled off
 53:   option transport.tcp-user-timeout 0
 54:   option transport.socket.keepalive-time 20
 55:   option transport.socket.keepalive-interval 2
 56:   option transport.socket.keepalive-count 9
 57:   option send-gids true
 58: end-volume
 59:
 60: volume ssd_storage-replicate-0
 61:   type cluster/replicate
 62:   option background-self-heal-count 0
 63:   option afr-pending-xattr ssd_storage-client-0,ssd_storage-client-1,ssd_storage-client-2
 64:   option metadata-self-heal on
 65:   option data-self-heal on
 66:   option entry-self-heal on
 67:   option data-self-heal-algorithm full
 68:   option use-compound-fops off
 69:   subvolumes ssd_storage-client-0 ssd_storage-client-1 ssd_storage-client-2
 70: end-volume
 71:
 72: volume ssd_storage-dht
 73:   type cluster/distribute
 74:   option readdir-optimize on
 75:   option lock-migration off
 76:   option force-migration off
 77:   subvolumes ssd_storage-replicate-0
 78: end-volume
 79:
 80: volume ssd_storage-utime
 81:   type features/utime
 82:   option noatime on
 83:   subvolumes ssd_storage-dht
 84: end-volume
 85:
 86: volume ssd_storage-write-behind
 87:   type performance/write-behind
 88:   subvolumes ssd_storage-utime
 89: end-volume
 90:
 91: volume ssd_storage-read-ahead
 92:   type performance/read-ahead
 93:   subvolumes ssd_storage-write-behind
 94: end-volume
 95:
 96: volume ssd_storage-readdir-ahead
 97:   type performance/readdir-ahead
 98:   option parallel-readdir off
 99:   option rda-request-size 131072
100:   option rda-cache-limit 10MB
101:   subvolumes ssd_storage-read-ahead
102: end-volume
103:
104: volume ssd_storage-io-cache
105:   type performance/io-cache
106:   subvolumes ssd_storage-readdir-ahead
107: end-volume
108:
109: volume ssd_storage-open-behind
110:   type performance/open-behind
111:   subvolumes ssd_storage-io-cache
112: end-volume
113:
114: volume ssd_storage-quick-read
115:   type performance/quick-read
116:   subvolumes ssd_storage-open-behind
117: end-volume
118:
119: volume ssd_storage-md-cache
120:   type performance/md-cache
121:   subvolumes ssd_storage-quick-read
122: end-volume
123:
124: volume ssd_storage-snapd-client
125:   type protocol/client
126:   option remote-host /var/run/glusterd.socket
127:   option ping-timeout 42
128:   option remote-subvolume snapd-ssd_storage
129:   option transport-type socket
130:   option transport.address-family inet
131:   option username 96bcf4d4-932f-4654-86c3-470a081d5021
132:   option password 069e7ee9-b17d-4228-a612-b0f33588a9ec
133:   option transport.socket.ssl-enabled off
134:   option transport.tcp-user-timeout 0
135:   option transport.socket.keepalive-time 20
136:   option transport.socket.keepalive-interval 2
137:   option transport.socket.keepalive-count 9
138:   option send-gids true
139: end-volume
140:
141: volume ssd_storage-snapview-client
142:   type features/snapview-client
143:   option snapshot-directory .snaps
144:   option show-snapshot-directory on
145:   subvolumes ssd_storage-md-cache ssd_storage-snapd-client
146: end-volume
147:
148: volume ssd_storage
149:   type debug/io-stats
150:   option log-level INFO
151:   option threads 16
152:   option latency-measurement off
153:   option count-fop-hits off
154:   option global-threading off
155:   subvolumes ssd_storage-snapview-client
156: end-volume
157:
158: volume meta-autoload
159:   type meta
160:   subvolumes ssd_storage
161: end-volume
162:
+------------------------------------------------------------------------------+
[2020-02-14 11:05:22.421366] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-0: Connected to ssd_storage-client-0, attached to remote volume '/gluster_bricks/node01.company.com/gluster'.
[2020-02-14 11:05:22.421379] I [MSGID: 108005] [afr-common.c:5280:__afr_handle_child_up_event] 0-ssd_storage-replicate-0: Subvolume 'ssd_storage-client-0' came back up; going online.
[2020-02-14 11:05:22.421669] I [rpc-clnt.c:1963:rpc_clnt_reconfig] 0-ssd_storage-client-2: changing port to 49152 (from 0)
[2020-02-14 11:05:22.421686] I [socket.c:864:__socket_shutdown] 0-ssd_storage-client-2: intentional socket shutdown(11)
[2020-02-14 11:05:22.422460] I [MSGID: 114057] [client-handshake.c:1376:select_server_supported_programs] 0-ssd_storage-client-1: Using Program GlusterFS 4.x v1, Num (1298437), Version (400)
[2020-02-14 11:05:22.423377] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-1: Connected to ssd_storage-client-1, attached to remote volume '/gluster_bricks/node02.company.com/gluster'.
[2020-02-14 11:05:22.423391] I [MSGID: 108002] [afr-common.c:5647:afr_notify] 0-ssd_storage-replicate-0: Client-quorum is met
[2020-02-14 11:05:22.424586] I [MSGID: 114057] [client-handshake.c:1376:select_server_supported_programs] 0-ssd_storage-client-2: Using Program GlusterFS 4.x v1, Num (1298437), Version (400)
[2020-02-14 11:05:22.425323] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-2: Connected to ssd_storage-client-2, attached to remote volume '/gluster_bricks/node03.company.com/gluster'.
[2020-02-14 11:05:22.426613] I [MSGID: 108031] [afr-common.c:2580:afr_local_discovery_cbk] 0-ssd_storage-replicate-0: selecting local read_child ssd_storage-client-0
[2020-02-14 11:05:22.426758] I [MSGID: 104041] [glfs-resolve.c:954:__glfs_active_subvol] 0-ssd_storage: switched to graph 6e6f6465-3031-2e64-632d-6475732e6461 (0)


Can you guys make any sense out of this? 5 unsynced entries remain.

--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss