<div dir="ltr">Hi,<div><br></div><div>I am having significant issues with glustershd with releases 8.4 and 9.1.</div><div><br></div><div>My oVirt clusters are using gluster storage backends, and were running fine with Gluster 7.x (shipped with earlier versions of oVirt Node 4.4.x). Recently the oVirt project moved to Gluster 8.4 for the nodes, and hence I have moved to this release when upgrading my clusters.</div><div><br></div><div>Since then I am having issues whenever one of the nodes is brought down; when the nodes come back up online the bricks are typically back up and working, but some (random) glustershd processes in the various nodes seem to have issues connecting to some of them.</div><div><br></div><div>Typically when this happens the files are not getting healed</div><div><br></div><div><font face="monospace">VM_Storage_1<br>    Distributed_replicate          Started (UP) - 27/27 Bricks Up<br>                                   Capacity: (27.10% used) 2.00 TiB/8.00 TiB (used/total)<br>                                   Self-Heal:<br>                                      lab-cnvirt-h01-storage:/bricks/vm_b1_vol/brick (8 File(s) to heal).<br>                                      lab-cnvirt-h02-storage:/bricks/vm_b1_vol/brick (8 File(s) to heal).<br>                                      lab-cnvirt-h01-storage:/bricks/vm_b2_vol/brick (4 File(s) to heal).<br>                                      lab-cnvirt-h02-storage:/bricks/vm_b2_arb/brick (4 File(s) to heal).<br>                                      lab-cnvirt-h02-storage:/bricks/vm_b2_vol/brick (5 File(s) to heal).<br>                                      lab-cnvirt-h01-storage:/bricks/vm_b1_arb/brick (5 File(s) to heal).<br>                                      lab-cnvirt-h01-storage:/bricks/vm_b3_vol/brick (9 File(s) to heal).<br>                                      lab-cnvirt-h02-storage:/bricks/vm_b3_vol/brick (9 File(s) to heal).<br>                                      lab-cnvirt-h01-storage:/bricks/vm_b4_vol/brick (4 File(s) to heal).<br>                                      lab-cnvirt-h02-storage:/bricks/vm_b4_arb/brick (4 File(s) to heal).<br>                                      lab-cnvirt-h02-storage:/bricks/vm_b4_vol/brick (10 File(s) to heal).<br>                                      lab-cnvirt-h01-storage:/bricks/vm_b3_arb/brick (10 File(s) to heal).<br>                                      lab-cnvirt-h01-storage:/bricks/vm_b5_vol/brick (3 File(s) to heal).<br>                                      lab-cnvirt-h02-storage:/bricks/vm_b5_vol/brick (3 File(s) to heal).<br>                                      lab-cnvirt-h01-storage:/bricks/vm_b6_vol/brick (4 File(s) to heal).<br>                                      lab-cnvirt-h02-storage:/bricks/vm_b6_arb/brick (4 File(s) to heal).<br>                                      lab-cnvirt-h02-storage:/bricks/vm_b6_vol/brick (9 File(s) to heal).<br>                                      lab-cnvirt-h01-storage:/bricks/vm_b5_arb/brick (9 File(s) to heal).</font><br></div><div><font face="monospace"><br></font></div><div>(They will never heal; the number of files to heal however changes).</div><div><br></div><div>In the glustershd.log files, I can see the following continuously:</div><div><font face="monospace">[2021-05-17 10:27:30.531561 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-3: changing port to 49154 (from 0)<br>[2021-05-17 10:27:30.533709 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-7: changing port to 49155 (from 
In the glustershd.log files I can see the following, continuously:

[2021-05-17 10:27:30.531561 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-3: changing port to 49154 (from 0)
[2021-05-17 10:27:30.533709 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-7: changing port to 49155 (from 0)
[2021-05-17 10:27:30.534211 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 2-VM_Storage_1-client-3: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2021-05-17 10:27:30.534514 +0000] W [MSGID: 114043] [client-handshake.c:727:client_setvolume_cbk] 2-VM_Storage_1-client-3: failed to set the volume [{errno=2}, {error=No such file or directory}]
The message "I [MSGID: 114018] [client.c:2229:client_rpc_notify] 2-VM_Storage_1-client-3: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=VM_Storage_1-client-3}]" repeated 4 times between [2021-05-17 10:27:18.510668 +0000] and [2021-05-17 10:27:30.534569 +0000]
[2021-05-17 10:27:30.536254 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 2-VM_Storage_1-client-7: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2021-05-17 10:27:30.536620 +0000] W [MSGID: 114043] [client-handshake.c:727:client_setvolume_cbk] 2-VM_Storage_1-client-7: failed to set the volume [{errno=2}, {error=No such file or directory}]
[2021-05-17 10:27:30.536638 +0000] W [MSGID: 114007] [client-handshake.c:752:client_setvolume_cbk] 2-VM_Storage_1-client-7: failed to get from reply dict [{process-uuid}, {errno=22}, {error=Invalid argument}]
[2021-05-17 10:27:30.536651 +0000] E [MSGID: 114044] [client-handshake.c:757:client_setvolume_cbk] 2-VM_Storage_1-client-7: SETVOLUME on remote-host failed [{remote-error=Brick not found}, {errno=2}, {error=No such file or directory}]
[2021-05-17 10:27:30.536660 +0000] I [MSGID: 114051] [client-handshake.c:879:client_setvolume_cbk] 2-VM_Storage_1-client-7: sending CHILD_CONNECTING event []
[2021-05-17 10:27:30.536686 +0000] I [MSGID: 114018] [client.c:2229:client_rpc_notify] 2-VM_Storage_1-client-7: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=VM_Storage_1-client-7}]
[2021-05-17 10:27:33.537589 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-3: changing port to 49154 (from 0)
[2021-05-17 10:27:33.539554 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-7: changing port to 49155 (from 0)

From my understanding, glustershd is trying to connect to these bricks on stale ports: client-3 and client-7 are being pointed at 49154/49155, while the bricks are actually listening on 49168/49169. Matching the client names to the bricks (the brick-id entries under /var/lib/glusterd/vols/VM_Storage_1/bricks/) against the ports in 'gluster volume status' shows the mismatch:

lab-cnvirt-h03-storage:-bricks-vm_b2_vol-brick:8:brick-id=VM_Storage_1-client-7
Brick lab-cnvirt-h03-storage:/bricks/vm_b2_vol/brick     49169     0          Y       1600469

lab-cnvirt-h03-storage:-bricks-vm_b1_vol-brick:8:brick-id=VM_Storage_1-client-3
Brick lab-cnvirt-h03-storage:/bricks/vm_b1_vol/brick     49168     0          Y       1600460
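For completeness, this is roughly how I compare the two views (the brick path is just the one from the example above):

    # the port glusterd advertises for a given brick
    gluster volume status VM_Storage_1 lab-cnvirt-h03-storage:/bricks/vm_b1_vol/brick
    # the ports the brick processes are actually listening on (run on the brick's node)
    ss -tlnp | grep glusterfsd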
Typically, to resolve this, I have to manually kill the affected glusterfsd processes (in this case the two listed above) and then issue a 'gluster volume start VM_Storage_1 force' to restart them.
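In other words, something like this on the affected node (the PIDs are the ones from the 'gluster volume status' output above and differ each time; 'start ... force' only starts bricks that are down, it does not restart the healthy ones):

    # kill the brick processes that glustershd cannot reach
    kill 1600460 1600469
    # bring the killed bricks back up
    gluster volume start VM_Storage_1 force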
As soon as I do that, the processes are able to reconnect and healing starts:

[2021-05-17 10:46:12.513706 +0000] I [MSGID: 100041] [glusterfsd-mgmt.c:1035:glusterfs_handle_svc_attach] 0-glusterfs: received attach request for volfile [{volfile-id=shd/VM_Storage_1}]
[2021-05-17 10:46:12.513847 +0000] I [MSGID: 100040] [glusterfsd-mgmt.c:109:mgmt_process_volfile] 0-glusterfs: No change in volfile, countinuing []
[2021-05-17 10:46:14.626397 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-7: changing port to 49157 (from 0)
[2021-05-17 10:46:14.628468 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-3: changing port to 49156 (from 0)
[2021-05-17 10:46:14.628927 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 2-VM_Storage_1-client-7: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2021-05-17 10:46:14.629633 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 2-VM_Storage_1-client-7: Connected, attached to remote volume [{conn-name=VM_Storage_1-client-7}, {remote_subvol=/bricks/vm_b2_vol/brick}]
[2021-05-17 10:46:14.631212 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 2-VM_Storage_1-client-3: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2021-05-17 10:46:14.631949 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 2-VM_Storage_1-client-3: Connected, attached to remote volume [{conn-name=VM_Storage_1-client-3}, {remote_subvol=/bricks/vm_b1_vol/brick}]
[2021-05-17 10:46:14.705116 +0000] I [MSGID: 108026] [afr-self-heal-data.c:347:afr_selfheal_data_do] 2-VM_Storage_1-replicate-2: performing data selfheal on 399bdfe5-01b7-46f9-902b-9351420debc9
[2021-05-17 10:46:14.705214 +0000] I [MSGID: 108026] [afr-self-heal-data.c:347:afr_selfheal_data_do] 2-VM_Storage_1-replicate-2: performing data selfheal on 3543a4c7-4a68-4193-928a-c9f7ef08ce4e

Am I doing something wrong here?

I have also tried upgrading to 9.1 in my test cluster (the logs above are from 9.1), but I see the exact same issue there. Do you need any specific information?

It is happening with all my volumes; the info for the volume above is listed below:

Volume Name: VM_Storage_1
Type: Distributed-Replicate
Volume ID: 1a4e23db-1c98-4d89-b888-b4ae2e0ad5fc
Status: Started
Snapshot Count: 0
Number of Bricks: 9 x (2 + 1) = 27
Transport-type: tcp
Bricks:
Brick1: lab-cnvirt-h01-storage:/bricks/vm_b1_vol/brick
Brick2: lab-cnvirt-h02-storage:/bricks/vm_b1_vol/brick
Brick3: lab-cnvirt-h03-storage:/bricks/vm_b1_arb/brick (arbiter)
Brick4: lab-cnvirt-h03-storage:/bricks/vm_b1_vol/brick
Brick5: lab-cnvirt-h01-storage:/bricks/vm_b2_vol/brick
Brick6: lab-cnvirt-h02-storage:/bricks/vm_b2_arb/brick (arbiter)
Brick7: lab-cnvirt-h02-storage:/bricks/vm_b2_vol/brick
Brick8: lab-cnvirt-h03-storage:/bricks/vm_b2_vol/brick
Brick9: lab-cnvirt-h01-storage:/bricks/vm_b1_arb/brick (arbiter)
Brick10: lab-cnvirt-h01-storage:/bricks/vm_b3_vol/brick
Brick11: lab-cnvirt-h02-storage:/bricks/vm_b3_vol/brick
Brick12: lab-cnvirt-h03-storage:/bricks/vm_b3_arb/brick (arbiter)
Brick13: lab-cnvirt-h03-storage:/bricks/vm_b3_vol/brick
Brick14: lab-cnvirt-h01-storage:/bricks/vm_b4_vol/brick
Brick15: lab-cnvirt-h02-storage:/bricks/vm_b4_arb/brick (arbiter)
Brick16: lab-cnvirt-h02-storage:/bricks/vm_b4_vol/brick
Brick17: lab-cnvirt-h03-storage:/bricks/vm_b4_vol/brick
Brick18: lab-cnvirt-h01-storage:/bricks/vm_b3_arb/brick (arbiter)
Brick19: lab-cnvirt-h01-storage:/bricks/vm_b5_vol/brick
Brick20: lab-cnvirt-h02-storage:/bricks/vm_b5_vol/brick
Brick21: lab-cnvirt-h03-storage:/bricks/vm_b5_arb/brick (arbiter)
Brick22: lab-cnvirt-h03-storage:/bricks/vm_b5_vol/brick
Brick23: lab-cnvirt-h01-storage:/bricks/vm_b6_vol/brick
Brick24: lab-cnvirt-h02-storage:/bricks/vm_b6_arb/brick (arbiter)
Brick25: lab-cnvirt-h02-storage:/bricks/vm_b6_vol/brick
Brick26: lab-cnvirt-h03-storage:/bricks/vm_b6_vol/brick
Brick27: lab-cnvirt-h01-storage:/bricks/vm_b5_arb/brick (arbiter)
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
cluster.read-hash-mode: 3
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
nfs.disable: on
transport.address-family: inet
cluster.self-heal-daemon: enable

Regards,
Marco