Did you try with the latest 9.x? Based on the release notes, that should be 9.3.

Best Regards,
Strahil Nikolov

On Fri, Jul 23, 2021 at 3:06, Artem Russakovskii <archon810@gmail.com> wrote:

Hi all,

I just filed this ticket, https://github.com/gluster/glusterfs/issues/2648, and wanted to bring it to your attention. Any feedback would be appreciated.

**Description of problem:**
We have a 4-node replicate cluster running gluster 7.9. I'm currently setting up a new cluster on a new set of machines and went straight for gluster 9.1.

However, I was unable to probe any servers due to this error:

[2021-07-17 00:31:05.228609 +0000] I [MSGID: 106487] [glusterd-handler.c:1160:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req nexus2 24007
[2021-07-17 00:31:05.229727 +0000] E [MSGID: 101075] [common-utils.c:3657:gf_is_local_addr] 0-management: error in getaddrinfo [{ret=Name or service not known}]
[2021-07-17 00:31:05.230785 +0000] E [MSGID: 106408] [glusterd-peer-utils.c:217:glusterd_peerinfo_find_by_hostname] 0-management: error in getaddrinfo: Name or service not known
 [Unknown error -2]
[2021-07-17 00:31:05.353971 +0000] I [MSGID: 106128] [glusterd-handler.c:3719:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: nexus2 (24007)
[2021-07-17 00:31:05.375871 +0000] W [MSGID: 106061] [glusterd-handler.c:3488:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2021-07-17 00:31:05.375903 +0000] I [rpc-clnt.c:1010:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2021-07-17 00:31:05.377021 +0000] E [MSGID: 101075] [common-utils.c:520:gf_resolve_ip6] 0-resolver: error in getaddrinfo [{family=10}, {ret=Name or service not known}]
[2021-07-17 00:31:05.377043 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host nexus2
[2021-07-17 00:31:05.377147 +0000] I [MSGID: 106498] [glusterd-handler.c:3648:glusterd_friend_add] 0-management: connect returned 0
[2021-07-17 00:31:05.377201 +0000] I [MSGID: 106004] [glusterd-handler.c:6427:__glusterd_peer_rpc_notify] 0-management: Peer <nexus2> (<00000000-0000-0000-0000-000000000000>), in state <Establishing Connection>, has disconnected from glusterd.
[2021-07-17 00:31:05.377453 +0000] E [MSGID: 101032] [store.c:464:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
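
For reference, family=10 in the getaddrinfo errors above is AF_INET6, so the failing lookups appear to be IPv6 ones. A minimal way to check, outside of gluster, how a peer name resolves per address family, using nexus2 from the log as the example host:

# IPv4 resolution (A records / /etc/hosts entries)
getent ahostsv4 nexus2
# IPv6 resolution (AAAA records); no output and a non-zero exit status here
# would be consistent with the getaddrinfo failures in the log above
getent ahostsv6 nexus2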
I then wiped the `/var/lib/glusterd` dir to start clean, downgraded to 7.9, and attempted the peer probe again. This time it worked fine, confirming that 7.9 works on the new machines, same as it does on prod.

At this point, I made a volume, started it, and tested it to my satisfaction. Then I decided to see what would happen if I upgraded this working volume from 7.9 to 9.1.

The end result is:

- `gluster volume status` only shows the local gluster node and not any of the remote nodes
- data does seem to replicate, so the connection between the servers is actually established
- logs are now filled with constantly repeating messages like these:

[2021-07-22 23:29:31.039004 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host nexus2
[2021-07-22 23:29:31.039212 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host citadel
[2021-07-22 23:29:31.039304 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host hive
The message "E [MSGID: 101075] [common-utils.c:520:gf_resolve_ip6] 0-resolver: error in getaddrinfo [{family=10}, {ret=Name or service not known}]" repeated 119 times between [2021-07-22 23:27:34.025983 +0000] and [2021-07-22 23:29:31.039302 +0000]
[2021-07-22 23:29:34.039369 +0000] E [MSGID: 101075] [common-utils.c:520:gf_resolve_ip6] 0-resolver: error in getaddrinfo [{family=10}, {ret=Name or service not known}]
[2021-07-22 23:29:34.039441 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host nexus2
[2021-07-22 23:29:34.039558 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host citadel
[2021-07-22 23:29:34.039659 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host hive
[2021-07-22 23:29:37.039741 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host nexus2
[2021-07-22 23:29:37.039921 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host citadel
[2021-07-22 23:29:37.040015 +0000] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host hive
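
These repeating failures are tagged 0-management, i.e. they appear to come from glusterd itself, while transport.address-family is already pinned to inet at the volume level (see the volume info below). One thing worth checking (a sketch, assuming the stock /etc/glusterfs/glusterd.vol layout) is whether glusterd itself has an address family configured:

# show any address-family option glusterd itself is configured with
grep -n 'address-family' /etc/glusterfs/glusterd.vol
# if nothing is set, one experiment is to add the following line inside the
# "volume management" block and then restart glusterd:
#   option transport.address-family inet
systemctl restart glusterd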
When I issue a command in the CLI:

==> cli.log <==
[2021-07-22 23:38:11.802596 +0000] I [cli.c:840:main] 0-cli: Started running gluster with version 9.1
**[2021-07-22 23:38:11.804007 +0000] W [socket.c:3434:socket_connect] 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Operation not supported"**
[2021-07-22 23:38:11.906865 +0000] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}]
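
IPV6_V6ONLY is an IPv6-only socket option, so the warning above hints at an address-family mismatch somewhere in the transport setup. Two generic, non-gluster checks of the host's IPv6 state that might help narrow it down:

# 1 means IPv6 is disabled via sysctl
sysctl net.ipv6.conf.all.disable_ipv6
# list whatever IPv6 addresses the interfaces actually have
ip -6 addr show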
**Mandatory info:**

**- The output of the `gluster volume info` command:**

gluster volume info
 
Volume Name: ap
Type: Replicate
Volume ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: nexus2:/mnt/nexus2_block1/ap
Brick2: forge:/mnt/forge_block1/ap
Brick3: hive:/mnt/hive_block1/ap
Brick4: citadel:/mnt/citadel_block1/ap
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
cluster.self-heal-daemon: enable
client.event-threads: 4
cluster.data-self-heal-algorithm: full
cluster.lookup-optimize: on
cluster.quorum-count: 1
cluster.quorum-type: fixed
cluster.readdir-optimize: on
cluster.heal-timeout: 1800
disperse.eager-lock: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
network.inode-lru-limit: 500000
network.ping-timeout: 7
network.remote-dio: enable
performance.cache-invalidation: on
performance.cache-size: 1GB
performance.io-thread-count: 4
performance.md-cache-timeout: 600
performance.rda-cache-limit: 256MB
performance.read-ahead: off
performance.readdir-ahead: on
performance.stat-prefetch: on
performance.write-behind-window-size: 32MB
server.event-threads: 4
cluster.background-self-heal-count: 1
performance.cache-refresh-timeout: 10
features.ctime: off
cluster.granular-entry-heal: enable
**- The output of the `gluster volume status` command:**

gluster volume status
Status of volume: ap
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick forge:/mnt/forge_block1/ap            49152     0          Y       2622 
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
 
Task Status of Volume ap
------------------------------------------------------------------------------
There are no active volume tasks
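
Since the status output above only lists the local brick, two standard companion commands that show the management-layer view of the peers (output will of course differ per cluster):

gluster peer status
gluster pool list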
**- The output of the `gluster volume heal` command:**

gluster volume heal ap enable
Enable heal on volume ap has been successful 
gluster volume heal ap
Launching heal operation to perform index self heal on volume ap has been unsuccessful:
Self-heal daemon is not running. Check self-heal daemon log file.
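
The message points at the self-heal daemon, so two quick checks of whether glustershd is actually running and what it last logged (assuming the default log location):

# is a glustershd process up on this node?
ps aux | grep -i glustershd
# last entries from the self-heal daemon log
tail -n 50 /var/log/glusterfs/glustershd.log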
**- The operating system / glusterfs version:**
OpenSUSE 15.2, glusterfs 9.1.

Sincerely,
Artem

--
Founder, Android Police (http://www.androidpolice.com), APK Mirror (http://www.apkmirror.com/), Illogical Robot LLC
beerpla.net | @ArtemR
________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users