[Gluster-users] Upgrade 10.4 -> 11.1 making problems

Hu Bert revirii at googlemail.com
Mon Jan 15 08:16:45 UTC 2024

I just upgraded some gluster servers from version 10.4 to version 11.1
(Debian bullseye & bookworm). When only installing the packages,
everything is good: servers, volumes etc. work as expected.

But one needs to test whether the systems still work after a daemon
and/or server restart. Well, I did a reboot, and after that the
rebooted/restarted system is "out". Log messages from a working node:

[2024-01-15 08:02:21.585694 +0000] I [MSGID: 106163]
0-management: using the op-version 100000
[2024-01-15 08:02:21.589601 +0000] I [MSGID: 106490]
0-glusterd: Received probe from uuid:
[2024-01-15 08:02:23.608349 +0000] E [MSGID: 106010]
[glusterd-utils.c:3824:glusterd_compare_friend_volume] 0-management:
Version of Cksums sourceimages differ. local cksum = 2204642525,
remote cksum = 1931483801 on peer gluster190
[2024-01-15 08:02:23.608584 +0000] I [MSGID: 106493]
[glusterd-handler.c:3819:glusterd_xfer_friend_add_resp] 0-glusterd:
Responded to gluster190 (0), ret: 0, op_ret: -1
[2024-01-15 08:02:23.613553 +0000] I [MSGID: 106493]
[glusterd-rpc-ops.c:467:__glusterd_friend_add_cbk] 0-glusterd:
Received RJT from uuid: b71401c3-512a-47cb-ac18-473c4ba7776e, host:
gluster190, port: 0
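The rejection above is triggered by the cksum mismatch on the
sourceimages volume. A way to see what actually differs (a sketch,
assuming glusterd's usual on-disk layout, where each volume keeps an
`info` and a `cksum` file under /var/lib/glusterd/vols/<volname>/, and
assuming passwordless ssh between the nodes) is to compare those files
between the rejected node and a healthy peer:

```shell
# On a healthy node (e.g. gluster189): compare the volume definition
# with the one on the rejected node gluster190.
diff /var/lib/glusterd/vols/sourceimages/info \
     <(ssh gluster190 cat /var/lib/glusterd/vols/sourceimages/info)

# The stored checksums that glusterd compares during the friend handshake:
cat /var/lib/glusterd/vols/sourceimages/cksum
ssh gluster190 cat /var/lib/glusterd/vols/sourceimages/cksum
```

Any line that differs in `info` (often an op-version or option line
after an upgrade) would explain the diverging checksums in the log.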

peer status from rebooted node:

root at gluster190 ~ # gluster peer status
Number of Peers: 2

Hostname: gluster189
Uuid: 50dc8288-aa49-4ea8-9c6c-9a9a926c67a7
State: Peer Rejected (Connected)

Hostname: gluster188
Uuid: e15a33fe-e2f7-47cf-ac53-a3b34136555d
State: Peer Rejected (Connected)

So the rebooted gluster190 is no longer accepted, and thus does not
appear in "gluster volume status". I then followed this guide:


Remove everything under /var/lib/glusterd/ (except glusterd.info),
restart the glusterd service, etc. Data gets copied from the other
nodes and 'gluster peer status' is ok again - but the volume info is
missing: /var/lib/glusterd/vols is empty. After syncing this directory
from another node, the volume is available again, heals start etc.
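The steps above can be sketched roughly as follows (a sketch of what I
did, not an official procedure; hostnames and the use of rsync over ssh
are from my setup and may need adjusting):

```shell
# On the rejected node (gluster190):
systemctl stop glusterd

# Keep glusterd.info (it holds this node's UUID); wipe the rest of the
# glusterd state directory.
cd /var/lib/glusterd
find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +

systemctl start glusterd
gluster peer status              # peers reconnect, state is ok again

# But /var/lib/glusterd/vols stays empty, so copy the volume
# definitions from a healthy node (here gluster189) and restart:
rsync -a gluster189:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
systemctl restart glusterd
gluster volume status            # volume is back, heals start
```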

Well, and just to be sure that everything works as it should, I
rebooted that node again - the rebooted node is kicked out again, and
you have to repeat the whole procedure to bring it back.

Sorry, but did I miss anything? Has someone experienced similar
problems? I'll probably downgrade to 10.4 again, that version was

