[Gluster-users] 2 issues after upgrade 9.4 -> 10.1
Hu Bert
revirii at googlemail.com
Tue Mar 1 10:59:34 UTC 2022
Hi Nikhil,
there are only 2 replica 3 volumes, and yes, I did the online
upgrade - I've done it a couple of times now, and it has always worked well :-)
I should've been more precise: I did the online upgrade on all 6 servers;
on 5 servers the ports stayed the same, and only on one server did the
ports change - without a reboot. At first I thought something might have
gone wrong and rebooted that one server, but the ports stayed different.
So the ports were already different directly after the online upgrade on
that one server. But since ports don't matter anymore: issue solved :-)
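
For anyone checking this on their own setup: something like the following
should show the brick/port pairs and what the brick processes actually
listen on (just a sketch, assuming a volume named workdata as in our case):

gluster volume status workdata | awk '/^Brick/ {print $2, $3}'
ss -tlnp | grep glusterfsd    # run on each server to cross-check the listening ports
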
And the mount issue is solved as well - excellent! Thank you!
Best regards,
Hubert
On Tue, 1 Mar 2022 at 11:40, Nikhil Ladha <nladha at redhat.com> wrote:
>
> Hi Hu Bert,
>
> Do you have a distributed-replicate volume, and did you follow the online upgrade procedure? If so, that is the reason the ports are different on only one server, as you mentioned you did a reboot on it.
> Secondly, the glusterfs mount issue is already fixed (see patch #3211); the fix will be available in the next release.
>
> --
> Thanks and Regards,
> Nikhil Ladha
>
>
> On Tue, Mar 1, 2022 at 3:10 PM Hu Bert <revirii at googlemail.com> wrote:
>>
>> Hey,
>>
>> OK, I think I found the reason for the port issue:
>>
>> https://docs.gluster.org/en/latest/release-notes/10.0/
>> https://github.com/gluster/glusterfs/issues/786
>>
>> Should've looked closer... mea culpa. But, interestingly enough, that
>> happened on only one server; I upgraded 6 servers in total.
>>
>> So only the issue with the glusterfs mount and backup-volfile-servers remains.
>>
>>
>> Thx,
>> Hubert
>>
>> On Tue, 1 Mar 2022 at 06:19, Hu Bert <revirii at googlemail.com> wrote:
>> >
>> > Good morning,
>> >
>> > I just did an upgrade of 3 gluster volumes and x clients from 9.4 to
>> > 10.1. In principle the upgrade went fine; just 2 minor issues
>> > appeared.
>> >
>> > 1) On one of the servers the ports are screwed up.
>> >
>> > gluster volume status
>> > Status of volume: workdata
>> > Gluster process TCP Port RDMA Port Online Pid
>> > ------------------------------------------------------------------------------
>> > Brick glusterpub1:/gluster/md3/workdata 49152 0 Y 1452
>> > Brick glusterpub2:/gluster/md3/workdata 49152 0 Y 1839
>> > Brick glusterpub3:/gluster/md3/workdata 54105 0 Y 1974
>> > Brick glusterpub1:/gluster/md4/workdata 49153 0 Y 1459
>> > Brick glusterpub2:/gluster/md4/workdata 49153 0 Y 1849
>> > Brick glusterpub3:/gluster/md4/workdata 58177 0 Y 1997
>> > Brick glusterpub1:/gluster/md5/workdata 49154 0 Y 1468
>> > Brick glusterpub2:/gluster/md5/workdata 49154 0 Y 1857
>> > Brick glusterpub3:/gluster/md5/workdata 59071 0 Y 2003
>> > Brick glusterpub1:/gluster/md6/workdata 49155 0 Y 1481
>> > Brick glusterpub2:/gluster/md6/workdata 49155 0 Y 1868
>> > Brick glusterpub3:/gluster/md6/workdata 53309 0 Y 2008
>> > Brick glusterpub1:/gluster/md7/workdata 49156 0 Y 1490
>> > Brick glusterpub2:/gluster/md7/workdata 49156 0 Y 1878
>> > Brick glusterpub3:/gluster/md7/workdata 54310 0 Y 2027
>> > Self-heal Daemon on localhost N/A N/A Y 2108
>> > Self-heal Daemon on glusterpub1 N/A N/A Y 1210749
>> > Self-heal Daemon on glusterpub2 N/A N/A Y 950871
>> >
>> > Task Status of Volume workdata
>> > ------------------------------------------------------------------------------
>> > There are no active volume tasks
>> >
>> > glusterpub3 has different ports. I know this is not a problem - the
>> > volume is fine - but even after a reboot the ports stay like this.
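>> >
>> > Just to rule out a connectivity problem, something like this should
>> > confirm that the bricks really listen on (and are reachable at) the
>> > new ports - only a sketch, host/port taken from the status output
>> > above, nc being the OpenBSD netcat:
>> >
>> > ss -tlnp | grep glusterfsd     # on glusterpub3: ports the brick processes listen on
>> > nc -zv glusterpub3 54105       # from a client: is the non-standard port reachable?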
>> >
>> > glustershd.log:
>> > [2022-03-01 04:58:13.993349 +0000] I
>> > [rpc-clnt.c:1969:rpc_clnt_reconfig] 0-workdata-client-0: changing port
>> > to 49152 (from 0)
>> > [2022-03-01 04:58:13.993410 +0000] I [socket.c:834:__socket_shutdown]
>> > 0-workdata-client-0: intentional socket shutdown(13)
>> > [............]
>> > [2022-03-01 04:58:14.008111 +0000] I
>> > [rpc-clnt.c:1969:rpc_clnt_reconfig] 0-workdata-client-1: changing port
>> > to 49152 (from 0)
>> > [2022-03-01 04:58:14.008148 +0000] I [socket.c:834:__socket_shutdown]
>> > 0-workdata-client-1: intentional socket shutdown(14)
>> > [............]
>> > [2022-03-01 04:58:14.011416 +0000] I
>> > [rpc-clnt.c:1969:rpc_clnt_reconfig] 0-workdata-client-2: changing port
>> > to 54105 (from 0)
>> > [2022-03-01 04:58:14.011469 +0000] I [socket.c:834:__socket_shutdown]
>> > 0-workdata-client-2: intentional socket shutdown(13)
>> >
>> > The same goes for the other 4 bricks. There are probably more related
>> > messages, but I'm unsure which ones to copy and paste. And there are
>> > some error messages like these (they appear on all servers):
>> >
>> > [2022-03-01 04:58:14.012523 +0000] E
>> > [rpc-clnt.c:331:saved_frames_unwind] (-->
>> > /lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x195)[0x7f4cec48c2a5]
>> > (--> /lib/x86_64-linux-gnu/libgfrpc.so.0(+0x729c)[0x7f4cec42529c] (-->
>> > /lib/
>> > x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x10f)[0x7f4cec42d20f]
>> > (--> /lib/x86_64-linux-gnu/libgfrpc.so.0(+0x10118)[0x7f4cec42e118]
>> > (--> /lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7f4cec429646]
>> > )))
>> > )) 0-workdata-client-5: forced unwinding frame type(GF-DUMP)
>> > op(DUMP(1)) called at 2022-03-01 04:58:14.011943 +0000 (xid=0x5)
>> >
>> > Very strange.
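>> >
>> > In case someone wants the full picture, the related entries can
>> > probably be pulled out with something like this (just a sketch; the
>> > log path may differ per distro):
>> >
>> > grep -E 'rpc_clnt_reconfig|socket_shutdown|saved_frames_unwind' /var/log/glusterfs/glustershd.log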
>> >
>> > 2) When mounting on the clients (after the upgrade):
>> >
>> > /sbin/mount.glusterfs: 90: [: glusterpub2 glusterpub3 SyntaxOK:
>> > unexpected operator
>> > /sbin/mount.glusterfs: 366: [: SyntaxOK: unexpected operator
>> >
>> > Syntax OK, but an unexpected operator? Has the mount syntax changed?
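>> >
>> > For what it's worth, that kind of message is usually what a POSIX
>> > shell like dash prints when an unquoted variable expands to several
>> > words inside a test - purely illustrative, not the actual
>> > mount.glusterfs code:
>> >
>> > #!/bin/sh
>> > servers="glusterpub2 glusterpub3"
>> > [ $servers = "x" ]      # unquoted: test sees three words -> "unexpected operator" in dash
>> > [ "$servers" = "x" ]    # quoted: compares the whole string, no error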
>> >
>> > glusterpub1:/workdata /data/repository/shared/public glusterfs
>> > defaults,_netdev,attribute-timeout=0,entry-timeout=0,backup-volfile-servers=glusterpub2:glusterpub3
>> > 0 0
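>> >
>> > A manual mount with the same options should show whether
>> > backup-volfile-servers is still honoured despite the warnings (a
>> > sketch; the mountpoint /mnt/test is just an example):
>> >
>> > mount -t glusterfs -o attribute-timeout=0,entry-timeout=0,backup-volfile-servers=glusterpub2:glusterpub3 glusterpub1:/workdata /mnt/test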
>> >
>> >
>> > thx,
>> > Hubert