[Gluster-users] Gluster 11.0 upgrade

Xavi Hernandez jahernan at redhat.com
Mon Feb 20 06:29:20 UTC 2023


Hi Marcus,

These errors shouldn't prevent the bricks from starting. Is there any
other error or warning?
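
The io_uring_setup() ENOMEM is usually a resource limit rather than real
memory pressure: on kernels older than 5.12 (bullseye's stock kernel is
5.10) the io_uring rings are charged against RLIMIT_MEMLOCK, so a low
LimitMEMLOCK on the service can make the call fail even with plenty of
free memory. A rough check on the arbiter could look like the following
(unit name and paths are the Debian defaults, adjust if yours differ):

  uname -r                                          # pre-5.12 kernels charge io_uring against memlock
  ulimit -l                                         # locked-memory limit in the current shell
  systemctl show glusterd --property=LimitMEMLOCK   # limit inherited by glusterd and the bricks
  grep -i 'locked memory' /proc/$(pidof glusterd)/limits

If that limit turns out to be very low, a systemd drop-in with
LimitMEMLOCK=infinity and a restart of glusterd would confirm whether it
is the cause. Either way, glusterfsd is expected to fall back to the
normal I/O path when io_uring cannot be initialised, so this alone
should not stop the brick.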

Regards,

Xavi

On Fri, Feb 17, 2023 at 3:06 PM Marcus Pedersén <marcus.pedersen at slu.se>
wrote:

> Hi all,
> I started an upgrade to gluster 11.0 from 10.3 on one of my clusters.
> OS: Debian bullseye
>
> Volume Name: gds-common
> Type: Replicate
> Volume ID: 42c9fa00-2d57-4a58-b5ae-c98c349cfcb6
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: urd-gds-031:/urd-gds/gds-common
> Brick2: urd-gds-032:/urd-gds/gds-common
> Brick3: urd-gds-030:/urd-gds/gds-common (arbiter)
> Options Reconfigured:
> cluster.granular-entry-heal: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
> I started with the arbiter node: stopped all of gluster,
> upgraded to 11.0, and everything went fine.
> After the upgrade I could see the other nodes and
> all nodes were connected.
> But after a reboot of the arbiter nothing works the way it should.
> Both brick1 and brick2 have a connection, but there is
> no connection with the arbiter.
> On the arbiter glusterd has started and is listening on port 24007;
> the problem seems to be glusterfsd, which never starts!
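>
> Is a forced start of the volume supposed to respawn a missing brick
> process in a situation like this? Something like (just a guess on my
> part):
>
>   gluster volume start gds-common force
>   tail -f /var/log/glusterfs/bricks/urd-gds-gds-common.log
>
> Or could that make things worse on a half-upgraded cluster?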
>
> If I run: gluster volume status
>
> Status of volume: gds-common
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick urd-gds-030:/urd-gds/gds-common       N/A       N/A        N       N/A
> Self-heal Daemon on localhost               N/A       N/A        N       N/A
>
> Task Status of Volume gds-common
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> In glusterd.log I find the following errors (arbiter node):
> [2023-02-17 12:30:40.519585 +0000] E [gf-io-uring.c:404:gf_io_uring_setup] 0-io: [MSGID:101240] Function call failed <{function=io_uring_setup()}, {error=12 (Cannot allocate memory)}>
> [2023-02-17 12:30:40.678031 +0000] E [MSGID: 106061] [glusterd.c:597:glusterd_crt_georep_folders] 0-glusterd: Dict get failed [{Key=log-group}, {errno=2}, {error=No such file or directory}]
>
> In brick/urd-gds-gds-common.log I find the following error:
> [2023-02-17 12:30:43.550753 +0000] E [gf-io-uring.c:404:gf_io_uring_setup] 0-io: [MSGID:101240] Function call failed <{function=io_uring_setup()}, {error=12 (Cannot allocate memory)}>
>
> I enclose both logfiles.
>
> How do I resolve this issue??
>
> Many thanks in advance!!
>
> Marcus

