[Gluster-users] replacing brick failed

Strahil Nikolov hunter86_bg at yahoo.com
Thu May 26 18:52:48 UTC 2022


I would install Gluster v6 on the 3rd node, join it and wait for the heal to finish (don't forget to trigger a full heal).
Once you have a replica 3, upgrade the nodes one at a time, following a rolling update approach.
Keep in mind that you have to check the release notes, as some options were deprecated.
I would go 6 -> 7 -> 8 -> 9 -> 10, as this is the most tested scenario, but reading the release notes can help you identify which versions you can skip.
According to https://docs.gluster.org/en/main/Upgrade-Guide/upgrade-to-9/ an upgrade from 6.Latest to 9.x should be possible, but without debug logs it's hard to tell why the add-brick failed.
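Roughly, the rejoin-and-heal part could look like this (a sketch, assuming the hostnames, volume name and brick path from your mail; how you install the 6.x packages depends on the repository you use):

# on c3: install the same Gluster 6.x packages the other nodes run, then start glusterd
systemctl enable --now glusterd

# from c1 or c2: join the peer again and re-add the brick as replica 3
gluster peer probe c3
gluster volume add-brick gv1 replica 3 c3:/gluster/brick

# trigger a full heal and wait until the pending entries drain to 0
gluster volume heal gv1 full
gluster volume heal gv1 info summary

# then, for each node in turn (rolling upgrade, per the linked upgrade guide):
systemctl stop glusterd                 # also stop any remaining gluster processes
# ... upgrade the glusterfs packages to the next major version here ...
systemctl start glusterd
gluster volume heal gv1 info summary    # wait for the heal to finish before the next node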

Best Regards,
Strahil Nikolov
 
On Wed, May 25, 2022 at 9:58, Stefan Kania <stefan at kania-online.de> wrote:

Hello,

we have a Gluster volume (replica 3) running Gluster 6 on Ubuntu 18.04. We
removed one node and detached the peer:
gluster v remove-brick gv1 replica 2 c3:/gluster/brick force
gluster peer detach c3

We installed Ubuntu 20.04 and Gluster 9 on the removed node.
We then had a volume with two nodes up and running. We replaced the HDDs
with SSDs and reformatted the disks with XFS. Then we ran:
gluster peer probe c3

to add the node back. That works: "gluster peer status" and "gluster pool
list" both show the new peer. Trying to add the brick with:
gluster v add-brick gv1 replica 3 c3:/gluster/brick

gives us the following error:
volume add-brick: failed: Pre Validation failed on c3. Please check log
file for details.
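(The details behind that message usually land in glusterd's own log on the rejected peer rather than in the volume logs; assuming the default Debian/Ubuntu log location, something like this on c3 should show them:)

# on c3: glusterd's log normally contains the real reason for the pre-validation failure
grep -iE 'add-brick|pre.?valid|stage' /var/log/glusterfs/glusterd.log | tail -n 50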

But we only found the same error message in the log. Then we checked the
status of the volume and got:

gluster v status gv1
Staging failed on c3. Error: Volume gv1 does not exist

So the volume status is broken. A "gluster v info" shows:
root@fs002010:~# gluster v info

Volume Name: gv1
Type: Replicate
Volume ID: b93f1432-083b-42c1-870a-1e9faebc4a7d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: c1:/gluster/brick
Brick2: c2:/gluster/brick
Options Reconfigured:
nfs.disable: on

As soon as we detach c3, the volume status is OK again, without restarting glusterd.

What might be the problem? Adding a Gluster 9 node to a Gluster 6 volume? Name
resolution uses /etc/hosts; all 3 hosts are listed in all 3 /etc/hosts files
and can be reached via ping.
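(For reference, whether the remaining 6.x nodes and the new 9.x node can agree on an operating version can be checked from any node; cluster.max-op-version shows the highest op-version all connected peers support:)

# run on any node: current cluster-wide op-version vs. the highest one all peers support
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version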

Any help?

Stefan
