[Gluster-users] Replacing Failed Server Failing

Strahil Nikolov hunter86_bg at yahoo.com
Mon Jan 1 14:21:30 UTC 2024


Hi,
I think you might be mixing the two approaches. Basically you have 2 options:

1. Create a brand new system, use a different hostname and then add it to
   the TSP (Trusted Storage Pool). Then you need to remove the bricks
   (server + directory combination) owned by the previous system and then
   add the new bricks (see the first sketch below).

2. Use the same hostname as the old system and restore the gluster
   directories from backup (both the one in '/etc' and the one in
   '/var/lib'). If your gluster storage was also affected, you will need
   to recover the bricks from backup, or remove the old ones from the
   volume and recreate them (see the second sketch below).
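For option 1 the rough sequence looks something like this (only a sketch -
'gv0', 'node4' and the brick path are placeholders for a 3-way replica
volume, adjust them to your actual layout):

    # Add the freshly built server (new hostname) to the pool:
    gluster peer probe node4

    # Swap the dead brick for the new, empty one in a single step ...
    gluster volume replace-brick gv0 node1:/data/brick1/gv0 \
        node4:/data/brick1/gv0 commit force

    # ... or do it in two steps instead:
    # gluster volume remove-brick gv0 replica 2 node1:/data/brick1/gv0 force
    # gluster volume add-brick gv0 replica 3 node4:/data/brick1/gv0

    # Let self-heal repopulate the new brick:
    gluster volume heal gv0 full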
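For option 2 (same hostname/IP, which seems to be your case), the usual
outline is roughly the following - again just a sketch: <OLD-UUID> is the
UUID the surviving peers still have on record for the dead node, and the
operating-version must match the value on node2/node3:

    # On node2: find the UUID the pool still remembers for node1
    grep -r uuid /var/lib/glusterd/peers/      # or: gluster peer status

    # On the rebuilt node1: reuse that UUID instead of the freshly generated one
    systemctl stop glusterd
    cat > /var/lib/glusterd/glusterd.info <<EOF
    UUID=<OLD-UUID>
    operating-version=<same value as in node2's glusterd.info>
    EOF
    systemctl start glusterd

    # Re-peer and pull the volume definitions from a healthy node
    gluster peer probe node2
    systemctl restart glusterd
    gluster volume sync node2 all

    # Once 'gluster volume info' shows the volumes, heal the bricks
    gluster volume heal <VOLNAME> full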
Can you describe what you have done so far (logically)?
Best Regards,
Strahil Nikolov
 
On Mon, Jan 1, 2024 at 6:59, duluxoz <duluxoz at gmail.com> wrote:

Hi All (and Happy New Year),

We had to replace one of our Gluster Servers in our Trusted Pool this 
week (node1).

The new server is now built, with empty folders for the bricks, peered 
to the old Nodes (node2 & node3).

We basically followed this guide: 
https://docs.rackspace.com/docs/recover-from-a-failed-server-in-a-glusterfs-array

We are using the same/old IP address.

So when we try to do a `gluster volume sync node2 all`, we get `volume
sync node2 all : FAILED : Staging failed on node2. Please check log file
for details.`

The logs all *seem* to be complaining that there are no volumes on node1
- which makes sense (I think) because there *are* no volumes on node1.

If we try to create a volume on node1, the system complains that the
volume already exists (on nodes 2 & 3) - again, yes, this is correct.
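
(For anyone following along, node1's current view can be double-checked
with the standard commands - /var/lib/glusterd is the stock config
location:)

    gluster peer status          # node2 and node3 show up as connected
    gluster volume info          # returns no volumes on node1
    ls /var/lib/glusterd/vols/   # empty - node1 never received the volume definitions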

So, what are we doing wrong?

Thanks in advance

Dulux-Oz

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
  

