[Gluster-users] Return previously broken server to gluster cluster
Marcus Pedersén
marcus.pedersen at slu.se
Tue Nov 3 21:00:14 UTC 2020
Hello all,
I have a gluster cluster like this:
Volume Name: gds-home
Type: Replicate
Volume ID: 3d9d7182-47a8-43ac-8cd1-6a090bb4b8b9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: urd-gds-021:/urd-gds/gds-home
Brick2: urd-gds-022:/urd-gds/gds-home
Brick3: urd-gds-020:/urd-gds/gds-home (arbiter)
Options Reconfigured:
features.barrier: disable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Bricks 1 and 2 are both configured the same way.
They each have a separate OS disk, and the rest of the disks are all in one RAID.
On top of this a thin LVM is created, and the gluster brick lies on the LVM.
On brick1 the backplane to the disks crashed and the OS disk crashed;
this has been fixed and I have managed to recreate the RAID and the LVM,
so all data on the brick is intact.
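Just to be explicit about what I mean by "intact": my plan is to sanity-check the recreated
brick by confirming the gluster extended attributes are still there on the brick root,
something along these lines (just my own guess at a check):

getfattr -n trusted.glusterfs.volume-id -e hex /urd-gds/gds-home
getfattr -d -m . -e hex /urd-gds/gds-home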
The peer is still disconnected.
How do I reconfigure brick1 to be a part of the gluster cluster again?
I assume that when you do peer probe and volume create, config
data is written to the OS disk.
Guessing that gluster peer probe urd-gds-021 does not work, as it is
already configured.
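If I understand it right, glusterd keeps its peer and volume state under /var/lib/glusterd
on the OS disk, so that is what was lost on urd-gds-021. On one of the healthy nodes I was
planning to look at something like this before doing anything (paths from my reading of the docs):

gluster peer status
ls /var/lib/glusterd/peers /var/lib/glusterd/vols
cat /var/lib/glusterd/glusterd.info   # node UUID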
Do I do the following:
gluster peer detach urd-gds-021
gluster peer probe urd-gds-021
gluster volume replace-brick gds-home urd-gds-021:/brick urd-gds-021:/brick
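If I read the documentation correctly, replace-brick nowadays only supports the commit force
form, so I guess that last line would actually need to be something like the following
(brick paths just placeholders, as above):

gluster volume replace-brick gds-home urd-gds-021:/brick urd-gds-021:/brick commit force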
I just want to be sure before I enter any commands so I do not destroy
instead of repairing.
Many thanks in advance!!
Best regards
Marcus