[Gluster-users] Reinstall OS while keeping bricks intact

Prasun Gera prasun.gera at gmail.com
Thu Jul 30 08:58:59 UTC 2015

One of my nodes in an RHS 3.0 3x2 dist+replicated pool is down and not
likely to recover. The machine doesn't have IPMI and I have limited access
to it. Standard recovery steps didn't work, and at this point the easiest
option seems to be to get help reinstalling the OS. I believe that the
brick and the other config files are intact. From the RHS documentation on
upgrading from an ISO, this is what I put together (rough command sketch
after the list):

1. Back up /var/lib/glusterd, /etc/swift, /etc/samba, /etc/ctdb,
/etc/glusterfs, /var/lib/samba and /var/lib/ctdb. Back up the entire /etc
for selective restoration.

2. Stop the volume and all services everywhere. Install the OS on the
affected node without touching the brick, and stop glusterd on this node
too once it comes up.

3. Back up /var/lib/glusterd from the newly installed OS.

4. Copy back /var/lib/glusterd and /etc/glusterfs from step 1 to the newly
installed OS.

5. Copy back the latest hook scripts (from step 3) to
/var/lib/glusterd/hooks. This is probably not required, since the steps
were written for an upgrade whereas my version stays the same. Right?

6. Run glusterd --xlator-option *.upgrade=yes -N. Is this needed in my
case? It's not an upgrade.

7. Restart services and volume.
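
To be concrete, here's roughly what I plan to run. VOLNAME and the backup
tarball path are placeholders for my setup, and I'm assuming SysV service
commands since RHS 3.0 is RHEL 6 based:

# Step 1: back up the config from the affected node (from a rescue
# environment, since it won't boot). /etc covers /etc/glusterfs,
# /etc/swift, /etc/samba and /etc/ctdb:
tar czf /root/rhs-config-backup.tar.gz \
    /var/lib/glusterd /var/lib/samba /var/lib/ctdb /etc

# Step 2: from a working node, stop the volume, then stop glusterd
# on every node:
gluster volume stop VOLNAME
service glusterd stop

# Steps 3-5: after reinstalling the OS on the affected node (leaving
# the brick device untouched):
service glusterd stop
cp -a /var/lib/glusterd /root/glusterd.fresh          # step 3
# GNU tar stored the members without the leading "/", so extract the
# old config relative to / (step 4):
tar xzf /root/rhs-config-backup.tar.gz -C / \
    var/lib/glusterd etc/glusterfs
cp -a /root/glusterd.fresh/hooks/. /var/lib/glusterd/hooks/   # step 5

# Step 6, if it's needed at all; regenerates the volfiles and exits.
# Quoted so the shell doesn't expand the "*":
glusterd --xlator-option '*.upgrade=yes' -N

# Step 7:
service glusterd start
gluster volume start VOLNAME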

Do these steps sound all right? Should I also restore /etc/nagios? Or
would Nagios have to be reconfigured for the entire cluster?

The reason for this failure was a botched kernel upgrade combined with
some other factors I'm not sure about yet. I also wasn't able to generate
a working initramfs using dracut in recovery. Interestingly, I noticed the
following in the new RHS 3.1 documentation: "If dracut packages are
previously installed, then exclude the dracut packages while updating to
Red Hat Gluster Storage 3.1 during offline ISO update using the following:

# yum update -x dracut -x dracut-kernel"

Is there some sort of known issue here?