[Gluster-users] Suggested method for replacing an entire node

Gene Liverman gliverma at westga.edu
Thu Oct 8 16:22:05 UTC 2015


 Thanks for all the replies! Just to make sure I have this right, the
following should work for *both* machines with a currently populated brick
and machines without one, as long as the name and IP stay the same:

   - reinstall os
   - reinstall gluster software
   - start gluster

Do I need to do any peer probing or anything else? Do I need to do any
brick removal / adding (I'm thinking no but want to make sure)?
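
The replies below suggest this is enough. As a rough, hedged sketch of that
flow on a CentOS 7 rebuild (the repo/package names and the volume name "gv0"
are assumptions for illustration, not details from this thread):

```shell
# Hedged sketch of the per-node replacement flow discussed in this thread.
# Assumes the hostname and IP stay the same and the brick drives survive.

# 1. Reinstall the OS, leaving the XFS brick filesystems untouched.

# 2. Reinstall the same version of the Gluster packages.
yum install -y centos-release-gluster   # CentOS Storage SIG repo (assumed)
yum install -y glusterfs-server

# 3. Start glusterd; peers should handshake and resync the configuration.
systemctl enable glusterd
systemctl start glusterd

# 4. Verify from any node that the cluster and volume look healthy again.
gluster peer status
gluster volume status gv0               # "gv0" is a placeholder volume name
```

These commands need a live trusted pool, so treat them as a checklist to
adapt rather than a script to run verbatim.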




Thanks,
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
gliverma at westga.edu

ITS: Making Technology Work for You!



On Thu, Oct 8, 2015 at 9:52 AM, Alastair Neil <ajneil.tech at gmail.com> wrote:

> Ahh that is good to know.
>
> On 8 October 2015 at 09:50, Atin Mukherjee <atin.mukherjee83 at gmail.com>
> wrote:
>
>> -Atin
>> Sent from one plus one
>> On Oct 8, 2015 7:17 PM, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
>> >
>> > I think you should back up /var/lib/glusterd and then restore it after
>> > the reinstall and installation of the glusterfs packages. Assuming the
>> > node will have the same hostname and IP addresses and you are installing
>> > the same version of the Gluster bits, I think it should be fine. I am
>> > assuming you are not using SSL for the connections; if you are, you will
>> > need to back up the keys for that too.
>> If the same machine is used without a hostname/IP change, backing up the
>> glusterd configuration *is not* needed, as syncing the configuration will
>> be taken care of by peer handshaking.
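
Even so, a belt-and-braces backup before the wipe is cheap. A minimal
sketch, assuming default paths (the SSL file locations shown are the stock
glusterfs defaults, not something stated in this thread; verify on your
own nodes):

```shell
# Back up glusterd state before reinstalling, as suggested above, even
# though peer handshaking should resync it when hostname/IP are unchanged.
tar czf /root/glusterd-backup.tar.gz /var/lib/glusterd

# If SSL is enabled for connections, save the key material too; these are
# the default locations glusterfs looks in (assumed):
tar czf /root/gluster-ssl-backup.tar.gz \
    /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.key /etc/ssl/glusterfs.ca
```

Copy the tarballs somewhere off the node before the OS reinstall.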
>>
>> >
>> > -Alastair
>> >
>> > On 8 October 2015 at 00:12, Atin Mukherjee <amukherj at redhat.com> wrote:
>> >>
>> >>
>> >>
>> >> On 10/07/2015 10:28 PM, Gene Liverman wrote:
>> >> > I want to replace my existing CentOS 6 nodes with CentOS 7 ones. Is
>> >> > there a recommended way to go about this from the perspective of
>> >> > Gluster? I am running a 3-node replicated cluster (3 servers, each
>> >> > with 1 brick). In case it makes a difference, my bricks are on
>> >> > separate drives formatted as XFS, so it is possible that I can do my
>> >> > OS reinstall without wiping out the data on two nodes (the third had
>> >> > a hardware failure, so it will be rebuilt from the ground up).
>> >> That's possible. You could do the re-installation one node at a time.
>> >> Once the node comes back online, the self-heal daemon will take care of
>> >> healing the data. The AFR team can correct me if I am wrong.
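
A hedged sketch of watching that heal complete ("gv0" is a placeholder
volume name, not from this thread):

```shell
# After the reinstalled node rejoins, the self-heal daemon repairs its
# replica. List entries still pending heal on each brick:
gluster volume heal gv0 info

# Per-brick crawl statistics for the heal runs:
gluster volume heal gv0 statistics
```

When "heal info" reports zero pending entries on every brick, the node has
been fully resynced and the next node can be rebuilt.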
>> >>
>> >> Thanks,
>> >> Atin
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > Thanks,
>> >> > *Gene Liverman*
>> >> > Systems Integration Architect
>> >> > Information Technology Services
>> >> > University of West Georgia
>> >> > gliverma at westga.edu <mailto:gliverma at westga.edu>
>> >> >
>> >> > ITS: Making Technology Work for You!
>> >> >
>> >> >
>> >> > _______________________________________________
>> >> > Gluster-users mailing list
>> >> > Gluster-users at gluster.org
>> >> > http://www.gluster.org/mailman/listinfo/gluster-users
>> >> >
>> >
>> >
>> >
>>
>>
>