[Gluster-users] Change underlying brick on node

David Gossage dgossage at carouselchecks.com
Tue Aug 9 02:23:45 UTC 2016


On Mon, Aug 8, 2016 at 9:15 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> On 9 August 2016 at 07:23, Joe Julian <joe at julianfamily.org> wrote:
> > Just kill (-15) the brick process. That'll close the TCP connections and
> the
> > clients will just go right on functioning off the remaining replica. When
> > you format and recreate your filesystem, it'll be missing the volume-id
> > extended attributes so to start it you'll need to force it:
> >
> >    gluster volume start $volname force
>
>
> Just to clarify that I'm interpreting this correctly: to replace a brick
> and preserve its mount point you can:
>
> 1. Kill the brick process (glusterfsd)
>
> 2. Do your disk maintenance. Eventually you have a clean (erased) brick
> mount
>
> 3. Force-start the brick process. This will recreate all the
> metadata and start a full heal that replicates the data from the
> other bricks.
>
> Looks like the easiest way to replace a brick to me :)
>
>
Since my dev cluster is now on 3.8 with granular entry-heal enabled, I'm
feeling too lazy to roll back, so I'll just wait until 3.8.2 is released in
a few days (it fixes the bugs mentioned to me) and then test this a few
times on dev.

It would be nice to reach the point where I could have one brick dead and
doing a full heal without every VM pausing while shards heal, but I may be
asking too much of a rather heavy recovery.
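
For anyone following along, the procedure Joe and Lindsay describe boils
down to something like the sketch below. The volume name (gv0), brick
mount point (/data/brick1) and device (/dev/sdb1) are hypothetical
placeholders; adjust them for your own layout.

    # Find the PID of the brick process you want to replace
    gluster volume status gv0

    # Gracefully stop just that brick's glusterfsd (SIGTERM, not -9)
    kill -15 <brick-pid>

    # Disk maintenance: recreate the filesystem on the brick device
    umount /data/brick1
    mkfs.xfs -f /dev/sdb1
    mount /dev/sdb1 /data/brick1

    # The fresh filesystem lacks the trusted.glusterfs.volume-id xattr
    # (check with: getfattr -n trusted.glusterfs.volume-id /data/brick1),
    # so a plain start would refuse to bring the brick up; force it
    gluster volume start gv0 force

    # If the full heal doesn't kick off on its own, trigger it, then
    # watch the data replicate back from the surviving replicas
    gluster volume heal gv0 full
    gluster volume heal gv0 info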



> thanks,
>
> --
> Lindsay