[Gluster-users] Change underlying brick on node

Joe Julian joe at julianfamily.org
Mon Aug 8 21:23:20 UTC 2016



On 08/08/2016 01:39 PM, David Gossage wrote:
> So now that I have my cluster on 3.7.14, sharded, and working, I am 
> of course looking for what to break next.
>
> Currently each of 3 nodes is on a 6-disk (WD Red 1TB) raidz6 (ZIL on 
> mirrored SSD), which I'm thinking is more protection than I need 
> with a 3-way replica.  I was going to change them one by one to 
> basically RAID 10, letting the volume heal in between.
>
> Is the best way to do that a systemctl stop glusterd, should I just 
> kill the brick process to simulate a brick dying, or is there an 
> actual brick maintenance command?

Just kill (-15) the brick process. That'll close the TCP connections, and 
the clients will go right on functioning off the remaining replicas. 
When you format and recreate your filesystem, it'll be missing the 
volume-id extended attribute, so to start the brick you'll need to force it:

    gluster volume start $volname force
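
Roughly, the whole sequence on one node looks like this (the volume name 
and brick path are placeholders, so substitute your own):

    # find the PID of this node's brick process for the volume
    gluster volume status $volname

    # stop only that brick; clients keep working off the other replicas
    kill -15 <brick-pid>

    # rebuild the underlying storage, then recreate the brick directory
    mkdir -p /path/to/brick

    # the fresh filesystem lacks the trusted.glusterfs.volume-id xattr,
    # which is why a plain start would refuse and force is needed
    gluster volume start $volname force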

>
> If /etc/glusterfs and /var/lib/glusterd are unchanged, will doing a 
> full heal after rebooting or restarting glusterd take care of 
> everything if I recreate the expected brick path first?

Once started, perform a full heal to re-replicate.
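
For example (again, $volname is just a placeholder):

    gluster volume heal $volname full

    # check what is still pending
    gluster volume heal $volname info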

>
> Are the improvements in 3.8 for sharding significant enough that I 
> should first look at updating to 3.8.2 when it's released in a few days?

Yes.

>
>
> David Gossage
> Carousel Checks Inc. | System Administrator
> Office: 708.613.2284
>
>
