[Gluster-users] Change underlying brick on node

Lindsay Mathieson lindsay.mathieson at gmail.com
Mon Aug 8 21:06:28 UTC 2016


On 9/08/2016 6:39 AM, David Gossage wrote:
> Currently each of 3 nodes is on a 6-disk (WD Red 1TB) raidz6 pool (ZIL
> on mirrored SSD), which I am thinking is more protection than I may
> need with a 3-way replica.  I was going to change them one by one to
> basically RAID10, letting it heal in between.

Wouldn't RAID10 be more protection than raidz6? Not that there is 
anything wrong with that; all my bricks are on top of a RAIDZ10 pool, 
as much for the improved IOPS as for the redundancy, and it does ease 
the maintenance of bricks quite a bit as well. I have had two drive 
failures where I just hot-swapped the drive, with zero downtime.

As a matter of curiosity, what SSDs are you using for the ZIL, and what 
size are they?

Do you have compression enabled? lz4?
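
If it's easier, you can just check both straight off the pool; the pool 
name "tank" below is only a placeholder:

    # show whether compression is on and which algorithm is set
    zfs get compression,compressratio tank

    # show the pool layout, including any mirrored log (ZIL/SLOG) devices
    zpool status tank
    zpool list -v tank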


>
> Is the best way to do that a systemctl stop glusterd, should I just 
> kill the brick process to simulate a brick dying, or is there an 
> actual brick maintenance command?

There is a gluster replace-brick command:

    volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}

One annoyance is that the new brick path can't be the same as the old 
one. If you can, I'd set up a test volume and try it out first.
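
For example, something along these lines (the volume name, hostname and 
brick paths are only placeholders for illustration):

    # the new filesystem must already be mounted at a different path
    # than the old brick
    gluster volume replace-brick gv0 \
        node1:/bricks/old/brick node1:/bricks/new/brick commit force

    # the replacement brick starts empty, so watch self-heal bring it
    # back in sync with the other replicas
    gluster volume heal gv0 info

As you say, let the heal finish on each node before moving on to the 
next one.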


-- 
Lindsay Mathieson

