[Gluster-users] Change underlying brick on node

David Gossage dgossage at carouselchecks.com
Mon Aug 8 21:34:59 UTC 2016


On Mon, Aug 8, 2016 at 4:06 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> On 9/08/2016 6:39 AM, David Gossage wrote:
>
>> Currently each of the 3 nodes is on a 6-disk (WD Red 1TB) raidz2 (ZIL on
>> mirrored SSD), which I am thinking is more protection than I may need with
>> a 3-way replica.  I was going to change them one by one to basically RAID10,
>> letting it heal in between.
>>
>
> Wouldn't RAID10 be more protection than raidz2? Not that there is anything
> wrong with that; all my bricks are on top of a striped-mirror ("RAID10")
> pool, as much for the improved IOPS as the redundancy, and it eases the
> maintenance of bricks quite a bit. I've had two drive failures where I just
> hotswapped the drive: zero downtime.
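>
> The swap itself is just a zpool replace; a minimal sketch, assuming a
> hypothetical pool name and that the new disk went into the same slot:
>
>    zpool status tank              # identify the faulted disk
>    zpool replace tank /dev/sdc    # resilver onto the hotswapped drive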
>

With RAID10 you can lose as many drives as you have mirror sets, as long as
no two failures land in the same set.  With raidz2/RAID6 you can lose any 2
drives and stay up regardless of position, so there's less crossing of
fingers if multiple drives fail back to back.  Performance, however, is
better with RAID10.  So I am basically accepting a slightly increased chance
of one brick/node dropping (if 2 failed drives happened to share a mirror
set) in order to squeeze a little more performance out of the setup.
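
Roughly, the rebuild on each node would look like the sketch below (pool
and device names are placeholders): destroy the old raidz2 pool, recreate
the same six disks as three striped mirrors, and let gluster heal the
empty brick afterwards.

   zpool destroy gbrick
   zpool create gbrick \
       mirror /dev/sda /dev/sdb \
       mirror /dev/sdc /dev/sdd \
       mirror /dev/sde /dev/sdf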

>
> As a matter of curiosity, what SSDs are you using for the ZIL, and what
> size are they?
>

Samsung 850 Pros.  Small LVM partitions mirrored for the ZIL, and the other
2 larger partitions as L2ARC.  I'm seeing the same thing you are, though,
with a poor hit ratio, and may just drop their use.
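
For reference, the layout is along these lines (pool and partition names
hypothetical): the small partitions as a mirrored SLOG, the two larger
ones striped as cache.

   zpool add gbrick log mirror /dev/sda1 /dev/sdb1
   zpool add gbrick cache /dev/sda2 /dev/sdb2

The hit ratio I'm judging from the raw L2ARC counters on ZFS on Linux:

   grep -E '^l2_(hits|misses)' /proc/spl/kstat/zfs/arcstats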

>
> Do you have compression enabled? lz4?
>

No, I wasn't that concerned with space usage.  WD Reds are fairly cheap,
and I have 12-14 drive bays free in the 4U servers in use if I want to
expand storage.
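
If I ever do turn it on, it's a one-liner per pool or dataset anyway (pool
name hypothetical):

   zfs set compression=lz4 gbrick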

>
>
>
>> Is the best way to do that a systemctl stop glusterd, should I just kill
>> the brick process to simulate a brick dying, or is there an actual brick
>> maintenance command?
>>
>
> There is a gluster replace-brick command:
>
>    volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
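>
>    e.g., with hypothetical volume and brick names:
>
>        gluster volume replace-brick gv0 \
>            node1:/bricks/b1/gv0 node1:/bricks/b1-new/gv0 commit force
>        gluster volume heal gv0 info    # then watch the heal progress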
>
> One annoyance is that the new brick path can't be the same as the old one.
> If you can, I'd set up a test volume and try it out first.


That's what I used when replacing the server with a bad NIC a short while
ago, but I wasn't certain whether it would just heal the whole brick, since
the gluster config and directories would still consider it part of the
volume, just with no data in the folder.

My single-server dev box could likely test it.  I'd guess I'd kill the
brick process, delete that whole brick directory to remove all files and
directories, recreate the brick path, then restart gluster or the server
and see what happens: whether a heal kicks off, or whether I need to just
give it a new directory path and do a replace-brick on it.
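
Something like the following, with all names hypothetical; this simulates
the brick dying rather than a clean replace:

   gluster volume status gv0       # note the brick's glusterfsd PID
   kill <brick-pid>                # simulate the brick dying
   rm -rf /bricks/b1/gv0           # wipe the brick contents
   mkdir -p /bricks/b1/gv0         # recreate an empty brick path
   systemctl restart glusterd      # or reboot the server
   gluster volume heal gv0 info    # see whether a full heal kicks off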


>
>
> --
> Lindsay Mathieson
>
>