[Gluster-users] Replica 3 scale out and ZFS bricks

Strahil Nikolov hunter86_bg at yahoo.com
Sat Sep 19 06:14:09 UTC 2020


It is not usual to add a single node, as that triggers a lot of data movement (healing/rebalancing), which takes a long time with large bricks.
Usually RH recommends building each brick like this (a ZFS sketch follows below):
- 12 disks (2-3 TB) in RAID6
- 10 disks in RAID10
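
If you go with ZFS bricks (as in the subject), raidz2 is the rough equivalent of RAID6. A minimal sketch, assuming 12 disks sdb..sdm and hypothetical pool/dataset names:

    # create a raidz2 (RAID6-like) pool from 12 disks
    zpool create brickpool raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm
    # dedicated dataset to serve as the Gluster brick
    zfs create brickpool/brick1
    # store xattrs efficiently - commonly recommended for Gluster on ZFS
    zfs set xattr=sa brickpool/brick1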

Many users go with 10 TB+ disks, but that leads to very long healing times, so keep it in mind.
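
You can keep an eye on healing with the standard heal commands; for example (volume name 'myvol' is hypothetical):

    # list entries still pending heal on each brick
    gluster volume heal myvol info
    # per-brick count of pending heal entries
    gluster volume heal myvol statistics heal-count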

You can check https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance for more details.

Best Regards,
Strahil Nikolov


On Friday, September 18, 2020, 17:01:30 GMT+3, Alexander Iliev <ailiev+gluster at mamul.org> wrote:

On 9/17/20 4:47 PM, Strahil Nikolov wrote:

>   I guess I misunderstood you - if I read the diagram correctly, it should be OK: you will always have at least 2 bricks available after a node goes down.
> 
> It would be way simpler to add a 5th node (probably a VM) as an arbiter and switch to 'replica 3 arbiter 1'.


Yep, I would add an arbiter node in this case.
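
For reference, the arbiter is added with the arbiter variant of add-brick. A minimal sketch, assuming a replica 2 volume named 'myvol' and a new arbiter host 'node5' (both hypothetical; a pure replica 3 volume would first need its replica count reduced):

    # convert a replica 2 volume to replica 3 arbiter 1 by adding one arbiter brick
    gluster volume add-brick myvol replica 3 arbiter 1 node5:/bricks/arbiter1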

What I wanted to make sure of was that my understanding of the way
GlusterFS is able to scale is correct - specifically, expanding a volume
by adding a single storage node to the current setup.
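
As far as I understand it, a distribute-replicate volume grows by whole
subvolumes, i.e. bricks are added in multiples of the replica count -
though the new bricks don't all have to sit on new nodes. A sketch for
replica 3 (volume/host names hypothetical):

    # add one more replica 3 subvolume (3 new bricks)
    gluster volume add-brick myvol node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1
    # spread existing data onto the new subvolume
    gluster volume rebalance myvol start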

Thanks, Strahil.

Best regards,
--
alexander iliev


