[Gluster-users] one brick vs multiple bricks on the same ZFS zpool.

Dung Le vic_le at icloud.com
Mon Mar 6 23:12:25 UTC 2017


Hi,

How about hardware RAID with XFS? I assume it would be faster than ZFS RAID, since the RAID controller has a physical cache for reads and writes.
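For reference, a brick on hardware RAID with XFS is usually prepared along these lines. This is only a sketch: the device path /dev/sdb (the logical volume exported by the RAID controller) and the mount point are placeholders, and the 512-byte inode size is the setting commonly recommended for Gluster bricks:

    # /dev/sdb is assumed to be the array exposed by the RAID controller
    mkfs.xfs -i size=512 /dev/sdb
    mkdir -p /data/brick1
    mount /dev/sdb /data/brick1
    echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab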

Thanks,


> On Mar 6, 2017, at 3:08 PM, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote:
> 
> Hardware RAID with ZFS should be avoided.
> ZFS needs direct access to the disks, and with hardware RAID you have a controller in the middle.
> 
> If you need ZFS, skip the hardware RAID and use ZFS RAID.
> 
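A minimal sketch of that approach, assuming ten raw disks (sdb through sdk here, purely placeholder names) are handed straight to ZFS instead of being hidden behind a RAID5 array:

    # Single-parity raidz vdev, the ZFS counterpart of the 9+1 RAID5 layout
    zpool create tank raidz sdb sdc sdd sde sdf sdg sdh sdi sdj sdk
    # Store extended attributes efficiently; Gluster relies heavily on xattrs
    zfs set xattr=sa tank
    zpool status tank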
> On Mar 6, 2017, at 9:23 PM, "Dung Le" <vic_le at icloud.com <mailto:vic_le at icloud.com>> wrote:
> Hi,
> 
> Since I am new to Gluster, I need your advice. I have 2 different Gluster configurations:
> 
> Purpose: I need to create 5 Gluster volumes. I am running Gluster version 3.9.0.
> 
> Config #1: 5 bricks from one zpool
> 3 storage nodes.
> Use hardware RAID to create one RAID5 (9+1) array per storage node
> Create a zpool on top of the array per storage node
> Create 5 ZFS shares (each share is a brick) per storage node
> Create 5 volumes with replica 3, each volume using a different brick on each node.
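For illustration, config #1 on one node might look roughly like this (pool name, device path, and hostnames node1..node3 are placeholders; the same datasets would exist on every node):

    # Pool on top of the hardware RAID device, then one dataset per brick
    zpool create tank /dev/sdb
    zfs create tank/brick1        # repeat for brick2 .. brick5
    mkdir -p /tank/brick1/data    # use a subdirectory, not the mount point itself

    # One volume per brick, replicated across the three nodes
    gluster volume create vol1 replica 3 \
        node1:/tank/brick1/data node2:/tank/brick1/data node3:/tank/brick1/data
    gluster volume start vol1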
> 
> Config #2: 1 brick from one zpool
> 3 storage nodes.
> Use hardware RAID to create one RAID5 (9+1) array per storage node
> Create a zpool on top of the array per storage node
> Create 1 ZFS share per storage node and use that share as the brick.
> Create 5 volumes with replica 3, all on the same share.
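And a rough sketch of config #2, again with placeholder names. Note that Gluster will not accept the exact same brick path for two different volumes, so in practice each volume would get its own subdirectory inside the single share:

    # One dataset per node, shared by all five volumes via subdirectories
    zpool create tank /dev/sdb
    zfs create tank/gluster
    mkdir -p /tank/gluster/vol1   # repeat for vol2 .. vol5

    gluster volume create vol1 replica 3 \
        node1:/tank/gluster/vol1 node2:/tank/gluster/vol1 node3:/tank/gluster/vol1
    gluster volume start vol1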
> 
> 1) Is there any performance difference between the two configs?
> 2) Will a single brick handle parallel writes as well as multiple bricks would?
> 3) Since I am using a hardware RAID controller, are there any options I need to enable or disable on the gluster volumes?
> 
> Best Regards,
> ~ Vic Le
> 
> 
