[Gluster-users] Brick layout question

Strahil Nikolov hunter86_bg at yahoo.com
Sun Nov 28 08:03:22 UTC 2021


Hi,
I doubt that a dispersed volume will be faster, as it needs to 'encode' (erasure-code) the data during writes.
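(For reference, and purely as a sketch with made-up volume, host and brick names: a 6-brick dispersed volume with redundancy 2 splits every write into 4 data fragments plus 2 parity fragments, and computing that parity on every write is the encoding cost mentioned above.)

# hypothetical example, run from a shell on one of the gluster nodes
gluster volume create disptest disperse 6 redundancy 2 \
    node1:/bricks/b1/brick node2:/bricks/b1/brick node3:/bricks/b1/brick \
    node4:/bricks/b1/brick node5:/bricks/b1/brick node6:/bricks/b1/brick
gluster volume start disptest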

Actually, if you don't need any redundancy and you work with large files, you can test the sharding xlator. It's usually used for VM images; in your case each file would be split into small pieces (shards) spread across multiple bricks. Give it a try on a test volume first.
About the brick layout, I prefer to use 1 HW RAID = 1 brick. For fast disks like NVMes, 1 NVMe = 1 brick.
WARNING: ONCE SHARDING IS ENABLED, NEVER EVER DISABLE IT !
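(For illustration only: the volume name "testvol" is a placeholder. Enabling the shard xlator on a scratch volume looks roughly like this; features.shard and features.shard-block-size are the standard shard options.)

gluster volume set testvol features.shard on
# optional: shard size (the default is 64MB); VM-style workloads often use a larger value
gluster volume set testvol features.shard-block-size 512MB
# verify the setting
gluster volume get testvol features.shard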


Best Regards,
Strahil Nikolov

 
 
On Sun, Nov 28, 2021 at 3:59, Patrick Nixon <pnixon at gmail.com> wrote:

Hello Glusters!

I've been running a multi-node distributed array with a single brick per node, the bricks ranging from 6 to 14 TB each, and getting okay performance.

I was reading some documentation, saw distributed dispersed as an option, and was considering setting up a test array to see if that improves performance. I don't need replicas or redundancy at all for this array, just bulk storage.

My question, primarily, is how to lay out the bricks across six nodes, with the ability to add more nodes/drives as necessary (a rough sketch of both options follows the list below).
Option 1:
Single Brick Per Node

Option 2:
Multiple Bricks Per Node 
- Bricks a consistent size (1 TB each, with the leftover disk space as its own brick)
- Bricks a fraction of the total disk (1/4 or 1/2)
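(Not from the original mail: a rough sketch with placeholder host names, paths and volume name, assuming a plain distributed volume, just to make the two options concrete. The two 'create' commands are alternatives, not meant to be run together.)

# Option 1: one brick (one whole disk or RAID set) per node
gluster volume create bulkvol node{1..6}:/bricks/disk1/brick

# Option 2: several bricks per node, e.g. two directories on the same disk;
# the distribute translator hashes files across all 12 bricks
# (the shell expands the braces to node1:...part1 node1:...part2 node2:...part1 ...)
gluster volume create bulkvol node{1..6}:/bricks/disk1/part{1,2}

# growing later works the same way in either layout
gluster volume add-brick bulkvol node7:/bricks/disk1/brick
gluster volume rebalance bulkvol start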

Thank you for any suggestions/tips (links to additional documentation that would help educate me are welcome as well).




