[Gluster-users] First Gluster Volume deploy: recommended configuration and suggestions?
Mauro Tridici
mauro.tridici at cmcc.it
Wed Sep 6 12:21:43 UTC 2017
Dear users,
I have just set up my first Gluster test volume using 3 servers (each server contains 12 HDDs).
I would like to create a "distributed disperse volume", but I'm a little bit confused about the right configuration schema to use.
Should I use JBOD disks? How many bricks should I define? What is the ideal redundancy value? And the ideal disperse-data count? A 6 x (4+2) or a 3 x (8+4) volume configuration? Which one is best, and why?
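If I understand the volume create syntax correctly, the 6 x (4+2) alternative would be built with something like the command below (only a sketch on my side: the brick order is my guess, and I suppose gluster will warn that multiple bricks of the same disperse set sit on the same server, so "force" may be needed). The 3 x (8+4) case should differ only in the disperse-data and redundancy counts (8 and 4).

gluster volume create coldtier disperse-data 4 redundancy 2 transport tcp \
  $(for m in $(seq 1 12); do for h in glu01-stg glu02-stg glu03-stg; do echo $h:/gluster/mnt$m/brick; done; done)

If I list the bricks host by host per mount point like this, each server should contribute 2 bricks to every (4+2) set, so a single server failure would still be within the redundancy of 2 (assuming I got the grouping right).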
I'm trying to refer to the admin guides that I have found on the internet but, in my case, it's very difficult to understand the details because some information is missing.
Do you have some useful suggestions about the right configuration to select in order to reach an optimal level of fault tolerance and performance?
This is my first test configuration (see the gluster volume info command output below); what do you think about it?
I created it using gdeploy and used all the bricks. Is that OK, or should I have used only some of them, and in a specific order?
Volume Name: coldtier
Type: Distributed-Disperse
Volume ID: fd0f34e6-58f8-42ec-92fe-139bcf3263a8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (8 + 4) = 36
Transport-type: tcp
Bricks:
Brick1: glu01-stg:/gluster/mnt1/brick
Brick2: glu02-stg:/gluster/mnt1/brick
Brick3: glu03-stg:/gluster/mnt1/brick
Brick4: glu01-stg:/gluster/mnt2/brick
Brick5: glu02-stg:/gluster/mnt2/brick
Brick6: glu03-stg:/gluster/mnt2/brick
Brick7: glu01-stg:/gluster/mnt3/brick
Brick8: glu02-stg:/gluster/mnt3/brick
Brick9: glu03-stg:/gluster/mnt3/brick
Brick10: glu01-stg:/gluster/mnt4/brick
Brick11: glu02-stg:/gluster/mnt4/brick
Brick12: glu03-stg:/gluster/mnt4/brick
Brick13: glu01-stg:/gluster/mnt5/brick
Brick14: glu02-stg:/gluster/mnt5/brick
Brick15: glu03-stg:/gluster/mnt5/brick
Brick16: glu01-stg:/gluster/mnt6/brick
Brick17: glu02-stg:/gluster/mnt6/brick
Brick18: glu03-stg:/gluster/mnt6/brick
Brick19: glu01-stg:/gluster/mnt7/brick
Brick20: glu02-stg:/gluster/mnt7/brick
Brick21: glu03-stg:/gluster/mnt7/brick
Brick22: glu01-stg:/gluster/mnt8/brick
Brick23: glu02-stg:/gluster/mnt8/brick
Brick24: glu03-stg:/gluster/mnt8/brick
Brick25: glu01-stg:/gluster/mnt9/brick
Brick26: glu02-stg:/gluster/mnt9/brick
Brick27: glu03-stg:/gluster/mnt9/brick
Brick28: glu01-stg:/gluster/mnt10/brick
Brick29: glu02-stg:/gluster/mnt10/brick
Brick30: glu03-stg:/gluster/mnt10/brick
Brick31: glu01-stg:/gluster/mnt11/brick
Brick32: glu02-stg:/gluster/mnt11/brick
Brick33: glu03-stg:/gluster/mnt11/brick
Brick34: glu01-stg:/gluster/mnt12/brick
Brick35: glu02-stg:/gluster/mnt12/brick
Brick36: glu03-stg:/gluster/mnt12/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
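As far as I understand, consecutive bricks are grouped into disperse sets in the order they are listed, so the (8+4) sets of the volume above should be visible by printing the brick list in chunks of 12, e.g.:

gluster volume info coldtier | grep -E '^Brick[0-9]+:' | cut -d' ' -f2 | xargs -n 12

If that is right, each server contributes 4 bricks to every (8+4) set, which should be exactly within the redundancy of 4 in case a whole server goes down. Please correct me if I'm wrong.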
Thank you very much for your patience.
Mauro