[Gluster-users] how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Ashish Pandey
aspandey at redhat.com
Wed Sep 20 07:33:50 UTC 2017
After adding 3 more nodes you will have 6 nodes in total, with 2 HDs per new node in each new (4+2) sub volume.
It depends on the way you are going to add the new bricks to the existing volume "vol".
Keep in mind that in a given EC sub volume of (4+2), at most 2 bricks can be down at any point in time.
When you expand 6 * (4+2) to 12 * (4+2), you have to provide the paths of the bricks you want to add.
Suppose you add 6 bricks and all 6 of them are on 3 new nodes (2 each); then, with respect to that sub volume, you can tolerate only 1 node going down, because losing 2 of those nodes would take 4 of its bricks offline, which is more than the redundancy of 2.
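For example, a rough sketch of such an expansion (host names and brick paths below are made up, not taken from your setup) would look like:

    # hypothetical new nodes s4, s5, s6, each contributing 2 bricks
    gluster volume add-brick vol \
        s4:/gluster/brick1 s4:/gluster/brick2 \
        s5:/gluster/brick1 s5:/gluster/brick2 \
        s6:/gluster/brick1 s6:/gluster/brick2

Here the whole new (4+2) set lives on only 3 nodes, so any 2 of those nodes going down takes that set down.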
If you are creating a 12 * (4+2) volume from scratch and providing 12 bricks from each of the 6 servers, ordered so that every sub volume gets exactly one brick from each server, then even 2 nodes can go down without any issue, since each sub volume would lose at most 2 bricks.
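As an illustration only (host names s1..s6 and the brick paths are invented), the bricks for such a layout could be interleaved across the servers, so that every consecutive group of 6 bricks, which Gluster turns into one disperse set, takes one brick from each node:

    # hypothetical servers s1..s6, each exporting /gluster/brick1 .. /gluster/brick12
    bricks=""
    for i in $(seq 1 12); do
        for s in s1 s2 s3 s4 s5 s6; do
            bricks="$bricks $s:/gluster/brick$i"
        done
    done
    gluster volume create vol disperse 6 redundancy 2 $bricks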
I think you should focus more on the number of hard drives in a sub volume. Ask yourself: "How many bricks (HDs) within a sub volume will be unavailable if 1 or 2 nodes go down?"
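One way to answer that (assuming the volume really is named "vol") is to look at the brick order in the volume info output; for a disperse 6 volume, every consecutive group of 6 bricks in that list is one (4+2) sub volume, so you can count how many of those 6 sit on each node:

    gluster volume info vol
    # Bricks are listed in order: bricks 1-6 form the first (4+2) set,
    # bricks 7-12 the second, and so on.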
Ashish
----- Original Message -----
From: "Mauro Tridici" <mauro.tridici at cmcc.it>
To: gluster-users at gluster.org
Sent: Tuesday, September 19, 2017 1:09:06 AM
Subject: [Gluster-users] how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Dear All,
I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume based on the following hardware:
- 3 gluster servers (each server with 2 CPU 10 cores, 64GB RAM, 12 hard disk SAS 12Gb/s, 10GbE storage network)
Now we need to add 3 new servers with the same hardware configuration, while respecting the current volume topology.
My question is: in the current volume configuration, only 2 bricks per subvolume (or one host) can be down without losing data. What will happen in the new configuration? How many hosts could be down without losing data?
Thank you very much.
Mauro Tridici
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users