[Gluster-users] how many hosts could be down in a 12x(4+2) distributed dispersed volume?
mauro.tridici at cmcc.it
Wed Sep 20 15:53:53 UTC 2017
Dear Sunil Kumar Acharya,
yes, I can confirm that I placed 2 bricks per subvolume per host.
Thank you very much for your support.
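For what it's worth, below is a minimal Python sketch of the arithmetic involved: it checks, for a given brick-to-host layout, how many whole hosts can go down while every (4+2) subvolume still keeps at least 4 bricks online. The host names (s1..s6) and the two example layouts are only illustrative assumptions, not our real brick paths.

    from itertools import combinations

    REDUNDANCY = 2  # bricks per (4+2) subvolume that may be lost

    def max_host_failures(layout):
        # layout: {subvolume: {host: bricks of that subvolume on that host}}
        hosts = {h for subvol in layout.values() for h in subvol}
        best = 0
        for k in range(1, len(hosts) + 1):
            # every possible set of k failed hosts must leave at most
            # REDUNDANCY bricks missing in every subvolume
            ok = all(
                all(sum(subvol.get(h, 0) for h in down) <= REDUNDANCY
                    for subvol in layout.values())
                for down in combinations(hosts, k)
            )
            if not ok:
                break
            best = k
        return best

    # Current layout: 6x(4+2) on 3 hosts, 2 bricks per subvolume per host
    layout_now = {f"subvol{i}": {"s1": 2, "s2": 2, "s3": 2} for i in range(6)}
    print(max_host_failures(layout_now))       # -> 1

    # Hypothetical expanded layout: 12x(4+2) on 6 hosts, with the 6 new
    # subvolumes placed only on the 3 new hosts (an assumption)
    layout_planned = {
        **{f"subvol{i}": {"s1": 2, "s2": 2, "s3": 2} for i in range(6)},
        **{f"subvol{i}": {"s4": 2, "s5": 2, "s6": 2} for i in range(6, 12)},
    }
    print(max_host_failures(layout_planned))   # -> 1

In both cases the sketch reports that only one arbitrary host can be down safely: with 2 bricks per subvolume on each host, any two hosts that share a subvolume would take 4 of its 6 bricks offline, which exceeds the redundancy of 2. Two hosts that share no subvolume could still fail together, but that cannot be guaranteed for an arbitrary pair.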
> On 20 Sep 2017, at 09:34, Sunil Kumar Heggodu Gopala Acharya <sheggodu at redhat.com> wrote:
> Hi Mauro Tridici,
> From the information provided it appears that you have placed 2 bricks of a subvolume on one host. Please confirm.
> The number of hosts that could go down without losing access to data can be derived from the brick configuration/distribution. Please let us know the brick distribution plan.
> SUNIL KUMAR ACHARYA
> SENIOR SOFTWARE ENGINEER
> Red Hat
> On Tue, Sep 19, 2017 at 1:09 AM, Mauro Tridici <mauro.tridici at cmcc.it> wrote:
> Dear All,
> I just implemented a 6x(4+2) DISTRIBUTED DISPERSED Gluster (v3.10) volume based on the following hardware:
> - 3 Gluster servers (each with 2 CPUs of 10 cores, 64 GB RAM, 12 SAS 12 Gb/s hard disks, 10 GbE storage network)
> Now we need to add 3 new servers with the same hardware configuration, respecting the current volume topology.
> If I'm right, we will obtain a DISTRIBUTED DISPERSED Gluster volume with 12 subvolumes, each containing (4+2) bricks, that is, a [12x(4+2)] volume.
> My question is: in the current volume configuration, only 2 bricks per subvolume, or one host, could be down without losing data. What will happen in the new configuration? How many hosts could be down without losing data?
> Thank you very much.
> Mauro Tridici