<div dir="auto">How can I ensure that each parity brick is stored on a different server ?</div><div class="gmail_extra"><br><div class="gmail_quote">Il 30 mar 2017 6:50 AM, "Ashish Pandey" <<a href="mailto:aspandey@redhat.com">aspandey@redhat.com</a>> ha scritto:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:times new roman,new york,times,serif;font-size:12pt;color:#000000"><div>Hi Terry,<br></div><div><br></div><div>There is not constraint on number of nodes for erasure coded volumes. <br></div><div>However, there are some suggestions to keep in mind.<br></div><div><br></div><div>If you have 4+2 configuration, that means you can loose maximum 2 bricks at a time without loosing your volume for IO. <br></div><div>These bricks may fail because of node crash or node disconnection. That is why it is always good to have all the 6 bricks on 6 different nodes. If you have 3 bricks on one node and this nodes goes down then you<br></div><div>will loose the volume and it will be inaccessible.</div><div>So just keep in mind that you should not loose more than redundancy bricks even if any one node goes down.<br></div><div><br></div><div>----<br></div><div>Ashish<br></div><div> <br></div><div><br></div><hr id="m_-5042678447821529420zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><b>From: </b>"Terry McGuire" <<a href="mailto:tmcguire@ualberta.ca" target="_blank">tmcguire@ualberta.ca</a>><br><b>To: </b><a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br><b>Sent: </b>Wednesday, March 29, 2017 11:59:32 PM<br><b>Subject: </b>[Gluster-users] Node count constraints with EC?<br><div><br></div>Hello list. Newbie question: I’m building a low-performance/low-cost storage service with a starting size of about 500TB, and want to use Gluster with erasure coding. I’m considering subvolumes of maybe 4+2, or 8+3 or 4. I was thinking I’d spread these over 4 nodes, and add single nodes over time, with subvolumes rearranged over new nodes to maintain protection from whole node failures.<div><br></div><div>However, reading through some RedHat-provided documentation, they seem to suggest that node counts should be a multiple of 3, 6 or 12, depending on subvolume config. 
From: "Terry McGuire" <tmcguire@ualberta.ca>
To: gluster-users@gluster.org
Sent: Wednesday, March 29, 2017 11:59:32 PM
Subject: [Gluster-users] Node count constraints with EC?

Hello list. Newbie question: I’m building a low-performance/low-cost storage service with a starting size of about 500TB, and want to use Gluster with erasure coding. I’m considering subvolumes of maybe 4+2, or 8+3 or 8+4. I was thinking I’d spread these over 4 nodes, and add single nodes over time, with subvolumes rearranged over new nodes to maintain protection from whole node failures.

However, some Red Hat-provided documentation seems to suggest that node counts should be a multiple of 3, 6 or 12, depending on subvolume config. Is this actually a requirement, or is it only a suggestion for best performance or something?

Can anyone comment on node count constraints with erasure coded subvolumes?

Thanks in advance for anyone’s reply,
Terry

_____________________________
Terry McGuire
Information Services and Technology (IST)
University of Alberta
Edmonton, Alberta, Canada T6G 2H1
Phone: 780-492-9422
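For the rearrangement part of that plan, a rough sketch under an assumed layout (volume name "ecvol2", hostnames and brick paths are placeholders): with a 4+2 set squeezed onto only 4 nodes, at least two servers must carry 2 bricks of the same set, and losing either of those servers uses up all of the redundancy at once. Once a fifth server joins the pool, one of the doubled-up bricks can be moved onto it with replace-brick, and Gluster heals that brick's contents to the new location:

    # Bring the new node into the trusted pool:
    gluster peer probe server5

    # Move one of server1's two bricks from the set onto the new node;
    # the self-heal daemon reconstructs its fragments on server5 in the background:
    gluster volume replace-brick ecvol2 \
        server1:/bricks/b2 server5:/bricks/b1 commit force

    # Watch the heal catch up on the relocated brick:
    gluster volume heal ecvol2 info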
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users