While creating the volume, just provide bricks that are hosted on different servers:

gluster v create <volume name> redundancy 2 server-1:/brick1 server-2:/brick2 server-3:/brick3 server-4:/brick4 server-5:/brick5 server-6:/brick6

At present you cannot differentiate between data bricks and parity bricks. That is, in the above command you cannot say which of brick1 through brick6 will be the parity bricks.
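For illustration, a hedged sketch of the full 4+2 layout described above; the volume name "ec-vol" and the brick path "/bricks/brick1" are placeholders, and the explicit "disperse 6" count follows the documented create syntax:

    # 4 data + 2 redundancy bricks, one brick per peer, so losing any
    # single node costs only 1 brick and the volume stays accessible.
    gluster volume create ec-vol disperse 6 redundancy 2 \
        server-1:/bricks/brick1 server-2:/bricks/brick1 \
        server-3:/bricks/brick1 server-4:/bricks/brick1 \
        server-5:/bricks/brick1 server-6:/bricks/brick1
    gluster volume start ec-vol

    # Sanity check: the brick list should show six different hosts and
    # the header should read "Number of Bricks: 1 x (4 + 2) = 6".
    gluster volume info ec-vol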
----- Original Message -----
From: "Gandalf Corvotempesta" <gandalf.corvotempesta@gmail.com>
To: "Ashish Pandey" <aspandey@redhat.com>
Cc: gluster-users@gluster.org
Sent: Friday, March 31, 2017 12:19:58 PM
Subject: Re: [Gluster-users] Node count constraints with EC?

How can I ensure that each parity brick is stored on a different server?

On 30 Mar 2017 at 6:50 AM, "Ashish Pandey" <aspandey@redhat.com> wrote:

> Hi Terry,
>
> There is no constraint on the number of nodes for erasure coded volumes.
> However, there are some suggestions to keep in mind.
>
> If you have a 4+2 configuration, you can lose at most 2 bricks at a time
> without losing I/O access to your volume. Bricks may fail because of a node
> crash or a node disconnect, which is why it is always good to have all 6
> bricks on 6 different nodes. If you have 3 bricks on one node and that node
> goes down, the volume becomes inaccessible.
> So just keep in mind that no single node failure should take away more
> bricks than your redundancy count.
>
> ----
> Ashish
>
> ----- Original Message -----
> From: "Terry McGuire" <tmcguire@ualberta.ca>
> To: gluster-users@gluster.org
> Sent: Wednesday, March 29, 2017 11:59:32 PM
> Subject: [Gluster-users] Node count constraints with EC?
>
> Hello list. Newbie question: I'm building a low-performance/low-cost storage
> service with a starting size of about 500TB, and I want to use Gluster with
> erasure coding. I'm considering subvolumes of maybe 4+2, or 8+3 or 8+4. I was
> thinking I'd spread these over 4 nodes, and add single nodes over time, with
> subvolumes rearranged over the new nodes to maintain protection from
> whole-node failures.
>
> However, reading through some Red Hat-provided documentation, they seem to
> suggest that node counts should be a multiple of 3, 6 or 12, depending on
> subvolume config. Is this actually a requirement, or is it only a suggestion
> for best performance or something?
>
> Can anyone comment on node count constraints with erasure coded subvolumes?
>
> Thanks in advance for anyone's reply,
> Terry
>
> _____________________________
> Terry McGuire
> Information Services and Technology (IST)
> University of Alberta
> Edmonton, Alberta, Canada T6G 2H1
> Phone: 780-492-9422
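To make the placement warning in the quoted reply concrete, here is a hedged counter-example (hostnames and brick paths are again placeholders): with redundancy 2, putting 3 of the 6 bricks on one peer means a single node failure removes 3 bricks at once, more than the redundancy can absorb, and the volume goes offline. Gluster typically warns about such layouts and only accepts them with "force" appended:

    # Anti-pattern: server-1 holds 3 of the 6 bricks of a 4+2 volume.
    # If server-1 crashes, 3 bricks disappear together (> redundancy 2),
    # so the whole volume becomes inaccessible until the node returns.
    gluster volume create ec-bad disperse 6 redundancy 2 \
        server-1:/bricks/b1 server-1:/bricks/b2 server-1:/bricks/b3 \
        server-2:/bricks/b1 server-3:/bricks/b1 server-4:/bricks/b1 force

The rule of thumb from the reply: no single node should hold more bricks of one disperse set than the redundancy count.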