<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">It is the over all time, 8TB data disk healed 2x faster in 8+2 configuration.<br></blockquote><div><br></div><div>Wow, that is counter intuitive for me. I will need to explore about this to find out why that could be. Thanks a lot for this feedback!<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5"><br>
On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
<pkarampu@redhat.com> wrote:
>
> On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban <cobanserkan@gmail.com> wrote:
>>
>> Healing gets slower as you increase m in an m+n configuration.
>> We are using a 16+4 configuration without any problems other than heal
>> speed.
>> I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on
>> 8+2 are 2x faster.
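>> (For anyone wanting to repeat this kind of comparison, a rough sketch is
>> to time how long the pending-entry count takes to drop to zero after a
>> brick comes back; the volume name below is just a placeholder.)
>>
>>     gluster volume heal <volname> info | grep "Number of entries"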
>
> As you increase the number of nodes participating in an EC set, the
> number of parallel heals increases. Is the heal speed improvement you saw
> per file, or in the overall time it took to heal the data?
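>
> (As a hedged suggestion, if per-file heal speed is what lags: 3.9 added
> multi-threaded self-heal for disperse volumes, and assuming the option is
> disperse.shd-max-threads, something like the sketch below raises SHD
> parallelism. "testvol" and the thread count are placeholders.)
>
>     gluster volume set testvol disperse.shd-max-threads 4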
>
>>
>> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey <aspandey@redhat.com> wrote:
>> >
>> > 8+2 and 8+3 configurations are not hard limits, just suggestions.
>> > You can create a 16+3 volume without any issue.
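>> > For illustration, a 16+3 create would look roughly like the sketch
>> > below; the hostnames and brick paths are placeholders, 19 bricks in
>> > total.
>> >
>> >     gluster volume create testvol disperse-data 16 redundancy 3 \
>> >         server{1..19}:/bricks/brick1/testvol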
>> >
>> > Ashish
>> >
>> > ________________________________
>> > From: "Alastair Neil" <ajneil.tech@gmail.com>
>> > To: "gluster-users" <gluster-users@gluster.org>
>> > Sent: Friday, May 5, 2017 2:23:32 AM
>> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
>> >
>> > Hi
>> >
>> > We are deploying a large (24-node/45-brick) cluster and noted that the
>> > RHES guidelines limit the number of data bricks in a disperse set to 8.
>> > Is there any reason for this? I am aware that you want this to be a
>> > power of 2, but as we have a large number of nodes we were planning on
>> > going with 16+3. Dropping to 8+2 or 8+3 would be a real waste for us.
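>> > (Rough usable-capacity arithmetic behind that: 16+3 keeps 16 of every
>> > 19 bricks as data, about 84% usable, while 8+2 gives 8/10 = 80% and
>> > 8+3 gives 8/11, about 73%.)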
>> >
>> > Thanks,
>> >
>> > Alastair
>> >
>
> --
> Pranith

--
Pranith