<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 5, 2017 at 5:19 PM, Pranith Kumar Karampuri <span dir="ltr"><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="gmail-">On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">It is the overall time; the 8TB data disk healed 2x faster in the 8+2 configuration.<br></blockquote><div><br></div></span><div>Wow, that is counterintuitive to me. I will need to explore this to find out why that could be. Thanks a lot for this feedback!<br></div></div></div></div></blockquote><div><br></div><div>From memory, I recall you said you have a lot of small files hosted on the volume, right? It could be because of the bug that <a href="https://review.gluster.org/17151">https://review.gluster.org/17151</a> is fixing. That is the only cause I can guess at right now. We will try to test this kind of case; if you could give us a bit more details (average file size, depth of directories, etc.), we can simulate a similar-looking directory structure.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div></div><div><div class="gmail-h5"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail-m_8929469300211474524HOEnZb"><div class="gmail-m_8929469300211474524h5"><br>
On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri<br>
<<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>> wrote:<br>
><br>
><br>
> On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>> wrote:<br>
>><br>
>> Healing gets slower as you increase m in an m+n configuration.<br>
>> We are using a 16+4 configuration without any problems other than heal<br>
>> speed.<br>
>> I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on<br>
>> 8+2 are faster by 2x.<br>
><br>
><br>
> As you increase the number of nodes participating in an EC set, the number<br>
> of parallel heals increases. Is the heal speed you saw improved per file, or<br>
> is it the overall time it took to heal the data?<br>
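A rough way to see the tradeoff being discussed: rebuilding one lost fragment in an m+n disperse set requires reading the m data fragments, so per-file heal work grows with m, while wider sets spread more heals across more nodes in parallel. A minimal back-of-the-envelope sketch in Python (the linear read-amplification model is an illustrative assumption, not measured Gluster behavior):

```python
# Illustrative model: in an m+n erasure-coded set, reconstructing one
# fragment requires reading m surviving data fragments.

def heal_read_amplification(m: int) -> int:
    """Fragments read to rebuild one fragment in an m+n disperse set."""
    return m

# Under this simple model, a per-file heal in 16+4 reads twice as many
# fragments as in 8+2 -- one plausible explanation for the 2x slower
# heal observed on 16+4.
ratio = heal_read_amplification(16) / heal_read_amplification(8)
print(ratio)
```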
><br>
>><br>
>><br>
>><br>
>> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey <<a href="mailto:aspandey@redhat.com" target="_blank">aspandey@redhat.com</a>> wrote:<br>
>> ><br>
>> > The 8+2 and 8+3 configurations are not a limitation, just suggestions.<br>
>> > You can create a 16+3 volume without any issue.<br>
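As a sanity check on "without any issue": Gluster documents that a disperse volume needs at least 3 bricks and that the brick count must exceed twice the redundancy. A small sketch of that rule (the helper name is my own, not a Gluster API):

```python
def valid_disperse_config(data: int, redundancy: int) -> bool:
    """Check a data+redundancy disperse layout against Gluster's
    documented constraints: redundancy >= 1, at least 3 bricks total,
    and the brick count must be greater than 2 * redundancy."""
    bricks = data + redundancy
    return redundancy >= 1 and bricks >= 3 and bricks > 2 * redundancy

# 16+3 (19 bricks, redundancy 3) comfortably satisfies the constraints.
print(valid_disperse_config(16, 3))
```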
>> ><br>
>> > Ashish<br>
>> ><br>
>> > ________________________________<br>
>> > From: "Alastair Neil" <<a href="mailto:ajneil.tech@gmail.com" target="_blank">ajneil.tech@gmail.com</a>><br>
>> > To: "gluster-users" <<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>><br>
>> > Sent: Friday, May 5, 2017 2:23:32 AM<br>
>> > Subject: [Gluster-users] disperse volume brick counts limits in RHES<br>
>> ><br>
>> ><br>
>> > Hi<br>
>> ><br>
>> > We are deploying a large (24-node/45-brick) cluster and noted that the<br>
>> > RHES<br>
>> > guidelines limit the number of data bricks in a disperse set to 8. Is<br>
>> > there<br>
>> > any reason for this? I am aware that you want this to be a power of 2,<br>
>> > but<br>
>> > as we have a large number of nodes we were planning on going with 16+3.<br>
>> > Dropping to 8+2 or 8+3 would be a real waste for us.<br>
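The "waste" is easy to quantify: the usable fraction of raw capacity in an m+n disperse set is m/(m+n). A quick comparison of the configurations mentioned in the thread:

```python
def usable_fraction(data: int, redundancy: int) -> float:
    """Fraction of raw brick capacity usable in a data+redundancy set."""
    return data / (data + redundancy)

for d, r in [(16, 3), (8, 2), (8, 3)]:
    print(f"{d}+{r}: {usable_fraction(d, r):.1%} usable")

# 16+3 keeps ~84.2% of raw capacity, 8+2 keeps 80.0%, and 8+3 only
# ~72.7% -- which is why dropping from 16+3 looks like a real waste.
```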
>> ><br>
>> > Thanks,<br>
>> ><br>
>> ><br>
>> > Alastair<br>
>> ><br>
>> ><br>
>> > _______________________________________________<br>
>> > Gluster-users mailing list<br>
>> > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>> > <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
>> ><br>
>> ><br>
><br>
><br>
><br>
><br>
> --<br>
> Pranith<br>
</div></div></blockquote></div></div></div><span class="gmail-HOEnZb"><font color="#888888"><br><br clear="all"><br>-- <br><div class="gmail-m_8929469300211474524gmail_signature"><div dir="ltr">Pranith<br></div></div>
</font></span></div></div>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</div></div>