<div dir="ltr"><div>So the bottleneck is that computations with a 16x20 matrix require ~4 times the cycles? It seems, then, that there is ample room for improvement, as many linear algebra packages scale better than O(n×m). Is the healing time dominated by the EC compute time? If Serkan saw a hard 2x scaling, that seems likely.<br><br></div><div>-Alastair<br><br></div><div><br></div><br></div><div class="gmail_extra"><br><div class="gmail_quote">On 8 May 2017 at 03:02, Xavier Hernandez <span dir="ltr"><<a href="mailto:xhernandez@datalab.es" target="_blank">xhernandez@datalab.es</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 05/05/17 13:49, Pranith Kumar Karampuri wrote:<br>
</span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<br>
<br>
On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a><br></span><span class="">
<mailto:<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><wbr>> wrote:<br>
<br>
It is the overall time: an 8TB data disk healed 2x faster in the 8+2<br>
configuration.<br>
<br>
<br>
Wow, that is counterintuitive to me. I will need to look into this<br>
to find out why that could be. Thanks a lot for the feedback!<br>
</span></blockquote>
<br>
Matrix multiplication for encoding/decoding in 8+2 is 4 times faster than in 16+4 (one 16x16 matrix is composed of four 8x8 submatrices), but each matrix operation in a 16+4 configuration processes twice as much data as in an 8+2, so the net effect is that 8+2 is twice as fast as 16+4.<br>
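This arithmetic can be sketched as a back-of-envelope cost model (an illustration of the reasoning above, not Gluster's actual implementation; it assumes decode cost is dominated by a dense k x k matrix-vector multiply over each column of k data bytes):

```python
# Rough cost model: a dense k x k matrix-vector multiply per column
# of k data bytes, as described above.
def cost_per_byte(k):
    mults_per_column = k * k  # k output bytes, each a k-term dot product
    bytes_per_column = k      # each column carries k bytes of data
    return mults_per_column / bytes_per_column  # grows linearly with k

per_op = (cost_per_byte(16) * 16) / (cost_per_byte(8) * 8)  # cost per matrix operation
per_byte = cost_per_byte(16) / cost_per_byte(8)             # cost per byte processed
print(per_op)    # 4.0: a 16x16 multiply is 4x the work of an 8x8
print(per_byte)  # 2.0: but it covers 2x the data, so 2x per byte
```

This reproduces both halves of the claim: 4x the multiplies per operation, but only 2x the cost per byte of user data.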
<br>
An 8+2 also uses bigger blocks on each brick, processing the same amount of data in fewer I/O operations and bigger network packets.<br>
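The block-size effect is equally simple arithmetic (the block sizes below are hypothetical, chosen only to show the 2x relationship, and are not Gluster's actual values):

```python
# With blocks twice as big, the same data moves in half as many I/O
# operations. Block sizes here are illustrative assumptions.
data = 1 * 2**30                  # 1 GiB to move during heal
block_8_2  = 8 * 2**10            # assumed 8 KiB per-brick blocks on 8+2
block_16_4 = 4 * 2**10            # assumed 4 KiB per-brick blocks on 16+4
print(data // block_16_4)         # 262144 operations
print(data // block_8_2)          # 131072 operations: half as many
```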
<br>
These are probably the reasons why 16+4 is slower than 8+2.<br>
<br>
See my other email for more detailed description.<br>
<br>
Xavi<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<br>
<br>
<br>
On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri<br></span><span class="">
<<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a> <mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>>> wrote:<br>
><br>
><br>
> On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban<br></span><span class="">
<<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a> <mailto:<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><wbr>> wrote:<br>
>><br>
>> Healing gets slower as you increase m in m+n configuration.<br>
>> We are using a 16+4 configuration without any problems other than heal<br>
>> speed.<br>
>> I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on<br>
>> 8+2 are 2x faster.<br>
><br>
><br>
> As you increase the number of nodes participating in an EC set, the<br></span><span class="">
> number of parallel heals increases. Is the heal speed improvement you<br>
> saw per file, or in the overall time it took to heal the data?<br>
><br>
>><br>
>><br>
>><br>
>> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey<br></span><span class="">
<<a href="mailto:aspandey@redhat.com" target="_blank">aspandey@redhat.com</a> <mailto:<a href="mailto:aspandey@redhat.com" target="_blank">aspandey@redhat.com</a>>> wrote:<br>
>> ><br>
>> > 8+2 and 8+3 configurations are not limitations, just<br>
>> > suggestions.<br>
>> > You can create a 16+3 volume without any issue.<br>
>> ><br>
>> > Ashish<br>
>> ><br>
>> > ______________________________<wbr>__<br>
>> > From: "Alastair Neil" <<a href="mailto:ajneil.tech@gmail.com" target="_blank">ajneil.tech@gmail.com</a><br></span>
<mailto:<a href="mailto:ajneil.tech@gmail.com" target="_blank">ajneil.tech@gmail.com</a>><wbr>><br>
>> > To: "gluster-users" <<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
<mailto:<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.<wbr>org</a>>><span class=""><br>
>> > Sent: Friday, May 5, 2017 2:23:32 AM<br>
>> > Subject: [Gluster-users] disperse volume brick counts limits in<br>
RHES<br>
>> ><br>
>> ><br>
>> > Hi<br>
>> ><br>
>> > We are deploying a large (24-node/45-brick) cluster and noted that<br>
>> > the RHES guidelines limit the number of data bricks in a disperse<br>
>> > set to 8. Is there any reason for this? I am aware that you want<br>
>> > this to be a power of 2, but as we have a large number of nodes we<br>
>> > were planning on going with 16+3. Dropping to 8+2 or 8+3 would be a<br>
>> > real waste for us.<br>
>> ><br>
>> > Thanks,<br>
>> ><br>
>> ><br>
>> > Alastair<br>
>> ><br>
>> ><br>
>> > ______________________________<wbr>_________________<br>
>> > Gluster-users mailing list<br></span>
>> > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.<wbr>org</a>><br>
>> > <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a><span class=""><br>
<<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mail<wbr>man/listinfo/gluster-users</a>><br>
>> ><br>
>> ><br>
><br>
><br>
><br>
><br>
> --<br>
> Pranith<br>
<br>
<br>
<br>
<br>
--<br>
Pranith<br>
<br>
<br></span><span class="">
<br>
</span></blockquote><div class="HOEnZb"><div class="h5">
<br>
</div></div></blockquote></div><br></div>