<div dir="ltr"><div>I can ask our other engineer but I don't have those figues.<br><br></div>-Alastair<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On 30 June 2017 at 13:52, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Did you test healing by increasing disperse.shd-max-threads?<br>
> What are your heal times per brick now?
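>
> As a rough sketch (the volume name "vol0" is only a placeholder), raising the
> disperse self-heal thread count and checking heal progress would look like
> this:
>
>   # raise the number of parallel self-heal threads per brick (default is 1)
>   gluster volume set vol0 disperse.shd-max-threads 8
>   # confirm the setting, then watch how many entries remain to be healed
>   gluster volume get vol0 disperse.shd-max-threads
>   gluster volume heal vol0 statistics heal-count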
<div class="HOEnZb"><div class="h5"><br>
>
> On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech@gmail.com> wrote:
>> We are using 3.10 and have a 7 PB cluster. We decided against 16+3 because
>> rebuild times are bottlenecked by the matrix operations, which scale as the
>> square of the number of data stripes. There are some savings because of the
>> larger data chunks, but we ended up using 8+3, and heal times are about half
>> of what they are with 16+3.
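>>
>> Back-of-the-envelope: decode cost per stripe grows roughly as k^2 for k data
>> fragments, while each stripe carries k fragments' worth of data, so the work
>> per byte healed scales as k^2/k = k. Going from k=16 to k=8 therefore roughly
>> halves the work per byte, which matches the heal times above.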
>>
>> -Alastair
>>
>> On 30 June 2017 at 02:22, Serkan Çoban <cobanserkan@gmail.com> wrote:
>>>
>>> > Thanks for the reply. We will mainly use this for archival - near-cold
>>> > storage.
>>> Archival usage is a good fit for EC.
>>>
>>> > Anything, from your experience, to keep in mind while planning large
>>> > installations?
>>> I am using 3.7.11, and the only problem is the slow rebuild time when a disk
>>> fails. It takes 8 days to heal an 8TB disk. (This might be related to my EC
>>> configuration, 16+4.)
>>> 3.9+ versions have some improvements in this area, but I cannot test them
>>> yet...
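>>>
>>> For scale, as rough arithmetic only: 8 TB in 8 days is about
>>> 8x10^12 B / 691,200 s, i.e. roughly 11-12 MB/s of effective heal throughput
>>> per failed disk, which is why tuning such as disperse.shd-max-threads is
>>> worth testing.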
>>>
>>> On Thu, Jun 29, 2017 at 2:49 PM, jkiebzak <jkiebzak@gmail.com> wrote:
>>>> Thanks for the reply. We will mainly use this for archival - near-cold
>>>> storage.
>>>>
>>>> Anything, from your experience, to keep in mind while planning large
>>>> installations?
>>>>
>>>> Sent from my Verizon, Samsung Galaxy smartphone
>>>>
>>>> -------- Original message --------
>>>> From: Serkan Çoban <cobanserkan@gmail.com>
>>>> Date: 6/29/17 4:39 AM (GMT-05:00)
>>>> To: Jason Kiebzak <jkiebzak@gmail.com>
>>>> Cc: Gluster Users <gluster-users@gluster.org>
>>>> Subject: Re: [Gluster-users] Multi petabyte gluster
>>>>
>>>> I am currently using a 10 PB single volume without problems, and 40 PB is
>>>> on the way. EC is working fine.
>>>> You need to plan ahead with large installations like this. Do complete
>>>> workload tests and make sure your use case is suitable for EC.
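>>>>
>>>> As a minimal sketch only (the server names, brick paths, volume name, and
>>>> the choice of fio are placeholders/assumptions), an 8+3 dispersed volume
>>>> and a first workload test might look like this:
>>>>
>>>>   # 8+3 dispersed (EC) volume across 11 servers, one brick each
>>>>   gluster volume create vol0 disperse-data 8 redundancy 3 \
>>>>       server{1..11}:/bricks/b1/vol0
>>>>   gluster volume start vol0
>>>>
>>>>   # mount on a client and run a simple sequential-write test
>>>>   mount -t glusterfs server1:/vol0 /mnt/vol0
>>>>   fio --name=ec-seq-write --directory=/mnt/vol0 --rw=write --bs=1M \
>>>>       --size=4G --numjobs=8 --direct=1 --group_reporting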
>>>>
>>>> On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak <jkiebzak@gmail.com> wrote:
>>>>> Has anyone scaled to a multi-petabyte Gluster setup? How well does erasure
>>>>> coding do with such a large setup?
>>>>>
>>>>> Thanks
>>>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users