<div dir="ltr"><div>We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the rebuild time are bottlenecked by matrix operations which scale as the square of the number of data stripes. There are some savings because of larger data chunks but we ended up using 8+3 and heal times are about half compared to 16+3.<br><br></div>-Alastair<br></div><div class="gmail_extra"><br><div class="gmail_quote">On 30 June 2017 at 02:22, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">>Thanks for the reply. We will mainly use this for archival - near-cold storage.<br>
Archival usage is a good fit for EC.

>Anything, from your experience, to keep in mind while planning large installations?
I am using 3.7.11, and the only problem is slow rebuild time when a disk
fails: it takes 8 days to heal an 8TB disk. (This might be related to my
EC configuration, 16+4.)
The 3.9+ releases have some improvements in this area, but I have not been able to test them yet.
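A rough back-of-the-envelope model of how heal time scales with the data-fragment count k: assuming per-stripe decode cost grows roughly as k^2, while a wider stripe covers proportionally more data (so a fixed-size disk needs fewer stripes rebuilt), the net cost comes out roughly proportional to k. A minimal sketch, with arbitrary units and hypothetical constants:

# Back-of-the-envelope heal-time model (arbitrary units; only the ratio
# between layouts is meaningful).
# Assumption: reconstructing one stripe costs ~k^2 matrix work, while a
# wider stripe covers k times more data, so a fixed-size disk needs
# proportionally fewer stripes rebuilt.

def relative_heal_time(k):
    per_stripe_cost = k ** 2      # k x k decode-matrix work per stripe
    stripes_per_tb = 1.0 / k      # larger chunks -> fewer stripes per TB healed
    return per_stripe_cost * stripes_per_tb   # net effect: proportional to k

for layout, k in (("8+3", 8), ("16+3", 16), ("16+4", 16)):
    print(layout, relative_heal_time(k))

print(relative_heal_time(16) / relative_heal_time(8))   # -> 2.0, i.e. 8+3
                                                         # heals in about half
                                                         # the time of 16+3

Under that model, a 16-data-fragment layout takes roughly twice as long to heal a given disk as an 8-data-fragment one, which is in line with the "about half" heal times reported above.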
<div class="HOEnZb"><div class="h5"><br>
On Thu, Jun 29, 2017 at 2:49 PM, jkiebzak <jkiebzak@gmail.com> wrote:
> Thanks for the reply. We will mainly use this for archival - near-cold
> storage.
>
>
> Anything, from your experience, to keep in mind while planning large
> installations?
>
>
> Sent from my Verizon, Samsung Galaxy smartphone
>
> -------- Original message --------
> From: Serkan Çoban <cobanserkan@gmail.com>
> Date: 6/29/17 4:39 AM (GMT-05:00)
> To: Jason Kiebzak <jkiebzak@gmail.com>
> Cc: Gluster Users <gluster-users@gluster.org>
> Subject: Re: [Gluster-users] Multi petabyte gluster
>
> I am currently using a 10PB single volume without problems, and 40PB is on
> the way. EC is working fine.
> You need to plan ahead with large installations like this. Do complete
> workload tests and make sure your use case is suitable for EC.
>
>
> On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak <jkiebzak@gmail.com> wrote:
>> Has anyone scaled to a multi-petabyte Gluster setup? How well does erasure
>> coding do with such a large setup?
>>
>> Thanks
>>
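As a purely illustrative starting point for the "complete workload tests" advice above, a minimal smoke test might just write and re-read a batch of archive-sized files through a FUSE mount and report throughput. The mount path, file size, and file count below are hypothetical placeholders:

# Minimal, illustrative workload smoke test for an EC (disperse) volume.
# MOUNT, FILE_SIZE and FILE_COUNT are hypothetical -- adjust them to the
# real mount point and the expected archive object sizes.
import os, time

MOUNT = "/mnt/glustervol"      # hypothetical FUSE mount point
FILE_SIZE = 256 * 1024 * 1024  # 256 MiB per file
FILE_COUNT = 16

def run(label, fn):
    start = time.time()
    total = fn()                # bytes moved
    secs = time.time() - start
    print("%s: %.0f MiB/s" % (label, total / (1024 * 1024) / secs))

def write_files():
    buf = os.urandom(4 * 1024 * 1024)
    total = 0
    for i in range(FILE_COUNT):
        with open(os.path.join(MOUNT, "ec-test-%d" % i), "wb") as f:
            for _ in range(FILE_SIZE // len(buf)):
                f.write(buf)
                total += len(buf)
    return total

def read_files():
    total = 0
    for i in range(FILE_COUNT):
        with open(os.path.join(MOUNT, "ec-test-%d" % i), "rb") as f:
            while True:
                chunk = f.read(4 * 1024 * 1024)
                if not chunk:
                    break
                total += len(chunk)
    return total

run("sequential write", write_files)
run("sequential read", read_files)

This only exercises large sequential I/O; a real evaluation should mirror the actual archival workload (object sizes, concurrency, and read/write mix).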
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users