<html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"></head><body><div>Thanks for the reply. We will mainly use this for archival (near-cold storage).</div><div><br></div><div><br></div><div>From your experience, is there anything to keep in mind when planning large installations?</div><div><br></div><div><br></div><div id="composer_signature"><div style="font-size:85%;color:#575757" dir="auto">Sent from my Verizon, Samsung Galaxy smartphone</div></div><div><br></div><div style="font-size:100%;color:#000000"><!-- originalMessage --><div>-------- Original message --------</div><div>From: Serkan Çoban <cobanserkan@gmail.com> </div><div>Date: 6/29/17 4:39 AM (GMT-05:00) </div><div>To: Jason Kiebzak <jkiebzak@gmail.com> </div><div>Cc: Gluster Users <gluster-users@gluster.org> </div><div>Subject: Re: [Gluster-users] Multi petabyte gluster </div><div><br></div></div>I am currently using a 10PB single volume without problems. 40PB is on<br>the way. EC is working fine.<br>You need to plan ahead with large installations like this. Do complete<br>workload tests and make sure your use case is suitable for EC.<br><br><br>On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak <jkiebzak@gmail.com> wrote:<br>> Has anyone scaled to a multi-petabyte gluster setup? How well does erasure<br>> coding do with such a large setup?<br>><br>> Thanks<br>><br>> _______________________________________________<br>> Gluster-users mailing list<br>> Gluster-users@gluster.org<br>> http://lists.gluster.org/mailman/listinfo/gluster-users<br></body></html>