[Gluster-users] Multi petabyte gluster

Serkan Çoban cobanserkan at gmail.com
Fri Jun 30 06:22:47 UTC 2017


>Thanks for the reply. We will mainly use this for archival - near-cold storage.
Archival usage is a good fit for EC.
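
For a rough sense of why EC suits archival, here is a quick capacity
calculation for the 16+4 layout mentioned below (brick sizes are just an
example):

    usable fraction, 16+4  = 16 / 20 = 80%   (25% parity overhead)
    replica 3, by contrast = 1 / 3  ~= 33%   (200% overhead)
    e.g. 20 x 8TB bricks   = 160TB raw -> 128TB usable with 16+4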

>Anything, from your experience, to keep in mind while planning large installations?
I am using 3.7.11, and the only problem is the slow rebuild time when a
disk fails: it takes 8 days to heal an 8TB disk (this might be related to
my EC configuration, 16+4).
The 3.9+ versions have some improvements in this area, but I have not been
able to test them yet...
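
In case it helps with planning, the commands below are roughly what a 16+4
dispersed volume and the 3.9+ heal tuning look like; hostnames/paths are
placeholders and the option names are from memory, so please verify with
"gluster volume set help" on your version:

    # create a 20-brick dispersed (EC) volume: 16 data + 4 redundancy
    gluster volume create ecvol disperse-data 16 redundancy 4 \
        server{1..20}:/bricks/brick1/ecvol

    # on 3.9+ the self-heal daemon can work on EC fragments in parallel;
    # as far as I know this is one of the improvements that should cut
    # the multi-day rebuild times
    gluster volume set ecvol disperse.shd-max-threads 8

    # after replacing a disk, watch how many entries are still pending heal
    gluster volume heal ecvol info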

On Thu, Jun 29, 2017 at 2:49 PM, jkiebzak <jkiebzak at gmail.com> wrote:
> Thanks for the reply. We will mainly use this for archival - near-cold
> storage.
>
>
> Anything, from your experience, to keep in mind while planning large
> installations?
>
>
> Sent from my Verizon, Samsung Galaxy smartphone
>
> -------- Original message --------
> From: Serkan Çoban <cobanserkan at gmail.com>
> Date: 6/29/17 4:39 AM (GMT-05:00)
> To: Jason Kiebzak <jkiebzak at gmail.com>
> Cc: Gluster Users <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] Multi petabyte gluster
>
> I am currently using a 10PB single volume without problems. 40PB is on
> the way. EC is working fine.
> You need to plan ahead with large installations like this. Do complete
> workload tests and make sure your use case is suitable for EC.
>
>
> On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak <jkiebzak at gmail.com> wrote:
>> Has anyone scaled to a multi petabyte gluster setup? How well does erasure
>> code do with such a large setup?
>>
>> Thanks
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users

