[Gluster-users] Multi petabyte gluster

Serkan Çoban cobanserkan at gmail.com
Fri Jun 30 17:52:01 UTC 2017


Did you test healing after increasing disperse.shd-max-threads?
What are your heal times per brick now?
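
For reference, a minimal sketch of how I'd bump the self-heal thread count on
a disperse volume (the volume name gv0 is just a placeholder; the right value
depends on how much CPU you can spare on the brick nodes):

    # raise parallel heal threads per self-heal daemon (default is 1)
    gluster volume set gv0 disperse.shd-max-threads 8
    # confirm the value that is in effect
    gluster volume get gv0 disperse.shd-max-threads

Higher values speed up healing at the cost of more CPU on the nodes doing the
decode work.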

On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech at gmail.com> wrote:
> We are using 3.10 and have a 7 PB cluster.  We decided against 16+3 as
> rebuild times are bottlenecked by matrix operations, which scale as the
> square of the number of data stripes.  There are some savings because of
> larger data chunks, but we ended up using 8+3, and heal times are about
> half of what they were with 16+3.
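>
> As a rough back-of-the-envelope illustration of that scaling (the numbers
> only show the ratio, not absolute heal speed): with 16 data stripes the
> matrix work per recovered byte goes as 16^2 = 256, with 8 data stripes as
> 8^2 = 64, i.e. roughly 4x more work for 16+3. The larger data chunks claw
> some of that back, which is consistent with seeing roughly 2x rather than
> 4x longer heal times.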
>
> -Alastair
>
> On 30 June 2017 at 02:22, Serkan Çoban <cobanserkan at gmail.com> wrote:
>>
>> >Thanks for the reply. We will mainly use this for archival - near-cold
>> > storage.
>> Archival usage is a good fit for EC.
>>
>> >Anything, from your experience, to keep in mind while planning large
>> > installations?
>> I am using 3.7.11, and the only problem is slow rebuild time when a disk
>> fails. It takes 8 days to heal an 8TB disk. (This might be related to my
>> EC configuration of 16+4.)
>> The 3.9+ versions have some improvements in this area, but I have not
>> been able to test them yet...
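>>
>> For scale, 8 TB in 8 days works out to about
>> 8e12 bytes / (8 * 86400 s) ~= 11.6 MB/s of effective heal throughput per
>> failed disk, well below what a single spindle can stream sequentially,
>> which suggests the heal process rather than the disk is the limiting
>> factor.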
>>
>> On Thu, Jun 29, 2017 at 2:49 PM, jkiebzak <jkiebzak at gmail.com> wrote:
>> > Thanks for the reply. We will mainly use this for archival - near-cold
>> > storage.
>> >
>> >
>> > Anything, from your experience, to keep in mind while planning large
>> > installations?
>> >
>> >
>> >
>> > -------- Original message --------
>> > From: Serkan Çoban <cobanserkan at gmail.com>
>> > Date: 6/29/17 4:39 AM (GMT-05:00)
>> > To: Jason Kiebzak <jkiebzak at gmail.com>
>> > Cc: Gluster Users <gluster-users at gluster.org>
>> > Subject: Re: [Gluster-users] Multi petabyte gluster
>> >
>> > I am currently using a 10PB single volume without problems, and 40PB is
>> > on the way. EC is working fine.
>> > You need to plan ahead with installations this large. Do complete
>> > workload tests and make sure your use case is suitable for EC.
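>> >
>> > As a sketch (the hostnames, brick paths, and the 16+4 layout below are
>> > placeholders for whatever you actually plan to deploy), a dispersed
>> > volume is created with something like:
>> >
>> >     gluster volume create bigvol disperse-data 16 redundancy 4 \
>> >         server{1..20}:/bricks/brick1/data
>> >
>> > Then run the workload tests against that volume with the file sizes and
>> > read/write mix you expect in production.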
>> >
>> >
>> > On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak <jkiebzak at gmail.com>
>> > wrote:
>> >> Has anyone scaled to a multi-petabyte Gluster setup? How well does
>> >> erasure coding do with such a large setup?
>> >>
>> >> Thanks
>> >>
>
>

