[Gluster-users] disperse volume brick counts limits in RHES

Pranith Kumar Karampuri pkarampu at redhat.com
Fri May 5 13:54:42 UTC 2017


Wondering if Xavi knows something.

On Fri, May 5, 2017 at 7:24 PM, Pranith Kumar Karampuri
<pkarampu at redhat.com> wrote:

>
>
> On Fri, May 5, 2017 at 7:21 PM, Serkan Çoban <cobanserkan at gmail.com>
> wrote:
>
>> In our use case every node has 26 bricks. I am using 60 nodes and one 9PB
>> volume with a 16+4 EC configuration; each brick in a sub-volume is on a
>> different host.
>> We put 15-20k 2GB files every day into 10-15 folders, so it is 1500K
>> files/folder. Our gluster version is 3.7.11.
>> Heal speed in this environment is 8-10MB/sec/brick.
>>
>> I did some tests of the parallel self-heal feature with version 3.9, two
>> servers with 26 bricks each, in 8+2 and 16+4 EC configurations.
>> This was a small test environment and, as I said, the results show 8+2 is
>> 2x faster than 16+4 with parallel self-heal threads set to 2/4.
>> Our new servers arrive in 1-2 months; I will then do detailed heal
>> performance tests for 8+2 and 16+4 and report the results.
>>
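[Editor's note] The parallel self-heal behaviour mentioned above is governed by disperse volume options introduced in the 3.9 release. A minimal tuning sketch, assuming a volume named `testvol` (the name and the values chosen here are placeholders, not recommendations from this thread):

```shell
# Number of parallel heals the self-heal daemon runs per disperse
# subvolume (default is 1; the tests above used 2 and 4).
gluster volume set testvol disperse.shd-max-threads 4

# Length of the queue of heal candidates waiting on the self-heal daemon.
gluster volume set testvol disperse.shd-wait-qlength 1024
```

Raising `disperse.shd-max-threads` increases heal parallelism at the cost of extra I/O load on the bricks during recovery.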
>
> In that case I still don't know why this is the case. Thanks for the
> inputs. I will also try to find out how long a 2GB file takes in 8+2 vs
> 16+4 and see if there is something I need to look at closely.
>
>
>>
>>
>> On Fri, May 5, 2017 at 2:54 PM, Pranith Kumar Karampuri
>> <pkarampu at redhat.com> wrote:
>> >
>> >
>> > On Fri, May 5, 2017 at 5:19 PM, Pranith Kumar Karampuri
>> > <pkarampu at redhat.com> wrote:
>> >>
>> >>
>> >>
>> >> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <cobanserkan at gmail.com>
>> >> wrote:
>> >>>
>> >>> It is the overall time; an 8TB data disk healed 2x faster in the 8+2
>> >>> configuration.
>> >>
>> >>
>> >> Wow, that is counter-intuitive to me. I will need to explore this to
>> >> find out why that could be. Thanks a lot for this feedback!
>> >
>> >
>> > From memory I remember you said you have a lot of small files hosted on
>> > the volume, right? It could be because of the bug that
>> > https://review.gluster.org/17151 is fixing. That is the only reason I
>> > can guess right now. We will try to test this kind of case if you could
>> > give us a bit more detail about the average file size, depth of the
>> > directories, etc., so we can simulate a similar-looking directory
>> > structure.
>> >
>> >>
>> >>
>> >>>
>> >>>
>> >>> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
>> >>> <pkarampu at redhat.com> wrote:
>> >>> >
>> >>> >
>> >>> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban
>> >>> > <cobanserkan at gmail.com> wrote:
>> >>> >>
>> >>> >> Healing gets slower as you increase m in an m+n configuration.
>> >>> >> We are using the 16+4 configuration without any problems other than
>> >>> >> heal speed.
>> >>> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals
>> >>> >> on 8+2 are 2x faster.
>> >>> >
>> >>> >
>> >>> > As you increase the number of nodes participating in an EC set, the
>> >>> > number of parallel heals increases. Is the heal speed improvement
>> >>> > you saw per file, or in the overall time it took to heal the data?
>> >>> >
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey
>> >>> >> <aspandey at redhat.com> wrote:
>> >>> >> >
>> >>> >> > The 8+2 and 8+3 configurations are not limitations, just
>> >>> >> > suggestions.
>> >>> >> > You can create a 16+3 volume without any issue.
>> >>> >> >
>> >>> >> > Ashish
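[Editor's note] A 16+3 disperse volume is created with the standard CLI by giving 19 bricks in total. A sketch, assuming 19 hosts with one brick each (hostnames, brick paths, and the volume name are placeholders):

```shell
# 19 bricks = 16 data + 3 redundancy; each brick on a different host so
# that the loss of any 3 hosts is tolerated.
gluster volume create bigvol disperse-data 16 redundancy 3 \
    server{1..19}:/bricks/brick1/bigvol
gluster volume start bigvol
```

Equivalently, `disperse 19 redundancy 3` can be given; the CLI warns if the brick layout places multiple fragments of one subvolume on the same host.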
>> >>> >> >
>> >>> >> > ________________________________
>> >>> >> > From: "Alastair Neil" <ajneil.tech at gmail.com>
>> >>> >> > To: "gluster-users" <gluster-users at gluster.org>
>> >>> >> > Sent: Friday, May 5, 2017 2:23:32 AM
>> >>> >> > Subject: [Gluster-users] disperse volume brick counts limits in
>> >>> >> > RHES
>> >>> >> >
>> >>> >> >
>> >>> >> > Hi
>> >>> >> >
>> >>> >> > we are deploying a large (24 node / 45 brick) cluster and noted
>> >>> >> > that the RHES guidelines limit the number of data bricks in a
>> >>> >> > disperse set to 8. Is there any reason for this? I am aware that
>> >>> >> > you want this to be a power of 2, but as we have a large number
>> >>> >> > of nodes we were planning on going with 16+3. Dropping to 8+2 or
>> >>> >> > 8+3 would be a real waste for us.
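[Editor's note] The "waste" here is the redundancy overhead: usable capacity of an m+n disperse layout is m/(m+n). A quick arithmetic sketch (integer percentages):

```shell
# Usable-capacity fraction of an m+n disperse layout, as an integer percent.
eff() { echo $(( 100 * $1 / ($1 + $2) )); }

echo "16+3: $(eff 16 3)%"   # 84%
echo "8+2:  $(eff 8 2)%"    # 80%
echo "8+3:  $(eff 8 3)%"    # 72%
```

So 16+3 keeps more raw capacity usable than either 8+2 or 8+3, at the cost of wider stripes and, per this thread, slower healing.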
>> >>> >> >
>> >>> >> > Thanks,
>> >>> >> >
>> >>> >> >
>> >>> >> > Alastair
>> >>> >> >
>> >>> >> >
>> >>> >> > _______________________________________________
>> >>> >> > Gluster-users mailing list
>> >>> >> > Gluster-users at gluster.org
>> >>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>> >> >
>> >>> >> >
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>> > --
>> >>> > Pranith
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Pranith
>> >
>> >
>> >
>> >
>> > --
>> > Pranith
>>
>
>
>
> --
> Pranith
>



-- 
Pranith