[Gluster-users] disperse volume brick counts limits in RHES
Pranith Kumar Karampuri
pkarampu at redhat.com
Fri May 5 11:49:11 UTC 2017
On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> It is the overall time; the 8 TB data disk healed 2x faster in the 8+2
> configuration.
>
Wow, that is counterintuitive to me. I will need to look into why that
could be. Thanks a lot for this feedback!
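A rough sketch of the tradeoff being discussed (illustrative only, not
measured data from this thread): in an m+n dispersed volume, usable capacity
is m/(m+n), and with Reed-Solomon-style coding, rebuilding one lost fragment
requires reading m surviving fragments, so per-file heal work grows with m.
Actual Gluster heal throughput also depends on parallelism and network
topology, which this simplification ignores.

```python
# Compare the erasure-coded (m+n) configurations mentioned in the thread.
# Simplifying assumption: reconstructing one lost fragment reads m surviving
# fragments (Reed-Solomon style); parallelism and network effects ignored.

def ec_stats(m, n):
    """Return (storage efficiency, fragments read per healed fragment)."""
    efficiency = m / (m + n)   # fraction of raw capacity holding data
    heal_reads = m             # fragments read to rebuild one fragment
    return efficiency, heal_reads

for m, n in [(8, 2), (8, 3), (16, 3), (16, 4)]:
    eff, reads = ec_stats(m, n)
    print(f"{m}+{n}: efficiency {eff:.1%}, reads per healed fragment {reads}")
# 8+2 is 80.0% efficient, 16+3 is 84.2% -- but 16-wide sets read twice as
# many fragments per rebuilt fragment as 8-wide sets.
```

This is consistent with both observations above: wider sets waste less raw
capacity, while narrower sets do less read work per healed file.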
>
> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
> <pkarampu at redhat.com> wrote:
> >
> >
> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban <cobanserkan at gmail.com>
> > wrote:
> >>
> >> Healing gets slower as you increase m in an m+n configuration.
> >> We are using a 16+4 configuration without any problems other than heal
> >> speed.
> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on
> >> 8+2 are faster by 2x.
> >
> >
> > As you increase the number of nodes participating in an EC set, the
> > number of parallel heals increases. Is the heal speed improvement you
> > saw per file, or in the overall time it took to heal the data?
> >
> >>
> >>
> >>
> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey <aspandey at redhat.com>
> >> wrote:
> >> >
> >> > The 8+2 and 8+3 configurations are not limitations, just
> >> > suggestions. You can create a 16+3 volume without any issue.
> >> >
> >> > Ashish
> >> >
> >> > ________________________________
> >> > From: "Alastair Neil" <ajneil.tech at gmail.com>
> >> > To: "gluster-users" <gluster-users at gluster.org>
> >> > Sent: Friday, May 5, 2017 2:23:32 AM
> >> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
> >> >
> >> >
> >> > Hi
> >> >
> >> > We are deploying a large (24-node/45-brick) cluster and noted that
> >> > the RHES guidelines limit the number of data bricks in a disperse set
> >> > to 8. Is there any reason for this? I am aware that you want this to
> >> > be a power of 2, but as we have a large number of nodes we were
> >> > planning on going with 16+3. Dropping to 8+2 or 8+3 would be a real
> >> > waste for us.
> >> >
> >> > Thanks,
> >> >
> >> >
> >> > Alastair
> >> >
> >> >
> >> > _______________________________________________
> >> > Gluster-users mailing list
> >> > Gluster-users at gluster.org
> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >
> >
> >
> >
> > --
> > Pranith
>
--
Pranith