[Gluster-users] ec heal questions

Pranith Kumar Karampuri pkarampu at redhat.com
Thu Aug 11 06:55:15 UTC 2016


I don't think these will help. We need to trigger parallel heals; I
gave the command in a reply to one of your earlier threads. Sorry again
for the delay :-(.
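To give the general idea again: fan file lookups out over a fuse mount
from several processes, so that more files get queued for heal at once.
A rough sketch only (the mount point and parallelism here are
placeholders, not necessarily the exact command from that thread):

    # stat files in parallel on a fuse mount of the volume; a lookup on
    # a file that needs heal queues it for background healing
    find /mnt/glustervol -type f | xargs -n 100 -P 8 stat > /dev/null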

On Tue, Aug 9, 2016 at 3:53 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:

> Does increasing any of the values below help EC heal speed?
>
> performance.io-thread-count 16
> performance.high-prio-threads 16
> performance.normal-prio-threads 16
> performance.low-prio-threads 16
> performance.least-prio-threads 1
> client.event-threads 8
> server.event-threads 8
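>
> For clarity, each of these is a regular per-volume option; assuming a
> volume named "v0", I am setting them like:
>
>     gluster volume set v0 performance.io-thread-count 16
>     gluster volume set v0 client.event-threads 8
>     gluster volume set v0 server.event-threads 8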
>
>
> On Mon, Aug 8, 2016 at 2:48 PM, Ashish Pandey <aspandey at redhat.com> wrote:
> > Serkan,
> >
> > Heals for two different files can proceed in parallel, but not
> > different chunks of a single file.
> > I think you are referring to your previous mail, in which you had
> > to remove one complete disk.
> >
> > In that case the heal starts automatically, but it scans through
> > each and every file/dir to decide whether it needs heal or not,
> > which is undoubtedly more time-consuming than an index heal.
> > If the data is 900GB then it might take a lot of time.
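> >
> > For reference, the two heal variants are triggered like this
> > (<volname> is a placeholder):
> >
> >     gluster volume heal <volname>        # index heal
> >     gluster volume heal <volname> full   # full crawl of every file/dir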
> >
> > Which configuration to choose depends a lot on your storage
> > requirements, hardware capability, and the probability of disk and
> > network failures.
> >
> > For example, a small configuration like 4+2 could help you in this
> > scenario: you can have a distributed-disperse volume of 4+2 config.
> > Each subvolume then holds comparatively less data, so if a brick in
> > a subvolume fails, only that subvolume's data has to be healed, and
> > that too by reading from only 4 bricks.
> >
> > dist-disp-vol
> >
> > subvol-1    subvol-2    subvol-3
> >   4+2         4+2         4+2
> >   4GB         4GB         4GB
> > ^^^
> > If a brick in subvol-1 fails, the heal is local to that subvolume:
> > only 4GB of data needs to be healed, read from just 4 disks.
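> >
> > A sketch of creating such a layout (hostnames and brick paths are
> > made up; the 18 bricks listed in this order form three 4+2
> > subvolumes):
> >
> >     gluster volume create dist-disp-vol disperse-data 4 redundancy 2 \
> >         server{1..6}:/bricks/b1 \
> >         server{1..6}:/bricks/b2 \
> >         server{1..6}:/bricks/b3
> >     gluster volume start dist-disp-vol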
> >
> > I am keeping Pranith in CC to take his input too.
> >
> > Ashish
> >
> >
> > ________________________________
> > From: "Serkan Çoban" <cobanserkan at gmail.com>
> > To: "Ashish Pandey" <aspandey at redhat.com>
> > Cc: "Gluster Users" <gluster-users at gluster.org>
> > Sent: Monday, August 8, 2016 4:47:02 PM
> > Subject: Re: [Gluster-users] ec heal questions
> >
> >
> > Is reading the good copies to reconstruct the bad chunk a parallel
> > or sequential operation?
> > Should I revert my 16+4 EC cluster to 8+2, given that it takes
> > nearly 7 days to heal just one broken 8TB disk holding only 800GB
> > of data?
> >
> > On Mon, Aug 8, 2016 at 1:56 PM, Ashish Pandey <aspandey at redhat.com> wrote:
> >>
> >> Hi,
> >>
> >> Considering all other factors the same for both configurations,
> >> yes, the smaller configuration would take less time, because
> >> reading the good copies takes less time.
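> >>
> >> Rough arithmetic with your 100GB example: to rebuild one brick's
> >> 100GB of fragments, an 8+2 heal reads 8 x 100GB = 800GB from the
> >> good bricks, while a 16+4 heal reads 16 x 100GB = 1.6TB.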
> >>
> >> I think multi-threaded shd is the only enhancement coming in the
> >> near future.
> >>
> >> Ashish
> >>
> >> ________________________________
> >> From: "Serkan Çoban" <cobanserkan at gmail.com>
> >> To: "Gluster Users" <gluster-users at gluster.org>
> >> Sent: Monday, August 8, 2016 4:02:22 PM
> >> Subject: [Gluster-users] ec heal questions
> >>
> >>
> >> Hi,
> >>
> >> Assume we have 8+2 and 16+4 EC configurations, and in each we just
> >> replaced a broken disk that held 100GB of data. In which case does
> >> the heal complete faster? Is heal speed related to the EC
> >> configuration at all?
> >>
> >> Assume we are in the 16+4 EC configuration. When the heal starts,
> >> it reads 16 chunks from the other bricks, recomputes the missing
> >> chunks, and writes them to the just-replaced disk. Am I correct?
> >>
> >> If the above assumption is true, then smaller EC configurations
> >> heal faster, right?
> >>
> >> Are there any improvements in 3.7.14+ that make EC heal faster
> >> (other than multi-threaded shd for EC)?
> >>
> >> Thanks,
> >> Serkan
>



-- 
Pranith