[Gluster-devel] Throttling xlator on the bricks

Raghavendra Bhat rabhat at redhat.com
Wed Jan 27 23:13:29 UTC 2016


There is already a patch submitted for moving the TBF part to libglusterfs;
it is under review:
http://review.gluster.org/#/c/12413/


Regards,
Raghavendra

On Mon, Jan 25, 2016 at 2:26 AM, Venky Shankar <vshankar at redhat.com> wrote:

> On Mon, Jan 25, 2016 at 11:06:26AM +0530, Ravishankar N wrote:
> > Hi,
> >
> > We are planning to introduce a throttling xlator on the server (brick)
> > process to regulate FOPs. The main motivation is to address complaints
> > about AFR self-heal consuming too much CPU (due to the large number of
> > FOPs issued for entry self-heal, rchecksums for data self-heal, etc.).
> >
> > The throttling is achieved using the Token Bucket Filter (TBF)
> > algorithm. TBF is already used by bitrot's bitd signer (which is a
> > client process) in gluster to regulate the CPU-intensive checksum
> > calculation. By putting the logic on the brick side, multiple clients
> > (self-heal, bitrot, rebalance, or even the mounts themselves) can
> > avail themselves of the benefits of throttling.
>
>   [Providing current TBF implementation link for completeness]
>
>
> https://github.com/gluster/glusterfs/blob/master/xlators/features/bit-rot/src/bitd/bit-rot-tbf.c
>
> Also, it would be beneficial to have the core TBF implementation as part
> of libglusterfs so that it can be consumed by the server-side xlator
> component to throttle dispatched FOPs, and by daemons to throttle
> anything outside the "brick" boundary (such as CPU usage, etc.).
>
> >
> > The TBF algorithm in a nutshell is as follows: there is a bucket which
> > is filled at a steady (configurable) rate with tokens. Each FOP needs
> > a fixed number of tokens to be processed. If the bucket has that many
> > tokens, the FOP is allowed and that many tokens are removed from the
> > bucket. If not, the FOP is queued until the bucket is refilled.
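> >
> > To make that concrete, here is a minimal, self-contained sketch of the
> > bucket accounting (illustrative only, not the bitd code):
> >
> > #include <stdbool.h>
> > #include <stdio.h>
> > #include <time.h>
> >
> > typedef struct {
> >         double rate;          /* tokens added per second (tunable) */
> >         double capacity;      /* most tokens the bucket can hold   */
> >         double tokens;        /* tokens currently available        */
> >         struct timespec last; /* time of the previous refill       */
> > } tbf_bucket_t;
> >
> > /* Top up the bucket in proportion to the elapsed time. */
> > static void
> > tbf_refill (tbf_bucket_t *bkt)
> > {
> >         struct timespec now;
> >         double elapsed;
> >
> >         clock_gettime (CLOCK_MONOTONIC, &now);
> >         elapsed = (now.tv_sec - bkt->last.tv_sec) +
> >                   (now.tv_nsec - bkt->last.tv_nsec) / 1e9;
> >         bkt->tokens += bkt->rate * elapsed;
> >         if (bkt->tokens > bkt->capacity)
> >                 bkt->tokens = bkt->capacity;
> >         bkt->last = now;
> > }
> >
> > /* Admit a FOP costing @need tokens, or report that it must be
> >  * queued until the bucket refills. */
> > static bool
> > tbf_try_take (tbf_bucket_t *bkt, double need)
> > {
> >         tbf_refill (bkt);
> >         if (bkt->tokens < need)
> >                 return false;   /* caller queues the FOP */
> >         bkt->tokens -= need;
> >         return true;
> > }
> >
> > int
> > main (void)
> > {
> >         tbf_bucket_t bkt = { .rate = 10, .capacity = 50, .tokens = 50 };
> >
> >         clock_gettime (CLOCK_MONOTONIC, &bkt.last);
> >         /* With 50 tokens and a cost of 10, the first five FOPs are
> >          * admitted and the rest would be queued. */
> >         for (int i = 0; i < 8; i++)
> >                 printf ("fop %d: %s\n", i,
> >                         tbf_try_take (&bkt, 10) ? "allowed" : "queued");
> >         return 0;
> > }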
> >
> > The xlator will need to reside above io-threads and can have different
> > buckets, one per client. There has to be a communication mechanism
> > between the client and the brick (IPC?) to tell the brick which FOPs
> > need to be regulated for that client, the number of tokens needed, etc.
> > These need to be reconfigurable via appropriate mechanisms. Each bucket
> > will have a token-filler thread which fills the tokens in it. The main
> > thread will enqueue heals in a list in the bucket if there aren't
> > enough tokens. Once the token filler detects that some FOPs can be
> > serviced, it will send a cond-broadcast to a dequeue thread, which will
> > process (stack-wind) all the FOPs that have the required number of
> > tokens, across all buckets. (A rough sketch of this structure follows
> > below.)
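> >
> > A rough, self-contained sketch of one such bucket with its filler and
> > dequeue threads (all names and the queued-FOP type are invented here
> > for illustration; a real xlator would wind a saved call frame, and
> > would keep one bucket per client rather than the single one shown):
> >
> > #include <pthread.h>
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <unistd.h>
> >
> > typedef struct queued_fop {
> >         int                tokens_needed; /* cost of this FOP        */
> >         int                id;            /* stand-in for a frame    */
> >         struct queued_fop *next;
> > } queued_fop_t;
> >
> > typedef struct {
> >         pthread_mutex_t lock;
> >         pthread_cond_t  cond;   /* filler -> dequeuer wakeup      */
> >         long            tokens; /* tokens currently available     */
> >         long            rate;   /* tokens added per fill interval */
> >         queued_fop_t   *queue;  /* FOPs waiting for tokens        */
> > } bucket_t;
> >
> > /* Stand-in for STACK_WIND()ing the queued FOP down the graph. */
> > static void
> > wind_fop (queued_fop_t *fop)
> > {
> >         printf ("winding fop %d\n", fop->id);
> >         free (fop);
> > }
> >
> > /* Token-filler thread, one per bucket: adds tokens at a steady
> >  * rate and wakes the dequeuer when the queue head is serviceable. */
> > static void *
> > filler (void *arg)
> > {
> >         bucket_t *bkt = arg;
> >
> >         for (;;) {
> >                 usleep (100000); /* fill interval: 100 ms */
> >                 pthread_mutex_lock (&bkt->lock);
> >                 bkt->tokens += bkt->rate;
> >                 if (bkt->queue &&
> >                     bkt->queue->tokens_needed <= bkt->tokens)
> >                         pthread_cond_broadcast (&bkt->cond);
> >                 pthread_mutex_unlock (&bkt->lock);
> >         }
> > }
> >
> > /* Dequeue thread: winds every queued FOP whose cost is covered. */
> > static void *
> > dequeuer (void *arg)
> > {
> >         bucket_t     *bkt = arg;
> >         queued_fop_t *fop;
> >
> >         pthread_mutex_lock (&bkt->lock);
> >         for (;;) {
> >                 while (!bkt->queue ||
> >                        bkt->queue->tokens_needed > bkt->tokens)
> >                         pthread_cond_wait (&bkt->cond, &bkt->lock);
> >                 while (bkt->queue &&
> >                        bkt->queue->tokens_needed <= bkt->tokens) {
> >                         fop = bkt->queue;
> >                         bkt->queue = fop->next;
> >                         bkt->tokens -= fop->tokens_needed;
> >                         wind_fop (fop);
> >                 }
> >         }
> > }
> >
> > int
> > main (void)
> > {
> >         bucket_t  bkt = { .tokens = 0, .rate = 5, .queue = NULL };
> >         pthread_t f, d;
> >
> >         pthread_mutex_init (&bkt.lock, NULL);
> >         pthread_cond_init (&bkt.cond, NULL);
> >         /* Pretend the main (FOP) path queued three FOPs of cost 10. */
> >         for (int i = 3; i > 0; i--) {
> >                 queued_fop_t *fop = calloc (1, sizeof (*fop));
> >                 fop->tokens_needed = 10;
> >                 fop->id = i;
> >                 fop->next = bkt.queue;
> >                 bkt.queue = fop;
> >         }
> >         pthread_create (&f, NULL, filler, &bkt);
> >         pthread_create (&d, NULL, dequeuer, &bkt);
> >         sleep (1); /* tokens accumulate; all three FOPs get wound */
> >         return 0;
> > }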
> >
> > This is just a high-level abstraction; we are requesting feedback on
> > any aspect of this feature. What kind of mechanism is best between the
> > clients and bricks for tuning the various parameters? What other
> > requirements do you foresee?
> >
> > Thanks,
> > Ravi
>

