[Gluster-devel] Shared resource pool for libgfapi

Poornima Gurusiddaiah pgurusid at redhat.com
Tue Jun 9 06:57:50 UTC 2015


Initially, ctx had a one-to-one mapping with both the process and the
volume/mount, but with libgfapi and libgfchangelog, ctx has lost its
one-to-one association with the process. The question is whether we want to
retain a one-to-one mapping between ctx and the process, or between ctx and
the volume.
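
For reference, the current behavior is roughly the following (a simplified
sketch, not the exact glfs.c code): every glfs_new() unconditionally
allocates its own ctx, which is why every mount brings its own set of
threads and mem pools.

        /* Simplified sketch of today's per-mount ctx creation; error
         * handling and most details elided. */
        struct glfs *
        glfs_new (const char *volname)
        {
                struct glfs     *fs  = NULL;
                glusterfs_ctx_t *ctx = NULL;

                ctx = glusterfs_ctx_new ();   /* one ctx per mount today */
                if (!ctx)
                        return NULL;

                fs = glfs_new_fs (volname);   /* volume-specific state */
                if (!fs)
                        return NULL;          /* ctx cleanup elided */

                fs->ctx = ctx;
                return fs;
        }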

Having one ctx per process:
=========================== 
A shared ctx mandates that any volume-specific information move into a
different structure, such as the suggested glfs_t.

The complication is that the definition of ctx is currently common across
all gluster processes (cli, glusterfs, glusterfsd, gsyncd, etc.). The options
are:
- Keep a common ctx for all gluster processes, which means introducing a new
  struct equivalent to glfs_t in all processes.
- Have a separate definition of ctx only for libgfapi, which means changing
  the whole of the stack that references ctx to be aware of this difference.

Yet another approach would be to implement libgfapi along the lines of gnfs,
which ensures that ctx is common across all volumes and all mounts.
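
For illustration, the first option could look something like the sketch
below. glfs_new_with_ctx and get_process_ctx are hypothetical names, not
existing gfapi calls; the point is that the process-wide ctx is created once
and shared, and all volume-specific state stays in glfs_t.

        #include <pthread.h>

        /* Hypothetical: lazily create one ctx for the whole process. */
        static glusterfs_ctx_t *process_ctx;
        static pthread_mutex_t  process_ctx_lock = PTHREAD_MUTEX_INITIALIZER;

        static glusterfs_ctx_t *
        get_process_ctx (void)
        {
                pthread_mutex_lock (&process_ctx_lock);
                if (!process_ctx)
                        process_ctx = glusterfs_ctx_new ();
                pthread_mutex_unlock (&process_ctx_lock);
                return process_ctx;
        }

        struct glfs *
        glfs_new_with_ctx (const char *volname)
        {
                glusterfs_ctx_t *ctx = get_process_ctx ();
                struct glfs     *fs  = NULL;

                if (!ctx)
                        return NULL;

                fs = glfs_new_fs (volname); /* volume state lives in glfs_t */
                if (!fs)
                        return NULL;

                fs->ctx = ctx;              /* share the process-wide ctx */
                return fs;
        }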

Having one ctx per volume:
==========================
The problem here is that some resources, such as threads and mem pools,
should be shared across ctxs. To deal with this, we can introduce a resource
pool that provides threads and mem pools, and have each ctx get and put these
resources. The resulting abstraction would be a common resource pool in
gluster that caters to all the resource requirements of the process, with the
ctx associated with a volume instead of the process.
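
A rough sketch of what such a get/put interface could look like (all names
here are hypothetical):

        #include <pthread.h>

        /* Hypothetical refcounted, process-wide resource pool. */
        typedef struct gf_resource_pool {
                pthread_mutex_t lock;
                int             refcount;
                /* shared members would live here: event threads,
                 * iobuf/mem pools, etc. */
        } gf_resource_pool_t;

        static gf_resource_pool_t pool = { PTHREAD_MUTEX_INITIALIZER, 0 };

        gf_resource_pool_t *
        gf_resource_pool_get (void)
        {
                pthread_mutex_lock (&pool.lock);
                if (pool.refcount++ == 0) {
                        /* first user: spawn shared threads, create
                         * mem pools (elided) */
                }
                pthread_mutex_unlock (&pool.lock);
                return &pool;
        }

        void
        gf_resource_pool_put (gf_resource_pool_t *p)
        {
                pthread_mutex_lock (&p->lock);
                if (--p->refcount == 0) {
                        /* last user gone: tear down threads and
                         * pools (elided) */
                }
                pthread_mutex_unlock (&p->lock);
        }

With this, a ctx becomes a lightweight per-volume object that references the
shared pool rather than owning threads and mem pools of its own.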

To summarize, the two choices are:
- Introduce a resource pool and disassociate ctx from the process.
- Implement libgfapi along the lines of gnfs and retain one ctx per process;
  however, the complexity and the amount of change involved are high.

Let us know your comments.

Regards,
Poornima


----- Original Message -----
> From: "Vijay Bellur" <vbellur at redhat.com>
> To: "Jeff Darcy" <jdarcy at redhat.com>, "Poornima Gurusiddaiah" <pgurusid at redhat.com>
> Cc: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Monday, June 8, 2015 11:24:38 PM
> Subject: Re: [Gluster-devel] Shared resource pool for libgfapi
> 
> On 06/08/2015 05:21 PM, Jeff Darcy wrote:
> >> Every resource (threads, mem pools) is associated with glusterfs_ctx;
> >> hence, as the number of ctxs in the process grows, the resource
> >> utilization also grows (most of it being unused).  This is mostly an
> >> issue with any libgfapi application: USS, NFS Ganesha, Samba, vdsm,
> >> qemu.  It is normal for any libgfapi application to have multiple
> >> mounts (ctxs) in the same process, and we have seen the number of
> >> threads scale from the 10s to the 100s in these applications.
> >
> >> Solution:
> >> ======
> >> Have a shared resource pool of threads and mem pools. Since they are
> >> shared
> >
> > Looking at it from a different perspective...
> >
> > As I understand it, the purpose of glusterfs_ctx is to be a container
> > for these resources.  Therefore, the problem is not that the resources
> > aren't shared within a context but that the contexts aren't shared
> > among glfs objects.  This happens because we unconditionally call
> > glusterfs_ctx_new from within glfs_new.  To be honest, this looks a
> > bit like rushed code to me - a TODO in early development that never
> > got DONE later.  Perhaps the right thing to do is to let glfs_new
> > share an existing glusterfs_ctx instead of always creating a new one.
> > It would even be possible to make this the default behavior (so that
> > existing apps can benefit without change) but it might be better for
> > it to be a new call.  As a potential future enhancement, we could
> > provide granular control over which resources are shared and which
> > are private, much like clone(2) does with threads.
> 
> +1. In the pre-gfapi days, ctx was intended to be a global resource -
> one per process and was available to all translators.  It makes sense to
> retain the same behavior in gfapi by having a single ctx that can be
> shared across multiple glfs instances.
> 
> -Vijay
> 

