[Gluster-devel] Huge memory consumption with quota-marker
Raghavendra Gowdappa
rgowdapp at redhat.com
Thu Jul 2 06:14:47 UTC 2015
----- Original Message -----
> From: "Krishnan Parthasarathi" <kparthas at redhat.com>
> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>
> Cc: "Pranith Kumar Karampuri" <pkarampu at redhat.com>, "Vijay Bellur" <vbellur at redhat.com>, "Vijaikumar M"
> <vmallika at redhat.com>, "Gluster Devel" <gluster-devel at gluster.org>, "Nagaprasad Sathyanarayana"
> <nsathyan at redhat.com>
> Sent: Thursday, July 2, 2015 11:27:34 AM
> Subject: Re: Huge memory consumption with quota-marker
>
> Yes. The PROC_MAX is the maximum no. of 'worker' threads that would be
> spawned for a given syncenv.
So, if we create a new syncenv with a smaller stack size, the threads spawned in that syncenv will add to the number of threads in the process. However, if we create synctasks with a stack size different from the default env->stacksize, the tasks will have a smaller stack while still running on the worker threads of the default syncenv.
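For illustration, here is a minimal sketch of that second option: a synctask_new()-style call that accepts a per-task stack size. The name synctask_new_with_stacksize, its signature and the mq_update_txn callback are hypothetical; the point is only that the task's stack shrinks while the worker threads of the default syncenv stay shared.

    /* Sketch only: a hypothetical variant of synctask_new() that takes a
     * per-task stack size instead of always using env->stacksize. The
     * function name and signature are illustrative, not the current
     * libglusterfs API. */
    int synctask_new_with_stacksize (struct syncenv *env, size_t stacksize,
                                     synctask_fn_t fn, synctask_cbk_t cbk,
                                     call_frame_t *frame, void *opaque);

    /* Marker updation txn (mq_update_txn is a placeholder name): runs on
     * the workers of the default syncenv (this->ctx->env), but with a
     * 256K stack instead of the 2MB default. */
    ret = synctask_new_with_stacksize (this->ctx->env, 256 * 1024,
                                       mq_update_txn, NULL, frame, args);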
>
> ----- Original Message -----
> >
> >
> > ----- Original Message -----
> > > From: "Krishnan Parthasarathi" <kparthas at redhat.com>
> > > To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> > > Cc: "Vijay Bellur" <vbellur at redhat.com>, "Vijaikumar M"
> > > <vmallika at redhat.com>, "Gluster Devel"
> > > <gluster-devel at gluster.org>, "Raghavendra Gowdappa"
> > > <rgowdapp at redhat.com>,
> > > "Nagaprasad Sathyanarayana"
> > > <nsathyan at redhat.com>
> > > Sent: Thursday, July 2, 2015 10:54:44 AM
> > > Subject: Re: Huge memory consumption with quota-marker
> > >
> > > Yes, we could take synctask size as an argument for synctask_create.
> > > The increase in synctask threads is not really a problem, it can't
> > > grow more than 16 (SYNCENV_PROC_MAX).
> >
> > That is, it cannot grow more than PROC_MAX in a _single_ syncenv, I suppose.
> >
> > >
> > > ----- Original Message -----
> > > >
> > > >
> > > > On 07/02/2015 10:40 AM, Krishnan Parthasarathi wrote:
> > > > >
> > > > > ----- Original Message -----
> > > > >> On Wednesday 01 July 2015 08:41 AM, Vijaikumar M wrote:
> > > > >>> Hi,
> > > > >>>
> > > > >>> The new marker xlator uses the syncop framework to update
> > > > >>> quota-size in the background; it uses one synctask per write FOP.
> > > > >>> If there are 100 parallel writes on different inodes but under the
> > > > >>> same directory '/dir', there will be ~100 txns waiting in the queue
> > > > >>> to acquire a lock on their parent, i.e. '/dir'.
> > > > >>> Each of these txns uses a synctask, and each synctask allocates a
> > > > >>> stack of 2M (default size), so a total of 200M usage. This usage
> > > > >>> can increase depending on the load.
> > > > >>>
> > > > >>> I am thinking of reducing the stacksize for synctask to 256k; will
> > > > >>> this be sufficient, as we perform very limited operations within a
> > > > >>> synctask in marker updation?
> > > > >>>
> > > > >> Seems like a good idea to me. Do we need a 256k stacksize or can we
> > > > >> live with something even smaller?
> > > > > It was 16K when synctask was introduced. This is a property of the
> > > > > syncenv. We could create a separate syncenv for marker transactions
> > > > > which has smaller stacks.
> > > > > env->stacksize (and SYNCTASK_DEFAULT_STACKSIZE) was increased to 2MB
> > > > > to support pump xlator based data migration for replace-brick. For
> > > > > the no. of stack frames a marker transaction could use at any given
> > > > > time, we could use much less, say 16K.
> > > > > Does that make sense?
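(As a rough sketch of this alternative, assuming syncenv_new() takes the stack size and worker-thread bounds as arguments and that the marker xlator keeps the env in a private field, here called priv->marker_env:

    /* Sketch: a marker-private syncenv whose synctasks get 16K stacks.
     * priv->marker_env is a hypothetical field; passing 0 for procmin and
     * procmax assumes syncenv_new() falls back to its built-in defaults.
     * Note that this env spawns its own worker threads, in addition to
     * those of the process' default syncenv. */
    priv->marker_env = syncenv_new (16 * 1024, 0, 0);
    if (!priv->marker_env) {
            gf_log (this->name, GF_LOG_ERROR,
                    "failed to create syncenv for marker updations");
            goto err;
    }

The extra worker threads of such a private env are exactly the cost raised in the reply below.)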
> > > > Creating one more syncenv will lead to extra sync-threads; maybe we
> > > > can take the stacksize as an argument.
> > > >
> > > > Pranith
> > > >
> > >
> >
>