[Gluster-devel] Huge memory consumption with quota-marker

Pranith Kumar Karampuri pkarampu at redhat.com
Thu Jul 2 05:14:25 UTC 2015



On 07/02/2015 10:40 AM, Krishnan Parthasarathi wrote:
>
> ----- Original Message -----
>> On Wednesday 01 July 2015 08:41 AM, Vijaikumar M wrote:
>>> Hi,
>>>
>>> The new marker xlator uses the syncop framework to update quota-size
>>> in the background, using one synctask per write FOP.
>>> If there are 100 parallel writes to different inodes under the same
>>> directory '/dir', there will be ~100 txns waiting in the queue to
>>> acquire a lock on their parent, i.e. '/dir'.
>>> Each of these txns uses a synctask, and each synctask allocates a 2M
>>> stack (the default size), for a total of ~200M. This usage can grow
>>> further depending on the load.
>>>
>>> I am thinking of reducing the synctask stack size to 256K. Will that
>>> much memory be sufficient, given that we perform very limited
>>> operations within a synctask during marker updates?
>>>
>> Seems like a good idea to me. Do we need a 256K stack size, or can we
>> live with something even smaller?
> It was 16K when synctask was introduced. This is a property of the
> syncenv; we could create a separate syncenv for marker transactions
> with smaller stacks. env->stacksize (and SYNCTASK_DEFAULT_STACKSIZE)
> was increased to 2MB to support the pump-xlator-based data migration
> for replace-brick. For the number of stack frames a marker transaction
> could use at any given time, much less would do, say 16K.
> Does that make sense?
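A dedicated env would be mechanically simple, assuming the
three-argument syncenv_new() from syncop.h; "marker_env" and the 16K
figure below are illustrative, untested:

    /* Sketch: a dedicated syncenv for marker transactions with 16K
     * stacks. syncenv_new (stacksize, procmin, procmax) is the
     * existing constructor; SYNCENV_PROC_MIN/MAX are the thread-count
     * defaults from syncop.h. */
    struct syncenv *marker_env = syncenv_new (16 * 1024,
                                              SYNCENV_PROC_MIN,
                                              SYNCENV_PROC_MAX);
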
Creating one more syncenv will lead to extra sync-threads, though; maybe
we can take the stack size as an argument when creating the synctask
instead.
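Something along these lines (untested sketch; synctask_new() today reads
env->stacksize, and the *_new1 name and the marker callbacks below are
hypothetical):

    /* Hypothetical variant of synctask_new() taking an explicit
     * per-task stack size instead of env->stacksize. With 16K stacks,
     * 100 queued marker txns would cost ~1.6M instead of ~200M. */
    int
    synctask_new1 (struct syncenv *env, size_t stacksize,
                   synctask_fn_t fn, synctask_cbk_t cbk,
                   call_frame_t *frame, void *opaque);

    /* marker would then start its txn as, e.g.: */
    ret = synctask_new1 (this->ctx->env, 16 * 1024,
                         mq_start_quota_txn, mq_txn_cbk, NULL, args);

No extra threads, and each consumer picks a stack that matches its
deepest call path.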

Pranith

