[Gluster-devel] Memory usage behavior for nested directories

Polakis Vaggelis vpolakis at gmail.com
Thu May 26 21:42:42 UTC 2016


Many thanks for the help!

Please note that the relevant statedump after the directory deletion is the one at:
https://bugzilla.redhat.com/attachment.cgi?id=1162159

For quick-read I can see the following.

Before directory deletion:
[performance/quick-read.log-quick-read - usage-type gf_common_mt_list_head memusage]
size=16
[performance/quick-read.log-quick-read - usage-type gf_common_mt_iobref memusage]
size=0
[performance/quick-read.log-quick-read - usage-type gf_common_mt_asprintf memusage]
size=6
[performance/quick-read.log-quick-read - usage-type gf_common_mt_char memusage]
size=18
[performance/quick-read.log-quick-read - usage-type gf_common_mt_iobrefs memusage]
size=0
[performance/quick-read.log-quick-read - usage-type gf_qr_mt_qr_inode_t memusage]
size=12597552
[performance/quick-read.log-quick-read - usage-type gf_qr_mt_content_t memusage]
size=134217204
[performance/quick-read.log-quick-read - usage-type gf_qr_mt_qr_private_t memusage]
size=72

After directory deletion:
[performance/quick-read.log-quick-read - usage-type gf_common_mt_list_head memusage]
size=16
[performance/quick-read.log-quick-read - usage-type gf_common_mt_iobref memusage]
size=0
[performance/quick-read.log-quick-read - usage-type gf_common_mt_asprintf memusage]
size=0
[performance/quick-read.log-quick-read - usage-type gf_common_mt_char memusage]
size=0
[performance/quick-read.log-quick-read - usage-type gf_common_mt_iobrefs memusage]
size=0
[performance/quick-read.log-quick-read - usage-type gf_qr_mt_qr_inode_t memusage]
size=0
[performance/quick-read.log-quick-read - usage-type gf_qr_mt_content_t memusage]
size=0
[performance/quick-read.log-quick-read - usage-type gf_qr_mt_qr_private_t memusage]
size=72
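
In case it helps, this is roughly how I pull the per-usage-type sizes out
of two statedump files and diff them. It is a quick throwaway Python sketch,
not part of gluster; it only assumes the usual statedump layout where a
"[... - usage-type <name> memusage]" header line is followed by a "size="
line, as in the dumps above:

#!/usr/bin/env python
# diff_memusage.py: compare the per-usage-type "size" values of two
# glusterfs statedumps (e.g. before and after the directory deletion).
import re
import sys

def usage_sizes(path):
    """Return {usage-type: size} for every memusage section in a dump."""
    sizes, current = {}, None
    with open(path) as f:
        for line in f:
            # header line names the usage-type
            m = re.search(r'usage-type\s+(\S+)\s+memusage', line)
            if m:
                current = m.group(1)
                continue
            # the first "size=" line after the header carries the byte count
            m = re.match(r'size=(\d+)', line.strip())
            if m and current:
                sizes[current] = int(m.group(1))
                current = None
    return sizes

if __name__ == '__main__':
    before, after = usage_sizes(sys.argv[1]), usage_sizes(sys.argv[2])
    for name in sorted(set(before) | set(after)):
        b, a = before.get(name, 0), after.get(name, 0)
        if b != a:
            print('%-40s %12d -> %12d' % (name, b, a))

Running it as "python diff_memusage.py dump.before dump.after" on the dumps
above would show gf_qr_mt_content_t dropping from 134217204 to 0.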

As mentioned in my previous mail, I see that cur-stdalloc is zero for
the pools (which, as far as I understand, represents the extra heap
allocations made once a pool is exhausted). I also see non-zero
hot-counts, for example:

pool-name=glusterfs:dict_t
hot-count=3850

pool-name=glusterfs:data_t
hot-count=3869

but (as far as I understand) hot-count covers objects served from the
initially pre-allocated pool, not extra mallocs. I am not sure whether
these indicate a memory leak, or maybe we are missing something about
the glusterfs design.
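
For completeness, this is the kind of check I did on the pool entries; only
the pool-name, hot-count and cur-stdalloc fields quoted above are used
(again just a rough sketch, not gluster code). Any pool with a non-zero
cur-stdalloc would point to allocations made outside the pre-allocated pool:

#!/usr/bin/env python
# pool_check.py: list hot-count (objects handed out from the pre-allocated
# pool) and cur-stdalloc (objects malloc'ed because the pool ran out) for
# every pool entry found in a glusterfs statedump.
import sys

def pools(path):
    out, pool = [], None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith('pool-name='):
                # a new pool entry starts at its pool-name line
                pool = {'pool-name': line.split('=', 1)[1]}
                out.append(pool)
            elif pool is not None and '=' in line:
                key, _, val = line.partition('=')
                if key in ('hot-count', 'cur-stdalloc'):
                    pool[key] = int(val)
    return out

if __name__ == '__main__':
    for p in pools(sys.argv[1]):
        std = p.get('cur-stdalloc', 0)
        flag = '  <-- extra heap allocations' if std > 0 else ''
        print('%-40s hot-count=%-8d cur-stdalloc=%-8d%s'
              % (p['pool-name'], p.get('hot-count', 0), std, flag))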

br, vangelis

On Thu, May 26, 2016 at 10:29 PM, Vijay Bellur <vbellur at redhat.com> wrote:
> On Thu, May 26, 2016 at 11:02 AM, Kremmyda, Olympia (Nokia -
> GR/Athens) <olympia.kremmyda at nokia.com> wrote:
>> Hi,
>>
>> We use Gluster 3.6.9 in one replicated volume (named “log”), with two
>> bricks.
>> Our tests include nested directory creation operations (from 1000 up to
>> 250000 directory trees) with a depth of 396; no deletion is performed.
>>
>> We have observed the following memory usage statistics shown in the images:
>>         https://bugzilla.redhat.com/attachment.cgi?id=1162032
>> https://bugzilla.redhat.com/attachment.cgi?id=1162033
>> https://bugzilla.redhat.com/attachment.cgi?id=1162034
>> (statedumps are in https://bugzilla.redhat.com/attachment.cgi?id=1162035 )
>>
>> and we would like your opinion on whether this memory usage is normal for
>> glusterfs. Also, after our tests we delete these directories and the memory
>> is not released.
>> Can you describe to us the expected memory behaviour in these cases?
>>
>
>
> The behavior does not look right to me. Do you happen to have small
> files in this directory tree structure as well? I can see from the
> statedump that quick-read is consuming some memory for its cache.
> FWIW I have been running directory creation / deletion tests with
> 3.8rc2 and do not see any steady increase in RSS.
>
> Regards,
> Vijay
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

