[Bugs] [Bug 1329335] GlusterFS - Memory Leak - High Memory Utilization

bugzilla at redhat.com bugzilla at redhat.com
Wed Apr 27 11:53:10 UTC 2016


--- Comment #11 from Kaushal <kaushal at redhat.com> ---
As Rafi has mentioned already, I too think it's the volume profile polling
causing issues.

From the statedumps, I see that memory allocations for dict_t, data_t,
data_pair_t, gf_common_mt_memdup, gf_common_mt_asprintf and gf_common_mt_strdup
have increased quite a lot. These are the memory types generally associated
with the GlusterFS dictionary data type and operations on it (including
dict_serialize and unserialize).
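To see which allocation types are growing, two statedumps taken some time apart can be diffed per usage-type. The following is a rough sketch, not GlusterFS code; the section-header and `size=` line layout it parses is the assumed statedump memory-accounting format, so adjust the regex to match your actual dump files.

```python
import re
from collections import defaultdict

def mem_usage_by_type(statedump_text):
    """Sum 'size=' bytes per usage-type across memusage sections.

    Assumed (hedged) section format:
        [global.glusterfs - usage-type gf_common_mt_strdup memusage]
        size=1234
        num_allocs=56
    """
    totals = defaultdict(int)
    current = None
    for line in statedump_text.splitlines():
        m = re.match(r"\[.* - usage-type (\S+) memusage\]", line)
        if m:
            current = m.group(1)          # entering a new usage-type section
        elif current and line.startswith("size="):
            totals[current] += int(line.split("=", 1)[1])
    return dict(totals)

def growth(before, after):
    """Per-type byte delta between two dumps; positive values indicate growth."""
    return {t: after.get(t, 0) - before.get(t, 0)
            for t in set(before) | set(after)}
```

Sorting the `growth()` result by value quickly surfaces the dict-related types (dict_t, data_t, data_pair_t, the gf_common_mt_* types) if they are the ones leaking.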

Information in GlusterFS is passed between processes (brick to glusterd,
glusterd to glusterd, and glusterd to cli) using dictionaries as containers.
Certain operations, like 'volume profile', generate a large amount of data,
which makes these dictionaries huge. Aggregating information from multiple
sources in volume profile also involves a lot of data duplication, which uses
a lot of memory.
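The duplication cost of that aggregation step can be illustrated with a toy sketch (this is illustrative Python, not the GlusterFS dict implementation): when per-brick stat dictionaries are flattened into one combined dictionary, every key and value is copied again under a brick-prefixed key, so the aggregate roughly doubles the data held in memory.

```python
def aggregate_profiles(per_brick):
    """Flatten per-brick stat dicts into one combined dict.

    Every key/value from every brick is stored again under a prefixed
    key, so the combined dict duplicates all of the per-brick data.
    """
    combined = {}
    for brick, stats in per_brick.items():
        for key, value in stats.items():
            combined["%s.%s" % (brick, key)] = value
    return combined
```

With N bricks each contributing M entries, the combined dict holds N*M additional entries on top of the originals, which is where a large profile payload multiplies its memory footprint.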

While memory allocated to dictionaries should normally be freed when the
dictionary is destroyed, it appears there is a quite significant leak in the
volume profile path. We'll try to identify the leak as soon as we can.

In the meantime, I do hope that stopping the agent has helped. GlusterFS
doesn't have any other way, apart from volume profile, to gather volume stats
on the server side. A client-side tool, glusterfsiostat[1], was implemented a
couple of years back as a GSoC project. You could try it out.

If that doesn't work out, and you really need to monitor the stats, I suggest
that you increase the polling interval. From the logs, I see that the interval
is 1 minute right now, which can be changed to, say, 5 minutes. Also, the
polling is being done from both servers, which effectively makes the polling
period 30 seconds. You can use just one of the servers to get the stats, as
volume profile gathers stats for a volume from the whole cluster.
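Concretely, the two suggestions above could look like the following crontab fragment on a single server (this is a hedged sketch, not a tested setup; VOLNAME, the gluster binary path, and the log path are placeholders for your environment):

```shell
# Run on ONE server only; 'volume profile' already aggregates stats for
# the whole cluster, so polling from both servers is redundant.
# Poll every 5 minutes instead of every minute:
*/5 * * * * /usr/sbin/gluster volume profile VOLNAME info >> /var/log/gluster-profile.log 2>&1
```

Disable the equivalent job (or the monitoring agent's profile poll) on the second server so the effective polling period really is 5 minutes.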
