[Bugs] [Bug 1734027] glusterd 6.4 memory leaks 2-3 GB per 24h (OOM)

bugzilla at redhat.com bugzilla at redhat.com
Wed Aug 14 14:20:02 UTC 2019


--- Comment #14 from Alex <totalworlddomination at gmail.com> ---
Statedump worked; my bad, I was thinking of kill -1 instead of -10... :)
I've attached it, with the command used to generate it as its description.
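For reference, a minimal sketch of how the statedump was triggered, assuming stock glusterfs packaging (SIGUSR1, i.e. signal 10, requests the dump; SIGHUP is 1; dumps land under /var/run/gluster/ by default). The glusterd process name is the assumption here:

```shell
# SIGUSR1 (signal 10) asks a gluster process for a statedump; SIGHUP is 1.
kill -l 10                      # prints the name of signal 10 (USR1)
pid=$(pidof glusterd || true)   # assumes glusterd is the target process
if [ -n "$pid" ]; then
    kill -USR1 "$pid"           # equivalent to `kill -10 $pid`
fi
# The dump appears as /var/run/gluster/glusterdump.<pid>.dump.<timestamp>
```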

I do have a glusterd-exporter for prometheus running.
I just stopped them for a few days to see what happens.

I've also attached all 3 cmd_history.log.
Interestingly, since I stopped the glusterd-exporter at ~10 AM EDT, about 15
minutes before copying the logs (~14h UTC), the repeating messages (tail of
cmd_history.log):
[2019-08-14 13:59:35.027247]  : volume profile gluster info cumulative : FAILED
: Profile on Volume gluster is not started
[2019-08-14 13:59:35.193063]  : volume status all detail : SUCCESS
[2019-08-14 13:59:35.199847]  : volume status all detail : SUCCESS

... seem to have stopped at the same time!

Is that what you meant by monitoring command?
Is the problem that the exporter isn't clearing something after fetching its
data, or that gluster is accumulating some sort of cache for those commands?
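If it helps, the exporter's polling can be approximated from the shell to check whether those two commands alone drive glusterd's RSS growth. A hedged sketch; the gluster CLI, the 60-second interval, and the iteration count are assumptions, and it no-ops on hosts without gluster installed:

```shell
# Hypothetical reproduction of a metrics exporter's scrape loop, guarded so
# it does nothing on machines without the gluster CLI.
if command -v gluster >/dev/null 2>&1; then
    for _ in 1 2 3; do
        gluster volume status all detail >/dev/null  # same command seen in cmd_history.log
        ps -o rss= -p "$(pidof glusterd)"            # watch glusterd resident memory
        sleep 60
    done
    status=polled
else
    status=skipped
fi
echo "$status"
```

Watching the RSS column grow across iterations (with the real exporter stopped) would point at the command path itself rather than the exporter.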

