[Bugs] [Bug 1532192] memory leak in glusterfsd process
bugzilla at redhat.com
Wed Jan 10 14:02:21 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1532192
Alex <motogvar at gmail.com> changed:
What                |Removed                          |Added
----------------------------------------------------------------------------
Flags               |needinfo?(motogvar at gmail.com)  |
--- Comment #2 from Alex <motogvar at gmail.com> ---
If you increase the frequency of the changes, does the memory usage grow faster?
Yes.
How many days have you tracked this memory increase?
On our system I have seen continuous memory growth for 3 weeks.
Has it been growing at the same speed all the time, or has the rate of growth
decreased after some days?
The same rate (see the attached rate.txt file).
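The report does not state how rate.txt was produced. A minimal, hypothetical way
to collect such an RSS-over-time log for the brick process would be a loop like
the following (the PID argument and one-minute sampling interval are assumptions):
----------------------
#!/bin/bash
# Hypothetical sketch: sample the RSS (kB) of a given glusterfsd PID once a
# minute and append a timestamped line to rate.txt.
PID="$1"   # e.g. 6060, the brick PID seen in "ps -ely | grep gluster"
while kill -0 "$PID" 2>/dev/null; do
    echo "$(date -u +%FT%TZ) $(ps -o rss= -p "$PID")" >> rate.txt
    sleep 60
done
----------------------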
To collect debug statedumps, two different test cases were performed:
Test case 1: the system was left idle overnight; no new files were created.
Every 5 seconds, "sudo gluster volume heal home info" was called to check that
replication was OK (a sketch of the check loop follows).
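A minimal sketch of that periodic check, assuming a plain shell loop (the actual
scheduling mechanism is not stated in the report):
----------------------
#!/bin/bash
# Run the heal-info check for the "home" volume every 5 seconds.
while true; do
    sudo gluster volume heal home info
    sleep 5
done
----------------------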
In the evening:
Node A:
~$ date
Tue Jan 9 16:28:43 UTC 2018
~$ sudo ps -ely | grep gluster
S 0 6060 1 0 80 0 38224 246871 - ? 00:00:00 glusterfsd
S 0 6085 1 0 80 0 22772 155022 - ? 00:00:00 glusterfs
S 0 39482 1 0 80 0 23552 117579 - ? 00:00:01 glusterd
S 0 40111 1 0 80 0 50496 150073 - ? 00:00:01 glusterfs
Node B:
~$ date
Tue Jan 9 16:27:33 UTC 2018
~$ sudo ps -ely | grep gluster
S 0 158949 1 0 80 0 20340 117579 - ? 00:00:00 glusterd
S 0 160258 1 0 80 0 38308 246871 - ? 00:00:00 glusterfsd
S 0 160278 1 0 80 0 22256 119670 - ? 00:00:00 glusterfs
S 0 160379 1 0 80 0 39068 161520 - ? 00:00:00 glusterfs
Attachment (statedumps): NodeA_test_case_1_evening.6060.dump,
NodeB_test_case1_evening.160258.dump
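The comment does not say how these statedumps were taken; the usual way is the
statedump CLI, which asks each brick process of the volume to dump its state
(by default under /var/run/gluster, with the brick PID in the file name):
----------------------
# Trigger a statedump of every brick process of the "home" volume.
# The resulting files can then be renamed and attached, e.g. as
# NodeA_test_case_1_evening.6060.dump above.
sudo gluster volume statedump home
----------------------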
In the morning:
Node A:
~$ date
Wed Jan 10 07:46:11 UTC 2018
~$ sudo ps -ely | grep gluster
S 0 6060 1 0 80 0 39812 263255 - ? 00:00:21 glusterfsd
S 0 6085 1 0 80 0 22772 155022 - ? 00:00:01 glusterfs
S 0 39482 1 0 80 0 26092 117579 - ? 00:00:04 glusterd
S 0 40111 1 0 80 0 50496 150073 - ? 00:00:01 glusterfs
Node B:
~$ date
Wed Jan 10 07:51:50 UTC 2018
~$ sudo ps -ely | grep gluster
S 0 158949 1 0 80 0 20384 117579 - ? 00:00:01 glusterd
S 0 160258 1 0 80 0 39892 263255 - ? 00:00:18 glusterfsd
S 0 160278 1 0 80 0 22256 119670 - ? 00:00:01 glusterfs
S 0 160379 1 0 80 0 39068 161520 - ? 00:00:00 glusterfs
Attachment (statedumps): NodeA_test_case_1_morning.6060.dump,
NodeB_test_case_1_morning.160258.dump
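For reference, the RSS column of "ps -ely" is in kB, so over the roughly 15 hours
of test case 1 the glusterfsd RSS grew from 38224 kB to 39812 kB on node A
(+1588 kB) and from 38308 kB to 39892 kB on node B (+1584 kB), i.e. on the order
of 100 kB per hour on each node even with no file activity.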
Test case 2: the system was left for an hour with intensive file generation and
removal on the mounted glusterfs directory.
Every 5 seconds, "sudo gluster volume heal home info" was called to check that
replication was OK (same check loop as in test case 1).
Script for file generation:
----------------------
#!/bin/bash
# Repeatedly create and then delete a large file of random data.
while
    dd if=/dev/urandom of=tmp_file bs=64M count=16 iflag=fullblock;
    sleep 1;
    rm -f tmp_file;
do
    sleep 1;
done
----------------------
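Each pass of the loop writes a 16 x 64 MiB (about 1 GiB) file and then removes
it; per the test case description it was presumably run from inside the mounted
glusterfs directory, so the bricks see a continuous create/write/delete load.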
Node A:
~$ sudo ps -ely | grep gluster
S 0 10859 1 0 80 0 21528 117579 - ? 00:00:00 glusterd
S 0 11900 1 0 80 0 37624 263255 - ? 00:00:00 glusterfsd
S 0 11920 1 0 80 0 22116 119671 - ? 00:00:00 glusterfs
S 0 12019 1 0 80 0 41440 145137 - ? 00:00:00 glusterfs
Node B:
~$ sudo ps -ely | grep gluster
S 0 12076 1 0 80 0 22320 117579 - ? 00:00:00 glusterd
S 0 12657 1 0 80 0 50268 150074 - ? 00:00:00 glusterfs
S 0 32953 1 0 80 0 37664 263255 - ? 00:00:00 glusterfsd
S 0 32973 1 0 80 0 22408 155023 - ? 00:00:00 glusterfs
Attachment (statedumps): NodeA_test_case_2_start.11900.dump,
NodeB_test_case_2_start.32953.dump
After one hour:
Node A:
~$ sudo ps -ely | grep gluster
S 0 10859 1 0 80 0 21528 117579 - ? 00:00:00 glusterd
S 0 11900 1 3 80 0 38396 296280 - ? 00:03:37 glusterfsd
S 0 11920 1 0 80 0 22844 136589 - ? 00:00:00 glusterfs
S 0 12019 1 0 80 0 41628 145137 - ? 00:00:00 glusterfs
Node B:
~$ sudo ps -ely | grep gluster
S 0 12076 1 0 80 0 22404 117579 - ? 00:00:00 glusterd
S 0 12657 1 3 80 0 53500 150074 - ? 00:04:07 glusterfs
S 0 32953 1 1 80 0 38584 312921 - ? 00:02:09 glusterfsd
S 0 32973 1 0 80 0 22524 155023 - ? 00:00:00 glusterfs
Attachment (statedumps): NodeA_test_case_2_end.11900.dump,
NodeB_test_case_2_start.32953.dump
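Under load, the glusterfsd RSS grew from 37624 kB to 38396 kB on node A
(+772 kB) and from 37664 kB to 38584 kB on node B (+920 kB) within a single
hour, several times faster than in the idle overnight test, which is consistent
with the earlier answer that more frequent changes make the memory grow faster.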
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.