[Bugs] [Bug 1734027] glusterd 6.4 memory leaks 2-3 GB per 24h (OOM)

bugzilla at redhat.com bugzilla at redhat.com
Mon Aug 12 14:35:19 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1734027

Alex <totalworlddomination at gmail.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(totalworlddomination at gmail.com)|



--- Comment #8 from Alex <totalworlddomination at gmail.com> ---
1.
> gluster volume info

Volume Name: gluster
Type: Replicate
Volume ID: 60ae0ddf-67d0-4b23-b694-0250c17a2f04
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.27.39.82:/mnt/xfs-drive-gluster/brick
Brick2: 172.27.39.81:/mnt/xfs-drive-gluster/brick
Brick3: 172.27.39.84:/mnt/xfs-drive-gluster/brick
Options Reconfigured:
cluster.self-heal-daemon: enable
cluster.consistent-metadata: off
ssl.dh-param: /etc/ssl/dhparam.pem
ssl.ca-list: /etc/ssl/glusterfs.ca
ssl.own-cert: /etc/ssl/glusterfs.pem
ssl.private-key: /etc/ssl/glusterfs.key
ssl.cipher-list: HIGH:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1:TLSv1.2:!3DES:!RC4:!aNULL:!ADH
ssl.certificate-depth: 2
server.ssl: on
client.ssl: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
features.barrier: disable
features.bitrot: on
features.scrub: Active
auto-delete: enable
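
"Options Reconfigured" above only lists what differs from the packaged
defaults; if the effective defaults matter for comparison, the full option set
for this volume can be listed with the standard CLI:

> gluster volume get gluster all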

2.
Over the past month, since a recovery, glusterd has been growing in RAM on
node 002 every 24h, and roughly once a week on nodes 001 and 003.
Interestingly, since last week the rapid growth of glusterd seems to have
stopped, and glusterfsd may now be the process consuming more RAM.

See the attached graph of RAM usage over the month; the sudden drops (the
near-vertical lines) are where the cron job restarted glusterd.
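
The restart cron itself isn't shown here; as a sketch only, a daily job of
roughly this shape records the RSS and then restarts glusterd (the 03:00
schedule, the log path, and the use of systemd are assumptions, not taken
from this setup):

# hypothetical /etc/cron.d/glusterd-workaround: log glusterd RSS (KiB), then restart it
0 3 * * * root ps -C glusterd -o pid=,rss= >> /var/log/glusterd-rss.log; systemctl restart glusterd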


glusterfs-001:
root      1435  118 41.3 5878344 3379376 ?     Ssl  jui24 31930:51
/usr/sbin/glusterfsd -s 172.27.39.82 --volfile-id
gluster.172.27.39.82.mnt-xfs-drive-gluster-brick -p
/var/run/gluster/vols/gluster/172.27.39.82-mnt-xfs-drive-gluster-brick.pid -S
/var/run/gluster/b9ec53e974e8d080.socket --brick-name
/mnt/xfs-drive-gluster/brick -l
/var/log/glusterfs/bricks/mnt-xfs-drive-gluster-brick.log --xlator-option
*-posix.glusterd-uuid=2cc7ba6f-5478-4b27-b647-0c1527192f5a --process-name brick
--brick-port 49152 --xlator-option gluster-server.listen-port=49152
root     45129  0.2 17.8 1890584 1457448 ?     Ssl  aoû06  17:33
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

glusterfs-002:
root      1458 47.5 50.6 5878492 4141664 ?     Ssl  jui24 12775:43
/usr/sbin/glusterfsd -s 172.27.39.81 --volfile-id
gluster.172.27.39.81.mnt-xfs-drive-gluster-brick -p
/var/run/gluster/vols/gluster/172.27.39.81-mnt-xfs-drive-gluster-brick.pid -S
/var/run/gluster/dcbebdf486b846e2.socket --brick-name
/mnt/xfs-drive-gluster/brick -l
/var/log/glusterfs/bricks/mnt-xfs-drive-gluster-brick.log --xlator-option
*-posix.glusterd-uuid=be4912ac-b0a5-4a02-b8d6-7bccd3e1f807 --process-name brick
--brick-port 49152 --xlator-option gluster-server.listen-port=49152
root     20329  0.0  1.2 506132 99128 ?        Ssl  03:00   0:22
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

glusterfs-003:
root      1496 60.6 42.1 5878776 3443712 ?     Ssl  jui24 16308:37
/usr/sbin/glusterfsd -s 172.27.39.84 --volfile-id
gluster.172.27.39.84.mnt-xfs-drive-gluster-brick -p
/var/run/gluster/vols/gluster/172.27.39.84-mnt-xfs-drive-gluster-brick.pid -S
/var/run/gluster/848c5dbe437c2451.socket --brick-name
/mnt/xfs-drive-gluster/brick -l
/var/log/glusterfs/bricks/mnt-xfs-drive-gluster-brick.log --xlator-option
*-posix.glusterd-uuid=180e8f78-fa85-4cb8-8bbd-b0924a16ba60 --process-name brick
--brick-port 49152 --xlator-option gluster-server.listen-port=49152
root     58242  0.2 17.6 1816852 1440608 ?     Ssl  aoû06  19:08
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
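
In the ps output above, the sixth column is RSS in KiB, so the brick process
on 002 is at 4141664 KiB (~4.0 GiB) and glusterd on 001 at 1457448 KiB
(~1.4 GiB). A crude way to track the growth rate between the cron restarts,
as a sketch (log path assumed):

> while true; do date; ps -C glusterd,glusterfsd -o pid=,comm=,rss=; sleep 3600; done >> /root/gluster-rss.log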

3.
Sending kill -1 to either the glusterd or the glusterfsd PID doesn't create
anything in the /var/run/gluster folder.
Does it create a different dump than the one generated above via
`gluster volume statedump gluster`?
Is there anything I am missing to get 6.4 to dump its state?
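
For reference, the statedump trigger documented upstream is SIGUSR1 rather
than SIGHUP (kill -1); with default settings the dump files land under
/var/run/gluster. A minimal sketch, assuming the default statedump path:

> kill -USR1 $(pidof glusterd)       # glusterd's own state
> kill -USR1 <brick pid>             # a single glusterfsd brick process
> gluster volume statedump gluster   # all brick processes of the volume
> ls -lt /var/run/gluster | head

`gluster volume statedump` only covers the brick processes, so glusterd
itself would need the SIGUSR1 route.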

Thanks!

-- 
You are receiving this mail because:
You are on the CC list for the bug.
