[Bugs] [Bug 1476992] inode table lru list leak with glusterfs fuse mount
bugzilla at redhat.com
Thu Oct 26 14:03:42 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1476992
danny.lee at appian.com changed:
           What    |Removed                            |Added
----------------------------------------------------------------------------
                 CC|                                   |danny.lee at appian.com
              Flags|needinfo?(ryan.ding at open-fs.com)|
--- Comment #2 from danny.lee at appian.com ---
Hi Csaba,
We are seeing the same problem, reproducible with the same steps. It also looks
closely related to https://bugzilla.redhat.com/show_bug.cgi?id=1501146.
OS: CentOS Linux 7 (Core)
Kernel: Linux 3.10.0-693.2.2.el7.x86_64
Architecture: x86-64
Gluster Version(s) tried: 3.10.5, 3.12.1, 3.12.2 (using rpm)
Gluster Volume Info Output:
Volume Name: node
Type: Distribute
Volume ID: 1e7f74fe-e0e9-48b9-b80b-f35959f39647
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: xxx.xxx.xxx.xxx:/usr/local/node/local-data/mirrored-data
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.io-thread-count: 64
network.ping-timeout: 0
auth.allow: xxx.xxx.xxx.xxx
Pattern:
We used the smallfile scripts to create the files
(https://github.com/bengland2/smallfile). The command we used was:
./smallfile/smallfile_cli.py --top /usr/local/node/data/mirrored-data/test --threads 16 --file-size 16 --files 10000 --response-times Y
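To watch the client's memory while the files are being created, a rough
sampling loop like the one below can run in another shell (a sketch; the pgrep
pattern is an assumption and has to match your fuse client process):

  # find the fuse client for the mount (pattern is a guess; adjust as needed)
  PID="$(pgrep -of 'glusterfs.*mirrored-data')"
  # print its resident set size (in KB) every 30 seconds
  while true; do ps -o rss= -p "$PID"; sleep 30; done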
The glusterfs client process started at ~20 MB of memory. After we created the
files, it was using ~450 MB. After 10 hours of sitting idle, it seems to have
stabilized around ~400 MB.
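Since this bug is about the inode table lru list, a statedump of the fuse
client should show whether the lru list is what is growing. Roughly, assuming
the default statedump location (treat this as a sketch):

  # SIGUSR1 makes a gluster process write a statedump, by default
  # under /var/run/gluster/
  kill -USR1 "$(pgrep -of 'glusterfs.*mirrored-data')"
  # look for the fuse inode table counters (active_size, lru_size)
  grep -A5 itable /var/run/gluster/glusterdump.*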
Our production sites are 2-node-with-arbiter and 3-node clusters, and they are
hitting the same issue. On the 3-node clusters we work around it with a rolling
restart, but on the 2-node clusters we have to take a full outage, so this has
become a big issue for us.
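For reference, the rolling workaround is essentially a remount of the fuse
client on one node at a time, which restarts the glusterfs client process and
frees its memory (volume name and server are from the volume info above; the
exact mount point is an assumption):

  # on each node in turn, after draining traffic from it:
  umount /usr/local/node/data/mirrored-data
  mount -t glusterfs xxx.xxx.xxx.xxx:/node /usr/local/node/data/mirrored-data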
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.