[Bugs] [Bug 1369364] New: Huge memory usage of FUSE client

bugzilla at redhat.com
Tue Aug 23 08:29:47 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1369364

            Bug ID: 1369364
           Summary: Huge memory usage of FUSE client
           Product: GlusterFS
           Version: 3.7.14
         Component: fuse
          Severity: medium
          Assignee: bugs at gluster.org
          Reporter: oleksandr at natalenko.name
                CC: bugs at gluster.org



Description of problem:

The FUSE client consumes a lot of memory on a volume with lots of small files
(mailbox storage for Dovecot).

Here are the volume options (see the sketch after the block for how such
options are typically applied):

===
Type: Distributed-Replicate
...
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
...
network.ping-timeout: 10
features.cache-invalidation: on
performance.cache-size: 16777216
performance.cache-max-file-size: 1048576
performance.io-thread-count: 4
performance.write-behind: on
performance.flush-behind: on
performance.read-ahead: on
performance.quick-read: on
performance.stat-prefetch: on
performance.write-behind-window-size: 2097152
storage.linux-aio: on
performance.client-io-threads: off
server.event-threads: 8
network.inode-lru-limit: 4096
client.event-threads: 4
cluster.readdir-optimize: on
cluster.lookup-optimize: on
performance.readdir-ahead: on
nfs.disable: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off
cluster.entry-self-heal: off
===
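
For reference, a sketch of applying options like the ones above with the
gluster CLI. The volume name "mail_boxes" is an assumption taken from the
volfile-id shown further below, not confirmed in this report:

===
# Assumed volume name "mail_boxes" (from the volfile-id); adjust as needed.
gluster volume set mail_boxes network.inode-lru-limit 4096
gluster volume set mail_boxes performance.cache-size 16777216
gluster volume info mail_boxes   # prints the resulting option list
===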

Here is the memory usage before drop_caches:

===
root      1049  0.0 40.1 1850040 1170912 ?     S<sl Aug19   0:28
/usr/sbin/glusterfs --fopen-keep-cache --direct-io-mode=disable
--volfile-server=glusterfs.server.com --volfile-id=mail_boxes
/var/spool/mail/virtual
===

And after drop_caches:

===
root      1049  0.0 40.1 1850040 1170912 ?     S<sl Aug19   0:28
/usr/sbin/glusterfs --fopen-keep-cache --direct-io-mode=disable
--volfile-server=glusterfs.server.com --volfile-id=mail_boxes
/var/spool/mail/virtual
===

Nothing changes after drop_caches.
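
For reference, a sketch of the check, assuming caches were dropped via the
standard /proc interface and RSS was read back with ps:

===
sync
echo 3 > /proc/sys/vm/drop_caches   # drop pagecache, dentries and inodes
ps -o rss= -p 1049                  # RSS of the glusterfs client, in KiB
===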

The pmap output for PID 1049 is attached. I see lots of suspicious entries like
this one:

===
00007fee74000000  65536K rw---   [ anon ]
===

Are those related to the I/O translator? What else could consume 64M at once?
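
A quick way to count such 64M anonymous regions in the attached pmap output
(assuming the default pmap output format shown above):

===
pmap 1049 | grep -F '[ anon ]' | grep -c 65536K
===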

Also attaching statedumps taken before and after drop_caches (there is almost
no difference between them).
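
For reference, a sketch of how client statedumps can be generated, assuming the
usual SIGUSR1 mechanism and the default dump directory:

===
kill -USR1 1049                                # glusterfs writes a statedump
ls /var/run/gluster/glusterdump.1049.dump.*    # default location (assumed)
===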

Version-Release number of selected component (if applicable):

GlusterFS v3.7.14 plus the following patches (all of them are already merged
for the 3.7.15 release):

===
Aravinda VK (1):
      packaging: Remove ".py" extension from symlink target

Atin Mukherjee (1):
      rpc : build_prog_details should iterate program list inside critical
section

Jiffin Tony Thottan (2):
      gfapi : Avoid double freeing of dict in glfs_*_*getxattr
      xlator/trash : append '/' at the end in trash_notify_lookup_cbk

Raghavendra G (2):
      libglusterfs/client_t: Dump the 0th client too
      storage/posix: fix inode leaks

Soumya Koduri (2):
      glfs/upcall: entries should be removed under mutex lock
      gfapi/upcall: Fix a ref leak

Susant Palai (1):
      posix: fix posix_fgetxattr to return the correct error
===

How reproducible:

Always.

Steps to Reproduce:
1. mount a volume with lots of small files (mail boxes), as sketched below;
2. use them;
3. ...
4. PROFIT!
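
For reference, the client invocation used here (taken verbatim from the ps
output above):

===
/usr/sbin/glusterfs --fopen-keep-cache --direct-io-mode=disable \
    --volfile-server=glusterfs.server.com --volfile-id=mail_boxes \
    /var/spool/mail/virtual
===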

Actual results:

Client memory leaks.

Expected results:

Client memory does not leak :).

Additional info:

Feel free to ask me for additional info.
