[Bugs] [Bug 1657743] New: Very high memory usage (25GB) on Gluster FUSE mountpoint

bugzilla at redhat.com
Mon Dec 10 11:07:49 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1657743

            Bug ID: 1657743
           Summary: Very high memory usage (25GB) on Gluster FUSE
                    mountpoint
           Product: GlusterFS
           Version: 3.12
            Status: NEW
         Component: fuse
          Assignee: bugs at gluster.org
          Reporter: ryan at magenta.tv
                CC: bugs at gluster.org
  Target Milestone: ---
   External Bug ID: Samba Project 13694
    Classification: Community



Description of problem:
Very high memory usage (25GB) on a Gluster FUSE mountpoint.
The FUSE client process has been running for around 60 hours.
We're seeing this issue on multiple nodes and on multiple clusters.

Previously we had been using the glusterfs VFS module for Samba; however, due to
memory issues we moved to FUSE mounts. More about the VFS issues can be found here:
- https://bugzilla.samba.org/show_bug.cgi?id=13694
- https://access.redhat.com/solutions/2969381

Version-Release number of selected component (if applicable):
3.12.14

How reproducible:
Occurs very frequently.
The application using the cluster is a DB-based video file ingest and transcoding
system with multiple worker nodes.

Steps to Reproduce:
1. Mount the Gluster volume via a FUSE mountpoint
2. Share the FUSE mountpoint via SMB (example setup below)
3. Wait for memory usage to rise steadily
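
For reference, the mount and the SMB share are set up roughly as follows (the
mountpoint path and share name here are illustrative, not the exact production
values):

# FUSE-mount the volume on the node running Samba
mount -t glusterfs node-wip01:/mcv01 /mnt/mcv01

# smb.conf share pointing at the FUSE mountpoint (no vfs_glusterfs in use)
[media]
    path = /mnt/mcv01
    read only = no
    browseable = yes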

Actual results:
The process slowly consumes more memory until all system memory is used, at which
point dmesg reports that the thread was killed due to high memory usage.
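
The growth is visible on the glusterfs FUSE client process itself; for example
(commands shown for illustration):

ps -C glusterfs -o pid,rss,vsz,cmd         # resident/virtual memory of the FUSE client
dmesg | grep -i -e "out of memory" -e oom  # OOM-killer messages once memory is exhausted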

Expected results:
Memory usage remains consistent and does not consume all system memory.


Additional info:
Statedumps for all bricks show around 300-400MB of usage per brick process.
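
For reference, the statedumps can be captured as follows (dumps are written under
/var/run/gluster by default):

# statedump of every brick process in the volume
gluster volume statedump mcv01

# statedump of the FUSE client: send SIGUSR1 to the glusterfs mount process
# (replace <PID> with the PID of the mount process, e.g. from "ps -C glusterfs")
kill -USR1 <PID>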

Gluster volume info output:
Volume Name: mcv01
Type: Distribute
Volume ID: aa451513-0d2e-4216-97c2-966dc6ca8b1d
Status: Started
Snapshot Count: 0
Number of Bricks: 15
Transport-type: tcp
Bricks:
Brick1: node-wip01:/mnt/h1a/data
Brick2: node-wip02:/mnt/h1a/data
Brick3: node-wip03:/mnt/h1a/data
Brick4: node-wip01:/mnt/h2a/data
Brick5: node-wip02:/mnt/h2a/data
Brick6: node-wip03:/mnt/h2a/data
Brick7: node-wip01:/mnt/h3a/data
Brick8: node-wip02:/mnt/h3a/data
Brick9: node-wip03:/mnt/h3a/data
Brick10: node-wip01:/mnt/h4a/data
Brick11: node-wip02:/mnt/h4a/data
Brick12: node-wip03:/mnt/h4a/data
Brick13: node-wip01:/mnt/h5a/data
Brick14: node-wip02:/mnt/h5a/data
Brick15: node-wip03:/mnt/h5a/data
Options Reconfigured:
performance.parallel-readdir: on
performance.nl-cache: on
performance.nl-cache-timeout: 600
cluster.lookup-optimize: off
performance.client-io-threads: on
client.event-threads: 4
server.event-threads: 4
storage.batch-fsync-delay-usec: 0
performance.write-behind-window-size: 1MB
performance.md-cache-timeout: 600
performance.cache-samba-metadata: on
performance.cache-invalidation: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.stat-prefetch: on
performance.cache-size: 100MB
performance.io-thread-count: 32
server.allow-insecure: on
transport.address-family: inet
nfs.disable: on
