[Bugs] [Bug 1501146] FUSE client Memory usage issue
bugzilla at redhat.com
Thu Oct 12 07:21:28 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1501146
--- Comment #2 from Josh Coyle <joshua.coyle at probax.io> ---
Additional info, following the guidelines from the Gluster docs.
GlusterFS Cluster Information:
Number of volumes: 1
Volume Names: gvAA01
Volume on which the particular issue is seen [ if applicable ]: gvAA01
Type of volumes: Distributed Replicated
Volume options if available:
Options Reconfigured:
cluster.data-self-heal: off
cluster.lookup-unhashed: auto
cluster.lookup-optimize: on
cluster.self-heal-daemon: enable
client.bind-insecure: on
server.allow-insecure: on
nfs.disable: off
transport.address-family: inet
cluster.favorite-child-policy: size
Output of gluster volume info
Volume Name: gvAA01
Type: Distributed-Replicate
Volume ID: ca4ece2c-13fe-414b-856c-2878196d6118
Status: Started
Snapshot Count: 0
Number of Bricks: 5 x (2 + 1) = 15
Transport-type: tcp
Bricks:
Brick1: PB-WA-AA-01-B:/brick1/gvAA01/brick
Brick2: PB-WA-AA-02-B:/brick1/gvAA01/brick
Brick3: PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick1 (arbiter)
Brick4: PB-WA-AA-01-B:/brick2/gvAA01/brick
Brick5: PB-WA-AA-02-B:/brick2/gvAA01/brick
Brick6: PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick2 (arbiter)
Brick7: PB-WA-AA-01-B:/brick3/gvAA01/brick
Brick8: PB-WA-AA-02-B:/brick3/gvAA01/brick
Brick9: PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick3 (arbiter)
Brick10: PB-WA-AA-01-B:/brick4/gvAA01/brick
Brick11: PB-WA-AA-02-B:/brick4/gvAA01/brick
Brick12: PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick4 (arbiter)
Brick13: PB-WA-AA-01-B:/brick5/gvAA01/brick
Brick14: PB-WA-AA-02-B:/brick5/gvAA01/brick
Brick15: PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick5 (arbiter)
Options Reconfigured:
cluster.data-self-heal: off
cluster.lookup-unhashed: auto
cluster.lookup-optimize: on
cluster.self-heal-daemon: enable
client.bind-insecure: on
server.allow-insecure: on
nfs.disable: off
transport.address-family: inet
cluster.favorite-child-policy: size
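For reference, the non-default options listed above would have been applied with "gluster volume set" against this volume. The commands below are a sketch reconstructed from the option list in this report, not a transcript of what was actually run:

gluster volume set gvAA01 cluster.data-self-heal off
gluster volume set gvAA01 cluster.lookup-unhashed auto
gluster volume set gvAA01 cluster.lookup-optimize on
gluster volume set gvAA01 cluster.self-heal-daemon enable
gluster volume set gvAA01 client.bind-insecure on
gluster volume set gvAA01 server.allow-insecure on
gluster volume set gvAA01 nfs.disable off
gluster volume set gvAA01 transport.address-family inet
gluster volume set gvAA01 cluster.favorite-child-policy size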
Output of gluster volume status
root@PB-WA-AA-00-A:/# gluster volume status
Status of volume: gvAA01
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick PB-WA-AA-01-B:/brick1/gvAA01/brick        49152     0          Y       10547
Brick PB-WA-AA-02-B:/brick1/gvAA01/brick        49152     0          Y       10380
Brick PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick1  49152     0          Y       16770
Brick PB-WA-AA-01-B:/brick2/gvAA01/brick        49153     0          Y       10554
Brick PB-WA-AA-02-B:/brick2/gvAA01/brick        49153     0          Y       10388
Brick PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick2  49153     0          Y       16789
Brick PB-WA-AA-01-B:/brick3/gvAA01/brick        49154     0          Y       10565
Brick PB-WA-AA-02-B:/brick3/gvAA01/brick        49154     0          Y       10396
Brick PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick3  49154     0          Y       20685
Brick PB-WA-AA-01-B:/brick4/gvAA01/brick        49155     0          Y       10571
Brick PB-WA-AA-02-B:/brick4/gvAA01/brick        49155     0          Y       10404
Brick PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick4  49155     0          Y       14312
Brick PB-WA-AA-01-B:/brick5/gvAA01/brick        49156     0          Y       990
Brick PB-WA-AA-02-B:/brick5/gvAA01/brick        49156     0          Y       14869
Brick PB-WA-AA-00-A:/arbiterAA01/gvAA01/brick5  49156     0          Y       19462
NFS Server on localhost                         2049      0          Y       2950
Self-heal Daemon on localhost                   N/A       N/A        Y       2959
NFS Server on PB-WA-AA-01-B                     2049      0          Y       23815
Self-heal Daemon on PB-WA-AA-01-B               N/A       N/A        Y       23824
NFS Server on PB-WA-AA-02-B                     2049      0          Y       14889
Self-heal Daemon on PB-WA-AA-02-B               N/A       N/A        Y       14898
Task Status of Volume gvAA01
------------------------------------------------------------------------------
Task : Rebalance
ID : 5930cdcd-bb76-4d32-aeca-c41aea8f832d
Status : in progress
Client Information
OS Type: Ubuntu Linux
Mount type: gluster FUSE client
OS Version: 16.04.3
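In case it helps, this is roughly how the client-side memory figures can be gathered on the Ubuntu FUSE client. The mount point /mnt/gvAA01 and the server used in the mount command are examples only, not values taken from this setup:

# mount the volume via the FUSE client (server and mount point are assumptions)
mount -t glusterfs PB-WA-AA-01-B:/gvAA01 /mnt/gvAA01

# track resident memory of the glusterfs client process
pgrep -af glusterfs
grep VmRSS /proc/<glusterfs-pid>/status

# SIGUSR1 makes the client write a statedump (by default under /var/run/gluster),
# which breaks the memory usage down per translator and mempool
kill -USR1 <glusterfs-pid>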