[Bugs] [Bug 1165429] New: Gluster Fuse high memory consumption
bugzilla at redhat.com
Tue Nov 18 23:43:21 UTC 2014
https://bugzilla.redhat.com/show_bug.cgi?id=1165429
Bug ID: 1165429
Summary: Gluster Fuse high memory consumption
Product: GlusterFS
Version: 3.5.2
Component: fuse
Severity: high
Assignee: bugs at gluster.org
Reporter: sama.bharat at gmail.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Description of problem:
The Gluster FUSE client is taking up all the memory on the client machines.
Version-Release number of selected component (if applicable):
glusterfs 3.5.2
How reproducible:
When freshly mounted, memory utilization is minimal, but over a period of
roughly 12 hours it climbs to about 98% memory utilization, which is about
10 GB. As a workaround we unmount and remount the gluster volume on the
client. We have also tried upgrading to the latest version on both the
servers and the clients.
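To quantify the growth curve, here is a minimal sampler we could run from cron on the client (a sketch: the single-mount assumption and the `pgrep` pattern are ours, adjust to match your setup):

```shell
# sample_rss: print "<epoch seconds> <VmRSS in kB>" for the given PID,
# defaulting to the oldest glusterfs process on the box. Run it every few
# minutes and append to a log file to capture the memory growth over time.
sample_rss() {
    pid=${1:-$(pgrep -o -x glusterfs)}
    rss_kb=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
    echo "$(date +%s) $rss_kb"
}
```

Appending the output to a file from cron gives a two-column time series that can be attached to the bug.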
Here is our server configuration:
Volume Name: gv0
Type: Replicate
Volume ID: a49f40eb-336b-4e01-9c28-198ae9c658e8
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/data/gv0/brick1/data
Brick2: server2:/data/gv0/brick1/data
Options Reconfigured:
performance.quick-read: off
performance.io-cache: off
[bsama at phednfsect001 ~]$ sudo gluster volume status all mem
Memory status for volume : gv0
----------------------------------------------
Brick : server1:/data/gv0/brick1/data
Mallinfo
--------
Arena : 7950336
Ordblks : 626
Smblks : 0
Hblks : 12
Hblkhd : 16060416
Usmblks : 0
Fsmblks : 0
Uordblks : 7542576
Fordblks : 407760
Keepcost : 87184
Mempool Stats
-------------
Name                                HotCount ColdCount PaddedSizeof   AllocCount MaxAlloc    Misses Max-StdAlloc
----                                -------- --------- ------------   ---------- -------- --------- ------------
gv0-server:fd_t                          157       867          108     40212202      220         0            0
gv0-server:dentry_t                    16365        19           84   1450722424    16384  15715630         2071
gv0-server:inode_t                     16372        12          156   2244317198    16384  98579495         3228
gv0-changelog:changelog_local_t            0        64          108            0        0         0            0
gv0-locks:pl_local_t                       2        30          148   1740575473       16         0            0
gv0-marker:marker_local_t                  0       128          332      1249876        3         0            0
gv0-quota:quota_local_t                    0        64          404            0        0         0            0
gv0-server:rpcsvc_request_t                2       510         2828   1838534433      125         0            0
glusterfs:struct saved_frame               0         8          124            4        2         0            0
glusterfs:struct rpc_req                   0         8          588            4        2         0            0
glusterfs:rpcsvc_request_t                 1         7         2828            3        1         0            0
glusterfs:data_t                         196     16187           52  30204860344     1221         0            0
glusterfs:data_pair_t                    185     16198           68  26819284422     1076         0            0
glusterfs:dict_t                          46      4050          140   4218606001      859         0            0
glusterfs:call_stub_t                      5      1019         3756   1826990513      127         0            0
glusterfs:call_stack_t                     4      1020         1836   1798113692      128         0            0
glusterfs:call_frame_t                    12      4084          172  12518578562      535         0            0
----------------------------------------------
Brick : server2:/data/gv0/brick1/data
Mallinfo
--------
Arena : 8433664
Ordblks : 393
Smblks : 2
Hblks : 12
Hblkhd : 16060416
Usmblks : 0
Fsmblks : 64
Uordblks : 7540080
Fordblks : 893584
Keepcost : 81216
Mempool Stats
-------------
Name                                HotCount ColdCount PaddedSizeof   AllocCount MaxAlloc    Misses Max-StdAlloc
----                                -------- --------- ------------   ---------- -------- --------- ------------
gv0-server:fd_t                          156       868          108     55753104      221         0            0
gv0-server:dentry_t                    16348        36           84   2000635433    16384  20808339         3780
gv0-server:inode_t                     16379         5          156   3122284446    16384 120646155         3788
gv0-changelog:changelog_local_t            0        64          108            0        0         0            0
gv0-locks:pl_local_t                       2        30          148   2407464686       16         0            0
gv0-marker:marker_local_t                  0       128          332         3434        1         0            0
gv0-quota:quota_local_t                    0        64          404            0        0         0            0
gv0-server:rpcsvc_request_t                3       509         2828   2570259495      126         0            0
glusterfs:struct saved_frame               0         8          124            2        2         0            0
glusterfs:struct rpc_req                   0         8          588            2        2         0            0
glusterfs:rpcsvc_request_t                 1         7         2828            5        1         0            0
glusterfs:data_t                         183     16200           52  41564989834     1224         0            0
glusterfs:data_pair_t                    175     16208           68  36881333262     1093         0            0
glusterfs:dict_t                          19      4078          140   5854155160      798         0            0
glusterfs:call_stub_t                      2      1022         3756   2556467197      127         0            0
glusterfs:call_stack_t                     3      1021         1836   2514212548      125         0            0
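In case it helps triage, here is a rough sketch we use to turn the mempool rows above into per-pool resident-byte estimates (HotCount × PaddedSizeof). It assumes each row sits on a single line in the column order shown; the function name is ours:

```shell
# pool_bytes: estimate resident bytes per mempool as HotCount * PaddedSizeof.
# Reads rows in the order "Name HotCount ColdCount PaddedSizeof AllocCount
# MaxAlloc Misses Max-StdAlloc" on stdin, as printed by
# `gluster volume status <vol> mem`, and prints "Name bytes".
# Rows whose 2nd or 4th field is not numeric (e.g. headers) are skipped.
pool_bytes() {
    awk '$2 ~ /^[0-9]+$/ && $4 ~ /^[0-9]+$/ { printf "%s %d\n", $1, $2 * $4 }'
}
```

For example, piping the gv0-server:fd_t row through it yields 157 × 108 = 16956 bytes for that pool.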
Let us know if you need any more stats from the client or server.
Thanks in advance
sam
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
More information about the Bugs mailing list