[Bugs] [Bug 1371544] New: high memory usage on client node

bugzilla at redhat.com bugzilla at redhat.com
Tue Aug 30 12:53:50 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1371544

            Bug ID: 1371544
           Summary: high memory usage on client node
           Product: GlusterFS
           Version: 3.8.3
         Component: fuse
          Keywords: Triaged
          Severity: high
          Priority: medium
          Assignee: bugs at gluster.org
          Reporter: hgowtham at redhat.com
                CC: bugs at gluster.org, khoj at osci.kr, vbellur at redhat.com
        Depends On: 1289442



+++ This bug was initially created as a clone of Bug #1289442 +++

Description of problem:
High memory usage, suspected to be a leak, on a client node running Ubuntu 12.04 LTS.

The client glusterfs daemon uses a very large amount of memory on glusterfs 3.6.3.

Here is the pmap data I collected:

pmap <glusterfs pid> | grep anon

00007f6cd0000000 131072K rw---    [ anon ]
00007f6cd8000000 131072K rw---    [ anon ]
00007f6ce0000000 131072K rw---    [ anon ]
00007f6ce8000000 131072K rw---    [ anon ]
00007f6cf0000000 131072K rw---    [ anon ]
00007f6cf8000000 131072K rw---    [ anon ]
00007f6d00000000 131072K rw---    [ anon ]
00007f6d08000000 131072K rw---    [ anon ]
... 
00007f6d4b385000 3450888K rw---    [ anon ]

There is a large amount of memory used by anon mappings.

I think it's a memory leak.
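As an aside, a minimal sketch for confirming this kind of growth (the pgrep pattern, log path, and 10-minute interval are assumptions; adjust them for the actual mount):

  # Find the glusterfs FUSE client process (adjust the pattern to
  # match your mount point if several clients run on the node).
  PID=$(pgrep -f '/usr/sbin/glusterfs' | head -n 1)

  # Sample resident memory periodically; steady growth under a
  # constant workload suggests a leak rather than normal caching.
  while true; do
      echo "$(date '+%F %T') $(grep VmRSS /proc/$PID/status)"
      sleep 600
  done >> /var/tmp/glusterfs-rss.log &

  # SIGUSR1 asks a gluster process to write a statedump (by default
  # under /var/run/gluster/); it lists per-translator allocation
  # counts and shows where the memory is held.
  kill -USR1 "$PID"

Two statedumps taken some hours apart can be diffed to see which allocation pools keep growing.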

Steps to Reproduce:

None identified.

Actual results:

High anonymous memory usage on the client, as shown in the pmap output above.

Expected results:

Client memory usage stays bounded over time.

Additional info:

I gathered a sosreport, pmap output, /proc/<gluster pid>/status, and lsof output.

--- Additional comment from hojin kim on 2015-12-08 21:10:53 EST ---

I uploaded the sosreport and 3 txt files.
They contain data from the clients.
According to pmap, was01 and was02 have very large [anon] areas (about 3 GB each).

was01 (54 [anon] areas in total):
00007f6d49c04000     76K r-x--  /usr/sbin/glusterfsd
00007f6d49e16000      4K r----  /usr/sbin/glusterfsd
00007f6d49e17000      8K rw---  /usr/sbin/glusterfsd
00007f6d4b33d000    288K rw---    [ anon ]
00007f6d4b385000 3450888K rw---    [ anon ]
00007fff26b32000    132K rw---    [ stack ]
00007fff26bf3000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]

was02:
00007fdef3016000     76K r-x--  /usr/sbin/glusterfsd
00007fdef3228000      4K r----  /usr/sbin/glusterfsd
00007fdef3229000      8K rw---  /usr/sbin/glusterfsd
00007fdef36fb000    288K rw---    [ anon ]
00007fdef3743000 3096552K rw---    [ anon ]
00007fff23fe8000    132K rw---    [ stack ]
00007fff241f8000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]

But on was03 the [anon] areas are much smaller:

00007fa42b6e9000     76K r-x--  /usr/sbin/glusterfsd
00007fa42b8fb000      4K r----  /usr/sbin/glusterfsd
00007fa42b8fc000      8K rw---  /usr/sbin/glusterfsd
00007fa42c5de000    288K rw---    [ anon ]
00007fa42c626000  32804K rw---    [ anon ]


Please check it. Thanks.
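To compare the clients at a glance, the anon regions can be totalled directly from pmap (a sketch; pmap's second column is the mapping size in KB, and awk's numeric coercion drops the trailing K):

  pmap $PID | awk '/anon/ { sum += $2 } END { printf "%d MB anon\n", sum / 1024 }'

Run on each WAS node, this makes the gap between the ~3 GB clients (was01, was02) and was03 obvious.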

--- Additional comment from Vijay Bellur on 2016-01-06 04:16:18 EST ---

I don't see the sosreport attached. Can you please provide more details on the
gluster volume configuration and the nature of I/O operations being performed
on the client? Thanks.

--- Additional comment from hojin kim on 2016-01-13 00:07 EST ---

I will upload 2 files.

The first is sosreport-UK1-PRD-WAS01-20151202110452.tar.xz.
This server hosts the WAS service and is a client of the glusterfs file server.
On UK1-PRD-WAS01, glusterfs uses about 5.2 GB of memory.

--- Additional comment from hojin kim on 2016-01-13 00:12 EST ---

The second, sosreport-UK1-PRD-WAS03-20151202112104.tar.xz, is from a normal WAS system with no memory issue.

It has the same environment as UK1-PRD-WAS01, but its glusterfs memory usage is only about 0.4 GB.

--- Additional comment from hojin kim on 2016-01-13 00:22:37 EST ---

Hi Vijay,

I uploaded 2 client files:

sosreport-UK1-PRD-WAS01-20151202110452.tar.xz --> glusterfs memory usage is high
sosreport-UK1-PRD-WAS03-20151202112104.tar.xz --> glusterfs memory usage is low

The glusterfs client systems are WAS machines, and I/O is normally generated by the WAS (Tomcat) application.


The configuration is as below. This is the server environment.

--------------------------------------------------------------
mount volume

server1: UK1-PRD-FS01    UK2-PRD-FS01  ==> replicated volume0
              |               |
         distributed     distributed
              |               |
server2: UK1-PRD-FS02    UK2-PRD-FS02  ==> replicated volume1
              |
       geo-replication
              |
            ukdr

===============================================
client  

UK1-PRD-WAS01 (attached at UK1-PRD-FS01) --> memory problem (uploaded)
UK1-PRD-WAS02 (attached at UK1-PRD-FS02) --> memory problem 
UK1-PRD-WAS03 (attached at UK1-PRD-FS01) --> no memory problem  (uploaded)
(about 10 machines in total)
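For the volume details Vijay asked about, the standard commands should capture everything (a sketch; run the first two on any FS server node and the last on a client):

  # Volume layout (replica/distribute), options, and brick list.
  gluster volume info

  # Brick and self-heal daemon status.
  gluster volume status

  # How the volume is mounted on the client.
  grep gluster /proc/mounts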

--- Additional comment from hojin kim on 2016-02-16 23:00:43 EST ---

(In reply to Vijay Bellur from comment #2)
> I don't see the sosreport attached. Can you please provide more details on
> the gluster volume configuration and the nature of I/O operations being
> performed on the client? Thanks.

Please review again. We are waiting for your response.


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1289442
[Bug 1289442] high memory usage on client node