[Gluster-users] high memory usage of mount
pgurusid at redhat.com
Mon Aug 4 06:36:01 UTC 2014
From the statedump it is evident that iobufs are leaking.
Also, the hot count of pool-name=w-vol-io-cache:rbthash_entry_t is 10053, which implies the io-cache xlator could be the source of the leak.
From the logs it looks like the quick-read performance xlator is calling iobuf_free with NULL pointers, which implies quick-read could be leaking iobufs as well.
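As a quick way to spot suspicious pools like the one above, the mem-pool sections of a statedump can be scanned for unusually high hot counts. The fragment below is a hypothetical excerpt in the statedump's mem-pool layout (pool-name/hot-count fields), using the w-vol figures from this report:

```shell
# Sketch: flag statedump mem-pools with a high hot-count (possible leak).
# The sample file is a hypothetical fragment in the statedump format.
cat > /tmp/statedump.sample <<'EOF'
[mempool]
pool-name=w-vol-io-cache:rbthash_entry_t
hot-count=10053
cold-count=0
EOF
# Print pool names whose hot-count exceeds a threshold.
awk -F= '/^pool-name=/ {pool=$2}
         /^hot-count=/ && $2+0 > 1000 {print pool, $2}' /tmp/statedump.sample
# -> w-vol-io-cache:rbthash_entry_t 10053
```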
As a temporary workaround, could you disable io-cache and/or quick-read and see if the leak persists?
$ gluster volume set w-vol performance.io-cache off
$ gluster volume set w-vol performance.quick-read off
Note that this may reduce performance to some extent.
For further debugging, could you provide a core dump, or steps to reproduce the issue if available?
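One way to obtain such a core without killing the mount is gdb's gcore. A minimal sketch, assuming gcore is installed and matching the client process by its volume name (the pattern and output path here are illustrative):

```shell
# Sketch: capture a core of the glusterfs client process in place.
# Assumes gdb's gcore is available; pattern and paths are illustrative.
PID=$(pgrep -f 'glusterfs.*w-vol' | head -n 1)
if [ -z "$PID" ]; then
    echo "no glusterfs client found"
else
    ulimit -c unlimited                   # permit large core files
    gcore -o /tmp/glusterfs-w-vol "$PID"  # writes /tmp/glusterfs-w-vol.$PID
fi
```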
----- Original Message -----
From: "Tamas Papp" <tompos at martos.bme.hu>
To: "Poornima Gurusiddaiah" <pgurusid at redhat.com>
Cc: Gluster-users at gluster.org
Sent: Sunday, August 3, 2014 10:33:17 PM
Subject: Re: [Gluster-users] high memory usage of mount
On 07/31/2014 09:17 AM, Tamas Papp wrote:
> On 07/31/2014 09:02 AM, Poornima Gurusiddaiah wrote:
>> Can you provide the statedump of the process, it can be obtained as
>> $ gluster --print-statedumpdir  # create this directory if it doesn't exist
>> $ kill -USR1 <pid-of-glusterfs-process>  # generates the state dump
>> Also, exporting Gluster via the Samba-VFS-plugin method is preferred over a
>> FUSE mount export. For more details refer to:
> When I tried it about half a year ago it didn't work properly. Clients
> lost mounts, there were access errors, etc.
> But I will give it a try, though it's not included in Ubuntu's samba
> Thank you,
> ps. I forgot to mention, I can see this issue on only one node. The rest
> of the nodes are fine.
Do you have any idea what's going on here?