[Gluster-users] high memory usage of mount
tompos at martos.bme.hu
Tue Aug 5 14:00:55 UTC 2014
Just an update, the settings below did not help for me.
Volume Name: w-vol
Volume ID: 89e31546-cc2e-4a27-a448-17befda04726
Number of Bricks: 5
On 08/04/2014 09:22 AM, Tamas Papp wrote:
> hi Poornima,
> I don't really have any advice on how you could reproduce this issue,
> and I don't have a coredump either (the process was killed after the OOM event).
> I will see what I can do.
> I applied the two settings you suggested.
> On 08/04/2014 08:36 AM, Poornima Gurusiddaiah wrote:
>> From the statedump it is evident that the iobufs are leaking.
>> Also, the hot count of the pool-name=w-vol-io-cache:rbthash_entry_t is
>> 10053, which implies the io-cache xlator could be the cause of the leak.
>> From the logs, it looks like the quick-read performance xlator is
>> calling iobuf_free with NULL pointers, which implies quick-read could be
>> leaking iobufs as well.
>> As a temporary solution, could you disable io-cache and/or quick-read
>> and see if the leak still persists?
>> $ gluster volume set w-vol performance.io-cache off
>> $ gluster volume set w-vol performance.quick-read off
>> This may reduce performance to some extent.
>> For further debugging, could you provide the core dump or steps to
>> reproduce, if available?
>> ----- Original Message -----
>> From: "Tamas Papp" <tompos at martos.bme.hu>
>> To: "Poornima Gurusiddaiah" <pgurusid at redhat.com>
>> Cc: Gluster-users at gluster.org
>> Sent: Sunday, August 3, 2014 10:33:17 PM
>> Subject: Re: [Gluster-users] high memory usage of mount
>> On 07/31/2014 09:17 AM, Tamas Papp wrote:
>>> On 07/31/2014 09:02 AM, Poornima Gurusiddaiah wrote:
>>>> Can you provide the statedump of the process? It can be obtained as:
>>>> $ gluster --print-statedumpdir #create this directory if it doesn't exist
>>>> $ kill -USR1 <pid-of-glusterfs-process> #generates the state dump
>>>> Also, exporting Gluster via the Samba-VFS-plugin method is preferred
>>>> over a FUSE mount export. For more details refer to:
>>> When I tried it about half a year ago it didn't work properly: clients
>>> lost mounts, there were access errors, etc.
>>> But I will give it a try, though it's not included in Ubuntu's Samba.
>>> Thank you,
>>> ps. I forgot to mention: I can see this issue on only one node. The
>>> rest of the nodes are fine.
>> hi Poornima,
>> Do you have idea, what's going on here?