[Gluster-users] high memory usage of mount

Tamas Papp tompos at martos.bme.hu
Fri Aug 8 07:47:15 UTC 2014


hi Poornima,

The volume size is 25TB but only 11TB is used.
It's mostly reads, with relatively little writing.


There are 6 gluster nodes (distributed); the volume is mounted on each node and
shared via SMB to a couple of netatalk clients. I have this issue only on this
particular node.

Typical file sizes vary between a few bytes and 50MB, but most are under 1MB.
There are a lot of temp files (created, then deleted, and so on).

I have no idea how many files there are, but I will try to find out.
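
A rough count can probably be taken directly on one brick (path taken from the
volume info below); the .glusterfs directory holds gluster's internal hardlinks,
so it is excluded to avoid double counting:

$ find /mnt/brick1/export -path '*/.glusterfs' -prune -o -type f -print | wc -l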


Thanks,
tamas


On 08/08/2014 09:34 AM, Poornima Gurusiddaiah wrote:
> From the statedump, it looks like the iobufs are not leaking any more.
> Inode and dentry have huge hot counts, but that is expected if a large number
> of files are present, and it also depends on the kernel parameter 'VFS cache pressure'.
> I am unable to identify which resource is leaking.
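>
> For reference, the kernel parameter meant here is vm.vfs_cache_pressure; checking it
> and, as an experiment, raising it so dentries/inodes are reclaimed more aggressively
> would look roughly like this:
>
> $ sysctl vm.vfs_cache_pressure           # the kernel default is 100
> $ sysctl -w vm.vfs_cache_pressure=200    # higher values favour reclaiming dentry/inode caches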
>
> Can you provide the workload (data size, number of files, operations) that is leading to the memory leak?
> This will help us reproduce and debug.
>
> Regards,
> Poornima
>
> ----- Original Message -----
> From: "Tamas Papp" <tompos at martos.bme.hu>
> To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>, "Poornima Gurusiddaiah" <pgurusid at redhat.com>
> Cc: Gluster-users at gluster.org
> Sent: Wednesday, August 6, 2014 5:59:15 PM
> Subject: Re: [Gluster-users] high memory usage of mount
>
> Yes, I did.
> I have to do it at least once per day.
>
> Currently:
>
> $ free
>              total       used       free     shared    buffers     cached
> Mem:      16422548   16047536     375012          0        320     256884
> -/+ buffers/cache:   15790332     632216
> Swap:      5859324    3841584    2017740
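>
> To watch just the client process rather than overall memory, something like this can be
> sampled periodically (standard ps, selecting the glusterfs client process by name):
>
> $ ps -C glusterfs -o pid,rss,vsz,etime,args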
>
>
> http://rtfm.co.hu/glusterdump.24405.dump.1407327928.zip
>
> Thanks,
> tamas
>
> On 08/06/2014 02:22 PM, Pranith Kumar Karampuri wrote:
>> You may have to remount the volume so that the already leaked memory
>> is reclaimed by the system. If you still see the leaks, please provide
>> the updated statedumps.
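>>
>> For example, something along these lines (the mount point below is only an
>> illustration, substitute the actual one):
>>
>> # umount /mnt/w-vol
>> # mount -t glusterfs gl0:/w-vol /mnt/w-vol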
>>
>> Pranith
>>
>> On 08/05/2014 07:30 PM, Tamas Papp wrote:
>>> Just an update: the settings below did not help for me.
>>>
>>> Current settings:
>>>
>>> Volume Name: w-vol
>>> Type: Distribute
>>> Volume ID: 89e31546-cc2e-4a27-a448-17befda04726
>>> Status: Started
>>> Number of Bricks: 5
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: gl0:/mnt/brick1/export
>>> Brick2: gl1:/mnt/brick1/export
>>> Brick3: gl2:/mnt/brick1/export
>>> Brick4: gl3:/mnt/brick1/export
>>> Brick5: gl4:/mnt/brick1/export
>>> Options Reconfigured:
>>> nfs.mount-udp: on
>>> nfs.addr-namelookup: off
>>> nfs.ports-insecure: on
>>> nfs.port: 2049
>>> cluster.stripe-coalesce: on
>>> nfs.disable: off
>>> performance.flush-behind: on
>>> performance.io-thread-count: 64
>>> performance.quick-read: off
>>> performance.stat-prefetch: on
>>> performance.io-cache: off
>>> performance.write-behind: on
>>> performance.read-ahead: on
>>> performance.write-behind-window-size: 4MB
>>> performance.cache-refresh-timeout: 1
>>> performance.cache-size: 4GB
>>> network.frame-timeout: 60
>>> performance.cache-max-file-size: 1GB
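>>>
>>> (That is the output of "gluster volume info w-vol"; the options were applied the
>>> usual way, e.g. "gluster volume set w-vol performance.cache-size 4GB".)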
>>>
>>>
>>> Cheers,
>>> tamas
>>>
>>> On 08/04/2014 09:22 AM, Tamas Papp wrote:
>>>> hi Poornima,
>>>>
>>>> I don't really have any advice on how you could reproduce this issue,
>>>> and I don't have a core dump either (the process was killed after the OOM).
>>>>
>>>> I will see what I can do.
>>>>
>>>>
>>>> I have applied the two settings you suggested.
>>>>
>>>>
>>>> Cheers,
>>>> tamas
>>>>
>>>> On 08/04/2014 08:36 AM, Poornima Gurusiddaiah wrote:
>>>>> Hi,
>>>>>
>>>>> From the statedump it is evident that the iobufs are leaking.
>>>>> Also, the hot count of pool-name=w-vol-io-cache:rbthash_entry_t
>>>>> is 10053, which implies the io-cache xlator could be the cause of the leak.
>>>>> From the logs it looks like the quick-read performance xlator is
>>>>> calling iobuf_free with NULL pointers, which implies quick-read could be
>>>>> leaking iobufs as well.
>>>>>
>>>>> As a temporary workaround, could you disable io-cache and/or
>>>>> quick-read and see if the leak still persists?
>>>>>
>>>>> $ gluster volume set w-vol performance.io-cache off
>>>>> $ gluster volume set w-vol performance.quick-read off
>>>>>
>>>>> This may reduce performance to some extent.
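>>>>>
>>>>> You can confirm the change with "gluster volume info w-vol" (both options should
>>>>> then show as off under Options Reconfigured) and re-enable them later with e.g.:
>>>>>
>>>>> $ gluster volume set w-vol performance.io-cache on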
>>>>>
>>>>> For further debugging, could you provide the core dump or steps to
>>>>> reproduce, if available?
>>>>>
>>>>> Regards,
>>>>> Poornima
>>>>>
>>>>> ----- Original Message -----
>>>>> From: "Tamas Papp" <tompos at martos.bme.hu>
>>>>> To: "Poornima Gurusiddaiah" <pgurusid at redhat.com>
>>>>> Cc: Gluster-users at gluster.org
>>>>> Sent: Sunday, August 3, 2014 10:33:17 PM
>>>>> Subject: Re: [Gluster-users] high memory usage of mount
>>>>>
>>>>>
>>>>> On 07/31/2014 09:17 AM, Tamas Papp wrote:
>>>>>> On 07/31/2014 09:02 AM, Poornima Gurusiddaiah wrote:
>>>>>>> Hi,
>>>>>> hi,
>>>>>>
>>>>>>> Can you provide the statedump of the process, it can be obtained as
>>>>>>> follows:
>>>>>>> $ gluster --print-statedumpdir   # create this directory if it doesn't exist
>>>>>>> $ kill -USR1 <pid-of-glusterfs-process>   # generates the state dump
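>>>>>>> The pid can be found with e.g. "pgrep -f glusterfs" or "ps -C glusterfs"; the dump
>>>>>>> is written into the directory printed above, named glusterdump.<pid>.dump.<timestamp>.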
>>>>>> http://rtfm.co.hu/glusterdump.2464.dump.1406790562.zip
>>>>>>
>>>>>>> Also, exporting Gluster via the Samba VFS plugin method is preferred over
>>>>>>> a FUSE mount export. For more details refer to:
>>>>>>> http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
>>>>>>>
>>>>>>>
>>>>>> When I tried it about half a year ago it didn't work properly: clients
>>>>>> lost their mounts, there were access errors, etc.
>>>>>>
>>>>>> But I will give it a try, though it's not included in Ubuntu's Samba
>>>>>> AFAIK.
>>>>>>
>>>>>>
>>>>>> Thank you,
>>>>>> tamas
>>>>>>
>>>>>> P.S. I forgot to mention: I only see this issue on one node. The rest of
>>>>>> the nodes are fine.
>>>>> hi Poornima,
>>>>>
>>>>> Do you have any idea what's going on here?
>>>>>
>>>>> Thanks,
>>>>> tamas


