[Gluster-devel] Potential Memory Leak?
Kamil Srot
kamil.srot at nlogy.com
Mon Nov 12 15:29:40 UTC 2007
Hi August,
A few days ago I followed a discussion on IRC. The conclusion was that the
read-ahead xlator leaks memory. I'm not sure whether this is already fixed in
the latest tla, but it looks like that is the problem you're hitting.
Try dropping the read-ahead xlator from your spec files and see whether the
memory footprint stays reasonable.
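For example, something like this (a sketch based on the client spec quoted
below, with only the readahead volume removed so that brick-wb is left as the
topmost, mounted volume):

volume brick
type protocol/client
option transport-type tcp/client
option remote-host 192.168.2.5
option remote-port 6996
option remote-subvolume brick_thr
end-volume

volume brick-wb
type performance/write-behind
subvolumes brick
end-volume

If the footprint stays flat with a spec like this, that points at the
read-ahead xlator.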
Best Regards,
--
Kamil
August R. Wohlt wrote:
> Hi Krishna,
>
> I am also wondering about memory usage. I restart glusterfs every night
> because it grows to 800MB of memory while I copy backups to the mount, and
> this box doesn't have much memory to spare. Is this a typical memory
> footprint? Is there a way to limit how much memory either the client or the
> server will use?
>
> This is a brand new x86_64 CentOS 5.0 box, compiled with
> fuse-2.7.0-glfs5 and glusterfs-1.3.7.
>
> Here's an example from top in the middle of my current backups:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 30880 root 15 0 695m 613m 760 R 12 7.7 86:26.78 glusterfs
>
> The backup is just an rsync of about 5 million files to the mount.
>
> The client spec is a simple one:
>
> volume brick
> type protocol/client
> option transport-type tcp/client # for TCP/IP transport
> option remote-host 192.168.2.5 # IP address of the remote brick
> option remote-port 6996
> option remote-subvolume brick_thr # name of the remote volume
> end-volume
>
> volume brick-wb
> type performance/write-behind
> subvolumes brick
> end-volume
>
> volume readahead
> type performance/read-ahead
> subvolumes brick-wb
> end-volume
>
> It goes to a single, simple server, which also uses a lot of memory:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 22929 root 15 0 452m 242m 764 S 6 12.1 132:02.51 glusterfsd
>
> Its spec file is:
>
> volume brick_posix
> type storage/posix
> option directory /home/3dm/pool/brick
> end-volume
>
> volume brick_locks
> type features/posix-locks
> subvolumes brick_posix
> end-volume
>
> volume brick_thr
> type performance/io-threads
> option thread-count 16
> subvolumes brick_locks
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp/server # For TCP/IP transport
> option bind-address 192.168.2.5 # Default is to listen on all interfaces
> option listen-port 6996
> subvolumes brick_thr
> option auth.ip.brick_thr.allow * # Allow access to "brick_thr" volume
> end-volume
>
> thanks,
> :august
>
> On 10/24/07, Krishna Srinivas <krishna at zresearch.com> wrote:
>
>> Hi Karl,
>>
>> Your glusterfsd config is simple with only 4 translators.
>>
>> Is the problem seen every time you run your script?
>>
>> Can you run the script using a simpler client config file? Just
>> connect the client to a single server (no afr/unify etc. on the
>> client side).
>>
>> Just have the following in your client spec and see if the glusterfsd
>> memory grows:
>>
>> ---
>>
>> volume sxx04
>> type protocol/client
>> option transport-type tcp/client
>> option remote-host sxx04b
>> option remote-subvolume brick
>> end-volume
>>
>> ----
>>
>>
>>
>> On 10/23/07, Karl Bernard <karl at vomba.com> wrote:
>>
>>> Hello Krishna,
>>>
>>> I have 5 servers running the client and 4 servers running the brick
>>> server. In the config I was testing, only 3 of the brick servers are used.
>>>
>>> I have scripts running on the 5 servers that open images of 5k to 20k
>>> and create thumbnails of about 4k for those images. All files are
>>> written into a hashed directory structure.
>>>
>>> After reading and creating a lot of files (1 million, for example), I
>>> can see that the memory usage of glusterfsd has grown substantially.
>>>
>>> Software versions:
>>> glusterfs-1.3.4
>>> fuse-2.7.0-glfs4
>>>
>>> <<-- glusterfs-server.vol -->>
>>> volume brick-posix
>>> type storage/posix
>>> option directory /data/glusterfs/dataspace
>>> end-volume
>>>
>>> volume brick-ns
>>> type storage/posix
>>> option directory /data/glusterfs/namespace
>>> end-volume
>>>
>>> volume brick
>>> type performance/io-threads
>>> option thread-count 2
>>> option cache-size 32MB
>>> subvolumes brick-posix
>>> end-volume
>>>
>>> volume server
>>> type protocol/server
>>> option transport-type tcp/server
>>> subvolumes brick brick-ns
>>> option auth.ip.brick.allow 172.16.93.*
>>> option auth.ip.brick-ns.allow 172.16.93.*
>>> end-volume
>>> <<-- end of glusterfs-server.vol -->>
>>>
>>> <<-- start client.sharedbig.vol -->>
>>> volume sxx01-ns
>>> type protocol/client
>>> option transport-type tcp/client
>>> option remote-host sxx01b
>>> option remote-subvolume brick-ns
>>> end-volume
>>>
>>> volume sxx02-ns
>>> type protocol/client
>>> option transport-type tcp/client
>>> option remote-host sxx02b
>>> option remote-subvolume brick-ns
>>> end-volume
>>>
>>> volume sxx03-ns
>>> type protocol/client
>>> option transport-type tcp/client
>>> option remote-host sxx03b
>>> option remote-subvolume brick-ns
>>> end-volume
>>>
>>> volume sxx04-ns
>>> type protocol/client
>>> option transport-type tcp/client
>>> option remote-host sxx04b
>>> option remote-subvolume brick-ns
>>> end-volume
>>>
>>> volume sxx01
>>> type protocol/client
>>> option transport-type tcp/client
>>> option remote-host sxx01b
>>> option remote-subvolume brick
>>> end-volume
>>>
>>> volume sxx02
>>> type protocol/client
>>> option transport-type tcp/client
>>> option remote-host sxx02b
>>> option remote-subvolume brick
>>> end-volume
>>>
>>>