[Gluster-users] GlusterFS process consumes a lot of memory

Krishna Srinivas krishna at zresearch.com
Thu Dec 18 09:35:50 UTC 2008


glusterfs--mainline--3.0--patch-785 fixed an fd leak issue. It might
be the same problem you are facing. Can you try the version from the
TLA repository and see if the problem persists?

Krishna

On Thu, Dec 18, 2008 at 1:04 PM, Krishna Srinivas <krishna at zresearch.com> wrote:
> Looks like glusterfs is failing to close fds in some situations, leading
> to a huge number of open fds. We will try to reproduce the problem here
> and get back to you.
>
> Krishna
>
> On Thu, Dec 18, 2008 at 11:17 AM, sal poliandro <popsikle at gmail.com> wrote:
>> Are you using dedicated server boxes or are you mounting the
>> filesystems on the gluster servers?
>>
>> I ran into this when my gluster boxes were multitasking and I was
>> trying to do AFR. Also try running without locking and see if your
>> memory usage decreases; I saw that as well.
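>>
>> If it helps, here is a rough, untested sketch of what the server side of
>> the volfile below would look like with the features/posix-locks
>> translator taken out (volume names are kept the same so the client side
>> and the auth option do not need to change):
>>
>> volume vz
>>  type storage/posix
>>  option directory /home/local
>> end-volume
>>
>> volume vz-locks-perf
>>  type performance/io-threads
>>  option thread-count 8
>>  option cache-size 8MB
>>  subvolumes vz
>> end-volume
>>
>> volume server
>>  type protocol/server
>>  option transport-type tcp/server
>>  subvolumes vz-locks-perf
>>  option auth.ip.vz-locks-perf.allow 192.168.*
>> end-volume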
>>
>> On Wed, Dec 17, 2008 at 1:44 PM, Krishna Srinivas <krishna at zresearch.com> wrote:
>>> Барынин,
>>>
>>> Does the memory usage increase till it fails? Can you give us access
>>> to your setup?
>>>
>>> Krishna
>>>
>>> On Wed, Dec 17, 2008 at 1:26 AM, aka_Red_Lion Барынин Константин
>>> <red_lion at inbox.ru> wrote:
>>>> Hello!!!
>>>>
>>>> I am trying to use GlusterFS with OpenVZ, but the glusterfs process's memory usage increases by about 2 MB every minute. How can I fix this?
>>>>
>>>> P.S. Sorry about my bad English.
>>>>
>>>> Cluster information:
>>>> 1) 3 nodes (each node acts as both server and client); configuration:
>>>> ##############
>>>> # local data #
>>>> ##############
>>>>
>>>> volume vz
>>>>  type storage/posix
>>>>  option directory /home/local
>>>> end-volume
>>>>
>>>> volume vz-locks
>>>>  type features/posix-locks
>>>>  subvolumes vz
>>>> end-volume
>>>>
>>>> volume vz-locks-perf
>>>>  type performance/io-threads
>>>>  option thread-count 8
>>>>  option cache-size 8MB
>>>>  subvolumes vz-locks
>>>> end-volume
>>>>
>>>> volume server
>>>>  type protocol/server
>>>>  option transport-type tcp/server
>>>>  subvolumes vz-locks-perf
>>>>  option auth.ip.vz-locks-perf.allow 192.168.*
>>>> end-volume
>>>>
>>>> ###############
>>>> # remote data #
>>>> ###############
>>>>
>>>> ####
>>>> # main
>>>> volume remvz01
>>>>  type protocol/client
>>>>  option transport-type tcp/client
>>>>  option remote-host 192.168.34.2
>>>>  option remote-subvolume vz-locks-perf
>>>>  option transport-timeout 60
>>>> end-volume
>>>>
>>>> ####
>>>> # sv
>>>> volume remvz02
>>>>  type protocol/client
>>>>  option transport-type tcp/client
>>>>  option remote-host 192.168.8.40
>>>>  option remote-subvolume vz-locks-perf
>>>>  option transport-timeout 60
>>>> end-volume
>>>>
>>>> ####
>>>> # main2
>>>> volume remvz03
>>>>  type protocol/client
>>>>  option transport-type tcp/client
>>>>  option remote-host 192.168.34.6
>>>>  option remote-subvolume vz-locks-perf
>>>>  option transport-timeout 60
>>>> end-volume
>>>>
>>>> ####################################
>>>> # AFR - 1 local, the others remote #
>>>> ####################################
>>>> volume afr
>>>>   type cluster/afr
>>>>   option read-subvolume remvz01 # this line depends on the node (example below)
>>>>   subvolumes remvz01 remvz02 remvz03
>>>> end-volume
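>>>>
>>>> For illustration only, the same volume on the 192.168.8.40 node would
>>>> differ just in the read-subvolume line (this assumes each node reads
>>>> from the client volume that points at itself):
>>>>
>>>> volume afr
>>>>   type cluster/afr
>>>>   option read-subvolume remvz02 # node 192.168.8.40
>>>>   subvolumes remvz01 remvz02 remvz03
>>>> end-volume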
>>>>
>>>> 2) GlusterFS version used:
>>>> glusterfs 1.4.0rc3 built on Dec 16 2008 13:39:08
>>>> Repository revision: glusterfs--mainline--3.0--patch-777
>>>>
>>>>
>>
>>
>>
>> --
>> Salvatore "Popsikle" Poliandro
>> Founder - CaffeineLAN.net
>>
>> Wanna help the LAN?
>>
>

