[Gluster-users] Abnormally large number of Open files with Glusterfs

Chandranshu . chandranshu at gmail.com
Thu Dec 18 08:59:18 UTC 2008


Hi,

We didn't have any particular reason to choose this timeout. We had set it
to larger values initially and then decreased it in steps. Timeouts started
happening once we reached 1.5s, so we doubled it to 3s. Since then we have
not encountered any timeouts, though the system did become slow at times.
Are there any heuristics for estimating a good timeout setting?
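
For anyone following along, the knob in question is just the transport-timeout
option in each protocol/client volume. The value below is only an illustrative
placeholder, not a tested recommendation:

volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.4.53
  option remote-subvolume brick
  option transport-timeout 30   # illustrative; pick a value well above observed response times
end-volume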

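In case it helps anyone reproduce the descriptor counts mentioned in my first
mail below, a rough lsof one-liner along these lines groups a process's open
descriptors by target path (the PID 12345 is a placeholder for the glusterfs
client process):

lsof -p 12345 | wc -l                                                     # total open descriptors
lsof -p 12345 | awk '{print $NF}' | sort | uniq -c | sort -rn | head -20  # most frequently opened paths
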
Thanks and regards
Chandranshu

On Wed, Dec 17, 2008 at 11:48 PM, Raghavendra G <raghavendra.hg at gmail.com> wrote:

> Hi,
>
> As a side note, a 3s transport-timeout seems too small. Is there any
> particular reason for using such a small timeout?
>
> regards
>
>
> On Wed, Dec 17, 2008 at 4:34 PM, Chandranshu . <chandranshu at gmail.com> wrote:
>
>> I am including the server and client configurations inline in the mail.
>> The log files are quite huge and we are looking into them ourselves. If you
>> are looking for something in particular, I can attach the specific logs.
>>
>> Regards
>> Chandranshu
>>
>> ########################
>> ##       glusterfs-server.vol                 ##
>> ########################
>> volume brick
>>   type storage/posix
>>   option directory /var/glusterfs
>> end-volume
>>
>> volume server
>>   type protocol/server
>>   subvolumes brick
>>   option transport-type tcp/server     # For TCP/IP transport
>>   option auth.ip.brick.allow *       # allow any client IP to connect to volume 'brick'
>> end-volume
>> ########## Server Vol ends ####
>>
>> ########################
>> ##      glusterfs-client.vol                   ##
>> ########################
>> volume client1
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 192.168.4.53
>>   option remote-subvolume brick
>>   option transport-timeout 3         # timeout is in seconds
>> end-volume
>>
>> volume client2
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 192.168.4.55
>>   option remote-subvolume brick
>>   option transport-timeout 3         # timeout is in seconds
>> end-volume
>>
>> volume afr
>>   type cluster/afr
>>   subvolumes client1 client2
>> # option replicate *:2
>> end-volume
>> ####### Glusterfs Client ends #####
>>
>>
>> On Wed, Dec 17, 2008 at 5:49 PM, Basavanagowda Kanur <
>> basavanagowda at gmail.com> wrote:
>>
>>> Chandranshu,
>>>   Can you share the volume file for the server which is keeping the files
>>> open?
>>>   It would also help us debug if you could share the log of the
>>> corresponding server.
>>>
>>> --
>>> gowda
>>>
>>>
>>> On Wed, Dec 17, 2008 at 4:39 PM, Chandranshu . <chandranshu at gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am using glusterfs to provide storage for a web server. The server in
>>>> question has been running for around 20 days now without requiring any
>>>> intervention. However, performance had been degrading over the last week
>>>> and it became completely unresponsive today. The panic button was pressed
>>>> when we realized that not only glusterfs but all other processes on that
>>>> machine were unresponsive, and trying to restart any service resulted in
>>>> the error "Too many open files".
>>>> Doing an lsof showed over 115000 open files, of which glusterfs was
>>>> responsible for around 112000. Checking the list of files opened by
>>>> glusterfs, I was surprised to see that glusterfs had opened the same
>>>> files again and again. More surprisingly, most of these file descriptors
>>>> were for directories rather than regular files. Is this fallout from some
>>>> internal book-keeping, or does glusterfs always open a new file
>>>> descriptor irrespective of whether one already exists for the same path?
>>>>
>>>> Also, can someone suggest a good interval after which to restart
>>>> glusterfsd, or is there already a mechanism in place to ask glusterfsd
>>>> to release the open file descriptors?
>>>>
>>>> Thanks and regards
>>>> Chandranshu
>>>>
>>>>
>>>
>>>
>>> --
>>> hard work often pays off after time, but laziness always pays off now
>>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>
>
> --
> Raghavendra G
>
>