[Gluster-users] Frequent glusterd restarts needed to avoid NFS performance degradation
Paul Simpson
paul at realisestudio.com
Mon Apr 23 18:24:14 UTC 2012
Just like to add that we sometimes need to restart glusterd on our servers too
- again, on a render farm that heavily hammers our four-server
distributed/replicated setup.
-p
On 23 April 2012 15:38, Brian Cipriano <bcipriano at zerovfx.com> wrote:
> Hi Dan - I've seen this problem too. I agree with everything you've
> described - seems to happen more quickly on more heavily used volumes, and
> a restart fixes it right away. I've also been considering writing a cron job
> to fix this - have you made any progress, anything to report?
>
> I'm running a fairly simple distributed, non-replicated volume across two
> servers. What sort of tasks are you using your gluster for? Ours is for a
> render farm, so we see a very large number of mounts/unmounts as render
> nodes mount various parts of the filesystem. I wonder if this has anything
> to do with it; is your use case anything similar?
>
> - brian
>
>
> On 4/17/12 7:30 PM, Dan Bretherton wrote:
>
>> Dear All-
>> I find that I have to restart glusterd every few days on my servers to
>> stop NFS performance from becoming unbearably slow. When the problem
>> occurs, volumes can take several minutes to mount and there are long delays
>> responding to "ls". Mounting from a different server, i.e. one not
>> normally used for NFS export, results in normal NFS access speeds. This
>> doesn't seem to have anything to do with load because it happens whether or
>> not there is anything running on the compute servers. Even when the system
>> is mostly idle there are often a lot of glusterfsd processes running, and
>> on several of the servers I looked at this evening there is a process
>> called glusterfs using 100% of one CPU. I can't find anything unusual in
>> nfs.log or etc-glusterfs-glusterd.vol.log on the servers affected.
>> Restarting glusterd seems to stop this strange behaviour and make NFS
>> access run smoothly again, but this usually only lasts for a day or two.
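>>
>> For anyone seeing the same thing, a rough sketch of the checks involved and
>> the restart workaround (this assumes the default /var/log/glusterfs log
>> location and a Red Hat-style "service" command - adjust paths for your
>> distribution):
>>
>>   # Look for the Gluster NFS server process (a "glusterfs" process spawned
>>   # by glusterd) and see whether it is pegging a CPU:
>>   ps aux | grep '[g]luster'
>>   top -b -n 1 | head -n 20
>>
>>   # Watch the NFS and glusterd logs for anything unusual:
>>   tail -f /var/log/glusterfs/nfs.log
>>   tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
>>
>>   # The workaround from this thread - restart glusterd on the affected server:
>>   service glusterd restart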
>>
>> This behaviour is not necessarily related to the length of time since
>> glusterd was started, but has more to do with the amount of work the
>> GlusterFS processes on each server have to do. I export each of my 8
>> volumes from a different server, and the NFS performance degradation
>> seems to affect the most heavily used volumes more than the others. I
>> really need to find a solution to this problem; all I can think of doing is
>> setting up a cron job on each server to restart glusterd every day, but I
>> am worried about what side effects that might have. I am using GlusterFS
>> version 3.2.5. All suggestions would be much appreciated.
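>>
>> The kind of cron job I have in mind would be something like the sketch
>> below (purely illustrative - the 03:00 time, file name and log path are
>> arbitrary, and it assumes a Red Hat-style service script):
>>
>>   # /etc/cron.d/glusterd-restart  (hypothetical file)
>>   # Restart glusterd nightly as a stop-gap until the real cause is found.
>>   0 3 * * * root /sbin/service glusterd restart >> /var/log/glusterd-restart.log 2>&1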
>>
>> Regards,
>> Dan.
>>
>
>