[Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak

Giuseppe Ragusa giuseppe.ragusa at hotmail.com
Thu Mar 27 23:27:07 UTC 2014


Hi,

> Date: Thu, 27 Mar 2014 09:26:10 +0530
> From: vbellur at redhat.com
> To: giuseppe.ragusa at hotmail.com; gluster-devel at nongnu.org
> Subject: Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak
> 
> On 03/27/2014 03:29 AM, Giuseppe Ragusa wrote:
> > Hi all,
> > I'm running glusterfs-3.5.20140324.4465475-1.autobuild (from the
> > published nightly RPM packages) on CentOS 6.5 as the storage solution
> > for oVirt 3.4.0 (latest snapshot too) on 2 physical nodes (12 GiB
> > RAM) with self-hosted-engine.
> >
> > I suppose this should be a good "selling point" for Gluster/oVirt, and
> > I have solved almost all my oVirt problems, but one remains: the
> > Gluster-provided NFS server (serving as a storage domain for the oVirt
> > self-hosted-engine) grows from reboot to about 8 GiB of RAM usage in
> > about one day of no actual usage (only the oVirt Engine VM is running
> > on one node, with no other operations done on it or the whole
> > cluster). I even had it die before, when put under cgroup memory
> > restrictions.
> >
> > I have seen similar reports on the users and devel mailing lists, and
> > I'm wondering how I can help diagnose this, and/or whether it would be
> > better to rely on the latest 3.4.x Gluster (but it seems that the
> > stable line has had its share of memleaks too...).
> >
> 
> Can you please check if turning off drc through:
> 
> volume set <volname> nfs.drc off
> 
> helps?
> 
> -Vijay
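
Spelled out on my setup, that would be (a sketch; "engine" stands in
for the actual volume name):

  # Disable the NFS Duplicate Request Cache for the volume
  gluster volume set engine nfs.drc off

  # Check that it appears under "Options Reconfigured"
  gluster volume info engine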

I'm reinstalling just now to start from scratch with clean logs, configuration, etc.
I will report after one day of activity, but from the old system I can already confirm that the logs were full of entries like:

0-rpc-service: DRC failed to detect duplicates


as in BZ#1008301.
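
In the meantime, I'll track the NFS server's memory growth on the fresh
install with something like this (a rough sketch; the pid file path and
the "engine" volume name are assumptions for a stock install):

  VOLNAME=engine
  PIDFILE=/var/lib/glusterd/nfs/run/nfs.pid

  while true; do
      # Resident set size (KiB) of the Gluster NFS server process
      printf '%s %s KiB\n' "$(date -u +%FT%TZ)" \
          "$(ps -o rss= -p "$(cat "$PIDFILE")")" >> /tmp/gluster-nfs-rss.log
      # Statedump of the NFS server (memory pools etc.); dumps land
      # under /var/run/gluster by default
      gluster volume statedump "$VOLNAME" nfs
      sleep 3600
  done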
Many thanks for your suggestion.

Regards,
Giuseppe
