[Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

Oleksandr Natalenko oleksandr at natalenko.name
Mon Jan 25 00:46:32 UTC 2016


Also, I've repeated the same "find" test, but with the glusterfs process 
launched under valgrind. Here is the valgrind output:

https://gist.github.com/097afb01ebb2c5e9e78d
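
For anyone wanting to reproduce this, a minimal sketch of such a launch (the server name, volume name, and mount point below are placeholders, not the actual setup):

```shell
#!/bin/sh
# Hedged sketch: how the FUSE client can be launched under valgrind for
# a test like the one above. Server, volume, and mount point are
# placeholders; -N keeps glusterfs in the foreground so valgrind can
# follow it to exit and print its leak summary. The command is printed
# rather than executed here so the sketch is safe to run anywhere.
CMD="valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.%p.log \
glusterfs -N --volfile-server=server1 --volfile-id=myvolume /mnt/myvolume"
echo "$CMD"
```

With the mount in the foreground, unmounting the volume lets valgrind print the leak report to the log file.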

On Sunday, 24 January 2016, 09:33:00 EET Mathieu Chateau wrote:
> Thanks for all your tests and times, it looks promising :)
> 
> 
> Regards,
> Mathieu CHATEAU
> http://www.lotp.fr
> 
> 2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko <oleksandr at natalenko.name>:
> > OK, now I'm re-performing tests with rsync + GlusterFS v3.7.6 + the
> > following patches:
> > 
> > ===
> > 
> > Kaleb S KEITHLEY (1):
> >       fuse: use-after-free fix in fuse-bridge, revisited
> > 
> > Pranith Kumar K (1):
> >       mount/fuse: Fix use-after-free crash
> > 
> > Soumya Koduri (3):
> >       gfapi: Fix inode nlookup counts
> >       inode: Retire the inodes from the lru list in inode_table_destroy
> >       upcall: free the xdr* allocations
> > 
> > ===
> > 
> > I run rsync from one GlusterFS volume to another. While memory usage
> > started under 100 MiB, it stalled at around 600 MiB for the source volume
> > and does not grow further. The target volume is at ~730 MiB, so I'm going
> > to do several rsync rounds to see if it grows more (with no patches, bare
> > 3.7.6 could consume more than 20 GiB).
> > 
> > No "kernel notifier loop terminated" message so far for both volumes.
> > 
> > Will report more in several days. I hope the current patches will be
> > incorporated into 3.7.7.
> > 
> > On Friday, 22 January 2016, 12:53:36 EET Kaleb S. KEITHLEY wrote:
> > > On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote:
> > > > On Friday, 22 January 2016, 12:32:01 EET Kaleb S. KEITHLEY wrote:
> > > >> I presume by this you mean you're not seeing the "kernel notifier
> > > >> loop terminated" error in your logs.
> > > > 
> > > > Correct, but only with simple traversing. Have to test under rsync.
> > > 
> > > Without the patch I'd get "kernel notifier loop terminated" within a few
> > > minutes of starting I/O.  With the patch I haven't seen it in 24 hours
> > > of beating on it.
> > > 
> > > >> Hmmm.  My system is not leaking. Last 24 hours the RSZ and VSZ are
> > > >> stable:
> > > >> http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/client.out
> > > > 
> > > > What ops do you perform on the mounted volume? Read, write, stat? Is
> > > > that 3.7.6 + patches?
> > > 
> > > I'm running an internally developed I/O load generator written by a guy
> > > on our perf team.
> > > 
> > > It does create, write, read, rename, stat, delete, and more.
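
The RSZ/RSS figures discussed in the thread can be sampled with something like the following (a sketch; the glusterfs client PID must be supplied, and it defaults to the current shell only so the snippet is runnable on its own):

```shell
#!/bin/sh
# Sketch: sample a process's resident set size (RSS) to watch for
# leak-style growth between rsync rounds. Pass the glusterfs FUSE
# client's PID as $1; it defaults to the current shell only so this
# snippet runs standalone.
PID="${1:-$$}"
RSS_KIB=$(ps -o rss= -p "$PID" | tr -d ' ')
echo "PID $PID RSS: $((RSS_KIB / 1024)) MiB"
```

Run it in a loop (or from cron) between rsync rounds and compare successive values; steady growth across identical workloads is the leak signature reported above.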
