[Gluster-users] Gfapi memleaks, all versions
Pranith Kumar Karampuri
pkarampu at redhat.com
Thu Oct 27 06:53:00 UTC 2016
Prasanna changed the qemu code to reuse the glfs object when adding multiple
disks from the same volume, using refcounting. That brought memory usage down
from 2GB to 200MB in the case he targeted. Wondering if the same can be
done for this case too.
Prasanna, could you let us know if we can use refcounting in this case as well?
On Wed, Sep 7, 2016 at 10:28 AM, Oleksandr Natalenko <
oleksandr at natalenko.name> wrote:
> On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri <
> pkarampu at redhat.com> wrote:
> >On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
> >oleksandr at natalenko.name> wrote:
> >> Hello,
> >> thanks, but that is not what I want. I have no issues debugging gfapi,
> >> but I do have an issue with the GlusterFS FUSE client not being handled
> >> properly by the Massif tool.
> >> Valgrind+Massif does not handle all forked children properly, and I
> >> suspect that happens because of some memory corruption in the GlusterFS
> >> FUSE client.
> >Is this the same libc issue that we debugged and for which we provided
> >the workaround to avoid it?
> >> Regards,
> >> Oleksandr
> >> On субота, 3 вересня 2016 р. 18:21:59 EEST feihu929 at sina.com wrote:
> >> > Hello, Oleksandr
> >> > You can compile the simple test code posted here
> >> > (http://www.gluster.org/pipermail/gluster-users/2016-August/028183.html),
> >> > then run it under Valgrind's Massif tool:
> >> > $ G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind --tool=massif ./glfsxmp
> >> > This produces a file named massif.out.xxxx, which is the memory
> >> > allocation log. You can print the allocation details with the
> >> > ms_print tool:
> >> > $ ms_print massif.out.xxxx
> >> >
> >> > The simple test code just calls glfs_init and glfs_fini 100 times to
> >> > reproduce the memory leak. In my tests, the xlator init and fini paths
> >> > are the main sources of the leak. If you can locate the leaking code in
> >> > the simple test, you can locate the leaking code in the FUSE client.
> >> >
> >> > please enjoy.
> >> _______________________________________________
> >> Gluster-users mailing list
> >> Gluster-users at gluster.org
> >> http://www.gluster.org/mailman/listinfo/gluster-users