[Gluster-users] Gfapi memleaks, all versions

Pranith Kumar Karampuri pkarampu at redhat.com
Fri Sep 2 21:36:50 UTC 2016

On Fri, Sep 2, 2016 at 11:41 AM, <feihu929 at sina.com> wrote:

> Hi,
> *Pranith*
> >        There was a time when all this clean up was not necessary because
> >the mount process would die anyway. Then when gfapi came in, the process
> >wouldn't die anymore :-), so we have to fix all the shortcuts we took over
> >the years to properly fix it. So lot of work :-/. It is something that
> >needs to be done properly (multi threaded nature of the workloads make it
> a
> >bit difficult), because the previous attempts at fixing it caused the
> >process to die because of double free etc.
> When libgfapi is used by libvirtd, glfs_init and glfs_fini are each called
> every time a virtual machine is started or stopped, and also by other
> libvirt commands that touch the GlusterFS API, such as virsh domblkinfo.
> So as long as libvirtd keeps starting and stopping VMs, the libvirtd
> process leaks a large amount of memory: about 1.5 GB is consumed by 100
> glfs_init/glfs_fini cycles.
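The growth described above can be observed with a small standalone loop over the public gfapi calls. This is a hypothetical minimal reproducer sketch, not code from the report: the volume name "testvol", host "localhost", and port 24007 are placeholder assumptions, and it needs a reachable Gluster volume (and the glusterfs-api development package) to actually run.

```c
/* leak_repro.c — hypothetical sketch of the glfs_init/glfs_fini cycle
 * described above. Build (assuming gfapi is installed):
 *   gcc leak_repro.c -o leak_repro $(pkg-config --cflags --libs glusterfs-api)
 */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    for (int i = 0; i < 100; i++) {
        /* Allocate a fresh glfs context; "testvol" is a placeholder. */
        glfs_t *fs = glfs_new("testvol");
        if (!fs) {
            fprintf(stderr, "glfs_new failed at iteration %d\n", i);
            return 1;
        }
        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        if (glfs_init(fs) != 0)
            fprintf(stderr, "glfs_init failed at iteration %d\n", i);
        /* glfs_fini() is supposed to release everything glfs_init()
         * allocated; per the report each cycle leaves memory behind,
         * so the process RSS grows with every iteration. */
        glfs_fini(fs);
    }
    return 0;
}
```

Watching the process RSS (e.g. with top, or under valgrind massif as the reporter did) while this loop runs shows whether memory is returned after each glfs_fini.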

Fixing it all at once is a very big effort, so I have been thinking about
how to land smaller fixes per release and resolve this bug over several
releases. Fortunately, in the virt layer only client-side xlators are
loaded, and of those the performance xlators are generally disabled except
write-behind, at least when the 'group-virt' setting is used. So that is
probably the first set of xlators where we should concentrate our efforts
to improve this particular use case.
In other words, the write-behind, shard, dht, afr, and client xlators are
probably a good place to start.

On a completely different note, I see that you used massif for doing this
analysis. Oleksandr is looking for some help in using massif to provide
more information in a different usecase. Could you help him?

> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

