[Gluster-users] Fwd: libgfapi libvirt memory leak version 3.7.8

Piotr Rybicki piotr.rybicki at innervision.pl
Thu Feb 11 15:20:39 UTC 2016

Hi All

I have to report that there is a memory leak in the latest version of Gluster:

gluster: 3.7.8
libvirt: 1.3.1

The memory leak occurs when starting a domain (virsh start DOMAIN) which
accesses its drive via libgfapi (although the leak is much smaller than
with Gluster 3.5.X).

I believe libvirt itself uses libgfapi only to check for the existence of
a disk. Libvirt calls glfs_init and glfs_fini when doing this check.
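For anyone who wants to reproduce this outside libvirt, the check presumably boils down to an init/stat/fini cycle like the sketch below (a minimal example against the libgfapi C API; the volume name, host, port, and image path are placeholders matching my setup, not taken from libvirt's source):

```c
/* Minimal libgfapi init/stat/fini cycle, roughly what libvirt does when
 * it probes a gluster-backed disk. Placeholders: volume "pool", host
 * "X.X.X.X", default management port 24007, image "/disk-sys.img".
 * Build: gcc leak-repro.c -o leak-repro -lgfapi
 * Then run it under valgrind to see what glfs_fini leaves behind.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("pool");                    /* volume name */
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "X.X.X.X", 24007);
    if (glfs_init(fs) != 0) {
        glfs_fini(fs);
        return 1;
    }

    struct stat st;
    if (glfs_stat(fs, "/disk-sys.img", &st) == 0)     /* existence check */
        printf("disk exists, %lld bytes\n", (long long) st.st_size);

    glfs_fini(fs);   /* all memory should be released here, but is not */
    return 0;
}
```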

When the drive is accessed via a file (Gluster FUSE mount), there is no
memory leak when starting the domain.

My drive definition (libgfapi):

     <disk type='network' device='disk'>
       <driver name='qemu' type='raw' cache='writethrough' iothread='1'/>
       <source protocol='gluster' name='pool/disk-sys.img'>
         <!-- connection is still via tcp; defining 'tcp' here doesn't
              make any difference -->
         <host name='X.X.X.X' transport='rdma'/>
       </source>
       <blockio logical_block_size='512' physical_block_size='32768'/>
       <target dev='vda' bus='virtio'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x04'/>
     </disk>
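For comparison, the file-backed variant that shows no leak would look roughly like this (illustrative only; I am assuming the volume is FUSE-mounted at /mnt/pool):

```xml
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writethrough' iothread='1'/>
       <source file='/mnt/pool/disk-sys.img'/>
       <target dev='vda' bus='virtio'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x04'/>
     </disk>
```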

I first reported this to the libvirt developers, but they blame Gluster.

valgrind details (libgfapi):

# valgrind --leak-check=full --show-reachable=yes
--child-silent-after-fork=yes libvirtd --listen 2> libvirt-gfapi.log

On the other console:
virsh start DOMAIN
virsh shutdown DOMAIN
...wait and stop valgrind/libvirtd

valgrind log:

==5767== LEAK SUMMARY:
==5767==    definitely lost: 19,666 bytes in 96 blocks
==5767==    indirectly lost: 21,194 bytes in 123 blocks
==5767==      possibly lost: 2,699,140 bytes in 68 blocks
==5767==    still reachable: 986,951 bytes in 15,038 blocks
==5767==         suppressed: 0 bytes in 0 blocks
==5767== For counts of detected and suppressed errors, rerun with: -v
==5767== ERROR SUMMARY: 96 errors from 96 contexts (suppressed: 0 from 0)

full log:

Best regards
Piotr Rybicki
