[Gluster-devel] Behaviour of glfs_fini() affecting QEMU

Bharata B Rao bharata.rao at gmail.com
Thu Apr 17 13:28:44 UTC 2014


In QEMU, we initialize gfapi in the following manner:

glfs = glfs_new();
if (!glfs)
   goto out;
if (glfs_set_volfile_server() < 0)
   goto out;
if (glfs_set_logging() < 0)
   goto out;
if (glfs_init(glfs))
   goto out;

...

out:
    if (glfs)
        glfs_fini(glfs);

Now if either glfs_set_volfile_server() or glfs_set_logging() fails, we end
up calling glfs_fini(), which eventually hangs in glfs_lock().

#0  0x00007ffff554a595 in pthread_cond_wait@@GLIBC_2.3.2 () from
#1  0x00007ffff79d312e in glfs_lock (fs=0x555556331310) at
#2  0x00007ffff79d5291 in glfs_active_subvol (fs=0x555556331310) at
#3  0x00007ffff79c9f23 in glfs_fini (fs=0x555556331310) at glfs.c:753

Note that we haven't done glfs_init() in this failure case.

- Is this failure expected? If so, what is the recommended way of
releasing the glfs object?
- Does glfs_fini() depend on glfs_init() having completed successfully?
- Since the QEMU-GlusterFS driver was developed when libgfapi was very new,
could the Gluster developers take a look at the order of the glfs_* calls we
make in QEMU and suggest any changes, improvements or additions, given that
libgfapi has seen a lot of development since?

