[Gluster-devel] question on glusterfs kvm performance
maillistofyinyin at gmail.com
Thu Aug 16 07:43:14 UTC 2012
Hi, Bharata B Rao:
The problem has been solved.
I configured QEMU with --enable-uuid but had not installed the libuuid-dev
package, so QEMU actually used vdi.c's fallback uuid_is_null.
In glfs_active_subvol, inode_table_new is called to initialize the inode
table; the root inode's gfid is 15 zero bytes followed by 0x01.
glusterfs's uuid_is_null compares all 16 bytes, but QEMU vdi.c's version
compares only 8 bytes, which makes the glfs client treat the root inode's
gfid as null (invalid), so glfs_open fails.
On Wed, Aug 15, 2012 at 6:24 PM, Bharata B Rao <bharata.rao at gmail.com>wrote:
> On Wed, Aug 15, 2012 at 12:14 PM, Yin Yin <maillistofyinyin at gmail.com> wrote:
> > Hi,Bharata B Rao:
> > I have tried your patch, but there is a problem. I found that both
> > glusterfs and QEMU have a function named uuid_is_null.
> > glusterfs uses contrib/uuid/isnull.c
> > QEMU uses block/vdi.c
> > When I test api/examples, glfsxmp.c calls the uuid_is_null in
> > contrib/uuid/isnull.c,
> > but qemu/block/gluster.c ends up calling the uuid_is_null in vdi.c,
> > and the VM can't boot.
> So you are configuring QEMU with --disable-uuid ? Even then, there
> should be no issues. I just verified that and I don't understand why
> it should cause problems in VM booting.
> Can you please ensure the following:
> - Remove all traces of gluster from your system (which means
> removing any installed gluster rpms) before you compile gluster from
> source.
> - Try with my v6 patchset.
> When you say VM isn't booting, do you see a hang or a segfault ?
> Let me know how you are specifying the gluster drive (-drive ...).
> Please verify that you are able to FUSE mount your volume before trying
> with QEMU.
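Those last two checks might look like the following sketch (the host name server1, volume name testvol, and image name disk.img are placeholders; the gluster:// drive syntax is the URI form used by the libgfapi patches under discussion):

```shell
# Placeholder names throughout. First confirm the FUSE mount works:
mount -t glusterfs server1:/testvol /mnt/gluster
ls /mnt/gluster && umount /mnt/gluster

# Then point QEMU at the same volume over libgfapi:
qemu-system-x86_64 -drive file=gluster://server1/testvol/disk.img,if=virtio
```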