[Gluster-devel] Qemu glusterfs, exposing complete bricks instead of individual images as shared storage to VM's ?

James purpleidea at gmail.com
Sat Nov 30 00:02:47 UTC 2013


On Fri, Nov 29, 2013 at 6:56 PM, Sander Eikelenboom <linux at eikelenboom.it> wrote:

> Erhmm, well that's why glusterfs is currently in between :-)
>
> I have an LVM volume "shared_data" on the host, which I export as a brick
> with glusterfs.
> Multiple VMs mount this brick over the tcp/ip transport, and all seems to
> go well with locking.
>
> I have looked at GFS2 and Ceph as well, though glusterfs has served me well.
> I just want to see if it would be possible to eliminate the use of the
> tcp/ip transport for the VMs that use Qemu, to reduce that overhead.
>

Okay, this should be on gluster-users first of all.

Second of all, the regular fuse mount and libgfapi both use tcp/ip.

Thirdly, you have to understand the difference between a block device
(qemu-libgfapi integration) and a gluster fuse mount (a filesystem). Read
about those a bit more, and hopefully this will make my comments make sense.
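
To make that concrete: assuming a volume named "shared_data" served from a
host called gluster1 (names here are just for illustration), the two access
paths look roughly like this:

    # FUSE mount: the volume shows up as a POSIX filesystem, and every file
    # on it is reachable with normal tools (goes through the kernel FUSE
    # layer, and still talks to the bricks over tcp/ip)
    mount -t glusterfs gluster1:/shared_data /mnt/shared_data

    # qemu + libgfapi: one image file on the volume is handed to a guest as
    # its block device; qemu talks to the gluster daemons directly, skipping
    # FUSE, but the transport underneath is still tcp/ip
    qemu-system-x86_64 \
        -drive file=gluster://gluster1/shared_data/vm1.img,format=raw,if=virtio \
        ... (rest of the usual qemu options)

The first gives you a shared filesystem; the second gives one guest a private
block device. Neither magically makes a block device safely shareable.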

Fourthly, it's not a GFS2 vs. Gluster question. They are DIFFERENT
technologies, not competing technologies. GlusterFS is one piece. If you
_really_ want to have a shared block device be used for a mounted
filesystem, then the individual writers _need_ to coordinate. That's what
GFS2+cman does. Also, I've never tested GlusterFS through qemu for a GFS2
fs. I'd be curious to hear if it works without bugs though.
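
Just as a sketch of what that setup would look like (untested, and every name
below is made up for the example): the same raw image on the gluster volume
attached to two guests, with GFS2 plus the cluster stack doing the write
coordination inside the guests:

    # on the host(s): give both guests the SAME image, raw and uncached
    qemu-system-x86_64 ... \
        -drive file=gluster://gluster1/shared_data/shared.img,format=raw,if=virtio,cache=none

    # inside the guests, once a cman/corosync cluster is up between them
    # and the shared disk shows up as e.g. /dev/vdb:
    mkfs.gfs2 -p lock_dlm -t mycluster:shared_fs -j 2 /dev/vdb   # once, from one node
    mount -t gfs2 /dev/vdb /mnt/shared                           # on every node

Without the cluster locking layer (lock_dlm here), two guests writing to the
same block device will corrupt it; that coordination is GFS2+cman's job, not
Gluster's.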

Fifthly?, it's dinner time!

Cheers

