[Gluster-users] glusterfs volume as a massively shared storage for VM images

Fernando Frediani (Qube) fernando.frediani at qubenet.net
Mon Jun 11 12:29:28 UTC 2012


Hi Christian,

In theory it should work, but the ability to properly run VMs on Gluster is relatively new, made possible by the improvements in granular self-healing, so I don't think it has been extensively tested.
I wasn't able to find anyone using it in production, and those I heard from are only using it for testing. I tried it myself here with VMware and could never get it working; there was some problem with NFS on the Gluster side.

With regards to performance, I'm not sure how long you have been on this mailing list, but have a look at the recent emails Brian Candler sent with his benchmark results for KVM VMs; they don't seem very promising as it stands.

Fernando

From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Christian Parpart
Sent: 11 June 2012 01:11
To: gluster-users at gluster.org
Subject: [Gluster-users] glusterfs volume as a massively shared storage for VM images

Hi all,

I am looking for a solution, and I hope GlusterFS can fit into it: something that allows me to do live migrations of virtual machines from one compute node to another (KVM, OpenStack).
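
For context, once the instance images live on storage that every node can reach, the live migration itself is a single libvirt command (a minimal illustration; "instance-0001" and "node2" are placeholder names, not from any real setup):

    # Move the running guest to node2 without stopping it;
    # the disk image stays on the shared volume, so only the
    # VM's memory state is transferred over the wire.
    virsh migrate --live instance-0001 qemu+ssh://node2/system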

And so I found an article about the GlusterFS-OpenStack connector, which was just a basic GlusterFS setup that shares the /var/lib/nova/instances directory across all compute nodes. This is necessary to allow virtual machines to migrate within milliseconds from one compute node to another, since all nodes already see the same data and only the VM's memory state has to be moved. A sketch of such a setup is below.
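
For concreteness, that setup boils down to something like this (a minimal sketch; the server names, brick paths, and replica count are my own placeholders, not taken from the article):

    # On one storage node: create a 2-way replicated volume from
    # bricks on two servers, then start it.
    gluster volume create nova-instances replica 2 \
        server1:/export/nova server2:/export/nova
    gluster volume start nova-instances

    # On every compute node: mount the volume where nova keeps
    # its instance images.
    mount -t glusterfs server1:/nova-instances /var/lib/nova/instances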

Now my question is: how well does GlusterFS scale when you have about 50+ compute nodes (on which you're going to run virtual machines)? What kind of setup would you recommend so as not to suffer in runtime performance, network IOPS, or availability?

Many thanks in advance,
Christian Parpart.

