[Gluster-users] IO performance cut down when VM on Gluster

Stephan von Krawczynski skraw at ithnet.com
Sun Jan 13 22:55:01 UTC 2013


On Sun, 13 Jan 2013 07:11:14 -0800
Joe Julian <joe at julianfamily.org> wrote:

> On 01/13/2013 04:14 AM, glusterzhxue wrote:
> > Hi all,
> > We placed a virtual machine image (KVM-based) on a Gluster file
> > system, but the I/O performance inside the VM is only half of the
> > available bandwidth.
> > If we mount the same volume on a physical machine instead, the
> > physical host reaches full bandwidth. We repeated the test many
> > times, each with the same result.
> What you're seeing is the difference between bandwidth and latency. When 
> you're writing a big file directly to a GlusterFS mount, you're doing one 
> long streaming write, so you're effectively measuring bandwidth. Writing 
> the same file to a filesystem inside a VM is not the same workload: the 
> guest filesystem is also doing journaling, inode updates, etc. that the 
> client doesn't do when writing directly, which requires many more I/O 
> operations per second and thus amplifies the latency present in both your 
> network and the context switching through FUSE.
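> To put rough, purely illustrative numbers on it (not measurements from 
> this setup): at ~0.2 ms of round-trip latency per operation, a single 
> large streaming write pays that cost roughly once, while a guest 
> filesystem that issues, say, 5,000 small journal and metadata operations 
> pays 5,000 x 0.2 ms = 1 second of accumulated waiting before bandwidth 
> even becomes the bottleneck.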
> 
> You have two options:
> 1. Mount the GlusterFS volume from within the VM and host the data 
> you're operating on there. This avoids all the additional overhead of 
> managing a filesystem on top of FUSE (see the mount sketch below).
> 2. Try the 3.4 QA release and the native GlusterFS support in the latest 
> qemu-kvm (see the qemu sketch below).
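> 
> For option 1, a minimal sketch, assuming a volume named "gv0" served 
> from a host "server1" (both names are placeholders, substitute your own):
> 
>     # inside the guest, with the glusterfs client packages installed:
>     mount -t glusterfs server1:/gv0 /mnt/gluster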
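> 
> For option 2, assuming a qemu built with GlusterFS support (qemu 1.3 or 
> later) and the same placeholder volume and host names:
> 
>     # create an image directly on the volume, then boot a guest from it
>     qemu-img create gluster://server1/gv0/vm1.img 10G
>     qemu-system-x86_64 -drive file=gluster://server1/gv0/vm1.img,if=virtio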

Thank you for telling people openly that FUSE is a performance problem
which could be solved by a kernel-based glusterfs.

Do you want to write drivers for every application, the way it is now being
done for qemu? How much manpower will be burnt before the real solution is
accepted?
For most people it is no solution to mess around _inside_ the VM: you simply
don't want _customers_ on your VM with a glusterfs mount. You want them to see
a local fs only.

-- 
Regards,
Stephan


