[Gluster-users] IO performance cut down when VM on Gluster

Joe Julian joe at julianfamily.org
Mon Jan 14 15:19:25 UTC 2013


That's impressive, thanks. 

To be clear, that follows the second suggestion which requires the library in the 3.4 qa release, right? 

Bharata B Rao <bharata.rao at gmail.com> wrote:

>Joe,
>
>On Sun, Jan 13, 2013 at 8:41 PM, Joe Julian <joe at julianfamily.org>
>wrote:
>>
>> You have two options:
>> 1. Mount the GlusterFS volume from within the VM and host the data
>> you're operating on there. This avoids all the additional overhead
>> of managing a filesystem on top of FUSE.
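>For reference, option 1 amounts to a FUSE mount from inside the
>guest, along the lines of the following, with <server> and <volume>
>as placeholders:
>[guest]# mount -t glusterfs <server>:/<volume> /mnt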
>
>In my very limited testing, I have found that passing the gluster
>data volume as a 2nd gluster drive to QEMU (the 1st being the VM
>image itself) gives better performance than mounting the gluster
>volume directly from within the guest.
>
>Here are some numbers from FIO reads and writes:
>Env: Dual core x86_64 system with F17 running the 3.6.10-2.fc17.x86_64
>kernel for the host and F18 running 3.6.6-3.fc18.x86_64 for the guest.
>
>Case 1: Mount the gluster volume (test) from inside the guest and run
>FIO reads and writes on the mounted gluster drive.
>[host]# qemu -drive file=gluster://bharata/rep/F18,if=virtio,cache=none
>[guest]# glusterfs -s bharata --volfile-id=test /mnt
>
>Case 2: Specify the gluster volume (test) as a drive to QEMU itself.
>[host]# qemu -drive file=gluster://bharata/rep/F18,if=virtio,cache=none \
>         -drive file=gluster://bharata/test/F17,if=virtio,cache=none
>[guest]# mount /dev/vdb3 /mnt
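>This assumes the 2nd gluster drive shows up as /dev/vdb in the guest
>and was partitioned and formatted once beforehand, e.g. something
>like (the ext4 choice here is just an assumption on my part):
>[guest]# mkfs.ext4 /dev/vdb3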
>
>In both of the above cases, the VM image (F18) resides on the
>GlusterFS volume (rep), and the FIO reads and writes are performed on
>/mnt/data1.
>
>FIO aggregated bandwidth (kB/s) (Avg of 5 runs)
>          Case 1    Case 2
>read      28740     52309
>write     27578     48765
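>(That works out to roughly 1.8x higher aggregate bandwidth for Case 2,
>for both reads and writes.)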
>
>The FIO job file is as follows:
>[global]
>ioengine=libaio
>direct=1
>rw=read # rw=write for write test
>bs=128k
>size=512m
>directory=/mnt/data1
>[file1]
>iodepth=4
>[file2]
>iodepth=32
>[file3]
>iodepth=8
>[file4]
>iodepth=16
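>
>For completeness, the job file above is run from within the guest,
>where fio-job.ini is just a placeholder for whatever name the file is
>saved under:
>[guest]# fio fio-job.ini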
>
>Of course this is just one case. I wonder if you have seen better
>numbers for the guest FUSE mount case with any of the benchmarks you
>use?
>
>> 2. Try the 3.4 qa release and native GlusterFS support in the latest
>> qemu-kvm.
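>FWIW, the gluster:// -drive syntax in my commands above is exactly
>that native support. With a gluster-enabled QEMU, images can also be
>created directly on the volume, e.g. something along these lines
>(image name and size made up):
>[host]# qemu-img create -f qcow2 gluster://bharata/test/data.qcow2 5G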
>
>Regards,
>Bharata.
>-- 
>http://raobharata.wordpress.com/