[Gluster-devel] question on glusterfs kvm performance

Yin Yin maillistofyinyin at gmail.com
Fri Aug 10 01:35:37 UTC 2012


Bharata B Rao:
      Thanks!! This is exactly what I wanted!
      I'll try your patch and run some tests.
      BTW: why does the gluster-qemu integration perform so much better
than a FUSE mount?

Best Regards,
yinyin

On Thu, Aug 9, 2012 at 10:02 PM, John Mark Walker <johnmark at redhat.com> wrote:

> Bharata:
>
> Thanks for writing this up. I bet someone could take this information and
> flesh out more scenarios + tests, posting the results on gluster.org. Any
> takers?
>
> -JM
>
>
> ----- Original Message -----
> > On Wed, Aug 8, 2012 at 11:50 PM, John Mark Walker
> > <johnmark at redhat.com> wrote:
> > >
> > > ----- Original Message -----
> > >>
> > >> Or change your perspective. Do you NEED to write to the VM image?
> > >>
> > >> I write to fuse mounted GlusterFS volumes from within my VMs. The
> > >> VM
> > >> image is just for the OS and application. With the data on a
> > >> GlusterFS
> > >> volume, I get the normal fuse client performance from within my
> > >> VM.
> >
> > I ran FIO in 3 scenarios; here are the comparison numbers:
> >
> > Scenario 1: QEMU's GlusterFS block backend used for both the root
> > and the data partition (a gluster volume)
> > ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024
> > -smp 4 -drive file=gluster://bharata/rep/F16,if=virtio,cache=none
> > -drive file=gluster://bharata/test/F17,if=virtio,cache=none
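> >
> > (The URIs above have the form gluster://server/volname/image and
> > assume the volumes already exist. A minimal sketch of creating them
> > on the gluster server; brick hosts and paths here are illustrative
> > assumptions, not taken from this setup:
> >
> > gluster volume create rep replica 2 bharata:/bricks/rep \
> >     bharata2:/bricks/rep
> > gluster volume create test bharata:/bricks/test
> > gluster volume start rep
> > gluster volume start test
> > )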
> >
> > Scenario 2: QEMU's GlusterFS block backend for root, and a GlusterFS
> > FUSE mount for the data partition
> > ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024
> > -smp 4 -drive file=gluster://bharata/rep/F16,if=virtio,cache=none
> > -drive file=/mnt/F17,if=virtio,cache=none
> > (Here the data partition is FUSE-mounted on the host at /mnt)
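> >
> > (A sketch of the host-side FUSE mount assumed here, presumably the
> > same "test" volume used in Scenario 1:
> >
> > mount -t glusterfs bharata:/test /mnt
> > )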
> >
> > Scenario 3: QEMU's GlusterFS block backend for root, with the gluster
> > data partition FUSE-mounted from inside the VM
> > ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024
> > -smp 4 -drive file=gluster://bharata/rep/F16,if=virtio,cache=none
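> >
> > (Inside the guest, the data volume is then FUSE-mounted with the
> > gluster client. A sketch, assuming the guest can reach the gluster
> > server and the mount point matches the FIO job below:
> >
> > mount -t glusterfs bharata:/test /data1
> > )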
> >
> > FIO exercises the data partition in each case.
> >
> > Here are the numbers:
> >
> > Scenario 1:  aggrb=47836KB/s
> > Scenario 2:  aggrb=20894KB/s
> > Scenario 3:  aggrb=36936KB/s
> >
> > The FIO job file I used is this:
> > ; Read 4 files with aio at different depths
> > [global]
> > ioengine=libaio
> > direct=1
> > rw=read
> > bs=128k
> > size=512m
> > directory=/data1
> > [file1]
> > iodepth=4
> > [file2]
> > iodepth=32
> > [file3]
> > iodepth=8
> > [file4]
> > iodepth=16
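> >
> > (For completeness: in scenarios 1 and 2 the data drive shows up as a
> > block device in the guest, /dev/vdb being an assumption here, so it
> > needs a filesystem and a mount at /data1 before running the job:
> >
> > mkfs.ext4 /dev/vdb
> > mkdir -p /data1
> > mount /dev/vdb /data1
> > fio read4.fio    # "read4.fio" is a placeholder name for the job file
> > )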
> >
> > Regards,
> > Bharata.
> >
>