[Gluster-users] mirrored glusterfs for virtual machine images?

Jon M. Skelton jskelton at adelinedigital.com
Sun Apr 25 23:33:12 UTC 2010


On 04/25/2010 02:24 PM, Tomasz Chmielewski wrote:
> On 25.04.2010 23:05, Jon M. Skelton wrote:
>> I'm currently doing this. Ubuntu 10.04 (beta) using glusterfs to mirror
>> qcow2 KVM machine images. It works quite well. In both of your crash
>> cases, things look much like when a VM gets a 'virsh destroy'. It's a
>> little rough on the filesystem contained within the machine image. I've
>> had some luck cleaning them up by accessing the machine image
>> filesystems via qemu-nbd:
>
> Did you compare how much performance drops versus bare metal?
>
> I.e., run it in the virtual guest, on the virtual host, and on the
> glusterfs server itself (if the virtual host is a different server
> from at least one of the glusterfs servers)?
>
>
> time dd if=/dev/zero of=/bigfile bs=1M count=5000
>
> time dd if=/bigfile of=/dev/null bs=64k
>
>
> And drop caches between each run with:
>
> echo 3 > /proc/sys/vm/drop_caches
>

A few notes:

Underlying storage is two 1.5TB disks mirrored via standard Linux mdadm 
facilities.  Glusterfs mirrors to an identical machine with its own 
pair of mirrored disks (same model).  The glusterfs interconnect is 
InfiniBand using the ib-verbs transport.
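For anyone wanting to reproduce a similar setup, a client-side volfile
for two-way replication over ib-verbs might look roughly like the
following.  This is an illustrative sketch only; the hostnames, volume
names, and export path are made up, not our actual configuration:

volume remote1
  type protocol/client
  option transport-type ib-verbs
  option remote-host server1        # hypothetical hostname
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type ib-verbs
  option remote-host server2        # hypothetical hostname
  option remote-subvolume brick
end-volume

volume mirror
  type cluster/replicate            # AFR: every write goes to both bricks
  subvolumes remote1 remote2
end-volume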

The hardware is also running other VMs backed by the same filesystem, 
so take these numbers with a grain of salt; the relative performance 
characteristics should still be representative.

Write performance of KVM on qcow2 is poor (roughly a 4x drop in the 
numbers below).  This was observed on ext3 as well as on glusterfs.
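To isolate how much of that drop is the qcow2 layer itself rather than
virtualization in general, one approach (a sketch; the image names are
illustrative) is to set up a raw image and rerun the write test against
both formats:

# create a raw test image alongside the qcow2 one
# qemu-img create -f raw test.raw 10G
# or convert an existing guest image to raw
# qemu-img convert -O raw guest.qcow2 guest.raw

Booting the same guest with cache=none on the -drive option is also
worth comparing, since host page cache effects can mask or exaggerate
the qcow2 overhead.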

* local ext3 fs from real hardware *

# echo 3 > /proc/sys/vm/drop_caches
# time dd if=/dev/zero of=./bigfile bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 58.994 s, 88.9 MB/s

real    1m1.072s
user    0m0.030s
sys    0m27.130s

# echo 3 > /proc/sys/vm/drop_caches
# time dd if=./bigfile of=/dev/null bs=64k
80000+0 records in
80000+0 records out
5242880000 bytes (5.2 GB) copied, 76.9581 s, 68.1 MB/s

real    1m17.387s
user    0m0.180s
sys    0m14.000s

* glusterfs from real hardware *

# echo 3 > /proc/sys/vm/drop_caches
# time dd if=/dev/zero of=./bigfile bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 83.9366 s, 62.5 MB/s

real    1m24.414s
user    0m0.030s
sys    0m12.800s

# echo 3 > /proc/sys/vm/drop_caches
# time dd if=./bigfile of=/dev/null bs=64k
80000+0 records in
80000+0 records out
5242880000 bytes (5.2 GB) copied, 79.0526 s, 66.3 MB/s

real    1m19.360s
user    0m0.250s
sys    0m9.600s

* ext3 from KVM VM on qcow2 on ext3 *

# echo 3 > /proc/sys/vm/drop_caches (on both guest and host)
# time dd if=/dev/zero of=./bigfile bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 249.116 s, 21.0 MB/s

real    4m9.722s
user    0m0.160s
sys    1m6.530s

# echo 3 > /proc/sys/vm/drop_caches (on both guest and host)
# time dd if=./bigfile of=/dev/null bs=64k
80000+0 records in
80000+0 records out
5242880000 bytes (5.2 GB) copied, 71.3949 s, 73.4 MB/s

real    1m11.579s
user    0m0.180s
sys    0m36.590s

* ext3 from KVM VM on qcow2 on glusterfs *

# echo 3 > /proc/sys/vm/drop_caches (on both guest and host)
# time dd if=/dev/zero of=./bigfile bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 397.378 s, 13.2 MB/s

real    6m37.860s
user    0m0.270s
sys    1m18.210s

# echo 3 > /proc/sys/vm/drop_caches
# time dd if=./bigfile of=/dev/null bs=64k
80000+0 records in
80000+0 records out
5242880000 bytes (5.2 GB) copied, 78.749 s, 66.6 MB/s

real    1m18.823s
user    0m0.290s
sys    0m31.720s
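
And since the qemu-nbd cleanup came up at the top of the thread: the
general approach (a sketch only; device and image names are
illustrative) is to export the qcow2 image as a network block device
and fsck the guest partitions directly:

# load the nbd driver with partition scanning enabled
# modprobe nbd max_part=8
# attach the machine image to /dev/nbd0 (the VM must not be running)
# qemu-nbd --connect=/dev/nbd0 guest.qcow2
# check/repair the guest filesystem on the first partition
# fsck /dev/nbd0p1
# detach when done
# qemu-nbd --disconnect /dev/nbd0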

I hope this is useful,
Jon S.


