[Gluster-users] Performance issues on GlusterFS with KVM/qcow2
John Lauro
john.lauro at covenanteyes.com
Wed Jan 25 13:48:22 UTC 2012
> ---------
>
> Write test 2, VM (Debian 6, VirtIO, stored as qcow2 on /vm_storage):
>
> /dev/vda2 on / type ext4 (rw,errors=remount-ro)
>
> echo 3 > /proc/sys/vm/drop_caches
> time dd if=/dev/zero of=./bigfile bs=1M count=5000
>
> Result:
>
> 5000+0 records in
> 5000+0 records out
> 5242880000 bytes (5.2 GB) copied, 796.309 s, 6.6 MB/s
>
> real 13m16.626s
> user 0m0.000s
> sys 0m3.700s
>
> --------
Not that this helps directly, but I duplicated your test on my test setup
as a reference point. Your performance does seem somewhat worse than expected.
[root at glustertestc1 v1]# time dd if=/dev/zero of=./bigfile bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 173.744 s, 30.2 MB/s
real 2m53.800s
user 0m0.022s
sys 0m8.496s
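One caveat with these dd runs: without a sync or direct-I/O flag, dd reports
throughput before all data has necessarily reached disk, so the page cache can
inflate the numbers. A sketch of a cache-independent variant (file name and
count are arbitrary):

```shell
# conv=fdatasync makes dd call fdatasync() on the output file before
# reporting, so the figure includes the time to flush the page cache.
dd if=/dev/zero of=./bigfile bs=1M count=5000 conv=fdatasync

# Alternatively, oflag=direct bypasses the page cache entirely
# (requires a filesystem that supports O_DIRECT).
dd if=/dev/zero of=./bigfile bs=1M count=5000 oflag=direct
rm -f ./bigfile
```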
This is with a replicated Gluster volume spread over 2 servers, and the
physical disks are on a gigabit Ethernet iSCSI SAN with 10K drives, so
not exactly high speed. In my testing, performance for large files is
tolerable, but writing many small files is terrible with Gluster.
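A quick way to see the small-file penalty is to time the creation of many tiny
files on the Gluster mount versus local disk. The path and counts below are
arbitrary; this is just a rough sketch, not a proper benchmark:

```shell
# Create 1000 4 KB files and time the whole batch. On a Gluster mount,
# each create/write/close cycle pays a full network round trip (more
# with replication), which is why small-file workloads suffer far more
# than large sequential streams.
mkdir -p ./smallfile_test
time sh -c '
  i=0
  while [ $i -lt 1000 ]; do
    dd if=/dev/zero of=./smallfile_test/f$i bs=4k count=1 2>/dev/null
    i=$((i+1))
  done
'
rm -rf ./smallfile_test
```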
For reference, here is the timing from one of the virtual Gluster servers:
[root at glustertests1 data1]# time dd if=/dev/zero of=./bigfile2 bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 87.2035 s, 60.1 MB/s
real 1m27.237s
user 0m0.006s
sys 0m5.872s
(My test clients and servers are all ESXi 4.1 VMs running Scientific Linux
6.1).
How is your network latency between servers and clients?
[root at glustertestc1 v1]# ping 10.0.12.141 -c 1000 -q -A -s 8000
PING 10.0.12.141 (10.0.12.141) 8000(8028) bytes of data.
--- 10.0.12.141 ping statistics ---
1000 packets transmitted, 1000 received, 0% packet loss, time 163ms
rtt min/avg/max/mdev = 0.114/0.132/1.638/0.061 ms, ipg/ewma 0.163/0.124 ms
(My test clients and servers are all on the same physical box, so I suspect
the virtualized network between them runs a little faster than gigabit speeds.)
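For what it's worth, at that kind of RTT, latency alone shouldn't account for
slow large-file writes. A crude upper bound, assuming one round trip per 1 MB
write (this ignores pipelining and replication fan-out, so treat it as a
back-of-envelope check only):

```shell
# 5000 writes x 0.132 ms average RTT (from the ping output above);
# even charged serially, latency contributes well under a second.
awk 'BEGIN { printf "latency overhead: %.2f s\n", 5000 * 0.132e-3 }'
# prints: latency overhead: 0.66 s
```

If your RTT is substantially higher than that, latency starts to matter,
especially for small-file workloads where every operation is a round trip.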