[Gluster-users] GlusterFS as OpenVZ backend [Was: bonnie hangs with glusterFS 2.0.4]
Julien Cornuwel
cornuwel at gmail.com
Fri Aug 7 10:11:12 UTC 2009
On Sunday, 2 August 2009 at 23:49 +0200, Julien Cornuwel wrote:
> Hi,
>
> I'm doing some performance tests with bonnie (1.03d) on GlusterFS 2.0.4
> (Debian packages).
>
> Write tests went OK.
> But on the rewrite test, bonnie seemed to hang. Load average dropped to
> 0.00 on both nodes. Nothing in server or client logs.
>
> I will launch the test again tonight because it takes very long (16GB
> RAM).
>
> Any idea what could cause that ?
Well, I simplified my setup as much as possible (see attached files) and
the test passed. Results:
Block write : 57500KB/s (53769 on local disks)
Rewrite : 4477KB/s (30742 on local disks)
Block Read : 8375KB/s (79528 on local disks)
Write performance is surprisingly high, better than local disks! I
guess the write-behind translator is doing a great job. But reads are so
slow! I will run another test with read-ahead enabled to see the
difference, hoping bonnie will survive it.
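For what it's worth, read-ahead can be stacked on top of write-behind in the client volume file. A minimal sketch (the volume name and the page-count value are my assumptions, not tuned):

volume readahead
  type performance/read-ahead
  option page-count 4          # pages to pre-fetch per file; value is a guess
  subvolumes writebehind
end-volume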
The original setup was more complicated: I had two volumes replicated on
both nodes, and a Distribute volume on top of them. The idea was to be
able to add new nodes, one by one, when needed. I haven't been able to
test this setup, but I guess performance would have been lower (same
network/disk speed, more overhead).
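For the record, the client side of that replicate-plus-distribute setup would look roughly like this (a sketch; the brick and volume names are illustrative, and each brick would be a protocol/client volume as in the attached file):

volume replica01
  type cluster/replicate
  subvolumes node01brickA node02brickA
end-volume

volume replica02
  type cluster/replicate
  subvolumes node01brickB node02brickB
end-volume

volume distribute
  type cluster/distribute
  subvolumes replica01 replica02
end-volume

Growing the cluster would then mean adding another replicate pair and listing it as an extra subvolume of distribute.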
The purpose of these tests is to determine whether I can build an OpenVZ
cluster on top of GlusterFS instead of DRBD. At first, there will be
only two nodes, so both solutions apply. But if the cluster grows as
I hope, GlusterFS is the only way to share storage across all nodes.
What I want to know is: "Is it possible to start directly with
GlusterFS, or do I need to reach a critical mass where the number of
nodes is enough to outperform local storage?"
Hardware nodes are :
- 2*quad opteron
- 16GB RAM
- 750GB RAID1
- 1 GbE
-------------- next part --------------
#####################################
### GlusterFS Client Volume File ###
#####################################
volume node01primary
  type protocol/client
  option transport-type tcp
  option remote-host node01
  option remote-subvolume primary
end-volume

volume node02secondary
  type protocol/client
  option transport-type tcp
  option remote-host node02
  option remote-subvolume secondary
end-volume

volume storage01
  type cluster/replicate
  subvolumes node01primary node02secondary
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 4MB
  subvolumes storage01
end-volume
-------------- next part --------------
#####################################
### GlusterFS Server Volume File ###
#####################################
volume posix
  type storage/posix
  option directory /mnt/primary
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume primary
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.primary.allow *
  option auth.addr.secondary.allow *
  subvolumes primary
end-volume
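
The attached server file is node01's; node02's would be the mirror image, exporting the "secondary" subvolume that the client file references. A sketch (the /mnt/secondary directory path is my assumption):

volume posix
  type storage/posix
  option directory /mnt/secondary
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume secondary
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.secondary.allow *
  subvolumes secondary
end-volume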