[Gluster-users] Test results [Was: bonnie hangs with glusterFS 2.0.4]

Julien Cornuwel cornuwel at gmail.com
Mon Aug 17 16:51:29 UTC 2009


On Monday 17 August 2009 at 19:49 +0400, Konstantin A. Lepikhov wrote:
> Hi Julien!
> 
> Monday 17, at 05:04:43 PM you wrote:
> 
> > On Tuesday 11 August 2009 at 15:03 +0400, Konstantin A. Lepikhov wrote:
> > 
> > > You can try to git clone the kernel source and switch between
> > > different tags. It's also a very good test.
> > 
> > Here are the final test results. The setup is:
> > - 2 nodes, GbE, SATA drives, 2x quad-core Opteron 2.2 GHz, 16 GB RAM
> > - Ping between nodes is 0.120 ms
> > - GlusterFS 2.0.6
> > - Very simple setup: replicate with read-ahead and write-behind
> > - Tests are done on only one node (no concurrent access)
> Did you send these results to the glusterfs-users list?

Oops, sorry, I just hit 'reply'. Now it's done.
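
For completeness, the client-side volfile was essentially the standard
two-node replicate stack with the two performance translators on top.
Roughly this (a sketch from memory: the hostnames and the brick volume
name are placeholders, and I've left the translator options at their
defaults rather than quoting exact values):

volume node1
  type protocol/client
  option transport-type tcp
  option remote-host node1
  option remote-subvolume brick
end-volume

volume node2
  type protocol/client
  option transport-type tcp
  option remote-host node2
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes node1 node2
end-volume

volume writebehind
  type performance/write-behind
  subvolumes replicate
end-volume

volume readahead
  type performance/read-ahead
  subvolumes writebehind
end-volume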

> > The purpose of these tests is to compare GlusterFS versus local disk
> > performances, on a two node cluster, as I want to host OpenVZ VEs on my
> > servers.
> Do you have disk load/network load statistics for this test?

I don't have detailed stats, but from what I saw, there were no
bottlenecks:
- Load average never reached 1
- There was plenty of CPU power/RAM available during the tests
- Network load was never above 30 percent of the bandwidth.

It really looked as if the system was waiting for something, and my
guess is the network.
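
A quick back-of-envelope supports that guess: a kernel tree is tens of
thousands of small files, and with replicate each file create goes
through a series of serialized operations (lookup, create, locks, xattr
updates) against both nodes. The wire RTT is only 0.12 ms, but the
per-operation latency the application actually sees (wire plus FUSE
context switches plus server-side processing) is more like 0.5-1 ms.
Assuming roughly 30,000 files and ~20 serialized operations per file
(both rough guesses, not measured numbers), 30000 x 20 x 1 ms comes to
about 10 minutes, which is in the same ballpark as the untar result
below.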

> > Untar a kernel archive (times are [h:]mm:ss):
> > Local: 0:19
> > GlusterFS: 9:12
> > 
> > Kernel compilation:
> > Local: 55:06
> > GlusterFS: 3:37:38
> > 
> > git clone of the kernel sources:
> > Local: 5:31
> > GlusterFS: 2:49:09
> > 
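For reference, each number above is wall-clock time from commands along
these lines (the kernel version and the -j count are placeholders, not
the exact ones I used):

  time tar xjf linux-2.6.x.tar.bz2
  cd linux-2.6.x && time make -j8
  time git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
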
> > So, clearly, the GlusterFS solution is not viable here. I think this
> > is because of network latency. As I don't think my hosting provider
> > is likely to offer InfiniBand in the near future, this is a no-go.
> > 
> > Maybe if I had dozens of servers, the latency would be compensated
> > for by parallelism. I hope I'll be able to test that someday ;-)
> Yes, latency highly depends on the configuration - I think a DHT setup
> would be much faster.
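
For anyone reading this in the archives: DHT here is the
cluster/distribute translator. Swapping it in for replicate in the
volfile above would look roughly like this (again a sketch; distribute
spreads files across the nodes instead of mirroring them, so you trade
redundancy for parallelism):

volume dht
  type cluster/distribute
  subvolumes node1 node2
end-volume
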
> 
> > 
> > Anyway, thank you for your support and advice, folks. I'll keep an
> > eye on this project in the future.
> IMHO, pohmelfs/drbd8 would be a better fit for your setup.
> 
