[Gluster-users] Very bad performance /w glusterfs. Am I missing something?
Jean-Francois Chevrette
jfchevrette at funio.com
Thu Aug 11 14:36:25 UTC 2011
Hello everyone,
I have just begun playing with GlusterFS 3.2 on a Debian Squeeze system. The system is a powerful quad-core Xeon with 12GB of RAM and two 300GB 15k SAS drives configured as a RAID-1 on an Adaptec 5405 controller. Both servers are connected through a crossover cable on gigabit Ethernet ports.
I installed the latest GlusterFS 3.2.2 release from the provided Debian package.
As an initial test, I created a simple single-brick volume on my first node:
gluster volume create brick transport tcp node1.internal:/brick
I started the volume and mounted it locally:
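For completeness, the start step is the standard command for the volume name used in the create step:

gluster volume start brick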
mount -t glusterfs 127.0.0.1:/brick /mnt/brick
I ran an iozone test on both the underlying partition and the glusterfs mountpoint. The results for the random write test are below (in ops/sec).
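The exact iozone invocation isn't shown; a run that produces an ops/sec random-write report over these file and record sizes would look roughly like the following (the flags are a reconstruction, not the literal command line used):

iozone -a -R -O -i 0 -i 2 -g 4m -f /mnt/brick/iozone.tmp

Here -a enables automatic mode, -R prints the Excel-style reports, -O reports results in operations per second, -i 0 -i 2 selects the write and random read/write tests, -g 4m caps the maximum file size at 4 MB (matching the 64 KB to 4096 KB rows), and -f points iozone at a file on the filesystem under test.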
"Random write report" w/o glusterfs
"4" "8" "16" "32" "64" "128" "256" "512" "1024" "2048" "4096" "8192" "16384"
"64" 166603 121220 76676 46395 25605
"128" 171020 126906 83301 49372 27431 14275
"256" 172871 110303 85948 51957 28590 15147 7196
"512" 172029 129816 85336 51949 28881 15158 7517 3859
"1024" 175453 131270 73993 53413 29961 15866 7800 3936 1980
"2048" 176735 132777 87669 48482 28473 15918 7867 3980 1851 1011
"4096" 194828 146079 145045 53511 28624 15157 7490 5340 1989 1007 490
"Random write report" /w glusterfs
"4" "8" "16" "32" "64" "128" "256" "512" "1024" "2048" "4096" "8192" "16384"
"64" 6872 6390 5797 5103 4630
"128" 6871 6661 5865 4767 4424 4656
"256" 8953 6691 6506 5513 4999 3429 1908
"512" 9222 8727 6650 6003 5290 2386 2057 1061
"1024" 10363 10127 10023 7385 5839 4629 2267 1234 571
"2048" 9200 8778 8280 7394 5852 4221 2234 1262 634 324
"4096" 5739 5549 5441 4810 3952 2824 1931 1075 552 302 148
(sorry if the formatting comes out messed up)
Any ideas why I am getting such bad results? For the smallest case (64 KB file, 4 KB records), random writes drop from roughly 166,000 ops/sec on the raw partition to under 7,000 ops/sec through glusterfs, about a 24x slowdown. My volume is not even replicated or distributed yet!
Thanks!
--
Jean-Francois Chevrette