[Gluster-users] Gluster Installation and Benchmarks
David Pusch
pusch.david at googlemail.com
Wed Aug 10 12:22:55 UTC 2011
Hello again,
We now ran another test: we mounted the volume on the client, shut down all servers but one, and transferred a 1 GB test file to the volume. The transfer took around 10 seconds. We then brought up another server from the cluster and transferred a 1 GB file again; this time the transfer took roughly 20 seconds. We proceeded in this manner with the last two servers, and each time the transfer time increased by about 10 seconds.
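For reference, this is roughly how each transfer was timed (a minimal sketch; the mount point /mnt/gluster and the file paths are placeholders for our actual ones):

    #!/usr/bin/env python
    # Time a 1 GB copy onto the mounted Gluster volume and report
    # the effective throughput.
    import os, shutil, time

    SRC = "/tmp/testfile-1g"          # local 1 GB source file (placeholder)
    DST = "/mnt/gluster/testfile-1g"  # file on the Gluster mount (placeholder)

    # Create the 1 GB source file once, writing in 1 MB chunks.
    if not os.path.exists(SRC):
        chunk = b"\0" * (1024 * 1024)
        with open(SRC, "wb") as f:
            for _ in range(1024):
                f.write(chunk)

    start = time.time()
    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        shutil.copyfileobj(src, dst, 1024 * 1024)
        dst.flush()
        os.fsync(dst.fileno())  # stop the clock only once data left the cache
    elapsed = time.time() - start
    print("copied 1 GB in %.1f s (%.1f MB/s)" % (elapsed, 1024.0 / elapsed))

(A plain cp under time gives the same number; the fsync just makes sure nothing is still sitting in the client's write cache when the clock stops.)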
I hope someone can make sense of this and maybe help with this problem.
Regards,
--
David Pusch
On Wednesday, 10 August 2011 at 13:23, David Pusch wrote:
> Hello,
> I am a trainee at an IT firm and have been tasked with setting up a basic Gluster system for testing purposes. The setup went fine and creating a server pool was also unproblematic. Now to my problem: I created a 6-node server pool, with each node running two HDDs that have each been partitioned into two 50% partitions. This leaves me with 3 "export" partitions per node. I set up an 18-brick distributed-replicated volume with a replica 6 setting. The Gluster nodes are connected through a Cisco Gbit switch with Cat6 cables. The network setup has been evaluated with iperf and consistently yields 978 Mbit/s across the board.
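> A minimal sketch of the create command for such a layout (the volume name "testvol", server names, and brick paths are placeholders, not my real ones):
> 
>     #!/usr/bin/env python
>     # Build the 18-brick list and create a replica-6 volume via the
>     # gluster CLI. With "replica 6", each consecutive group of 6 bricks
>     # in the argument list forms one replica set, so I list one brick
>     # per server before moving to the next export partition:
>     # 18 bricks -> 3 replica sets of 6.
>     import subprocess
> 
>     servers = ["server%d" % i for i in range(1, 7)]         # 6 nodes
>     exports = ["/export/brick%d" % i for i in range(1, 4)]  # 3 per node
>     bricks = ["%s:%s" % (s, e) for e in exports for s in servers]
> 
>     cmd = ["gluster", "volume", "create", "testvol", "replica", "6"] + bricks
>     subprocess.check_call(cmd)
>     subprocess.check_call(["gluster", "volume", "start", "testvol"])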
> When I benchmarked the system with bonnie++, it took ages, and the output showed a "Per Char" write speed of 9 K/sec and latency of around 976 ms.
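> The bonnie++ run was along these lines (a sketch; the mount path, data set size, and labels are assumptions, not the exact flags used):
> 
>     # Point bonnie++ at a directory on the Gluster mount. The -s size
>     # should be at least twice the client's RAM so the page cache does
>     # not mask the real I/O path.
>     import subprocess
>     subprocess.check_call([
>         "bonnie++",
>         "-d", "/mnt/gluster/bench",  # test directory on the mount
>         "-s", "2048",                # data set size in MB (assumed)
>         "-u", "root",                # run as this user
>         "-m", "gluster-client",      # machine label in the report
>     ])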
> The raw speed of the HDDs is fine when simply transferring files between them, but when I copy to a mounted Gluster volume from a client on the network, speed drops to around 24 MB/s.
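> (For completeness: the client mounts the volume with the native FUSE client, roughly like this; server name and paths are placeholders.)
> 
>     # Mount the volume via the GlusterFS native (FUSE) client. Any
>     # server in the pool can be named here; it is only contacted to
>     # fetch the volume layout, after which the client talks to all
>     # bricks directly.
>     import subprocess
>     subprocess.check_call(
>         ["mount", "-t", "glusterfs", "server1:/testvol", "/mnt/gluster"])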
> I hope someone can tell me why Gluster is performing so badly.
> If it helps, I can attach the bonnie++ logs in HTML format for the different nodes and export directories.
> Thanks in advance,
> David