[Gluster-users] self healing with sharding

Lindsay Mathieson lindsay.mathieson at gmail.com
Sat Jul 9 02:46:46 UTC 2016


On 8/07/2016 9:40 PM, Gandalf Corvotempesta wrote:
> How did you measure the performance? I would like to test in the same
> way, so that results are comparable.

Not particularly scientific. I have four main tests I run:

1.    CrystalDiskMark in a Windows VM. This lets me see IOPS as
experienced by the VM. I'm suspicious of standard disk benchmarks
though; they don't really reflect day-to-day usage. A rough fio
equivalent for comparison is sketched after this list.

2.    The build server for our enterprise product - a fairly large
command-line build. It's a real-world workload that exercises random
reads/writes fairly well.

3.    Starting up and running standard applications - Eclipse, Office
365, Outlook etc. More subjective, but it does matter.

4.    Multiple simultaneous VM starts, a good stress test.
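
If anyone wants a comparable Linux-side number, a rough fio equivalent
of the CrystalDiskMark 4K random test would be something like the
following - the target path, size and runtime are placeholders, not my
exact job:

    # 75/25 4K random read/write mix with direct I/O, reported as IOPS
    # /mnt/test/fio.dat, size and runtime are placeholders - adjust to taste
    fio --name=4k-randrw --filename=/mnt/test/fio.dat --size=4G \
        --rw=randrw --rwmixread=75 --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 \
        --runtime=60 --time_based --group_reporting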


> Which network/hardware/server topology are you using?

3 Compute Servers - combined VM hosts and Gluster nodes, for a
replica 3 Gluster volume.
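
The volume itself is a plain replica 3 with sharding on - the volume
name, hostnames, brick paths and shard size below are illustrative
rather than my exact settings:

    # Sketch only - names, paths and shard size are placeholders
    gluster volume create datastore1 replica 3 \
        vna:/tank/vmdata/brick vnb:/tank/vmdata/brick vng:/tank/vmdata/brick
    # With sharding, self-heal only touches the shards that changed,
    # not the whole multi-GB VM image
    gluster volume set datastore1 features.shard on
    gluster volume set datastore1 features.shard-block-size 64MB
    gluster volume start datastore1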

VNA:
- Dual Xeon E5-2660 2.2GHz
- 64GB ECC RAM
- 2 x 1Gb bond
- 4 x 3TB WD Red in ZFS RAID10

VNB, VNG:
- Xeon E5-2620 2.0GHz
- 64GB RAM
- 3 x 1Gb bond
- 4 x 3TB WD Red in ZFS RAID10
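
Each node's brick lives on a ZFS striped-mirror ("RAID10") pool built
from the four Reds - roughly like this, with the pool name and device
paths as placeholders:

    # Two mirrored pairs striped together = RAID10 layout
    # ashift=12 for the 4K-sector Reds; pool/device names are placeholders
    zpool create -o ashift=12 tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd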

All bonds are LACP balance-tcp on a dedicated switch. VNA is supposed
to have 3 x 1Gb as well, but we had driver problems with the third card
and I haven't got around to fixing it :(

Internal & external traffic share the bond. External traffic is minimal.


-- 
Lindsay Mathieson


