[Gluster-users] Gluster 3.3.0 on CentOS 6 - GigabitEthernet vs InfiniBand

Bartek Krawczyk bbartlomiej.mail at gmail.com
Thu Oct 18 06:38:43 UTC 2012


Hi, we've been assembling a small cluster using a Dell M1000e and a few
M620s. We've decided to use GlusterFS as our storage solution. Since
our setup includes an InfiniBand switch, we want to run GlusterFS over
the rdma transport.
I've been doing some iozone benchmarking and the results I got are
really strange: there's almost no difference between Gigabit Ethernet,
InfiniBand IPoIB and InfiniBand RDMA.
To test InfiniBand IPoIB I added peers using the IP addresses of the
ibX interfaces and used the tcp transport for the volumes.
To test InfiniBand RDMA I also added peers using the IP addresses of
the ibX interfaces, but with the rdma transport (it was the only
transport on that volume). I mounted it using
"mount -t glusterfs masterib:/vol4.rdma /home/test".
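
For reference, the setup was roughly as follows (the brick path, the
tcp volume name vol3 and the node01ib hostname are only placeholders,
not the exact values from our configuration; vol4 and the mount
command are the real ones):

[root@master ~]# gluster peer probe node01ib   # node01's IPoIB address
[root@master ~]# gluster volume create vol3 transport tcp masterib:/export/brick node01ib:/export/brick
[root@master ~]# gluster volume create vol4 transport rdma masterib:/export/brick node01ib:/export/brick
[root@master ~]# gluster volume start vol4
[root@node01 ~]# mount -t glusterfs masterib:/vol4.rdma /home/test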

Please find attached two plots comparing these results with raw disk
iozone benchmarks. The first shows the average of all iozone -a
results for each test; the second shows the maximum iozone -a value
for each type of connection. As you can see, raw disk performance
isn't the bottleneck.
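
In case it's useful, each benchmark was just iozone's automatic mode
run on the mount point, along these lines (the -R/-b report file name
is only an example, and the raw-disk runs used a local directory
instead of the GlusterFS mount):

[root@node01 ~]# cd /home/test
[root@node01 ~]# iozone -a -R -b vol4-rdma.xls
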
The GlusterFS 3.3.0 documentation says that the rdma transport isn't
well supported in the 3.3.0 release. But then why is there almost no
difference between InfiniBand IPoIB (10 Gbps) and Gigabit Ethernet
(1 Gbps)?

I see GlusterFS 3.3.1 was released on the 16th of October. I'll try
upgrading, but I don't see any significant changes to RDMA.

Regards, and feel free to chime in with your suggestions.

PS: here is some InfiniBand diagnostic output showing that the fabric
itself should be working correctly:

[root@node01 ~]# ibv_rc_pingpong masterib
  local address:  LID 0x0001, QPN 0x4c004a, PSN 0xd90c3e, GID ::
  remote address: LID 0x0002, QPN 0x64004a, PSN 0x64d15d, GID ::
8192000 bytes in 0.01 seconds = 8733.48 Mbit/sec
1000 iters in 0.01 seconds = 7.50 usec/iter
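
For completeness, raw IPoIB TCP throughput between the nodes can also
be sanity-checked with a plain iperf run (assuming iperf is installed
on both ends; that output isn't included here):

[root@master ~]# iperf -s
[root@node01 ~]# iperf -c masterib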

[root@node01 ~]# ibhosts
Ca	: 0x0002c90300384bc0 ports 2 "node02 mlx4_0"
Ca	: 0x0002c90300385450 ports 2 "master mlx4_0"
Ca	: 0x0002c90300385150 ports 2 "node01 mlx4_0"

[root@node01 ~]# ibv_devinfo
hca_id:	mlx4_0
	transport:			InfiniBand (0)
	fw_ver:				2.10.2132
	node_guid:			0002:c903:0038:5150
	sys_image_guid:			0002:c903:0038:5153
	vendor_id:			0x02c9
	vendor_part_id:			4099
	hw_ver:				0x0
	board_id:			DEL0A10210018
	phys_port_cnt:			2
		port:	1
			state:			PORT_ACTIVE (4)
			max_mtu:		2048 (4)
			active_mtu:		2048 (4)
			sm_lid:			4
			port_lid:		1
			port_lmc:		0x00
			link_layer:		InfiniBand

		port:	2
			state:			PORT_DOWN (1)
			max_mtu:		2048 (4)
			active_mtu:		2048 (4)
			sm_lid:			0
			port_lid:		0
			port_lmc:		0x00
			link_layer:		InfiniBand


-- 
Bartek Krawczyk
network and system administrator
-------------- next part --------------
A non-text attachment was scrubbed...
Name: average.jpg
Type: image/jpeg
Size: 71591 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20121018/8b796715/attachment.jpg>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: max.jpg
Type: image/jpeg
Size: 79196 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20121018/8b796715/attachment-0001.jpg>

