[Gluster-users] glusterfs over rdma ... not.
Harry Mangalam
harry.mangalam at uci.edu
Sat Nov 5 01:06:36 UTC 2011
OK - I've finished some tests over tcp and ironed out a lot of problems.
rdma is next; it should be a snap now...
[I must admit that this is my first foray into the land of IB, so some
of the following may be obvious to a non-naive admin.]
Except that while I can create and start the volume with rdma as a
transport:
==================================
root@pbs3:~
622 $ gluster volume info glrdma
Volume Name: glrdma
Type: Distribute
Status: Started
Number of Bricks: 4
Transport-type: tcp,rdma
Bricks:
Brick1: pbs1:/data2
Brick2: pbs2:/data2
Brick3: pbs3:/data2
Brick4: pbs3:/data
==================================
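For reference (and hedging a bit, since I'm reconstructing this from
memory), a dual-transport volume like the one above would have been
created with something along these lines:
==================================
# sketch of the create/start sequence; 'transport tcp,rdma'
# matches the Transport-type shown in the volume info above
gluster volume create glrdma transport tcp,rdma \
    pbs1:/data2 pbs2:/data2 pbs3:/data2 pbs3:/data
gluster volume start glrdma
==================================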
I can't mount the damn thing. This seems to be a fairly frequent
problem, according to Google. Again, all servers and clients are
Ubuntu 10.04.3/64-bit, running self-compiled GlusterFS 3.3b1.
The IB device is:
==================================
02:00.0 InfiniBand: Mellanox Technologies MT25204 [InfiniHost III Lx HCA] (rev a0)
==================================
dmesg says:
==================================
$ dmesg |grep ib_
[ 12.592406] ib_mthca: Mellanox InfiniBand HCA driver v1.0 (April 4, 2008)
[ 12.592411] ib_mthca: Initializing 0000:02:00.0
[ 12.592777] ib_mthca 0000:02:00.0: PCI INT A -> Link[LNKD] -> GSI 17 (level, low) -> IRQ 17
[ 12.592790] ib_mthca 0000:02:00.0: setting latency timer to 64
[ 14.996462] ib_mthca 0000:02:00.0: HCA FW version 1.0.800 is old (1.2.000 is current).
[ 14.996465] ib_mthca 0000:02:00.0: If you have problems, try updating your HCA FW.
[ 14.996678] ib_mthca 0000:02:00.0: irq 58 for MSI/MSI-X
[ 14.996686] ib_mthca 0000:02:00.0: irq 59 for MSI/MSI-X
[ 14.996692] ib_mthca 0000:02:00.0: irq 60 for MSI/MSI-X
==================================
(I did see that it says to update the firmware, but before I do
that...)
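If I do end up reflashing, I assume mstflint can at least report what's
currently on the card first (untested here, and it assumes the mstflint
package is available on 10.04):
==================================
# query the current firmware image on the HCA at PCI address 02:00.0
mstflint -d 02:00.0 query
==================================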
The ibverbs packages are installed:
==================================
642 $ dpkg -l |grep verb
ii ibverbs-utils 1.1.3-2ubuntu1
ii libibverbs-dev 1.1.3-2ubuntu1
ii libibverbs1 1.1.3-2ubuntu1
ii libipathverbs-dev 1.1-1
ii libipathverbs1 1.1-1
==================================
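Since ibverbs-utils is there, the verbs layer itself can be
sanity-checked; I'd expect these to list the mthca device and show the
port state:
==================================
# list verbs-capable devices, then dump per-port details
# (state, GUIDs, MTU) for each of them
ibv_devices
ibv_devinfo
==================================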
But when I try to mount it, from either a completely different client
or from one of the server nodes, I don't get far:
==================================
root@pbs:/
413 $ mount -t glusterfs -o transport=rdma pbs3:/glrdma /mnt
Usage: mount.glusterfs <volumeserver>:<volumeid/volumeport> -o <options> <mountpoint>
Options:
man 8 mount.glusterfs
To display the version number of the mount helper:
mount.glusterfs --version
==================================
I get the same message if I move the '-o transport=rdma' to after the
server:
==================================
root@pbs:/
414 $ mount -t glusterfs pbs3:/glrdma -o transport=rdma /mnt
Usage: mount.glusterfs <volumeserver>:<volumeid/volumeport> -o <options> <mountpoint>
Options:
man 8 mount.glusterfs
To display the version number of the mount helper:
mount.glusterfs --version
==================================
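One recipe that keeps turning up in searches (though I haven't been
able to confirm it against 3.3b1) is that a tcp,rdma volume is supposed
to expose its rdma flavor via a '.rdma' suffix on the volume name, with
no transport option at all:
==================================
# purportedly selects the rdma transport via the volume name itself
mount -t glusterfs pbs3:/glrdma.rdma /mnt
==================================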
So, what is the rdma magic that will let me do this?
hjm
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
--
This signature has been OCCUPIED!