[Gluster-users] Problems getting Glusterfs running on IB

Peter Pelekis pete.pelekis at wsm.com
Fri Jul 25 22:55:57 UTC 2008


Thanks for the responses. I got it to work with libibverbs: it turns
out you can install the OFED libibverbs alongside the InfiniPath
drivers and it works fine. Now I just need to tune the file system.
What settings would you recommend for a system with 4 storage nodes
using ib-verbs? The customer is going to be copying large data files
back and forth between the storage and the compute nodes.
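
For reference, here is the kind of client-side stack I'm planning to
try on top of the stripe volume (just a sketch; the translator names
are from the 1.3 docs, and the sizes are untuned guesses for large
sequential files):

    # write-behind batches small writes into larger network requests
    volume writebehind
      type performance/write-behind
      option aggregate-size 1MB   # guess; worth varying for large files
      subvolumes stripe
    end-volume

    # read-ahead prefetches data for large sequential reads
    volume readahead
      type performance/read-ahead
      option page-size 1MB        # guess
      option page-count 4         # pages prefetched per file
      subvolumes writebehind
    end-volume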

-Peter

Mickey Mazarick wrote:
> All the stuff you need for InfiniBand is in the latest OFED package:
> http://www.openfabrics.org/downloads/OFED/ofed-1.3.1/OFED-1.3.1.tgz
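>
> In particular, GlusterFS's configure script looks for the userspace
> verbs library and headers, not just the kernel modules, so you need
> the devel package too. Something like this (package names assumed
> for CentOS 5; adjust for your distro):
>
>     # userspace verbs library, headers, and diagnostic utilities
>     yum install libibverbs libibverbs-devel libibverbs-utils
>     # re-run configure; the summary should show "Infiniband verbs : yes"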
>
> We are using Mellanox cards and also had to flash them up to the
> newest firmware version for ibverbs to work.
>
> -Mickey Mazarick
>
>
> Peter Pelekis wrote:
>> The InfiniPath drivers don't ship a libibverbs; they call theirs
>> libinfinipath. I'll test out the ib-sdp transport to see if it works.
>>
>> -Peter
>>
>> Raghavendra G wrote:
>>  
>>> Hi Peter,
>>> The GlusterFS ib-verbs transport needs libibverbs to be installed.
>>> Is it installed on your system? Also, in the case of ib-sdp, are you
>>> able to ping the other node using the interface that has ib-sdp
>>> support?
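>>>
>>> A quick way to check (assuming the standard OFED userspace
>>> utilities are installed):
>>>
>>>     # is the userspace verbs library visible to the linker?
>>>     ldconfig -p | grep libibverbs
>>>
>>>     # does the verbs layer see your HCA?
>>>     ibv_devinfo
>>>
>>>     # is the IB (IPoIB) interface reachable from the client?
>>>     ping storage-ib1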
>>>
>>> regards,
>>> On Fri, Jul 25, 2008 at 2:26 AM, Peter Pelekis <pete.pelekis at wsm.com 
>>> <mailto:pete.pelekis at wsm.com>> wrote:
>>>
>>>     I'm having some problems getting GlusterFS to run on my system.
>>>     This is my first time using it, so I'm not too knowledgeable yet.
>>>     The system I'm using has InfiniBand installed, so I would like to
>>>     use IB for the transport for better performance. Here's what I
>>>     have done so far. I first set up GlusterFS to run over TCP on the
>>>     Gig-E network, and that worked fine, so I decided to move to the
>>>     IB network, and that's when my problems started. Here's some
>>>     information about the system:
>>>
>>>     Glusterfs: glusterfs-1.3.9
>>>     Fuse: fuse-2.7.3glfs10
>>>     Cards: Qlogic QLE7240
>>>     Drivers: InfiniPath-2.2
>>>     OS: Centos5.2
>>>     Kernel: 2.6.18-53
>>>     4 Storage nodes
>>>     46 Compute nodes
>>>
>>>     When I first ran configure on GlusterFS, I got the following
>>>     summary:
>>>
>>>     GlusterFS configure summary
>>>     ===========================
>>>     Fuse client        : yes
>>>     Infiniband verbs   : no
>>>     epoll IO multiplex : yes
>>>
>>>     If I look at lsmod, I do have an ib_uverbs module installed and
>>>     working, but the configure script doesn't pick it up. I decided
>>>     to move on and use ib-sdp instead. I modified all my client and
>>>     server volume files to use ib-sdp for the transport, then started
>>>     the server on my 4 storage nodes, and it started fine. I then
>>>     went to one of my compute nodes to mount the volume:
>>>     glusterfs --transport=ib-sdp -s storage-ib1 /glusterfs
>>>
>>>     It returned the following error:
>>>     glusterfs: could not open specfile
>>>
>>>     Then I tried specifying the volume file directly to see if that
>>>     would work:
>>>     glusterfs --spec-file=/usr/local/etc/glusterfs/glusterfs-client.vol /glusterfs
>>>
>>>     That returned fine, so I thought everything was working, until I
>>>     ran df and got the following error:
>>>     df: `/glusterfs': Transport endpoint is not connected
>>>
>>>     I had it log the error it's getting when I try to mount the
>>>     filesystem:
>>>     2008-07-24 17:28:01 E [ib-sdp-client.c:141:ib_sdp_connect] trans: error: not in progress - trace: Network is unreachable
>>>     2008-07-24 17:28:01 W [client-protocol.c:332:client_protocol_xfer] trans: not connected at the moment to submit frame type(2) op(4)
>>>     2008-07-24 17:28:01 E [client-protocol.c:4538:client_getspec_cbk] trans: no proper reply from server, returning ENOTCONN
>>>
>>>     That's where I am now. I looked around the devel list and didn't
>>>     find anything on using the QLogic drivers, just the OFED IB
>>>     drivers. Here are my configuration files. I would like to use
>>>     ib-verbs if that's possible; the performance should be much
>>>     better than ib-sdp (see the sketch after the volume files below).
>>>
>>>       volume posix-stripe
>>>         type storage/posix
>>>         option directory /export/glusterfs
>>>       end-volume
>>>
>>>       volume server
>>>         type protocol/server
>>>         option transport-type ib-sdp/server
>>>         option auth.ip.posix-stripe.allow 192.168.2.*
>>>         subvolumes posix-stripe
>>>       end-volume
>>>
>>>       volume client-stripe-1
>>>         type protocol/client
>>>         option transport-type ib-sdp/client
>>>         option remote-host storage-ib1.cluster
>>>         option remote-subvolume posix-stripe
>>>       end-volume
>>>
>>>       volume client-stripe-2
>>>         type protocol/client
>>>         option transport-type ib-sdp/client
>>>         option remote-host storage-ib2.cluster
>>>         option remote-subvolume posix-stripe
>>>       end-volume
>>>
>>>       volume client-stripe-3
>>>         type protocol/client
>>>         option transport-type ib-sdp/client
>>>         option remote-host storage-ib3.cluster
>>>         option remote-subvolume posix-stripe
>>>       end-volume
>>>
>>>       volume client-stripe-4
>>>         type protocol/client
>>>         option transport-type ib-sdp/client
>>>         option remote-host storage-ib4.cluster
>>>         option remote-subvolume posix-stripe
>>>       end-volume
>>>
>>>       volume stripe
>>>         type cluster/stripe
>>>         option block-size *:2MB  # all files are striped with a 2MB block size
>>>         subvolumes client-stripe-1 client-stripe-2 client-stripe-3 client-stripe-4
>>>       end-volume
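>>>
>>>     If I do get an ib-verbs build working, my understanding is that
>>>     only the transport-type lines would need to change, e.g.
>>>     (untested, and the server side would use ib-verbs/server):
>>>
>>>       volume client-stripe-1
>>>         type protocol/client
>>>         option transport-type ib-verbs/client  # instead of ib-sdp/client
>>>         option remote-host storage-ib1.cluster
>>>         option remote-subvolume posix-stripe
>>>       end-volume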
>>>
>>>     -Peter
>>>
>>>     --
>>>     Peter Pelekis           5444 Napa Street
>>>     Western Scientific      San Diego CA, 92110
>>>     Senior Systems Engineer         www.wsm.com <http://www.wsm.com>
>>>     pete.pelekis at wsm.com <mailto:pete.pelekis at wsm.com>            Fax
>>>     619-220-6590
>>>     Phone 619-220-6580 x212 Toll Free 800-443-6699
>>>     GSA# GS-35F-5009H
>>>
>>>     Visit us at:
>>>
>>>     "I'm a great believer in luck, and I find the harder I work the
>>>     more I have of it."
>>>     -Thomas Jefferson
>>>
>>>     "If everything seems under control, you're not going fast enough."
>>>     -Mario Andretti
>>>
>>>
>>>     _______________________________________________
>>>     Gluster-users mailing list
>>>     Gluster-users at gluster.org <mailto:Gluster-users at gluster.org>
>>>     http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>>
>>>
>>>
>>>
>>> -- 
>>> Raghavendra G
>>>
>>> A centipede was happy quite, until a toad in fun,
>>> Said, "Pray, which leg comes after which?",
>>> This raised his doubts to such a pitch,
>>> He fell flat into the ditch,
>>> Not knowing how to run.
>>> -Anonymous


-- 
Peter Pelekis  		5444 Napa Street
Western Scientific 	San Diego CA, 92110
Senior Systems Engineer		www.wsm.com
pete.pelekis at wsm.com		Fax 619-220-6590
Phone 619-220-6580 x212 Toll Free 800-443-6699
GSA# GS-35F-5009H

Visit us at:

"I'm a great believer in luck, and I find the harder I work the more I have of it."
-Thomas Jefferson

"If everything seems under control, you're not going fast enough."
-Mario Andretti




