[Gluster-users] Performance gluster 3.2.5 + QLogic Infiniband
Bryan Whitehead
driver at megahappy.net
Wed Apr 25 05:10:19 UTC 2012
I'm confused, you said "everything works ok (IPoIB)" but later you
state you are using RDMA? Can you post details of your setup? Maybe
the output from gluster volume info <volumename>?
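For anyone following along, the requested command and the general shape of its output look roughly like this (the volume name, brick paths, and values below are illustrative, not from this thread):

```shell
# Show the configuration of one volume, including its transport type
gluster volume info scratch
# Illustrative output for a 3.2.x volume (values are made up):
#   Volume Name: scratch
#   Type: Distribute
#   Status: Started
#   Number of Bricks: 2
#   Transport-type: rdma
#   Bricks:
#   Brick1: server1:/export/brick1
#   Brick2: server2:/export/brick1
```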
On Sat, Apr 21, 2012 at 1:40 AM, Michael Mayer <michael at mayer.cx> wrote:
> Hi all,
>
> thanks for your suggestions,
>
> I think I have "solved" the performance issue now. I had a few too many
> kernel patches included. I am back to the stock RHEL 5.8 kernel with stock
> QLogic OFED and everything works ok (IPoIB). My original intent was to
> explore cachefs on RHEL5 by building a 2.6.32 kernel, but while cachefs
> worked like a treat, gluster performance was as bad as reported
> previously - so I will go without cachefs for now and reintroduce it in
> an OS upgrade later on.
>
> I even have a nicely working rdma setup now and - using that - performance
> is consistently 900+ MB/s.
>
> Since I have two volumes exported by the same bricks, it seems I can only
> get one of them to use RDMA; the other then refuses to mount unless I
> mount it without rdma. That is not a real problem for now, as the second
> volume is only used for backups.
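For reference, when a volume has been created with both transports, the client-side transport can reportedly be selected at mount time by appending `.rdma` to the volume name. A hedged sketch, assuming two volumes on the same bricks (hostnames, volume names, and mount points are illustrative):

```shell
# Mount the first volume over RDMA by appending ".rdma" to its name
# (assumes the volume was created with transport tcp,rdma):
mount -t glusterfs server1:/scratch.rdma /mnt/scratch

# Mount the second volume over plain TCP (e.g. IPoIB) instead:
mount -t glusterfs server1:/backup /mnt/backup
```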
>
> Michael,
>
> On 04/12/2012 01:13 AM, Fabricio Cannini wrote:
>
> Hi there
>
> The only time I set up a gluster "distributed scratch" like Michael is
> doing (3.0.5 Debian packages), I too chose IPoIB, simply because I could
> not get rdma working at all.
> Time was short and IPoIB "just worked" well enough for our demand at the
> time, so I didn't look into this issue. Plus, pinging and ssh'ing into a
> node through the IB interface comes in handy when diagnosing and fixing
> networking issues.
>
> On Wednesday, April 11, 2012, Sabuj Pattanayek<sabujp at gmail.com>
> wrote:
>> I wonder if it's possible to have both rdma and ipoib served by a
>> single glusterfsd so I can test this? I guess so, since it's just a
>> tcp mount?
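On Sabuj's question: a volume can be created with both transports at once, which would let a single glusterfsd serve tcp and rdma clients side by side. A hedged sketch of the 3.x CLI (hostnames, brick paths, and volume names are illustrative):

```shell
# Create a volume that accepts both transports:
gluster volume create testvol transport tcp,rdma \
    server1:/export/brick1 server2:/export/brick1
gluster volume start testvol

# tcp and rdma clients can then mount the same volume:
mount -t glusterfs server1:/testvol /mnt/tcp        # tcp (default)
mount -t glusterfs server1:/testvol.rdma /mnt/rdma  # rdma
```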
>>
>> On Wed, Apr 11, 2012 at 1:43 PM, Harry Mangalam <harry.mangalam at uci.edu>
>> wrote:
>>> On Tuesday 10 April 2012 15:47:08 Bryan Whitehead wrote:
>>>> with my infiniband setup I found my performance was much better by
>>>> setting up a TCP network over infiniband and then using pure tcp as
>>>> the transport with my gluster volume. For the life of me I couldn't
>>>> get rdma to beat tcp.
>>>
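The IPoIB approach Bryan describes amounts to giving each InfiniBand interface an IP address and then building the volume with the plain tcp transport over those addresses. A hedged sketch (module name aside, all addresses, paths, and names are illustrative):

```shell
# Load the IPoIB driver and give the IB interface an IP address:
modprobe ib_ipoib
ip addr add 192.168.10.1/24 dev ib0
ip link set ib0 up

# Create the volume over tcp, addressing bricks by their IPoIB IPs:
gluster volume create scratch transport tcp \
    192.168.10.1:/export/brick1 192.168.10.2:/export/brick1
gluster volume start scratch
```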
>>> Thanks for that data point, Bryan.
>>>
>>> Very interesting. Is this a common experience? The RDMA experience has
>>> not been a very smooth one for me, and doing everything with IPoIB would
>>> save a lot of headaches, especially if it's also higher performance.
>>>
>>> hjm
>>>
>>> --
>>>
>>> Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
>>>
>>> [ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
>>>
>>> 415 South Circle View Dr, Irvine, CA, 92697 [shipping]
>>>
>>> MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
>>>
>>> --
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>
>