[Gluster-users] GlusterFS Performance

Mickey Mazarick mic at digitaltadpole.com
Thu Jul 9 21:17:44 UTC 2009


Just a note: we have seen a pretty significant increase in speed with
this latest 2.0.3 release. Doing a test read over AFR we are seeing
speeds between 200-320 MB a second (over InfiniBand, ib-verbs).
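
For context, the AFR setup here is just two protocol/client volumes
stacked under cluster/replicate. A minimal sketch of such a client
volfile (hostnames and volume names are placeholders, not our actual
config):

  volume brick1
    type protocol/client
    option transport-type ib-verbs   # InfiniBand verbs transport
    option remote-host server1
    option remote-subvolume brick
  end-volume

  volume brick2
    type protocol/client
    option transport-type ib-verbs
    option remote-host server2
    option remote-subvolume brick
  end-volume

  volume afr0
    type cluster/replicate           # AFR: mirror across both bricks
    subvolumes brick1 brick2
  end-volume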

This is with direct I/O disabled, too. Oddly, putting performance
translators on the clients made no difference.
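
(By "performance translators" I mean stacking things like read-ahead
and io-cache on top of the replicate volume on the client side. The
snippet below is only an illustration; the option values are made up
and may differ in your build:)

  volume readahead
    type performance/read-ahead
    option page-count 4              # pages to prefetch per fd
    subvolumes afr0
  end-volume

  volume iocache
    type performance/io-cache
    option cache-size 64MB           # client-side read cache
    subvolumes readahead
  end-volume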

We queued up 10 servers to read a single file simultaneously and got
~30-50 MB a sec from each client, totaling up to ~400 MB a sec on a
single file (read from 2 servers, by 10 servers).
We are really happy with these numbers for a virtual cluster; most
single drives won't read at those speeds.
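
If anyone wants to repeat the read test, something along these lines
on each client is enough (mount point and file name are hypothetical);
dd prints the effective throughput when it finishes:

  # drop the page cache first so we measure the network, not local RAM
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/glusterfs/bigfile of=/dev/null bs=1M count=4096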

Thanks!
-Mic


Hiren Joshi wrote:
>  
>
>   
>> -----Original Message-----
>> From: Stephan von Krawczynski [mailto:skraw at ithnet.com] 
>> Sent: 09 July 2009 13:50
>> To: Hiren Joshi
>> Cc: Liam Slusser; gluster-users at gluster.org
>> Subject: Re: [Gluster-users] GlusterFS Performance
>>
>> On Thu, 9 Jul 2009 09:33:59 +0100
>> "Hiren Joshi" <josh at moonfruit.com> wrote:
>>
>>     
>>>  
>>>
>>>       
>>>> -----Original Message-----
>>>> From: Stephan von Krawczynski [mailto:skraw at ithnet.com] 
>>>> Sent: 09 July 2009 09:08
>>>> To: Liam Slusser
>>>> Cc: Hiren Joshi; gluster-users at gluster.org
>>>> Subject: Re: [Gluster-users] GlusterFS Performance
>>>>
>>>> On Wed, 8 Jul 2009 10:05:58 -0700
>>>> Liam Slusser <lslusser at gmail.com> wrote:
>>>>
>>>>         
>>>>> You have to remember that when you are writing with NFS you're
>>>>> writing to one node, whereas your gluster setup below is copying
>>>>> the same data to two nodes, so you're doubling the bandwidth.
>>>>> Don't expect NFS-like performance on writing with multiple
>>>>> storage bricks. However, read performance should be quite good.
>>>>> liam
>>>>>           
>>>> Do you think this problem can be solved by using 2 storage 
>>>> bricks on two
>>>> different network cards on the client?
>>>>         
>>> I'd be surprised if the bottleneck here was the network. I'm
>>> testing on a Xen network but I've only been given one eth per
>>> slice.
>>
>> Do you mean your clients and servers are virtual Xen installations
>> (on the same physical box)?
>>
>
> They are on different boxes and using different disks (don't ask).
> This seemed like a good way to evaluate, as I set up an NFS server on
> the same equipment to get relative timings. The plan is to roll it
> out onto new physical boxes in a month or two....
>
>
>   
>> Regards,
>> Stephan
>>
>>
>>     
>

