[Gluster-users] Typical setup questions

Matt Weil mweil at genome.wustl.edu
Thu Aug 30 14:50:39 UTC 2012


Guys,

Thanks for the responses; it is appreciated.

On 8/28/12 5:28 PM, Bryan Whitehead wrote:
> I've found pricing for InfiniBand switches/cards to be cheaper than
> 10G cards/switches, with the addition of being 4X faster.

I will look into this, but putting all of our compute on InfiniBand may 
be cost-prohibitive.
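
One thing that keeps InfiniBand on the table for me is that Gluster can 
use the fabric directly through its RDMA transport, not just IPoIB. 
Roughly what I have in mind, going by the 3.3-era CLI docs (hostnames 
and brick paths below are made up, and I have not run this myself):

    # create a replicated volume that offers both tcp and rdma transports
    gluster volume create genvol replica 2 transport tcp,rdma \
        gfs01:/export/brick1 gfs02:/export/brick1 \
        gfs03:/export/brick1 gfs04:/export/brick1
    gluster volume start genvol

    # native-client mount over rdma
    mount -t glusterfs -o transport=rdma gfs01:/genvol /mnt/genvol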

>
> On Tue, Aug 28, 2012 at 11:44 AM, Joe Topjian <joe at topjian.net> wrote:
>> Hi Matt,
>>
>> On Tue, Aug 28, 2012 at 9:29 AM, Matt Weil <mweil at genome.wustl.edu> wrote:
>>>
>>> Since we are on the subject of hardware, what would be the perfect fit
>>> for a Gluster brick? We were looking at a PowerEdge C2100 rack server.
>>
>>
>> Just a note: the C2100 has been superseded by the Dell R720xd. Although the
>> R720 is not part of the C-series, it's their official replacement.

I looked at these, but they only hold eight 3.5" drives versus the twelve 
plus two internal on the C2100.  I will ask our rep about this.

Do you typically run hot spares or just keep cold spares handy?
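
For context, by hot spare I mean a disk that sits in the chassis and gets 
rebuilt onto automatically when a member fails. On the software-RAID side 
that would look something like this (a minimal md sketch; device names 
are made up, and on a C2100 this would more likely live in the RAID 
controller):

    # brick array with one hot spare (11 disks: 10 active + 1 spare)
    mdadm --create /dev/md0 --level=6 --raid-devices=10 \
        --spare-devices=1 /dev/sd[b-l]

    # or add a spare to an existing, healthy array
    mdadm --add /dev/md0 /dev/sdm
    mdadm --detail /dev/md0    # the extra disk should show up as "spare"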

>>
>>>
>>> During testing I found it pretty easy to saturate 1 Gig network links.
>>> This was also the case when multiple links were bonded together.  Are there
>>> any cheap 10 Gig switch alternatives that anyone would suggest?
>>
>>
>> While not necessarily cheap, I've had great luck with Arista 7050 switches.

I am also looking at Dell's new Force10 switches.  I wonder how they 
compare price-wise.
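
Coming back to the bonded 1 Gig links above: what I mean there is the 
stock Linux bonding driver, roughly like the following on a RHEL-style 
box (a sketch only; interface names, addresses, and the 802.3ad mode are 
illustrative):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=10.0.0.11
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

Worth noting that with 802.3ad hashing a single TCP stream still rides 
only one slave link, so an individual client can hit the 1 Gig ceiling 
even though aggregate throughput goes up.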

>>
>> We implement them in sets of two, linked together. We then use dual-port
>> 10Gb NICs and connect each NIC to each switch. It gives multiple layers of
>> redundancy plus a theoretical 20Gb of throughput per server.
>>
>> Thanks,
>> Joe
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
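
The two-switch layout with dual-port NICs sounds like what we would want 
here. For what it is worth, the raw-throughput check I would run against 
a setup like that is just iperf between a client and a brick, roughly 
(iperf2 syntax; hostnames are placeholders):

    # on the brick
    brick01$ iperf -s

    # on a client: four parallel streams for 30 seconds
    client01$ iperf -c brick01 -P 4 -t 30

Parallel streams matter here, since a single stream will typically stick 
to one of the two 10Gb ports depending on the bonding/hash mode.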
