[Gluster-users] adding to gluster a node with 24TB of disk and 16GB RAM

admin iqtc iqtcub at gmail.com
Thu Dec 30 11:51:25 UTC 2010


Sorry Mark, but I don't understand exactly what you need. Could you give me
an example of the information you're asking for?

Regarding bonding, don't worry: all five current machines are bonded (1 Gbit
per interface) to the switch, and the new machine would be installed the
same way.
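As an aside, bonding status on a Linux server can be confirmed from the kernel's bonding status file. Below is a minimal illustrative sketch (not from this thread); the path `/proc/net/bonding/bond0` and the interface names `eth0`/`eth1` are assumptions for the example:

```python
# Illustrative sketch: parse the kernel's bonding status text to confirm
# a bond's mode and its slave interfaces. The bond/interface names used
# in the sample below (bond0, eth0, eth1) are assumptions.

def parse_bonding_status(text: str) -> dict:
    """Extract the bonding mode and slave interfaces from
    /proc/net/bonding-style output."""
    info = {"mode": None, "slaves": []}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Bonding Mode:"):
            info["mode"] = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            info["slaves"].append(line.split(":", 1)[1].strip())
    return info

# On a live server one would read the real file, e.g.:
#     with open("/proc/net/bonding/bond0") as f:
#         status = parse_bonding_status(f.read())
sample = """\
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Slave Interface: eth0
Slave Interface: eth1
"""
status = parse_bonding_status(sample)
print(status["mode"])    # prints the mode line from the sample
print(status["slaves"])  # prints ['eth0', 'eth1']
```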

That switch load is from the HPC clusters to the gluster servers. The figures
are from the trunking interface on the switch. Our network topology is as
follows: each gluster server (and the new one) is connected with bonding to an
L2 switch, and from that switch 4x1 Gbit cables go to an L3 switch. Both
switches are configured to trunk those four cables. The traffic load I quoted
is from the L3 switch.
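To put those numbers in perspective, here is a quick back-of-the-envelope check (illustrative only) using the averages quoted later in this thread (64 Mbit/s in, 43 Mbit/s out) against the 4x1 Gbit trunk between the two switches:

```python
# Sanity check (illustrative): compare the measured switch averages from
# this thread against the nominal 4x1 Gbit trunk capacity.

def mbit_to_mbyte(mbit_per_s: float) -> float:
    """Convert megabits per second to megabytes per second."""
    return mbit_per_s / 8.0

trunk_capacity_mbit = 4 * 1000   # 4x1 Gbit trunk between the L2 and L3 switches
write_avg_mbit = 64              # average IN (writes), as reported below
read_avg_mbit = 43               # average OUT (reads), as reported below

total_mbit = write_avg_mbit + read_avg_mbit
utilization = total_mbit / trunk_capacity_mbit

print(f"writes: {mbit_to_mbyte(write_avg_mbit):.1f} MB/s")   # 8.0 MB/s
print(f"reads:  {mbit_to_mbyte(read_avg_mbit):.1f} MB/s")    # 5.4 MB/s
print(f"trunk utilization: {utilization:.1%}")               # about 2.7%
```

By this rough measure the trunk is nowhere near saturated, which matches the observation that the trunking doesn't need expanding yet.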

We may expand that trunking some day, but for now we aren't having any
problems.


2010/12/28 Mark "Naoki" Rogers <mrogers at valuecommerce.co.jp>

> Hi,
> Your five machines should get you raw speeds of at least 300 MB/s sequential
> and 300-500 random IOPS; your file-system cache alters things depending on
> access patterns. Without knowing about those patterns I can't guess at
> the most beneficial disk/memory ratios for you. If possible, run some
> synthetic benchmarks for baselining and then try to benchmark your
> application. Even if it's only a limited benchmark, that's OK: you can still
> extrapolate from there.
> The first thing you might hit, though, could be the 1 Gbit interfaces, so
> keep an eye on those and perhaps have a plan to bond them; get ready to
> think about 10G on the larger machine if needed.
> Right now it seems the switch load is light; is that per port to the
> storage bricks?
> On 12/28/2010 05:38 PM, admin iqtc wrote:
>> Hi,
>> sorry for not giving more information on the first mail.
>> The setup would be straight distributed. The disks are SATA2 7200 RPM. At
>> the moment, the 5 machines we're running have 5 disks of 1 TB each (4 TB
>> with RAID5). The new machine would have 12 disks of 2 TB, also with RAID5,
>> so 23 TB approx.
>> We're using gluster as storage for an HPC cluster. That means data gets
>> copied to and from gluster all the time. For example, looking at the
>> traffic on the switch, the average is 64 Mbit/s IN (that is, writing) and
>> 43 Mbit/s OUT (that is, reading). That is across the 5 machines.
>> Is this enough?
>> Thanks!
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
