[Gluster-users] Reply: >1PB

任强 renqiang at 360buy.com
Tue Feb 21 05:23:44 UTC 2012


Hi, Nathan!
	You said: "As the number of nodes grows, the chance of losing a node
becomes higher."
    What does 'losing a node' mean? And what would you suggest for the
number of nodes?

-----Original Message-----
From: Song [mailto:gluster at 163.com] 
Sent: February 21, 2012 10:51
To: renqiang; 'Nathan Stratton'
Cc: gluster-users at gluster.org; 'Andrew Holway'
Subject: RE: [Gluster-users] >1PB

Total capacity is > 1PB; each node has 12 disks and the capacity of each disk
is 1.8TB.
We haven't built such a large cluster yet. GlusterFS performance is being
tested on a cluster with 4 nodes.
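
(For rough sizing, taking the 240-node, replica-3 example quoted below and
ignoring RAID and filesystem overhead: 240 nodes * 12 disks * 1.8 TB is about
5.2 PB raw, which 3-way replication reduces to roughly 1.7 PB usable.)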

-----Original Message-----
From: 任强 [mailto:renqiang at 360buy.com]
Sent: Monday, February 20, 2012 7:40 PM
To: 'Nathan Stratton'; 'Song'
Cc: gluster-users at gluster.org; 'Andrew Holway'
Subject: 答复: [Gluster-users] >1PB


        Can you tell me the total capacity of your GlusterFS deployment, how
many disks each node has, and your cluster's speed?


-----Original Message-----
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Nathan Stratton
Sent: February 17, 2012 22:35
To: Song
Cc: gluster-users at gluster.org; 'Andrew Holway'
Subject: Re: [Gluster-users] >1PB

On Fri, 17 Feb 2012, Song wrote:

> Hi,
>
> We have the same question. How many nodes are suitable in one GlusterFS cluster?
>
> For example, volume type: DHT + replication(=3)
> There are 240 server nodes. Server information is:
> disk: 12*1.8T
> network: Gigabit Ethernet
>
> 40 nodes * 6 clusters?
> 60 nodes * 4 clusters?
> 80 nodes * 3 clusters?
> 120 nodes * 2 clusters?
> 240 nodes * 1 cluster?
>
> What are the limitations to scaling GlusterFS to a larger cluster?

Gluster can easily support a large number of nodes; the question is how
much you care about the underlying data. We store 2 copies of the data, and
our underlying hardware is RAID6, allowing us to lose 2 disks on each
server. As the number of nodes grows, the chance of losing a node becomes
higher, but one of the beautiful things about Gluster is that you still have
access to all the other data that is NOT on the lost pair of servers. We
have found that splitting the data over as many servers as possible
provides the best uptime.
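
For reference, a distributed-replicated volume of the kind described above
(2 copies of the data spread over many servers) would be created along these
lines; the volume name, server names and brick paths are only placeholders:

    gluster volume create bigvol replica 2 transport tcp \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1

Bricks are grouped into replica pairs in the order they are listed
(server1/server2, then server3/server4), so losing a whole pair only affects
the files placed on that pair; everything held by the other pairs stays
accessible.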

><>
Nathan Stratton                                CTO, BlinkMind, Inc.
nathan at robotics.net                         nathan at blinkmind.com
http://www.robotics.net                        http://www.blinkmind.com
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users





