[Gluster-devel] GlusterFS suitability in an ad-hoc cluster

Anand Babu Periasamy ab at gnu.org.in
Fri Oct 5 10:12:03 UTC 2007


Hi David,
The Unify translator with the NUFA I/O scheduler can handle your
case. NUFA is Non-Uniform Filesystem Access; it is designed for
workloads where local disks are used as scratch space in a parallel
processing environment. You run both the server and the client on
each node and still cluster all the local disks together. The NUFA
I/O scheduler gives higher precedence to the local disk when you
create files (until the local disk gets full), while files sitting on
remote disks remain transparently accessible. For example, if node4
creates files, node4's local disk will be used.
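
For concreteness, a per-node client spec might look like the sketch
below. The hostnames (node1..node7), the path /data/export, the brick
names, and the exact NUFA option names are placeholders written from
memory, so please verify them against the volume-spec documentation
before use.

  # client spec on node4 (sketch)

  # This node's own array, mapped directly as a posix volume.
  volume local
    type storage/posix
    option directory /data/export
  end-volume

  # One protocol/client volume per remote node; node1 shown,
  # repeat for node2, node3, node5, node6 and node7.
  volume node1
    type protocol/client
    option transport-type tcp/client
    option remote-host node1
    option remote-subvolume brick
  end-volume

  # Namespace volume required by unify (a small dedicated export).
  volume ns
    type protocol/client
    option transport-type tcp/client
    option remote-host node1
    option remote-subvolume brick-ns
  end-volume

  # Cluster all bricks; NUFA creates new files on 'local' first
  # and spills over to remote disks when it is nearly full.
  volume cluster
    type cluster/unify
    subvolumes local node1 node2 node3 node5 node6 node7
    option namespace ns
    option scheduler nufa
    option nufa.local-volume-name local
    option nufa.limits.min-free-disk 5%
  end-volume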

Let the client side of the volume specification map the local disk
directly as a volume. Your applications will then see each processed
file as a regular file and can read it directly in a disconnected
state; GlusterFS automatically reconnects when the remote volumes
become available. Instead of hot-add/remove, you can simply
pre-define a cluster volume listing all the nodes that could
potentially participate. Hot-add/remove/migrate is also scheduled for
the next release.
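
The matching server half, run on every node alongside the client,
could look like the following sketch; in practice you would restrict
the auth options to your cluster's subnet instead of leaving them
open.

  # server spec, identical on every node (sketch)

  # The local 1TB/2TB array.
  volume brick
    type storage/posix
    option directory /data/export
  end-volume

  # Namespace directory (only actually used on the node that
  # serves the namespace).
  volume brick-ns
    type storage/posix
    option directory /data/export-ns
  end-volume

  # Export both bricks over TCP.
  volume server
    type protocol/server
    option transport-type tcp/server
    subvolumes brick brick-ns
    option auth.ip.brick.allow *
    option auth.ip.brick-ns.allow *
  end-volume

Because the client reaches the local array through storage/posix
rather than protocol/client, files on the local disk stay readable
even when the network, and with it every remote volume, is down.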

For replication and load balancing, you can combine the AFR
(Automatic File Replication) translator with Unify to achieve the
desired results. Overall, GlusterFS has the features to handle your
case.
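
As a rough sketch of that combination (same placeholder names as
above), you can wrap the bricks in AFR pairs and hand the pairs to
Unify, so that each file lives on two nodes and identically named
copies stay in sync rather than conflicting:

  # Mirror each brick to a partner node, then unify the pairs.
  volume afr-local
    type cluster/afr
    subvolumes local node5
  end-volume

  volume afr-1
    type cluster/afr
    subvolumes node1 node2
  end-volume

  # ...remaining pairs...

  volume cluster
    type cluster/unify
    subvolumes afr-local afr-1
    option namespace ns
    option scheduler nufa
    option nufa.local-volume-name afr-local
  end-volume

If one copy is unavailable, AFR serves the file from the other copy
and self-heals the stale one later, which covers both the redundancy
and the identical-filename access you asked about.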

I also recommend InfiniBand if you want very high I/O throughput.

--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://ab.freeshell.org]
The GNU Operating System [http://www.gnu.org]


,----[ David Flynn writes: ]
| Hi,
| 
| I have an ad-hoc cluster of seven machines that are used for
| batch-processed computations.  Each machine has either a 1TB or 2TB
| local array.  I'm currently investigating methods to gain
| cluster-wide visibility of all the local arrays on all nodes.
| However, there are a few complications:
| 
|  - Each node should have read-only access to the rest of the
|  cluster.
|  - Any writes should only be done to the local array.
|  - I need to support disconnected operation of any node.
|  - Hot add/remove of a node.
| 
| I think this also requires that there are no metadata servers.
| 
| Some more background: we are performing batch image-processing
| operations on large [constant] data sets.  Currently we divide the
| data up and replicate it across the machines; then follows the
| nightmare of trying to assign work to the correct node with the
| correct portion of the source data.  Having all nodes see the whole
| [distributed] data set would be of great benefit.
| 
| When a node processes the data, it needs to be stored on its local
| array, since the machine may then be disconnected from the network
| to play back the video (at rates of up to 400MB/sec).  This also
| requires that the disconnected node can read the filesystem without
| assistance from the rest of the cluster.
| 
| Interconnect between the nodes is 1000baseT ethernet.
| 
| A final spanner in the works: it `would be nice' if identical (by
| name) files appearing on separate nodes could be load-balanced for
| access in some way.
| 
| Is any of this achievable with GlusterFS?  Is any more achievable
| with modification?
| 
| ..david
`----