[Gluster-users] Gluster in a cluster

harry mangalam harry.mangalam at uci.edu
Thu Nov 15 17:09:33 UTC 2012


This doesn't seem like a good way to do what I think you want to do.

1 - /scratch should be as fast as possible, so putting it on a distributed fs, 
unless that fs is optimized for speed (mumble.Lustre.mumble), is a mistake.

2 - if you insist on doing this with gluster (perhaps because none of your 
individual /scratch partitions is large enough), making a distributed & 
replicated /scratch makes a bad decision worse, since replication slows 
writes down even more. (Why replicate what is a temp data store?)
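
If the only reason for gluster is capacity, a plain distributed volume (no 
replica) at least avoids that write penalty. Something along these lines, 
reusing the bricks from your config below (an untested sketch, not a 
recommendation to keep gluster for this):

  # gluster volume create scratch1 transport tcp \
        compute-2-0:/state/partition1/scratch \
        compute-2-1:/state/partition1/scratch \
        compute-2-6:/state/partition1/scratch \
        compute-2-9:/state/partition1/scratch
  # gluster volume start scratch1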

3 - integrating the gluster server into the Rocks environment (on a per-node 
basis) seems like a recipe for ... well, migraines, at least.

If you need a relatively fast, simple, large, reliable, aggressively caching 
fs for /scratch, NFS to a large RAID0/10 has some attractions, unless the 
fan-out IO of the gluster servers outweighs the aforementioned attractions 
for your workload.
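
For the record, that route is nothing exotic; a rough sketch (server name, 
export path, subnet, and mount options are placeholders to tune for your 
network and hardware):

On the NFS server, exporting a RAID0/10 array mounted at /export/scratch,
in /etc/exports:
  /export/scratch  10.1.0.0/16(rw,async,no_subtree_check)
then:
  # exportfs -ra

On each compute node, in /etc/fstab:
  nfs-server:/export/scratch  /scratch  nfs  rw,hard,intr,rsize=32768,wsize=32768  0 0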

IMHO...
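
That said, on the actual question in your message below: restoring the old 
UUID is usually only half of the rejoin; the volume config and the brick's 
volume-id xattr have to come back too. Roughly, from memory (paths assume 
gluster 3.3, /etc/glusterd/ on older releases; check everything against a 
surviving node before trusting it):

Grab the volume-id from a healthy brick (on a surviving node):
  # getfattr -n trusted.glusterfs.volume-id -e hex /state/partition1/scratch

On the reinstalled node:
  # service glusterd stop
  # echo "UUID=<that node's original UUID>" > /var/lib/glusterd/glusterd.info
  # setfattr -n trusted.glusterfs.volume-id -v 0x<hex value from above> \
        /state/partition1/scratch
  # service glusterd start
  # gluster peer probe compute-2-0        (re-establish peering with a good node)
  # gluster volume sync compute-2-0 all   (pull the volume config back)

Then, from any node, kick off a full self-heal:
  # gluster volume heal scratch1 full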




On Thursday, November 15, 2012 09:30:52 AM Jerome wrote:
> Dear all
> 
> I'm testing Gluster on a cluster of compute nodes based on Rocks. The
> idea is to use the local scratch of each node as one big scratch volume,
> so that this scratch filesystem is accessible on all the nodes of the
> cluster.
> For the moment, I have installed Gluster on 4 nodes, as a
> distributed-replicated volume (replica 2), like this:
> 
> # gluster volume info
> 
> Volume Name: scratch1
> Type: Distributed-Replicate
> Volume ID: c8c3e3fe-c785-4438-86eb-0b84c7c29123
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: compute-2-0:/state/partition1/scratch
> Brick2: compute-2-1:/state/partition1/scratch
> Brick3: compute-2-6:/state/partition1/scratch
> Brick4: compute-2-9:/state/partition1/scratch
> 
> # gluster volume status
> Status of volume: scratch1
> Gluster process						Port	Online	Pid
> ------------------------------------------------------------------------------
> Brick compute-2-0:/state/partition1/scratch		24009	Y	16464
> Brick compute-2-1:/state/partition1/scratch		24009	Y	3848
> Brick compute-2-6:/state/partition1/scratch		24009	Y	511
> Brick compute-2-9:/state/partition1/scratch		24009	Y	2086
> NFS Server on localhost					38467	N	4060
> Self-heal Daemon on localhost				N/A	N	4065
> NFS Server on compute-2-0				38467	Y	16470
> Self-heal Daemon on compute-2-0				N/A	Y	16476
> NFS Server on compute-2-9				38467	Y	2092
> Self-heal Daemon on compute-2-9				N/A	Y	2099
> NFS Server on compute-2-6				38467	Y	517
> Self-heal Daemon on compute-2-6				N/A	Y	524
> NFS Server on compute-2-1				38467	Y	3854
> Self-heal Daemon on compute-2-1				N/A	Y	3860
> 
> 
> All of this runs correctly; I used some file stress tests to confirm that
> the configuration is usable.
> My problem is when a node reboots accidentally, or for some administration
> task: the node reinstalls itself, and the gluster volume begins to
> fail.... I noticed that the UUID of a machine is generated during the
> installation, so I wrote a script to restore the original UUID of the
> node. Despite this, the node cannot get back into the volume; I must be
> missing some special step. So, is it possible to build such a system
> with gluster, or do I have to reconfigure the whole volume whenever a
> node reinstalls?
> 
> Best regards.
-- 
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
--
Passive-Aggressive Supporter of The Canada Party:
  <http://www.americabutbetter.com/>



