[Gluster-users] Gluster in a cluster
Jerome
jerome at ibt.unam.mx
Thu Nov 15 15:30:52 UTC 2012
Dear all
I'm testing Gluster on a cluster of compute nodes based on Rocks. The
idea is to aggregate the local scratch space of each node into one big
scratch volume, so that this scratch file system can be accessed from
all the nodes of the cluster.
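On each compute node the volume would then be mounted with the native
client, with something like the command below (the server name and
mount point here are only illustrative):

# mount -t glusterfs compute-2-0:/scratch1 /mnt/scratch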
For the moment, I have installed Gluster on 4 nodes, as a
distributed-replicated volume with replica 2, like this:
# gluster volume info
Volume Name: scratch1
Type: Distributed-Replicate
Volume ID: c8c3e3fe-c785-4438-86eb-0b84c7c29123
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: compute-2-0:/state/partition1/scratch
Brick2: compute-2-1:/state/partition1/scratch
Brick3: compute-2-6:/state/partition1/scratch
Brick4: compute-2-9:/state/partition1/scratch
# gluster volume status
Status of volume: scratch1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick compute-2-0:/state/partition1/scratch             24009   Y       16464
Brick compute-2-1:/state/partition1/scratch             24009   Y       3848
Brick compute-2-6:/state/partition1/scratch             24009   Y       511
Brick compute-2-9:/state/partition1/scratch             24009   Y       2086
NFS Server on localhost                                 38467   N       4060
Self-heal Daemon on localhost                           N/A     N       4065
NFS Server on compute-2-0                               38467   Y       16470
Self-heal Daemon on compute-2-0                         N/A     Y       16476
NFS Server on compute-2-9                               38467   Y       2092
Self-heal Daemon on compute-2-9                         N/A     Y       2099
NFS Server on compute-2-6                               38467   Y       517
Self-heal Daemon on compute-2-6                         N/A     Y       524
NFS Server on compute-2-1                               38467   Y       3854
Self-heal Daemon on compute-2-1                         N/A     Y       3860
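For completeness, the volume was created with something along these
lines (after probing the four peers from compute-2-0; the exact options
may have differed slightly):

# gluster peer probe compute-2-1
# gluster peer probe compute-2-6
# gluster peer probe compute-2-9
# gluster volume create scratch1 replica 2 transport tcp \
    compute-2-0:/state/partition1/scratch \
    compute-2-1:/state/partition1/scratch \
    compute-2-6:/state/partition1/scratch \
    compute-2-9:/state/partition1/scratch
# gluster volume start scratch1

With replica 2, consecutive bricks form the mirror pairs, so
compute-2-0 mirrors compute-2-1 and compute-2-6 mirrors compute-2-9.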
All of this runs correctly; I ran some file stress tests to check that
the configuration is usable.
My problem is when a node reboots accidentally, or for some
administration task: the node reinstalls itself, and the Gluster volume
starts to fail. I noticed that the UUID of a machine is generated
during the installation, so I developed a script to restore the
original UUID of the node (roughly what it does is sketched below).
Despite this, the node cannot rejoin the volume, so I am probably
missing some step. Is it possible to build such a system with Gluster,
or do I have to reconfigure the whole volume whenever a node is
reinstalled?
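For reference, the idea of my restore procedure is roughly the
following; the paths assume the default Gluster 3.3 layout
(/var/lib/glusterd) and <old-uuid> stands for the UUID saved before the
reinstall, so take this as a sketch rather than the exact script:

# service glusterd stop
# echo "UUID=<old-uuid>" > /var/lib/glusterd/glusterd.info
# service glusterd start

After that, my understanding is that the node has to be probed again
from one of the healthy peers so the volume configuration is pushed
back to it, and a heal triggered so the replica catches up with what
was written while the node was down, something like:

# gluster peer probe compute-2-0
# gluster volume heal scratch1 full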
Best regards.
--
-- Jérôme
He who does not know foreign languages knows nothing of his own.
(Johann Wolfgang von Goethe)