[Gluster-devel] Gluster Recovery

Anand Avati avati at zresearch.com
Fri Apr 27 23:25:59 UTC 2007


> 
> Version 1.3 questions
> 
> * How do I cleanly shut down the bricks making sure that they remain
> consistent?

For 1.3 you have to kill the glusterfsd process manually. You can get the pid
from the pidfile (${datadir}/run/glusterfsd.pid).
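A minimal sketch of that shutdown step (the pidfile path is the 1.3 default quoted above; the ${datadir} fallback below is an assumption, adjust it for your installation):

```shell
# Sketch only: stop glusterfsd by reading its pidfile and sending SIGTERM.
# The default path assumes a stock install prefix; override by passing a path.
stop_glusterfsd () {
    pidfile="${1:-${datadir:-/usr/local/var}/run/glusterfsd.pid}"
    if [ -r "$pidfile" ]; then
        kill "$(cat "$pidfile")"   # plain SIGTERM, not -9, so it can exit cleanly
    else
        echo "pidfile not found: $pidfile" >&2
        return 1
    fi
}
```

Run it with no arguments for the default path, or pass an explicit pidfile.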


> * How do I add client hosts to the server.vol file? I've tried adding a
> host and
>     killall -HUP glusterfsd

You mean the 'option auth.ip.allow' lines? glusterfsd does not support
SIGHUP (yet); for now you have to kill it and start it afresh.
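For reference, such a line in a 1.3-style server spec might look like the fragment below (the brick name and address pattern are examples, not taken from the original mail):

```
volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick                        # example brick volume name
  option auth.ip.brick.allow 192.168.1.*  # example client subnet to permit
end-volume
```

Adding a client host means editing this pattern and then killing and restarting glusterfsd as described above.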

> * Do the bricks know about each other and their clients' replication at
> all? What if one client has "replicate *:2" and the other "replicate
> *:1"?

No, bricks don't know about each other. Not only are they unaware of their
clients' replication, they don't even know whether they are being used in a
unify/afr/stripe setup at all. All clients are expected to use the same
config file; hence the option to 'fetch' the spec file from one of the
servers at mount time (glusterfs -s SERVER), which makes it convenient to
maintain a centralized spec file that all clients use.
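For example, a client mount that fetches the spec file from a server could look like this (hostname and mount point are placeholders):

```
# fetch the spec file from a server at mount time instead of keeping a local copy
glusterfs -s server1.example.com /mnt/glusterfs
```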

> * Could race conditions ever lead to the different bricks having
> different data if two clients tried to write to the same mirrored file?
> Is this the reason for using the posix-locks translator over and above
> the posix locks on the underlying bricks? 

You are right: in an AFR scenario, two clients writing to the same region of
a file are expected to use POSIX locks to lock out that region before
editing it.

> Version 1.4 requests
> 
> * A mirror-consistency check command. Presumably this would be a fairly-
> small addition to the rebuild code. A danger of all mirroring schemes is
> that they can hide underlying problems until it's too late!

The 'self-heal' feature is aimed at exactly this: at runtime it keeps
checking for inconsistencies and fixes them on the fly, in a proactive
fashion.

> * A quorum that would require two out of three mirrors (or N out
> of 2*N-1) to be able to talk to each other, or for two mirrors to be able
> to talk either to each other or to a quorum server. This is to avoid data
> inconsistencies after a temporary disconnection. Perhaps an isolated
> mirror could be read-only.

I'm thinking about this; I will get back to you about it later.


> * When rebuilding, will file-locking occur at a reasonable block size
> rather than the whole file? Some of my astronomers have some big files!

This level of detail is not yet frozen. Your suggestions will certainly be
considered.

thanks!

avati

-- 
ultimate_answer_t
deep_thought (void)
{ 
  sleep (years2secs (7500000)); 
  return 42;
}




