[Gluster-devel] HA failover question.

Chris Johnson johnson at nmr.mgh.harvard.edu
Wed Oct 17 16:50:36 UTC 2007


On Wed, 17 Oct 2007, Kevan Benson wrote:

      If I'm reading this right then, I CAN have two servers with two
SHARED file systems and multiple AFR'ed client setups accessing them?
This would seem to require serious locking and server-to-server
communication to pull off.

>
> When the AFR is run on the client, the self-heal is handled by the client (I 
> assume; I don't see how else it would work when the servers may not even have 
> access to each other).  Here's my understanding (glusterfs team please 
> correct me if I'm wrong):
>
> 1) File data operation requested from AFR share
> 2) AFR translator (on the client in this case) requests file information from 
> all its subvolumes
> 3) AFR translator aggregates the results and finds the latest version of the 
> file
>   a) retrieve latest version
>   b) If latest version isn't on all subvolumes, overwrite obsolete version 
> with latest
>   c) If file isn't shared on enough subvolumes, copy to new subvolumes
>   d) Remove extra copies of files if it's more than required by AFR spec? 
> (glfs team care to comment on whether this happens?)
> 4) Read, write or append to file as requested in #1
>
> An AFR subvolume can be any other defined volume (except for unify volumes, 
> you can't afr unify volumes YET, see 
> http://www.mail-archive.com/gluster-devel@nongnu.org/msg02161.html).  That 
> means you can AFR a local and remote volume together (as in some of the HA 
> examples in the wiki), or multiple remote volumes (as in the example posted 
> to you earlier), or multiple local volumes (if you want the data 
> stored on two physical disks).
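>
> For illustration, a minimal client-side spec for the multiple-remote-volumes 
> case might look something like this (hostnames and volume names are made up, 
> so adjust to taste):
>
>   # remote volume exported by the first server
>   volume remote1
>     type protocol/client
>     option transport-type tcp/client
>     option remote-host server1.example.com
>     option remote-subvolume brick
>   end-volume
>
>   # remote volume exported by the second server
>   volume remote2
>     type protocol/client
>     option transport-type tcp/client
>     option remote-host server2.example.com
>     option remote-subvolume brick
>   end-volume
>
>   # AFR keeps a full copy of every file on both remotes
>   volume afr0
>     type cluster/afr
>     subvolumes remote1 remote2
>   end-volume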
>
> You can also AFR other AFR volumes (I believe I read that it should work), 
> so if you want lots of copies of a file you could build a tiered AFR 
> structure so that no single client or server is responsible for writing all 
> of the copies itself (think binary tree).
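>
> As an untested sketch (assuming four remote volumes remote1 through remote4 
> defined like the ones above), the tiered version would be along these lines:
>
>   # first tier: pair up the remote volumes
>   volume afr-left
>     type cluster/afr
>     subvolumes remote1 remote2
>   end-volume
>
>   volume afr-right
>     type cluster/afr
>     subvolumes remote3 remote4
>   end-volume
>
>   # second tier: AFR the two AFRs, for four copies of each file in total
>   volume afr-top
>     type cluster/afr
>     subvolumes afr-left afr-right
>   end-volume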
>
> Right now, for HA and ease of admin, I think a simple AFR handled on the 
> client is easiest.  No unify.  Unify will give you better performance, but at 
> the cost of splitting your files up.  Each server will still have a full set 
> of files; it will just be split across two locations, making any sort of 
> pre-population of files or direct access complicated without glusterfs. 
> Locking may be problematic with this though; I'll be posting about that 
> shortly...
>
> In short, for HA, if you need the extra performance I suggest you use the 
> config that Daniel posted before.  If you just need HA and want easier 
> administration, just use a single AFR on the client.  No CARP, heartbeat or 
> any kind of shared IP should be required.
>
> P.S.
> The transport-timeout option in protocol/client is key to finding a good 
> failover time for your cluster.  When there's a failure, the first write from 
> a client will hang for the timeout period before finishing its request with 
> the available subvolumes.  A failure mid-write just stalls for the same 
> amount of time before finishing the write to the available subvolumes.  It's 
> extremely robust.
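>
> For illustration, the option just goes in each protocol/client volume, 
> something like this (the 10 second value is made up; tune it for your 
> network and workload):
>
>   volume remote1
>     type protocol/client
>     option transport-type tcp/client
>     option remote-host server1.example.com
>     option remote-subvolume brick
>     # seconds to wait before giving up on an unresponsive server
>     option transport-timeout 10
>   end-volume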
>
> -- 
>
> -Kevan Benson
> -A-1 Networks
>
>
>

------------------------------------------------------------------------------- 
Chris Johnson               |Internet: johnson at nmr.mgh.harvard.edu
Systems Administrator       |Web:      http://www.nmr.mgh.harvard.edu/~johnson
NMR Center                  |Voice:    617.726.0949
Mass. General Hospital      |FAX:      617.726.7422
149 (2301) 13th Street      |Hey guys, when she tells you her problems, she's
Charlestown, MA., 02129 USA |looking for sympathy, not solutions.  Me.
-------------------------------------------------------------------------------




