[Gluster-devel] HA failover question.

Kevan Benson kbenson at a-1networks.com
Wed Oct 17 16:30:07 UTC 2007


Chris Johnson wrote:
> On Wed, 17 Oct 2007, Chris Johnson wrote:
>
>      I think a light just came on.  I don't know how 'self healing'
> would work on the client side.  That sounds like a server side deal.
> Is it possible to set up two AFR servers between two RAIDs and access
> them in client side AFR mode?  Would that provide my failover and
> 'self-healing' when a failed node came back up?  Somehow I think the
> servers would need to talk to each other to pull this off.  Or is AFR
> something that can only run on one node between two file systems?
> That wouldn't be as fault tolerant.

When AFR is run on the client, the self-heal is handled by the 
client (I assume, I don't see how else it would work when the servers 
may not even have access to each other).  Here's my understanding 
(glusterfs team please correct me if I'm wrong):

1) File data operation requested from AFR share
2) AFR translator (on the client in this case) requests file information 
from all its subvolumes
3) AFR translator aggregates the results and finds the latest version of 
the file
    a) Retrieve the latest version
    b) If latest version isn't on all subvolumes, overwrite obsolete 
version with latest
    c) If file isn't shared on enough subvolumes, copy to new subvolumes
    d) Remove extra copies of a file if there are more than the AFR 
spec requires?  (glfs team care to comment on whether this happens?)
4) Read, write or append to file as requested in #1
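
For concreteness, here's a minimal sketch of what a client-side vol 
spec for that might look like (hostnames and the remote-subvolume name 
are hypothetical, adjust to your setup):

volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1   # first storage server (hypothetical)
  option remote-subvolume brick    # name of the volume exported by the server
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2   # second storage server (hypothetical)
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes remote1 remote2       # AFR mirrors across both servers
end-volume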

An AFR subvolume can be any other defined volume (except for unify 
volumes; you can't AFR unify volumes YET, see 
http://www.mail-archive.com/gluster-devel@nongnu.org/msg02161.html).  
That means you can AFR a local and a remote volume together (as in some 
of the HA examples in the wiki), or multiple remote volumes (as in the 
example posted to you earlier), or multiple local volumes (in case you 
want the data stored on two physical disks).
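
For example, the local-plus-remote case might look something like this 
(directory and hostname hypothetical):

volume local
  type storage/posix
  option directory /data/export    # local disk (hypothetical path)
end-volume

volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2   # the other server (hypothetical)
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes local remote          # mirror the local disk and the remote volume
end-volume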

You can also AFR other AFR volumes (I believe I read that, and it 
should work), so you could build a tiered AFR structure, so that no 
single client or server is responsible for writing every copy of a 
file when you want lots of copies (think binary tree).
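
A sketch of the nesting itself, in a single spec for brevity (all names 
hypothetical, and untested on my end; for the real binary-tree effect 
the lower tiers would live in server-side specs rather than all on one 
client):

volume afr-left
  type cluster/afr
  subvolumes remote1 remote2       # e.g. protocol/client volumes defined above
end-volume

volume afr-right
  type cluster/afr
  subvolumes remote3 remote4
end-volume

volume afr-top
  type cluster/afr
  subvolumes afr-left afr-right    # four copies total, written tree-fashion
end-volume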

Right now, for HA and ease of admin, I think a simple AFR handled on 
the client is easiest.  No unify.  Unify will give you better 
performance, but at the cost of splitting your files up.  Each server 
will still have a full set of files, just split across two locations, 
making any sort of pre-population of files or direct access complicated 
without glusterfs.  Locking may be problematic with this though; I'll 
be posting about that shortly...

In short, if you need HA plus the extra performance, I suggest you use 
the config that Daniel posted before.  If you just need HA and want 
easier administration, just use a single AFR on the client.  No CARP, 
heartbeat, or any type of shared IP should be required.
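
With the single-client-AFR approach each server just exports its brick; 
a minimal server-side spec might look like this (path and auth rule 
hypothetical; you'd want to tighten the auth rule in production):

volume brick
  type storage/posix
  option directory /data/export    # backing store on this server
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.brick.allow *     # which client IPs may mount "brick"
  subvolumes brick
end-volume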

P.S.
The transport-timeout option in protocol/client is key to finding a 
good failover time for your cluster.  When there's a failure, the first 
write from a client will hang for the timeout period before finishing 
its request with the available subvolumes.  A failure mid-write just 
stalls the same amount of time before finishing the write to the 
available subvolumes.  It's extremely robust.
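
It's just one line in each protocol/client volume; the value below is 
only illustrative, tune it to your network:

volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1   # hypothetical server
  option remote-subvolume brick
  option transport-timeout 10      # seconds to wait before failing over
end-volume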

-- 

-Kevan Benson
-A-1 Networks




