[Gluster-devel] A new approach to solve Geo-replication ACTIVE/PASSIVE switching in distributed replicate setup!
Kotresh Hiremath Ravishankar
khiremat at redhat.com
Mon Feb 23 05:07:01 UTC 2015
The logic discussed in the previous mail thread is not feasible. So in order to solve
the Active/Passive switching in geo-replication, the following new idea is proposed.
1. Have shared storage: a glusterfs management volume specific to geo-replication.
2. Use an fcntl lock on a file stored on the above shared volume. There will be one
   file per replica set.
Each worker tries to lock the file on the shared storage; whoever wins the lock
becomes ACTIVE, as in the sketch below.
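A minimal sketch of this election, assuming the shared volume is mounted at
/var/run/gluster/shared_storage (the mount path and file naming here are
illustrative, not final):

    import fcntl
    import os

    LOCK_DIR = "/var/run/gluster/shared_storage/geo-rep-locks"  # assumed mount

    def try_become_active(replica_set_id):
        """Return the lock fd if this worker won (ACTIVE), else None (PASSIVE)."""
        path = os.path.join(LOCK_DIR, replica_set_id)
        fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
        try:
            # Non-blocking exclusive lock: exactly one worker per replica
            # set can hold it at a time.
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return fd  # keep the fd open; closing it releases the lock
        except (IOError, OSError):
            os.close(fd)
            return None  # another replica holds the lock; stay PASSIVE

The losing worker keeps retrying periodically, so when the ACTIVE worker's node
dies, the lock is released and a PASSIVE worker can take over.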
With this, we are able to solve the problem, but there is an issue when the shared
storage goes down (if it is a replica volume, when all its replicas go down). In that
case, the lock state is lost.
But if we use sanlock, as oVirt does, I think the above problem of the lock state
being lost could be solved?
If anybody has used sanlock, is it a good option in this respect?
Please share your thoughts and suggestions on this.
Thanks and Regards,
Kotresh H R
----- Original Message -----
> From: "Kotresh Hiremath Ravishankar" <khiremat at redhat.com>
> To: gluster-devel at gluster.org
> Sent: Monday, December 22, 2014 10:53:34 AM
> Subject: [Gluster-devel] A new approach to solve Geo-replication ACTIVE/PASSIVE switching in distributed replicate setup
> Hi All,
> Current Design and its limitations:
> Geo-replication syncs changes across geography using changelogs
> generated by the changelog translator. The changelog translator sits on the server
> side just above the posix translator. Hence, in a distributed replicated setup, both
> bricks of a replica pair record changelogs w.r.t. their own bricks. Geo-replication
> syncs the changes using only one brick of the replica pair at a time, calling that
> brick "ACTIVE" and the other, non-syncing brick "PASSIVE".
> Let's consider the below example of a distributed replicated setup, where
> NODE-1 has brick b1 and its replica brick b1r is on NODE-2:
> NODE-1 NODE-2
> b1 b1r
> At the beginning, geo-replication chooses to sync changes from NODE-1:b1,
> making it "ACTIVE"; NODE-2:b1r will be "PASSIVE". The logic depends on the virtual
> getxattr 'trusted.glusterfs.node-uuid', which always returns the first up subvolume,
> i.e., NODE-1 as long as it is up.
> When NODE-1 goes down, the above xattr returns NODE-2 and that is made 'ACTIVE'.
> But when NODE-1 comes back again, the above xattr returns NODE-1 and it is made
> 'ACTIVE' again. So for a brief interval of time, if NODE-2 had not finished
> processing the changelog, both NODE-2 and NODE-1 will be ACTIVE, causing rename
> races.
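> To make the current behaviour concrete, here is roughly how that check works
> (a sketch only; gsyncd's actual code differs, and the aux mount path is an
> assumption):
>
>     import os
>
>     def is_active(mount_path, my_node_uuid):
>         # Virtual xattr answered by dht: returns the node uuid of the
>         # first up subvolume, which flips back when NODE-1 returns.
>         val = os.getxattr(mount_path, "trusted.glusterfs.node-uuid")
>         return val.decode().rstrip("\x00") == my_node_uuid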
> The desired behaviour: don't make NODE-2 'PASSIVE' when NODE-1 comes back again
> until NODE-2 itself goes down.
> THE APPROACH I CAN THINK OF TO SOLVE THIS:
> Have a distributed store of a file which captures the bricks that are to be ACTIVE.
> When a NODE goes down, the file is updated with its replica bricks, making
> sure that, at any point in time, the file has all the bricks to be made active.
> A geo-replication worker process is made 'ACTIVE' only if its brick is in the file,
> as in the sketch below.
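> A sketch of the worker-side check (the file format, one brick path per line,
> is an assumption for illustration):
>
>     def should_be_active(active_bricks_file, my_brick):
>         # The worker goes ACTIVE only if its own brick path is listed.
>         with open(active_bricks_file) as f:
>             active = set(line.strip() for line in f if line.strip())
>         return my_brick in active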
> Implementation can be done in two ways:
> 1. Have a distributed store for the above implementation. This needs more thought,
>    as a distributed store is not in place in glusterd yet.
> 2. The other solution is to store the list in a file similar to the existing
>    glusterd global configuration file (/var/lib/glusterd/options). When this file
>    is updated, its version number is incremented. When the node which had gone down
>    comes back, it gets this file from its peers if its version number is less than
>    that of the peers. A sketch of this version check follows below.
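> A sketch of that version check (the key name 'version' and the dict layout are
> illustrative, not the actual options-file format):
>
>     def pick_newer(local_opts, peer_opts):
>         # On handshake, adopt whichever copy of the options carries the
>         # higher version number, so a rebooted node takes the peers' view.
>         if int(peer_opts.get("version", 0)) > int(local_opts.get("version", 0)):
>             return dict(peer_opts)
>         return dict(local_opts)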
> I did a POC with the second approach, storing the list of active bricks
> in the options file itself. It seems to work fine except for a bug in glusterd
> where the daemons are getting spawned before the node gets the 'options' file
> from the other nodes.
> CHANGES IN GLUSTERD:
> When a node goes down, all the other nodes are notified. On that notification,
> glusterd needs to find the replica bricks of the node which went down and update
> the global file accordingly, as in the sketch below.
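> A rough sketch of that update (pure illustration in Python; glusterd itself is
> in C and its volinfo structures differ):
>
>     def on_node_down(down_node, replica_sets, active_bricks):
>         # For every replica set whose ACTIVE brick lived on the dead node,
>         # promote a surviving replica brick in its place.
>         for rset in replica_sets:  # rset: list of (node, brick_path) pairs
>             for node, brick in rset:
>                 if node == down_node and brick in active_bricks:
>                     active_bricks.discard(brick)
>                     for other_node, other_brick in rset:
>                         if other_node != down_node:
>                             active_bricks.add(other_brick)
>                             break
>         return active_bricks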
> PROBLEMS/LIMITATIONS WITH THIS APPROACH:
> 1. If glusterd is killed and the node is still up, this makes the other
>    replica 'ACTIVE'.
>    So both replica bricks will be syncing at this point of time, which is
>    not expected.
> 2. If a single brick process is killed, its replica brick is not made 'ACTIVE'.
> Glusterd/AFR folks,
> 1. Do you see a better approach than the above to solve this issue?
> 2. Is this approach feasible? If yes, how can I handle the problems
>    mentioned above?
> 3. Is this approach feasible from a scalability point of view, since the
>    complete list of active brick paths is stored and read by gsyncd?
> 4. Does this approach fit into three-way replication and erasure coding?
> Thanks and Regards,
> Kotresh H R
> Gluster-devel mailing list
> Gluster-devel at gluster.org