[Gluster-users] Geo-replication

David Cunningham dcunningham at voisonics.com
Tue Feb 25 02:46:56 UTC 2020


Hi Aravinda and Sunny,

Thank you for the replies. We have 3 replicating nodes on the master side,
and want to geo-replicate their data to the remote slave side. As I
understand it, if the master node on which the geo-replication create
command was run goes down, then another node will take over pushing updates
to the remote slave. Is that right?

We have already taken care of adding all master nodes' SSH keys to the
remote slave's authorized_keys externally, so we won't include the push-pem
part of the create command.
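
For reference, I expect we'll create the session with something like the
following (volume and host names are placeholders for our setup; my reading
of the docs is that no-verify is the option to use when the keys are
already distributed, rather than push-pem):

```
gluster volume geo-replication mastervol slavehost::slavevol create no-verify
```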

Mostly I wanted to confirm the geo-replication behaviour on the replicating
master nodes if one of them goes down.

Thank you!


On Tue, 25 Feb 2020 at 14:32, Aravinda VK <aravinda at kadalu.io> wrote:

> Hi David,
>
>
> On 25-Feb-2020, at 3:45 AM, David Cunningham <dcunningham at voisonics.com>
> wrote:
>
> Hello,
>
> I've a couple of questions on geo-replication that hopefully someone can
> help with:
>
> 1. If there are multiple nodes in a cluster on the master side (pushing
> updates to the geo-replication slave), which node actually does the
> pushing? Does GlusterFS decide this automatically?
>
>
> Once the geo-replication session is started, one worker is started for
> each master brick. Each worker identifies the changes that have happened
> in its brick and syncs those changes via a mount, so the load is
> distributed among the master nodes. In the case of a replica subvolume,
> one worker in the replica group becomes Active and participates in the
> syncing; the other bricks in that replica group remain Passive. A Passive
> worker becomes Active if the previously Active brick goes down. (This is
> because all replica bricks have the same set of changes, so syncing from
> each worker would be redundant.)
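>
> The Active/Passive split is visible in the status command; an
> illustrative example is below (volume, host and brick names are
> placeholders, and the columns are abbreviated):
>
> ```
> # gluster volume geo-replication mastervol slavehost::slavevol status
>
> MASTER NODE   MASTER BRICK     SLAVE NODE   STATUS    CRAWL STATUS
> master1       /bricks/brick1   slave1       Active    Changelog Crawl
> master2       /bricks/brick1   slave2       Passive   N/A
> master3       /bricks/brick1   slave3       Passive   N/A
> ```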
>
>
> 2. With regard to copying SSH keys, presumably the SSH keys of all master
> nodes should be authorized on the geo-replication client side?
>
>
> A geo-replication session is established between one master node and one
> remote node. If the geo-rep create command succeeds, then:
>
> - SSH keys are generated on all master nodes
> - The public keys from all master nodes are copied to the initiator
> master node
> - The collected public keys are copied to the remote node specified in
> the create command
> - The master public keys are distributed to all nodes of the remote
> cluster and added to the respective ~/.ssh/authorized_keys files
>
> After a successful geo-rep create command, any master node can connect to
> any remote node via SSH.
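>
> For illustration, the usual sequence that triggers this key setup looks
> something like the following (volume and host names are placeholders):
>
> ```
> # On one master node: generate the common pem keys on all master nodes
> gluster system:: execute gsec_create
>
> # Create the session and push the keys to the remote cluster
> gluster volume geo-replication mastervol slavehost::slavevol create push-pem
> ```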
>
> Security: a command prefix is added when the public key is written to the
> remote node's authorized_keys file, so that anyone who gains access using
> this key can only run the gsyncd command.
>
> ```
> command=gsyncd ssh-key….
> ```
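>
> For example, an entry in the remote node's authorized_keys typically
> looks something like this (the exact gsyncd path varies by distribution,
> and the key and hostname here are placeholders):
>
> ```
> command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAA... root@master1
> ```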
>
>
>
> Thanks for your help.
>
>
> Regards,
> Aravinda Vishwanathapura
> https://kadalu.io
>
>

-- 
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782