[Gluster-users] distributed-replicated pool geo-replication to distributed-only pool only syncing to one slave node
Aravinda
avishwan@redhat.com
Wed Feb 24 06:41:40 UTC 2016
Answers inline.
regards
Aravinda
http://aravindavk.in
On 02/23/2016 02:25 PM, Christian Rice wrote:
> The subject line is a mouthful, but pretty much says it all.
>
> apivision:~$ sudo gluster volume geo-replication MIXER svc-mountbroker@trident24::DR-MIXER status
>
> MASTER NODE    MASTER VOL    MASTER BRICK            SLAVE USER         SLAVE                                  SLAVE NODE    STATUS     CRAWL STATUS     LAST_SYNCED
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------
> apivision      MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    ua610         Active     History Crawl    2016-02-22 21:45:56
> studer900      MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    trident24     Passive    N/A              N/A
> neve88rs       MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    trident24     Passive    N/A              N/A
> ssl4000        MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    ua610         Active     History Crawl    2016-02-22 22:05:53
>
>
> This seems to indicate that only one of my slave nodes is actively participating in the geo-replication. That seems wrong to me, or did I misunderstand the new geo-replication feature that lets multiple nodes participate in the process? Can I get it to balance the rsyncs across more than one slave node?
Sync happens from a Master Volume mount to a Slave Volume mount. One
worker per master replica pair becomes Active (its replica partner stays
Passive as a standby), which is why you see exactly two Active workers.
Both Active workers happened to connect to ua610 and maintain their Slave
Volume mounts on that node, but they write through the mounted Slave
Volume, so data is still distributed across the Slave as usual (depending
on the Slave Volume topology).
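If you want to confirm that both Active workers are syncing, the detailed
status output shows per-worker counters (a sketch using your volume and
slave names; the exact columns vary by Gluster version):

    gluster volume geo-replication MIXER svc-mountbroker@trident24::DR-MIXER status detail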
>
> I used georepsetup which, by the way, is a freaking awesome tool that did in a few seconds what I had been tearing my hair out trying to do for days--namely, get geo-replication working with mountbroker. But even using simple root geo-replication with manual setup, the balance seemed to fall this way every time on the back end.
Glad that tool helped you to set up Geo-replication. Let us know if you
have any feedback for the tool
(https://github.com/aravindavk/georepsetup/issues).
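For anyone else following along, setup is a single command; roughly like
this (the placeholders are mine, and the README covers mountbroker-specific
options):

    georepsetup <MASTERVOL> <SLAVEHOST> <SLAVEVOL>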
>
> Debian 8/Jessie, gluster 3.7.8-1, on zfs, with a 119TB volume at each end. Data is properly distributing in the slave pool (at cursory glance), and in general I’m not aware of anything being outright broken. Front-end replica pairs are apivision/neve88rs and ssl4000/studer900.
>
> PS it’s in history crawl at the moment due to pausing/resuming geo-replication.
>
>
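On the PS: History Crawl after a pause/resume is expected. On resume,
Geo-replication replays the changelog history from the last synced time to
catch up, and it should switch back to Changelog Crawl once caught up. For
reference, the pause/resume commands (using your volume names) look like
this:

    gluster volume geo-replication MIXER svc-mountbroker@trident24::DR-MIXER pause
    gluster volume geo-replication MIXER svc-mountbroker@trident24::DR-MIXER resume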