[Gluster-users] Geo-rep failing initial sync

Wade Fitzpatrick wade.fitzpatrick at ladbrokes.com.au
Tue Oct 20 05:44:39 UTC 2015


Thanks Saravana, that is starting to make sense. The change_detector was 
already set to changelog (automatically). I updated it to xsync and the 
volume successfully replicated to the remote volume. However, I then 
deleted all the data from the master and those deletions have not been 
replicated yet, even after pausing, resuming, stopping and starting the 
geo-rep session.
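
For reference, this is roughly what I ran; mastervol and 
slavehost::slavevol below are placeholders for our actual session, not 
the real names:

    gluster volume geo-replication mastervol slavehost::slavevol config change_detector xsync
    gluster volume geo-replication mastervol slavehost::slavevol pause
    gluster volume geo-replication mastervol slavehost::slavevol resume
    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol start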

I think somehow it prematurely switched to CHANGELOG mode before the 
initial sync had completed.
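
If it helps, the crawl currently in use shows up in the status output 
(same placeholder names as above); as I understand it, the CRAWL STATUS 
column reads "Hybrid Crawl" while Xsync is running and "Changelog Crawl" 
once the session has switched over:

    gluster volume geo-replication mastervol slavehost::slavevol status detail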

We have 6 identical servers across 3 sites. The two at site A form one 
stripe, which is mirrored to site B, and those four servers are all on 
the same logical network; but we also want the data replicated to site 
C, 1000 km away, where our developers can access it read-only.

We chose a Stripe volume to distribute the I/O load and to increase the 
available capacity for bricks at each site, as each server has only one 
NVMe disk in it.
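
For completeness, the master volume was created along these lines; the 
hostnames and brick paths here are only illustrative, and the brick 
ordering (which bricks end up mirrored versus striped) is worth 
double-checking against the docs:

    gluster volume create mastervol stripe 2 replica 2 \
        siteA-1:/bricks/nvme0/brick siteB-1:/bricks/nvme0/brick \
        siteA-2:/bricks/nvme0/brick siteB-2:/bricks/nvme0/brick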

Regards,
Wade.

On 19/10/2015 7:07 pm, Saravanakumar Arumugam wrote:
> Hi Wade,
>
> There seems to be some issue in syncing the existing data in the 
> volume using the Xsync crawl.
> (To give some background: when geo-rep is started it does a 
> filesystem crawl (Xsync) and syncs all the data to the slave, and 
> then the session switches to CHANGELOG mode.)
>
> We are looking into this.
>
> Any specific reason to go for a Stripe volume? It does not seem to 
> be extensively tested with geo-rep.
>
> Thanks,
> Saravana
>
>
