[Gluster-users] Again on GlusterFS and active/active WAN Replication
Gionatan Danti
g.danti at assyoma.it
Mon Feb 17 11:12:52 UTC 2014
On 02/17/2014 11:18 AM, Vijay Bellur wrote:
>
> write-behind can help with write operations but the lookup preceding the
> write is sent out to all bricks today and hence that affects overall
> performance.
>
Ok, this is in line with my tests and the auto-generated configuration files.
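For context, write-behind is a per-volume option; a minimal sketch of enabling and sizing it (the volume name "myvol" is hypothetical; options as documented for GlusterFS 3.x):

```shell
# Enable write-behind and enlarge its aggregation window.
# Note: this only batches the writes themselves; the lookup preceding
# each write is still sent to all bricks, as Vijay points out above.
gluster volume set myvol performance.write-behind on
gluster volume set myvol performance.write-behind-window-size 4MB
```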
>
> Not as of today.
>
> One possibility is to let local clients assume that the remote target is
> not reachable and hence all operations happen locally. self-heal-daemon
> can perform background syncs when they notice that there is a delta.
> Achieving this form of near synchronous replication will involve some
> amount of work. If the same file is updated from multiple sites, then
> there are chances of running into split-brains and we would need good
> conflict resolution mechanisms too.
>
I thought the same thing, but what scares me is that I expect split-brain
to be quite common, even with frequent volume heal cycles (e.g. every
15 min), because users are likely to work on the same office files. From
what I understand, a split-brain scenario leads to inaccessible files
until the split-brain condition is resolved (which implies manual
operations at the hard-link level). Is that true?
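For reference, the kind of manual, hard-link-level resolution implied above can be sketched roughly as follows (the brick path and volume name are hypothetical; commands as documented for GlusterFS 3.x replicate volumes):

```shell
# List entries currently in split-brain on the replicated volume.
gluster volume heal myvol info split-brain

# Manual resolution, performed directly on the brick chosen as the
# "bad" copy: both the data file and its GFID hard link under the
# hidden .glusterfs directory must be removed, after which a heal
# re-copies the surviving replica.
getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/file.odt
rm /bricks/brick1/path/to/file.odt
rm /bricks/brick1/.glusterfs/xx/yy/<gfid>  # path derived from the GFID
gluster volume heal myvol
```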
>
> The answer is no, however if you do not require all of your data in a
> single namespace, you can configure geo-replication over two volumes in
> different directions.
>
> i.e vol1 (Site A) ==========> vol1 (Site B)
>
> and
>
> vol2 (Site B) =============> vol2 (Site A)
>
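A sketch of the two-volume, opposite-direction setup described above, assuming hypothetical hostnames siteA/siteB (commands per the GlusterFS 3.x geo-replication CLI):

```shell
# On site A: geo-replicate vol1 to site B (master -> slave).
gluster volume geo-replication vol1 siteB::vol1 create push-pem
gluster volume geo-replication vol1 siteB::vol1 start

# On site B: geo-replicate vol2 back to site A.
gluster volume geo-replication vol2 siteA::vol2 create push-pem
gluster volume geo-replication vol2 siteA::vol2 start
```

Each site then has a writable master volume and a read-only slave copy of the other site's volume, but the two volumes remain separate namespaces.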
Unfortunately, I need the same namespace on both offices.
>
> Cannot readily think of anything that does not involve code changes.
>
> Thanks,
> Vijay
>
Thank you very much for your clear replies; they are much appreciated.
Regards.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8