[Gluster-devel] semi-sync replication
avishwan at redhat.com
Wed Aug 12 13:28:17 UTC 2015
I think NSR is a good candidate here. It has leader election for
writing data; that election could be enhanced to give higher priority
to SSD bricks when choosing the leader.
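A minimal sketch of the idea, assuming a deterministic tiebreak rather than NSR's actual term/quorum-based election (the `is_ssd` flag and brick layout are hypothetical):

```python
def elect_leader(bricks):
    """Pick a leader, preferring SSD-backed bricks.

    bricks: list of dicts like {"name": str, "is_ssd": bool}.
    Among SSD bricks (or among HDD bricks if no SSD exists),
    the lowest name wins as a stable tiebreak.
    Illustrative only; NSR's real election also considers
    terms, quorum, and brick health.
    """
    return min(bricks, key=lambda b: (not b["is_ssd"], b["name"]))
```

With this ordering, an SSD brick always outranks an HDD brick, so a capacitor-backed or NVRAM brick would end up absorbing the synchronous writes.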
On 08/12/2015 06:06 PM, Ravishankar N wrote:
> On 08/12/2015 05:56 PM, Anoop Nair wrote:
>> Hmm, that's kind of risky. What if your good leg fails before the sync
>> happens to the secondary leg?
> Oh, the writes would still need to happen as part of the AFR
> transaction; so if the writes (which are wound to all bricks
> immediately; it's just that we don't wait for all responses before
> unwinding to DHT) failed on some bricks, the self-heal would take
> care of it.
>> Replay cache may serve as a lifeline in such a scenario.
>> ----- Original Message -----
>> From: "Ravishankar N" <ravishankar at redhat.com>
>> To: "Anoop Nair" <annair at redhat.com>, gluster-devel at gluster.org
>> Sent: Wednesday, August 12, 2015 5:46:04 PM
>> Subject: Re: [Gluster-devel] semi-sync replication
>> On 08/12/2015 12:50 PM, Anoop Nair wrote:
>>> Do we have plans to support "semi-synchronous" replication in
>>> the future? By semi-sync I mean writing to one leg of the replica,
>>> securing the write on faster stable storage (capacitor-backed SSD
>>> or NVRAM), and then acknowledging the client. The write on the other
>>> replica leg may happen at a later point in time.
>> Not exactly in the way you describe, but there are plans to achieve
>> "near-synchronous" replication wherein we wind the write to all replica
>> legs, but acknowledge success as soon as we hear a success from one of
>> the bricks (instead of waiting for responses from all bricks as we do
>> today).
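A sketch of that acknowledge-on-first-success pattern, assuming hypothetical brick objects with a `write` method (AFR's real transaction also handles locking and changelog xattrs, none of which is modeled here):

```python
import concurrent.futures

def write_near_sync(bricks, data):
    """Wind the write to every replica leg in parallel, but return
    (acknowledge the client) as soon as the first brick reports
    success. The remaining writes keep running in the background;
    a self-heal pass would reconcile any legs that failed.
    Illustrative only, not AFR's actual transaction code.
    """
    ex = concurrent.futures.ThreadPoolExecutor(max_workers=len(bricks))
    futures = [ex.submit(b.write, data) for b in bricks]
    for f in concurrent.futures.as_completed(futures):
        if f.result():               # first success: ack immediately
            ex.shutdown(wait=False)  # let the other legs finish later
            return True
    ex.shutdown(wait=True)
    return False                     # every leg failed; report the error
```

The client-visible latency is then governed by the fastest leg, which is exactly why biasing the fast path toward an SSD or NVRAM brick pays off.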