[Gluster-devel] Rebalance data migration and corruption

Raghavendra G raghavendra at gluster.com
Tue Feb 9 07:00:58 UTC 2016


>>     Right. But if there is simultaneous access to the same file from
>>     any other client and the rebalance process, delegations shall not
>>     be granted, or shall be revoked if already granted, even though the
>>     two are operating at different offsets. So if you rely only on
>>     delegations, migration may not proceed if an application holds a
>>     lock or is doing any I/O.
>>
>>
>> Does the brick process wait for the response of the delegation holder
>> (the rebalance process here) before it wipes out the delegation/locks?
>> If that's the case, the rebalance process can complete one transaction
>> of (read, src) and (write, dst) before responding to a delegation
>> recall. That way there is no starvation for either the applications or
>> the rebalance process (though this makes both of them slower, that
>> cannot be helped, I think).
>>
>
> Yes. The brick process should wait for a certain period before forcefully
> revoking the delegation in case it is not returned by the client. Also, if
> required (as done by NFS servers), we can choose to increase this timeout
> value at run time if the client is diligently flushing the data.


hmm.. I would prefer an infinite timeout. The only scenario where the brick
process should forcefully flush leases is a connection loss with the
rebalance process. The more scenarios in which the brick can flush leases
without the rebalance process's knowledge, the more race windows we open up
for this bug to occur.

In fact, to be correct at least in theory, the rebalance process should
replay all the transactions that happened under a lease which the brick
flushed out (after re-acquiring that lease). So we would like to avoid any
such scenarios.

Btw, what is the necessity of timeouts? Are they an insurance against rogue
clients that won't respond to lease recalls?
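
For clarity, the brick-side behaviour I am arguing for would look roughly
like the sketch below. Again, this is pseudo-C with hypothetical helpers
(recall_lease(), lease_returned(), client_disconnected(), revoke_lease(),
wait_for_event()), not the actual server-side lease machinery:

struct lease_client;
struct lease {
        struct lease_client *client;
};

void recall_lease (struct lease *l);
int  lease_returned (struct lease *l);
int  client_disconnected (struct lease_client *c);
void revoke_lease (struct lease *l);
void wait_for_event (struct lease *l);

int
brick_recall_and_wait (struct lease *l)
{
        recall_lease (l);  /* ask the holder (rebalance process) to return it */

        for (;;) {
                if (lease_returned (l))
                        return 0;          /* normal path: holder replied */

                if (client_disconnected (l->client)) {
                        /* The only forced flush needed: the holder is gone,
                           so there is nobody left to replay against. */
                        revoke_lease (l);
                        return 0;
                }

                /* No timer here; a timeout would only be insurance against
                   a rogue client that never answers the recall. */
                wait_for_event (l);
        }
}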
