[Gluster-devel] Replace cluster wide gluster locks with volume wide locks
vbellur at redhat.com
Fri Sep 13 05:28:03 UTC 2013
On 09/13/2013 12:30 AM, Avra Sengupta wrote:
> After having further discussions, we revisited the requirements and it
> looks possible to further improve them, as well
> as the design.
> 1. We classify all gluster operations into three different classes :
> Create volume, Delete volume, and volume-specific
> 2. At any given point of time, we should allow two simultaneous
> operations (create, delete or volume-specific), as long
> as both operations are not happening on the same volume.
> 3. If two simultaneous operations are performed on the same volume, the
> operation which manages to acquire the volume
> lock will succeed, while the other will fail.
> In order to achieve this, we propose a locking engine, which will
> receive lock requests from these three types of operations.
How is the locking engine proposed to be implemented? Is it part of
glusterd or a separate process?
> Each such request for a particular volume will contest for
> the same volume lock (based on the volume name
> and the node-uuid). For example, a delete volume command for volume1 and
> a volume status command for volume1 will
> contest for the same lock (comprising the volume name and the uuid
> of the node winning the lock), in which case,
> one of these commands will succeed and the other will fail
> to acquire the lock.
Will volume status need to hold a lock?
> Whereas, if two operations are simultaneously performed on different
> volumes, they should happen smoothly, as both
> these operations would request the locking engine for two different
> locks, and will succeed in locking them in parallel.
How do you propose to manage the op state machine? Right now it is
global in scope - how does that fit into this model?