[Gluster-devel] Posix lock migration design

Anoop C S anoopcs at redhat.com
Tue Mar 1 07:40:03 UTC 2016


On Tue, 2016-03-01 at 11:40 +0530, Raghavendra G wrote:
> 
> On Mon, Feb 29, 2016 at 12:52 PM, Susant Palai <spalai at redhat.com>
> wrote:
> > Hi Raghavendra,
> >    I have a question on the design.
> > 
> >    Currently, in case of a client disconnection, pl_flush cleans up
> > the locks associated with the fds created from that client.
> > Per the design, rebalance will migrate the locks to the new
> > destination. Now, in case the client gets disconnected from the
> > destination brick, how is it supposed to clean up the locks, given
> > that rebalance/the brick has no idea whether the client has opened
> > an fd on the destination, or what that fd is?

> >    So the question is how to associate the client fd with locks on
> > the destination.
> We don't use fds to clean up the locks during flush. We use lk-owner,
> which doesn't change across migration. Note that lk-owner for posix-
> locks is filled in by the vfs/kernel where we have the glusterfs mount.
A small note:
Since we don't set lk_owner for gfapi-based glusterfs clients, in those
scenarios frame->root->lk_owner is derived from the pid of the client
via bit-wise shift operations:
<rpc_clnt_record>
        . . .
        if (call_frame->root->lk_owner.len) {
                au.lk_owner.lk_owner_val = call_frame->root->lk_owner.data;
        } else {
                /* No lk_owner set: fall back to packing the client's
                 * pid into a 4-byte owner, least significant byte first. */
                owner[0] = (char)(au.pid & 0xff);
                owner[1] = (char)((au.pid >> 8) & 0xff);
                owner[2] = (char)((au.pid >> 16) & 0xff);
                owner[3] = (char)((au.pid >> 24) & 0xff);
                au.lk_owner.lk_owner_val = owner;
                au.lk_owner.lk_owner_len = 4;
        }
        . . .
</rpc_clnt_record>
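So, for example, a pid of 0x12345678 yields the owner bytes 78 56 34 12,
i.e. the pid in little-endian byte order. A minimal standalone sketch of
the same packing (plain C, not GlusterFS code, just for illustration):
<pid_packing_sketch>
#include <stdio.h>

int main (void)
{
        int  pid = 0x12345678;  /* example pid */
        char owner[4];

        /* Same packing as the rpc_clnt fallback above:
         * least significant byte of the pid goes first. */
        owner[0] = (char)(pid & 0xff);
        owner[1] = (char)((pid >> 8) & 0xff);
        owner[2] = (char)((pid >> 16) & 0xff);
        owner[3] = (char)((pid >> 24) & 0xff);

        /* prints: 78 56 34 12 */
        printf ("%02hhx %02hhx %02hhx %02hhx\n",
                owner[0], owner[1], owner[2], owner[3]);
        return 0;
}
</pid_packing_sketch>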
To address this issue, we have http://review.gluster.org/#/c/12876/,
which will expose a public API to set lk_owner for gfapi-based clients.
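The patch is still under review, so treat the call below as an assumption
for illustration: glfs_fd_set_lkowner() and its signature are my sketch of
what such an API could look like, not a merged interface. The idea is that
an application tags its lock requests with an explicit lk_owner instead of
relying on the pid-derived fallback:
<lk_owner_api_sketch>
#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <string.h>

static int
lock_with_explicit_owner (glfs_fd_t *fd)
{
        char         owner[] = "my-app-owner-1";  /* hypothetical owner id */
        struct flock lock    = {0};
        int          ret     = 0;

        /* Assumed API from 12876: tag all further lock requests on
         * this fd with our own lk_owner. */
        ret = glfs_fd_set_lkowner (fd, owner, strlen (owner));
        if (ret)
                return ret;

        lock.l_type   = F_WRLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start  = 0;
        lock.l_len    = 0;      /* whole file */

        return glfs_posix_lock (fd, F_SETLK, &lock);
}
</lk_owner_api_sketch>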
> <pl_flush>
>         pthread_mutex_lock (&pl_inode->mutex);
>         {
>                 __delete_locks_of_owner (pl_inode, frame->root->client,
>                                          &frame->root->lk_owner);
>         }
>         pthread_mutex_unlock (&pl_inode->mutex);
> </pl_flush>
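The key point in the snippet above is that locks are matched by the
(client, lk_owner) pair rather than by fd, which is why they remain
cleanable after migration. Roughly paraphrasing the loop inside
__delete_locks_of_owner() (a sketch, not the exact source):
<delete_locks_of_owner_sketch>
/* Walk the per-inode lock list and drop every lock whose
 * (client, lk_owner) pair matches the flushing client. */
posix_lock_t *l   = NULL;
posix_lock_t *tmp = NULL;

list_for_each_entry_safe (l, tmp, &pl_inode->ext_list, list) {
        if (l->client == client &&
            is_same_lkowner (&l->owner, owner)) {
                list_del_init (&l->list);  /* unlink from the inode's list */
                __destroy_lock (l);        /* free the lock */
        }
}
</delete_locks_of_owner_sketch>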
> > 
> > Thanks,
> > Susant
> > 
> > ----- Original Message -----
> > From: "Susant Palai" <spalai at redhat.com>
> > To: "Gluster Devel" <gluster-devel at gluster.org>
> > Sent: Friday, 29 January, 2016 3:15:14 PM
> > Subject: [Gluster-devel] Posix lock migration design
> > 
> > Hi,
> >    Here, [1]
> > https://docs.google.com/document/d/17SZAKxx5mhM-cY5hdE4qRq9icmFqy3LBaTdewofOXYc/edit?usp=sharing
> > is a Google document with the proposal for "POSIX_LOCK_MIGRATION".
> > The problem statement and design are explained in the document itself.
> > 
> >    Requesting the devel list to go through the document and
> > comment/analyze/suggest, to take the thoughts forward (either on the
> > Google doc itself or here on the devel list).
> > 
> > Thanks,
> > Susant
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel

> -- 
> Raghavendra G