[Gluster-devel] Posix lock migration design

Susant Palai spalai at redhat.com
Thu Mar 3 09:39:06 UTC 2016


Update on Lock migration design.

For lock migration we plan to get rid of the fd association with the lock. Instead, lock operations will be keyed on the
lk-owner (the POSIX-standard owner identifier, roughly the equivalent of a pid). The fd association does not suit the needs
of lock migration, since a migrated fd is not valid on the destination brick, whereas the lk-owner is more flexible because
it does not change across servers.

The current posix-lock infrastructure associates an fd with each lock for the following operations, which we plan to
rework to use the lk-owner instead (a rough sketch of the idea follows the list):

1) lock cleanup for protocol client disconnects based on fd

2) release call on fd 

3) fuse fd migration (triggered by a graph switch)
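
As a rough illustration of what fd-free, lk-owner-keyed lock cleanup could look like, here is a minimal, self-contained
sketch. The types and names (lkowner_t, lock_t, plocks_inode_t, cleanup_locks_of_owner) are simplified stand-ins invented
for this example and are not the actual posix-locks structures:

#include <stdlib.h>
#include <string.h>

/* Simplified stand-ins for the real lock/inode structures.  The lk-owner is
 * treated as an opaque blob filled in by the client's kernel/VFS, so it
 * compares the same no matter which brick the lock ends up on. */

typedef struct {
        int  len;
        char data[64];
} lkowner_t;

typedef struct lock {
        struct lock *next;
        void        *client;  /* connection the lock was requested over */
        lkowner_t    owner;   /* lk-owner carried with the lock request */
        /* offset, length, lock type, etc. omitted for brevity */
} lock_t;

typedef struct {
        lock_t *locks;        /* granted locks on this inode */
} plocks_inode_t;

static int
same_lkowner (const lkowner_t *a, const lkowner_t *b)
{
        return a->len == b->len && memcmp (a->data, b->data, a->len) == 0;
}

/* Drop every lock held by (client, owner) on this inode.  No fd is consulted,
 * so the same routine works on a migration destination where the client may
 * never have opened an fd at all. */
static void
cleanup_locks_of_owner (plocks_inode_t *pli, void *client,
                        const lkowner_t *owner)
{
        lock_t **pp = &pli->locks;

        while (*pp != NULL) {
                lock_t *l = *pp;

                if (l->client == client && same_lkowner (&l->owner, owner)) {
                        *pp = l->next;
                        free (l);
                } else {
                        pp = &l->next;
                }
        }
}

The point of the sketch is that the cleanup key is the (client, lk-owner) pair, both of which survive migration, whereas
an fd is only meaningful on the brick where it was opened.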

The new design is being worked out, and I will post an update here once it is ready.

Please post your suggestions/comments here :)

Thanks,
Susant

----- Original Message -----
> From: "Raghavendra G" <raghavendra at gluster.com>
> To: "Susant Palai" <spalai at redhat.com>
> Cc: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Tuesday, 1 March, 2016 11:40:54 AM
> Subject: Re: [Gluster-devel] Posix lock migration design
> 
> On Mon, Feb 29, 2016 at 12:52 PM, Susant Palai <spalai at redhat.com> wrote:
> 
> > Hi Raghavendra,
> >    I have a question on the design.
> >
> >    Currently, in case of a client disconnection, pl_flush cleans up the
> > locks associated with the fd created from that client.
> > Per the design, rebalance will migrate the locks to the new destination.
> > Now, if the client gets disconnected from the
> > destination brick, how is it supposed to clean up the locks, given that
> > rebalance/the brick has no idea whether the client has opened
> > an fd on the destination, or what that fd is?
> >
> 
> >    So the question is how to associate the client fd with locks on
> > destination.
> >
> 
> We don't use fds to clean up the locks during flush. We use the lk-owner,
> which doesn't change across migration. Note that the lk-owner for posix
> locks is filled in by the VFS/kernel on the machine where the glusterfs
> mount lives.
> 
> <pl_flush>
>         pthread_mutex_lock (&pl_inode->mutex);
>         {
>                 __delete_locks_of_owner (pl_inode, frame->root->client,
>                                          &frame->root->lk_owner);
>         }
>         pthread_mutex_unlock (&pl_inode->mutex);
> </pl_flush>
> 
> 
> > Thanks,
> > Susant
> >
> > ----- Original Message -----
> > From: "Susant Palai" <spalai at redhat.com>
> > To: "Gluster Devel" <gluster-devel at gluster.org>
> > Sent: Friday, 29 January, 2016 3:15:14 PM
> > Subject: [Gluster-devel] Posix lock migration design
> >
> > Hi,
> >    Here, [1]
> >
> > https://docs.google.com/document/d/17SZAKxx5mhM-cY5hdE4qRq9icmFqy3LBaTdewofOXYc/edit?usp=sharing
> > is a google document about proposal for "POSIX_LOCK_MIGRATION". Problem
> > statement and design are explained in the document it self.
> >
> >   Requesting the devel list to go through the document and
> > comment/analyze/suggest, to take the thoughts forward (either on the
> > google doc itself or here on the devel list).
> >
> >
> > Thanks,
> > Susant
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
> 
> 
> 
> --
> Raghavendra G
> 

