[Gluster-devel] RFC/Review: libgfapi object handle based extensions

Shyamsundar Ranganathan srangana at redhat.com
Mon Oct 7 10:49:00 UTC 2013


----- Original Message ----- 

> From: "Anand Avati" <avati at gluster.org>
> To: "Shyamsundar Ranganathan" <srangana at redhat.com>
> Cc: "Gluster Devel" <gluster-devel at nongnu.org>
> Sent: Tuesday, October 1, 2013 7:46:59 AM
> Subject: Re: RFC/Review: libgfapi object handle based extensions

> > > Now consider what happens in case of READDIRPLUS. A list of names
> > > and handles are returned to the client. The list of names can
> > > possibly include names which were previously looked up as well.
> > > Both are supposed to represent the same "gfid", but here we will
> > > be returning new glfs_objects. When a client performs an operation
> > > on a GFID, on which glfs_object will the operation be performed at
> > > the gfapi layer? This part seems very ambiguous and not clear.

> > I should have made a note for readdirplus earlier; this would
> > default to the fd based version of the same, not a handle/object
> > based version. So we would transition from a handle to an fd via
> > glfs_h_opendir and then continue with the readdir variants. If I
> > look at the POSIX *at routines, this seems about right, but of
> > course we may have variances here.

> You would get an fd for the directory on which the READDIRPLUS is attempted.
> I was referring to the replies, where every entry needs to be returned with
> its own handle (on which operations can arrive without LOOKUP). Think of
> READDIRPLUS as bulk LOOKUP.
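
To make the fd based flow above concrete, here is a minimal sketch. glfs_h_opendir() is from the patch under review and glfs_readdirplus_r() is the existing fd based call; header names and exact signatures are as in the patch set, so treat this as an outline rather than final API:

    #include <stdio.h>
    #include <dirent.h>
    #include <sys/stat.h>
    #include <glusterfs/api/glfs.h>
    /* glfs-handles.h is introduced by the patch under review */
    #include <glusterfs/api/glfs-handles.h>

    /* Transition from an object handle to an fd, then use the fd
     * based readdir variants. Each entry comes back with its stat,
     * i.e. READDIRPLUS behaving as a bulk LOOKUP. */
    static void
    list_dir (glfs_t *fs, struct glfs_object *dir)
    {
            glfs_fd_t     *fd = glfs_h_opendir (fs, dir);
            struct dirent  de, *entry = NULL;
            struct stat    st;

            if (!fd)
                    return;

            while (glfs_readdirplus_r (fd, &st, &de, &entry) == 0 &&
                   entry != NULL)
                    printf ("%s: ino %ld\n", entry->d_name,
                            (long)st.st_ino);

            glfs_closedir (fd);
    }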

> > > What would really help is if you can tell what a glfs_object is
> > > supposed to represent - an on disk inode (i.e. GFID)? An in memory
> > > per-graph inode (i.e. inode_t)? A dentry? A per-operation handle
> > > to an on disk inode? A per-operation handle to an in memory
> > > per-graph inode? A per-operation handle to a dentry? In the
> > > current form, it does not seem to fit any of these categories.

> > Well, I think of it as a handle to a file system object. Having
> > said that, if we just returned the inode pointer as this handle,
> > graph switches can cause a problem, in which case we need to default
> > to (as per my understanding) the FUSE manner of working. Keeping the
> > handle 1:1 via other infrastructure does not seem beneficial ATM. I
> > think you cover this in the subsequent mail, so let us continue
> > there.

> That is correct, using inode_t will force us to behave like FUSE. As
> mentioned in the other mail, we are probably better off fixing that and
> using inode_t in a cleaner way in both FUSE and gfapi.

Based on the mailing list discussion about new glfs_object pointers being returned for the same object on multiple lookups and/or on creates, a call was set up between the reviewers, and it was determined to proceed as follows:

- The change to return the inode pointer as an opaque glfs_object * will not impact the interface.
- The change to move to an inode * internally is required, at the very least for the readdirplus implementation when it is needed.
- When moving to the inode * method internally, the nlookup count of the inode should be incremented rather than its refs (to stay in sync with the FUSE code).
- When moving to the inode * method, an additional forget/nclose implementation would be needed, so that consumers of the API can forget _n_ instances of the same object looked up in a single call (rather than issuing multiple glfs_h_close calls); see the sketch below.
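
As a rough illustration of the forget semantics in the last point, consider the consumer flow below. glfs_h_forget() is a placeholder name for the proposed call and not part of the current patch set; headers as in the earlier sketch:

    /* Hypothetical sketch: glfs_h_forget() stands in for the proposed
     * "forget n instances" call, not final API. */
    static void
    lookup_twice_forget_once (glfs_t *fs)
    {
            struct stat         st;
            struct glfs_object *h1, *h2;

            h1 = glfs_h_lookupat (fs, NULL, "/dir/file", &st);
            h2 = glfs_h_lookupat (fs, NULL, "/dir/file", &st);
            (void)h2;

            /* Both lookups resolve to the same inode; internally each
             * lookup bumps nlookup (as in FUSE), not the inode refs. */

            glfs_h_forget (fs, h1, 2); /* forget both instances in one
                                          call, instead of calling
                                          glfs_h_close twice */
    }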

Thanks to Avati and Amar for catching this, for the feedback, and for resolving the intricacies around it. Please add anything in this regard that has been missed.

The code put up for review hence has the following changes:

The current patch set has no changes with regard to glfs_object becoming an inode; we will take a bug to move the gluster inode infrastructure to be graph independent, and later accommodate that change into the glfs_h_* APIs (to be filed if and when this patch makes it upstream).

This patch set contains changes for the other comments so far, and also a reference implementation of UID/GID/supplementary group setting, so that multi-threaded consumers of these APIs can leverage identity handling.
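
For reference, the identity handling works along the following lines. glfs_setfsuid()/glfs_setfsgid()/glfs_setfsgroups() act on the calling thread only, so each worker thread in a multi-threaded consumer can assume its caller's identity before issuing handle based calls (sketch based on the reference implementation in the patch set, hence subject to review):

    /* Per-thread identity for a multi-threaded consumer; the set
     * calls below affect only the calling thread. */
    static void
    act_as_caller (glfs_t *fs)
    {
            gid_t               groups[] = {100, 1001};
            struct stat         st;
            struct glfs_object *obj;

            glfs_setfsuid (1000);         /* caller's uid */
            glfs_setfsgid (100);          /* caller's primary gid */
            glfs_setfsgroups (2, groups); /* supplementary groups */

            /* handle based calls on this thread now carry the above
             * identity */
            obj = glfs_h_lookupat (fs, NULL, "/export/file", &st);
            (void)obj;
    }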

Shyam

> Avati



