[Gluster-devel] Need inputs on patch #17985
Raghavendra G
raghavendra.hg at gmail.com
Tue Sep 12 06:01:44 UTC 2017
Update: two more days to go until the deadline. So far, no open issues
have been identified against this patch.
On Fri, Sep 8, 2017 at 6:54 AM, Raghavendra Gowdappa <rgowdapp at redhat.com>
wrote:
>
>
> ----- Original Message -----
> > From: "FNU Raghavendra Manjunath" <rabhat at redhat.com>
> > To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>
> > Cc: "Raghavendra G" <raghavendra.hg at gmail.com>, "Nithya Balachandran" <nbalacha at redhat.com>, anoopcs at redhat.com, "Gluster Devel" <gluster-devel at gluster.org>, "Raghavendra Bhat" <raghavendra at redhat.com>
> > Sent: Thursday, September 7, 2017 6:44:51 PM
> > Subject: Re: [Gluster-devel] Need inputs on patch #17985
> >
> > From the snapview-client perspective, there is one important thing to
> > note: to build the context for the entry point (by default ".snaps"),
> > an explicit lookup has to be done on it. The dentry for ".snaps" is
> > not returned when readdir is done on its parent directory (not even
> > with "ls -a"). So, to build the context of ".snaps" (in the inode
> > context, snapview-client saves whether the inode is real or virtual),
> > we need a lookup.
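> >
> > As a minimal sketch of the idea (hypothetical names, not the actual
> > snapview-client code), a lookup callback is the natural place to
> > record whether the inode is real or virtual in the inode context:
> >
> >     #include <glusterfs/xlator.h>   /* "xlator.h" on older trees */
> >
> >     /* Hypothetical context values; snapview-client keeps its own. */
> >     enum { SVC_REAL_INODE = 1, SVC_VIRTUAL_INODE = 2 };
> >
> >     static int32_t
> >     svc_example_lookup_cbk(call_frame_t *frame, void *cookie,
> >                            xlator_t *this, int32_t op_ret,
> >                            int32_t op_errno, inode_t *inode,
> >                            struct iatt *buf, dict_t *xdata,
> >                            struct iatt *postparent)
> >     {
> >             /* Assume the wind path flagged ".snaps" lookups via the
> >              * cookie (hypothetical). Readdir on the parent never
> >              * returns the ".snaps" dentry, so this callback is the
> >              * only place the real/virtual flag can be stored. */
> >             gf_boolean_t is_snaps_entry = (cookie != NULL);
> >
> >             if (op_ret == 0)
> >                     inode_ctx_put(inode, this,
> >                                   is_snaps_entry ? SVC_VIRTUAL_INODE
> >                                                  : SVC_REAL_INODE);
> >
> >             STACK_UNWIND_STRICT(lookup, frame, op_ret, op_errno,
> >                                 inode, buf, xdata, postparent);
> >             return 0;
> >     }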
>
> Since the dentry corresponding to ".snaps" is not returned, there won't
> be an inode for this directory linked in the itable. Also, glusterfs
> wouldn't have handed the kernel a nodeid corresponding to ".snaps" in
> the readdir response (as the dentry itself is not returned). So, the
> kernel will do an explicit lookup before doing any operation on
> ".snaps" (unlike dentries that do carry a nodeid, for which the kernel
> can choose to skip the lookup) and we are safe. So, #17985 is safe in
> its current form.
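>
> As a protocol-level illustration of that point (a sketch using the
> READDIRPLUS structs from <linux/fuse.h>, not the actual fuse-bridge
> code): an entry whose fuse_entry_out.nodeid is 0 gives the kernel
> nothing to cache, so the kernel must send FUSE_LOOKUP before any other
> fop on that name.
>
>     #include <dirent.h>
>     #include <stdint.h>
>     #include <string.h>
>     #include <linux/fuse.h>
>
>     /* Fill one READDIRPLUS entry. nodeid == 0 mimics what happens for
>      * ".snaps": no usable nodeid reaches the kernel, which forces a
>      * FUSE_LOOKUP before any fop on that name. */
>     static void
>     fill_entry(struct fuse_direntplus *dp, const char *name,
>                uint64_t nodeid)
>     {
>             size_t len = strlen(name);
>
>             memset(dp, 0, sizeof(*dp));
>             dp->entry_out.nodeid = nodeid; /* 0 => kernel must lookup */
>             dp->dirent.namelen = len;
>             dp->dirent.type = DT_DIR;
>             memcpy(dp->dirent.name, name, len);
>     }
>
>     int
>     main(void)
>     {
>             union {
>                     struct fuse_direntplus dp;
>                     char raw[sizeof(struct fuse_direntplus) + 32];
>             } u;
>
>             fill_entry(&u.dp, "ordinary-dir", 42); /* lookup skippable */
>             fill_entry(&u.dp, ".snaps", 0);        /* lookup forced    */
>             return 0;
>     }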
>
> >
> > From the snapview-server perspective as well, a lookup might be
> > needed. In snapview-server, a glfs handle is established between the
> > snapview server and the snapshot brick. So an inode in the
> > snapview-server process contains the glfs handle for the object being
> > accessed from the snapshot. In snapview-server, readdirp does not
> > build the inode context (which contains the glfs handle etc.) because
> > the glfs handle is returned only in lookup.
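> >
> > A small sketch of why lookup is special here (a hypothetical wrapper;
> > the glfs_h_lookupat() signature is from recent libgfapi, and older
> > releases lack the final follow argument): the handle-returning call
> > on the gfapi side is lookup-shaped, so only lookup can populate the
> > inode context with the handle.
> >
> >     #include <glusterfs/api/glfs.h>
> >     #include <glusterfs/api/glfs-handles.h>
> >
> >     static struct glfs_object *
> >     svs_example_resolve(struct glfs *fs, struct glfs_object *parent,
> >                         const char *name)
> >     {
> >             struct stat st;
> >
> >             /* readdirp against the snapshot brick returns dentries
> >              * but no glfs_object handle; only this lookup-style call
> >              * does, which is why the inode context can't be built
> >              * from readdirp alone. */
> >             return glfs_h_lookupat(fs, parent, name, &st, 0);
> >     }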
>
> The same argument I've given holds good for this case too. The
> important point is that no dentry, and hence no nodeid, corresponding
> to ".snaps" is passed to the kernel, so the kernel is forced to do an
> explicit lookup.
>
> >
> > Regards,
> > Raghavendra
> >
> >
> > On Tue, Aug 29, 2017 at 12:53 AM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
> >
> > >
> > >
> > > ----- Original Message -----
> > > > From: "Raghavendra G" <raghavendra.hg at gmail.com>
> > > > To: "Nithya Balachandran" <nbalacha at redhat.com>
> > > > Cc: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, anoopcs at redhat.com, "Gluster Devel" <gluster-devel at gluster.org>, raghavendra at redhat.com
> > > > Sent: Tuesday, August 29, 2017 8:52:28 AM
> > > > Subject: Re: [Gluster-devel] Need inputs on patch #17985
> > > >
> > > > On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <nbalacha at redhat.com> wrote:
> > > >
> > > > > It has been a while, but IIRC snapview-client (loaded above
> > > > > dht/tier etc.) had some issues when we ran tiering tests. Rafi
> > > > > might have more info on this - basically it was expecting to
> > > > > find the inode_ctx populated, but it was not.
> > > > >
> > > >
> > > > Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take
> > > > ownership of:
> > > >
> > > > * Identifying whether the patch in question causes the issue?
> > >
> > > gf_svc_readdirp_cbk is setting the relevant state in the inode [1].
> > > I quickly checked whether it is the same state stored by
> > > gf_svc_lookup_cbk, and it looks like it is. So, I guess readdirp is
> > > handled correctly by snapview-client and an explicit lookup is not
> > > required. But I will wait for inputs from rabhat and rafi.
> > >
> > > [1] https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962
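> > >
> > > For illustration, a simplified sketch of that pattern (hypothetical
> > > names, not the code behind [1]): the readdirp callback can walk the
> > > returned entries and store the same context value the lookup
> > > callback would have stored.
> > >
> > >     #include <glusterfs/xlator.h>   /* "xlator.h" on older trees */
> > >
> > >     enum { SVC_REAL_INODE = 1 };    /* hypothetical context value */
> > >
> > >     static int32_t
> > >     svc_example_readdirp_cbk(call_frame_t *frame, void *cookie,
> > >                              xlator_t *this, int32_t op_ret,
> > >                              int32_t op_errno, gf_dirent_t *entries,
> > >                              dict_t *xdata)
> > >     {
> > >             gf_dirent_t *entry = NULL;
> > >
> > >             if (op_ret >= 0) {
> > >                     list_for_each_entry(entry, &entries->list, list) {
> > >                             if (entry->inode == NULL)
> > >                                     continue;
> > >                             /* Same value the lookup callback
> > >                              * stores, so a later fop finds a
> > >                              * populated context either way. */
> > >                             inode_ctx_put(entry->inode, this,
> > >                                           SVC_REAL_INODE);
> > >                     }
> > >             }
> > >
> > >             STACK_UNWIND_STRICT(readdirp, frame, op_ret, op_errno,
> > >                                 entries, xdata);
> > >             return 0;
> > >     }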
> > >
> > > > * Send a fix or at least evaluate whether a fix is possible.
> > > >
> > > > @Others,
> > > >
> > > > With the motivation of getting some traction on this, is it OK if we:
> > > > * Set a deadline of around 15 days to complete the review (or
> > > > testing with the patch in question) of the respective components,
> > > > and to come up with issues (if any)?
> > > > * Post the deadline, if there are no open issues, go ahead and
> > > > merge the patch?
> > > >
> > > > If that is not enough time, let us know and we can come up with a
> > > > reasonable timeline.
> > > >
> > > > regards,
> > > > Raghavendra
> > > >
> > > >
> > > > > On 24 August 2017 at 10:13, Raghavendra G <raghavendra.hg at gmail.com> wrote:
> > > > >
> > > > >> Note that we need to consider xlators on the brick stack too.
> > > > >> I've added maintainers/peers of the xlators on the brick stack.
> > > > >> Please explicitly ack/nack whether this patch affects your
> > > > >> component.
> > > > >>
> > > > >> For reference, the following are the xlators loaded in the brick stack:
> > > > >>
> > > > >> storage/posix
> > > > >> features/trash
> > > > >> features/changetimerecorder
> > > > >> features/changelog
> > > > >> features/bitrot-stub
> > > > >> features/access-control
> > > > >> features/locks
> > > > >> features/worm
> > > > >> features/read-only
> > > > >> features/leases
> > > > >> features/upcall
> > > > >> performance/io-threads
> > > > >> features/selinux
> > > > >> features/marker
> > > > >> features/barrier
> > > > >> features/index
> > > > >> features/quota
> > > > >> debug/io-stats
> > > > >> performance/decompounder
> > > > >> protocol/server
> > > > >>
> > > > >>
> > > > >> For those not following this thread, the question we need to
> > > > >> answer is: "does the xlator you are associated with work fine
> > > > >> if a non-lookup fop (like open, setattr, stat etc.) hits it
> > > > >> without a lookup ever having been done on that inode?"
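> > > > >>
> > > > >> To make that concrete, here is a minimal sketch (hypothetical
> > > > >> names, not code from any particular xlator) of the situation a
> > > > >> fop handler has to tolerate: arriving on an inode whose context
> > > > >> was never populated by a lookup.
> > > > >>
> > > > >>     #include <glusterfs/xlator.h>   /* "xlator.h" on older trees */
> > > > >>     #include <glusterfs/defaults.h>
> > > > >>
> > > > >>     static int32_t
> > > > >>     example_stat(call_frame_t *frame, xlator_t *this,
> > > > >>                  loc_t *loc, dict_t *xdata)
> > > > >>     {
> > > > >>             uint64_t ctx = 0;
> > > > >>
> > > > >>             if (inode_ctx_get(loc->inode, this, &ctx) < 0) {
> > > > >>                     /* No lookup ever populated our context for
> > > > >>                      * this inode: rebuild it on demand here, or
> > > > >>                      * proceed with safe defaults rather than
> > > > >>                      * assuming it exists. */
> > > > >>             }
> > > > >>
> > > > >>             /* Hand the fop to the next xlator unchanged. */
> > > > >>             return default_stat(frame, this, loc, xdata);
> > > > >>     }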
> > > > >>
> > > > >> regards,
> > > > >> Raghavendra
> > > > >>
> > > > >> On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa <
> > > > >> rgowdapp at redhat.com> wrote:
> > > > >>
> > > > >>> Thanks Pranith and Ashish for your inputs.
> > > > >>>
> > > > >>> ----- Original Message -----
> > > > >>> > From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> > > > >>> > To: "Ashish Pandey" <aspandey at redhat.com>
> > > > >>> > Cc: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Xavier Hernandez" <xhernandez at datalab.es>, "Gluster Devel" <gluster-devel at gluster.org>
> > > > >>> > Sent: Wednesday, August 23, 2017 11:55:19 AM
> > > > >>> > Subject: Re: Need inputs on patch #17985
> > > > >>> >
> > > > >>> > Raghavendra,
> > > > >>> > As Ashish mentioned, there are no known problems in EC at
> > > > >>> > the moment if upper xlators don't send lookups.
> > > > >>> >
> > > > >>> > On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey <
> > > aspandey at redhat.com>
> > > > >>> wrote:
> > > > >>> >
> > > > >>> > > Raghavendra,
> > > > >>> > >
> > > > >>> > > I have provided my comment on this patch.
> > > > >>> > > I think EC will not have any issue with this approach.
> > > > >>> > > However, I would welcome comments from Xavi and Pranith
> > > > >>> > > too, for any side effects which I may not be able to
> > > > >>> > > foresee.
> > > > >>> > >
> > > > >>> > > Ashish
> > > > >>> > >
> > > > >>> > > ------------------------------
> > > > >>> > > *From: *"Raghavendra Gowdappa" <rgowdapp at redhat.com>
> > > > >>> > > *To: *"Ashish Pandey" <aspandey at redhat.com>
> > > > >>> > > *Cc: *"Pranith Kumar Karampuri" <pkarampu at redhat.com>, "Xavier Hernandez" <xhernandez at datalab.es>, "Gluster Devel" <gluster-devel at gluster.org>
> > > > >>> > > *Sent: *Wednesday, August 23, 2017 8:29:48 AM
> > > > >>> > > *Subject: *Need inputs on patch #17985
> > > > >>> > >
> > > > >>> > >
> > > > >>> > > Hi Ashish,
> > > > >>> > >
> > > > >>> > > The following are the blockers for making a decision on
> > > > >>> > > whether patch [1] can be merged or not:
> > > > >>> > > * Evaluation of dentry operations (like rename etc.) in dht
> > > > >>> > > * Whether EC works fine if a non-lookup fop (like
> > > > >>> > > open(dir), stat, chmod etc.) hits EC without a single
> > > > >>> > > lookup performed on the file/inode
> > > > >>> > >
> > > > >>> > > Can you please comment on the patch? I'll take care of the
> > > > >>> > > dht part.
> > > > >>> > >
> > > > >>> > > [1] https://review.gluster.org/#/c/17985/
> > > > >>> > >
> > > > >>> > > regards,
> > > > >>> > > Raghavendra
> > > > >>> > >
> > > > >>> > >
> > > > >>> >
> > > > >>> >
> > > > >>> > --
> > > > >>> > Pranith
> > > > >>> >
> > > > >>>
> > > > >>> --
> > > > >>> Raghavendra G
> > > > >>>
> > > > >>>
> > > > >>
> > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Raghavendra G
> > > >
> > >
> >
>
--
Raghavendra G