<div dir="ltr"><br><div><br></div><div>From snapview client perspective one important thing to note. For building the context for the entry point (by default ".snaps") a explicit lookup has to be done on it. The dentry for ".snaps" is not returned when readdir is done on its parent directory (Not even when ls -a is done). So for building the context of .snaps (in the context snapview client saves the information about whether it is a real inode or virtual inode) we need a lookup. </div><div><br></div><div>From snapview server perspective as well a lookup might be needed. In snapview server a glfs handle is established between the snapview server and the snapshot brick. So a inode in snapview server process contains the glfs handle for the object being accessed from snapshot. In snapview server readdirp does not build the inode context (which contains the glfs handle etc) because glfs handle is returned only in lookup.</div><div><br></div><div>Regards,</div><div>Raghavendra</div><div> </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Aug 29, 2017 at 12:53 AM, Raghavendra Gowdappa <span dir="ltr"><<a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
Regards,
Raghavendra

On Tue, Aug 29, 2017 at 12:53 AM, Raghavendra Gowdappa <rgowdapp@redhat.com> wrote:

----- Original Message -----
> From: "Raghavendra G" <raghavendra.hg@gmail.com>
> To: "Nithya Balachandran" <nbalacha@redhat.com>
> Cc: "Raghavendra Gowdappa" <rgowdapp@redhat.com>, anoopcs@redhat.com, "Gluster Devel" <gluster-devel@gluster.org>,
> raghavendra@redhat.com
> Sent: Tuesday, August 29, 2017 8:52:28 AM
> Subject: Re: [Gluster-devel] Need inputs on patch #17985
>
> On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <nbalacha@redhat.com>
> wrote:
>
> > It has been a while, but IIRC snapview-client (loaded above dht/tier etc.) had
> > some issues when we ran tiering tests. Rafi might have more info on this -
> > basically it was expecting to find the inode_ctx populated, but it was not.
> >
>
> Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take
> ownership of:
>
> * Identifying whether the patch in question causes the issue?

gf_svc_readdirp_cbk sets the relevant state in the inode [1]. I quickly checked whether it is the same state stored by gf_svc_lookup_cbk, and it looks like it is. So I guess readdirp is handled correctly by snapview-client and an explicit lookup is not required. But I will wait for inputs from rabhat and rafi.

[1] https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962
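For anyone reading along without the source open, the pattern at [1] is roughly the following. This is a condensed sketch, not the verbatim code: names like svc_inode_ctx_set and NORMAL_INODE approximate the actual snapview-client symbols. The point is that both the lookup and readdirp callbacks funnel into the same inode-ctx setter, so an inode built via readdirp should carry the same state a lookup would have stored.

    #include "xlator.h"

    static int
    svc_inode_ctx_set (xlator_t *this, inode_t *inode, int type)
    {
        uint64_t value = type;

        /* records whether the inode is a normal one or a virtual
         * (.snaps) one */
        return inode_ctx_set (inode, this, &value);
    }

    int
    svc_readdirp_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                      int32_t op_ret, int32_t op_errno,
                      gf_dirent_t *entries, dict_t *xdata)
    {
        gf_dirent_t *entry = NULL;

        if (op_ret >= 0) {
            list_for_each_entry (entry, &entries->list, list) {
                if (entry->inode)
                    /* the same state svc_lookup_cbk would set for this
                     * entry; the real code picks NORMAL_INODE or
                     * VIRTUAL_INODE based on where the parent lives */
                    svc_inode_ctx_set (this, entry->inode, NORMAL_INODE);
            }
        }

        STACK_UNWIND_STRICT (readdirp, frame, op_ret, op_errno,
                             entries, xdata);
        return 0;
    }
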
<div><div class="h5"><br>
> * Send a fix or at least evaluate whether a fix is possible.
>
> @Others,
>
> With the motivation of getting some traction on this, is it OK if we:
> * Set a deadline of around 15 days to complete the review (or testing with
> the patch in question) of the respective components and to come up with
> issues (if any).
> * Post the deadline, if there are no open issues, go ahead and merge the
> patch?
>
> If that is not enough time, let us know and we can come up with a
> reasonable timeline.
>
> regards,
> Raghavendra
>
>
> > On 24 August 2017 at 10:13, Raghavendra G <raghavendra.hg@gmail.com>
> > wrote:
> >
> >> Note that we need to consider xlators on the brick stack too. I've added
> >> maintainers/peers of xlators on the brick stack. Please explicitly ack/nack
> >> whether this patch affects your component.
> >>
> >> For reference, the following are the xlators loaded in the brick stack:
> >>
> >> storage/posix
> >> features/trash
> >> features/changetimerecorder
> >> features/changelog
> >> features/bitrot-stub
> >> features/access-control
> >> features/locks
> >> features/worm
> >> features/read-only
> >> features/leases
> >> features/upcall
> >> performance/io-threads
> >> features/selinux
> >> features/marker
> >> features/barrier
> >> features/index
> >> features/quota
> >> debug/io-stats
> >> performance/decompounder
> >> protocol/server
> >>
> >>
> >> For those not following this thread, the question we need to answer is:
> >> "does the xlator you are associated with work fine if a non-lookup
> >> fop (like open, setattr, stat, etc.) hits it without a lookup ever having
> >> been done on that inode?"
> >>
> >> regards,
> >> Raghavendra
> >>
> >> On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa <
> >> rgowdapp@redhat.com> wrote:
> >>
> >>> Thanks Pranith and Ashish for your inputs.
> >>>
> >>> ----- Original Message -----
> >>> > From: "Pranith Kumar Karampuri" <pkarampu@redhat.com>
> >>> > To: "Ashish Pandey" <aspandey@redhat.com>
> >>> > Cc: "Raghavendra Gowdappa" <rgowdapp@redhat.com>, "Xavier Hernandez" <
> >>> xhernandez@datalab.es>, "Gluster Devel"
> >>> > <gluster-devel@gluster.org>
> >>> > Sent: Wednesday, August 23, 2017 11:55:19 AM
> >>> > Subject: Re: Need inputs on patch #17985
> >>> >
> >>> > Raghavendra,
> >>> > As Ashish mentioned, there aren't any known problems if upper
> >>> xlators
> >>> > don't send lookups in EC at the moment.
> >>> >
> >>> > On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey <aspandey@redhat.com>
> >>> wrote:
> >>> >
> >>> > > Raghavendra,
> >>> > >
> >>> > > I have provided my comments on this patch.
> >>> > > I think EC will not have any issue with this approach.
> >>> > > However, I would welcome comments from Xavi and Pranith too, for any
> >>> side
> >>> > > effects which I may not be able to foresee.
> >>> > >
> >>> > > Ashish
> >>> > >
> >>> > > ------------------------------
> >>> > > *From: *"Raghavendra Gowdappa" <rgowdapp@redhat.com>
> >>> > > *To: *"Ashish Pandey" <aspandey@redhat.com>
> >>> > > *Cc: *"Pranith Kumar Karampuri" <pkarampu@redhat.com>, "Xavier
> >>> Hernandez"
> >>> > > <xhernandez@datalab.es>, "Gluster Devel" <gluster-devel@gluster.org>
> >>> > > *Sent: *Wednesday, August 23, 2017 8:29:48 AM
> >>> > > *Subject: *Need inputs on patch #17985
> >>> > >
> >>> > >
> >>> > > Hi Ashish,
> >>> > >
> >>> > > The following are the blockers for making a decision on whether patch
> >>> [1] can
> >>> > > be merged or not:
> >>> > > * Evaluation of dentry operations (like rename etc.) in dht
> >>> > > * Whether EC works fine if a non-lookup fop (like open(dir), stat,
> >>> chmod
> >>> > > etc.) hits EC without a single lookup performed on the file/inode
> >>> > >
> >>> > > Can you please comment on the patch? I'll take care of the dht part.
> >>> > >
> >>> > > [1] https://review.gluster.org/#/c/17985/
> >>> > >
> >>> > > regards,
> >>> > > Raghavendra
> >>> > >
> >>> > >
> >>> >
> >>> >
> >>> > --
> >>> > Pranith
> >>> >
> >>> _______________________________________________
> >>> Gluster-devel mailing list
> >>> Gluster-devel@gluster.org
> >>> http://lists.gluster.org/mailman/listinfo/gluster-devel
> >>>
> >>> --
> >>> Raghavendra G
> >>>
> >>> <http://lists.gluster.org/mailman/listinfo/gluster-devel>
> >>>
> >>
> >> _______________________________________________
> >> Gluster-devel mailing list
> >> Gluster-devel@gluster.org
> >> http://lists.gluster.org/mailman/listinfo/gluster-devel
> >>
> >
> >
>
>
> --
> Raghavendra G
>