[Gluster-devel] Report ESTALE as ENOENT

FNU Raghavendra Manjunath rabhat at redhat.com
Thu Mar 24 14:41:19 UTC 2016


I would still prefer not converting all ESTALE errors to ENOENT. I think we
need to understand this specific case of parallel "rm -rf" runs getting
ESTALE errors and handle it accordingly.

Regarding gfapi not honoring ESTALE errors, I think it would be better to
do a revalidation upon getting ESTALE.

Just my 2 cents.

Regards,
Raghavendra


On Thu, Mar 24, 2016 at 10:31 AM, Soumya Koduri <skoduri at redhat.com> wrote:

> Thanks for the information.
>
> On 03/24/2016 07:34 PM, FNU Raghavendra Manjunath wrote:
>
>>
>> Yes. I think the caching example mentioned by Shyam is a good example of
>> ESTALE error. Also, User Serviceable Snapshots (USS) relies heavily on
>> ESTALE errors, because the files/directories from the snapshots are
>> assigned a virtual gfid on the fly when being looked up. If those inodes
>> are purged out of the inode table because the lru list became full, then
>> an access to that gfid from the client will make snapview-server send
>> ESTALE, and either fuse (I think our fuse xlator does a revalidate upon
>> getting ESTALE) or the NFS client can revalidate via path-based
>> resolution.
>>
>
> So wouldn't it be wrong not to send ESTALE to NFS clients and to map it to
> ENOENT instead, as was intended in the original mail?
>
> NFSv3 rfc [1] mentions that NFS3ERR_STALE is a valid error for REMOVE fop.
>
> Also (at least in gfapi) the resolve code path doesn't seem to honor
> ESTALE errors - glfs_resolve_component(..), glfs_refresh_inode_safe(..),
> etc. Do we need to fix them?
>
>
> Thanks,
> Soumya
>
> [1] https://www.ietf.org/rfc/rfc1813.txt (section# 3.3.12)
>
>
>> Regards,
>> Raghavendra
>>
>>
>> On Thu, Mar 24, 2016 at 9:51 AM, Shyam <srangana at redhat.com
>> <mailto:srangana at redhat.com>> wrote:
>>
>>     On 03/23/2016 12:07 PM, Ravishankar N wrote:
>>
>>         On 03/23/2016 09:16 PM, Soumya Koduri wrote:
>>
>>             If it occurs only when the file/dir is not actually present
>>             at the
>>             back-end, shouldn't we fix the server to send ENOENT then?
>>
>>         I never fully understood it, but here is the answer:
>>         http://review.gluster.org/#/c/6318/
>>
>>
>>     The intention of ESTALE is to state that the inode#/GFID is stale
>>     when using it for any operation. IOW, we did not find the GFID in
>>     the backend; that does not mean the name is not present. Hence, if
>>     you have a pGFID+bname, try resolving with that.
>>
>>     For example, a client-side cache can hang onto a GFID for a bname,
>>     but another client could have, in the interim, unlinked the bname
>>     and created a new file there.
>>
>>     A presence test using the GFID by the client that cached the result
>>     the first time returns ESTALE, but a resolution based on pGFID+bname
>>     by the same client would succeed.
>>
>>     By extension, a GFID-based resolution, when the GFID is not really
>>     present in the backend, will return ESTALE. It could very well mean
>>     ENOENT, but that has to be determined by the client again, if
>>     possible.
>>
>>     See "A10. What does it mean when my application fails because of an
>>     ESTALE error?" for NFS here [1] and [2] (there may be better sources
>>     for this information).
>>
>>     [1] http://nfs.sourceforge.net/
>>     [2] https://lwn.net/Articles/272684/
>>
>>
>>
>>         _______________________________________________
>>         Gluster-devel mailing list
>>         Gluster-devel at gluster.org <mailto:Gluster-devel at gluster.org>
>>         http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>>
>>
>>

