[Gluster-devel] Report ESTALE as ENOENT

FNU Raghavendra Manjunath rabhat at redhat.com
Thu Mar 24 14:04:47 UTC 2016


Yes. I think the caching example mentioned by Shyam is a good example of
an ESTALE error. User Serviceable Snapshots (USS) also relies heavily on
ESTALE errors, because files/directories from the snapshots are assigned
a virtual gfid on the fly when they are looked up. If those inodes are
purged out of the inode table because the lru list became full, then an
access to that gfid from the client will make snapview-server send ESTALE,
and either fuse (I think our fuse xlator does a revalidate upon getting
ESTALE) or the NFS client can revalidate via path-based resolution.
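
The revalidate pattern above can be sketched with a toy model. None of the
names below come from the GlusterFS sources; the single-entry "backend" and
the simplified integer GFID are invented purely to illustrate why a cached
GFID lookup can fail with ESTALE while a pGFID+bname (path-based) lookup
still succeeds:

```c
#include <errno.h>
#include <string.h>

/* Hypothetical sketch, not actual fuse/snapview-server code. */

struct entry {
    char bname[64]; /* basename under its parent directory */
    int  gfid;      /* simplified GFID: just an integer here */
};

/* One name on the "server"; its GFID changes when the inode is purged
 * and re-assigned (or the name is unlinked and re-created). */
static struct entry backend = { "file.txt", 1 };

/* Simulate the inode being purged from the inode table and given a
 * fresh virtual GFID on the next lookup, as USS does. */
void purge_and_reassign(void)
{
    backend.gfid = 2;
}

/* GFID-based resolution: ESTALE if the cached GFID is no longer known. */
int lookup_by_gfid(int gfid, struct entry *out)
{
    if (backend.gfid != gfid)
        return -ESTALE;
    *out = backend;
    return 0;
}

/* pGFID+bname resolution: succeeds while the name exists, returning the
 * current GFID; ENOENT only if the name itself is gone. */
int lookup_by_name(const char *bname, struct entry *out)
{
    if (strcmp(backend.bname, bname) != 0)
        return -ENOENT;
    *out = backend;
    return 0;
}

/* Client-side revalidate: fall back to path resolution on ESTALE. */
int revalidate(int cached_gfid, const char *bname, struct entry *out)
{
    int ret = lookup_by_gfid(cached_gfid, out);
    if (ret == -ESTALE)
        ret = lookup_by_name(bname, out); /* path-based fallback */
    return ret;
}
```

After purge_and_reassign(), a lookup with the cached GFID 1 returns -ESTALE,
while revalidate() recovers by name and observes the new GFID; only a name
that truly does not exist yields -ENOENT.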

Regards,
Raghavendra


On Thu, Mar 24, 2016 at 9:51 AM, Shyam <srangana at redhat.com> wrote:

> On 03/23/2016 12:07 PM, Ravishankar N wrote:
>
>> On 03/23/2016 09:16 PM, Soumya Koduri wrote:
>>
>>> If it occurs only when the file/dir is not actually present at the
>>> back-end, shouldn't we fix the server to send ENOENT then?
>>>
> >> I never fully understood it; here is the answer:
> >> http://review.gluster.org/#/c/6318/
>>
>
> The intention of ESTALE is to state that the inode#/GFID is stale when it
> is used for any operation. IOW, we did not find the GFID in the backend,
> but that does not mean the name is not present. Hence, if you have a
> pGFID+bname, try resolving with that.
>
> For example, a client-side cache can hang onto a GFID for a bname, but
> another client could have, in the interim, unlinked the bname and created a
> new file there.
>
> A presence test using the GFID, by the client that cached the result the
> first time, returns ESTALE. But a resolution based on pGFID+bname, again by
> the same client, would succeed.
>
> By extension, a GFID-based resolution, when the GFID is not actually
> present in the backend, will return ESTALE. That could very well mean
> ENOENT, but this has to be determined by the client again, if possible.
>
> See "A10. What does it mean when my application fails because of an ESTALE
> error?" in the NFS FAQ [1] and [2] (there may be better sources for this
> information).
>
> [1] http://nfs.sourceforge.net/
> [2] https://lwn.net/Articles/272684/
>
>
>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>