[Gluster-users] [Gluster-Maintainers] Proposal to change gNFS status

Niels de Vos ndevos at redhat.com
Fri Nov 22 12:34:06 UTC 2019


On Thu, Nov 21, 2019 at 04:01:23PM +0530, Amar Tumballi wrote:
> Hi All,
> 
> As per the discussion on https://review.gluster.org/23645, recently we
> changed the status of gNFS (gluster's native NFSv3 support) feature to
> 'Deprecated / Orphan' state. (ref:
> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
> With this email, I am proposing to change the status again to 'Odd Fixes'
> (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)

I'd recommend against resurrecting gNFS. The server is not very
extensible, and adding new features is tricky without breaking other
(mostly undocumented) use-cases. Even though NFSv3 itself is stateless,
actual NFSv3 usage, mounting and locking, definitely is not. The server
keeps track of which clients have an export mounted and which clients
have been granted locks, and that tracking is currently not very
reliable in combination with high-availability. There is also the
duplicate-reply cache (DRC), disabled by default, which has always been
very buggy (and is not cluster-aware either).
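
For context, the gNFS bits above are per-volume options. A rough sketch
of how they get toggled (assuming a volume called "myvol"; check
'gluster volume set help' on your version for the exact option names):

    # gNFS is disabled by default in recent releases; this turns the
    # built-in NFSv3 server back on for this volume
    gluster volume set myvol nfs.disable off

    # the duplicate-reply cache mentioned above, disabled by default
    gluster volume get myvol nfs.drc
    gluster volume set myvol nfs.drc on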

If we enable gNFS by default again, we're sending out an incorrect
message to our users. gNFS works fine for certain workloads and
environments, but it should not be advertised as 'clustered NFS'.

Instead of going the gNFS route, I suggest making it easier to deploy
NFS-Ganesha, as it is more fully featured, better maintained, and can be
configured for much more reliable high-availability than gNFS.
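
To give an idea of what that route looks like, a minimal NFS-Ganesha
export for the Gluster FSAL could be something like the sketch below
(the volume name, hostname and export id are just examples, and the HA
side, Pacemaker/Corosync or storhaug, is set up separately):

    EXPORT {
        Export_Id = 1;               # unique id for this export
        Path = "/myvol";
        Pseudo = "/myvol";           # NFSv4 pseudo-fs path
        Access_Type = RW;
        Protocols = 3, 4;            # Ganesha serves NFSv3 and NFSv4
        Transports = TCP;
        SecType = sys;

        FSAL {
            Name = GLUSTER;          # libgfapi-based FSAL
            Hostname = "localhost";  # any node in the trusted pool
            Volume = "myvol";
        }
    }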

If someone really wants to maintain gNFS, I won't object much, but they
should know that previous maintainers have had many difficulties just
keeping it working well while other components evolved. Addressing some
of the bugs/limitations will be extremely difficult and may require
large rewrites of parts of gNFS.

So far, I have not read convincing arguments in this thread that gNFS
is stable enough for general consumption in the community. Users should
be aware of its limitations and be careful about which workloads they
run on it.

HTH,
Niels


