[Gluster-users] [Gluster-Maintainers] Proposal to change gNFS status

Amar Tumballi amarts at gmail.com
Tue Nov 26 04:52:57 UTC 2019


Responses inline.

On Fri, Nov 22, 2019 at 6:04 PM Niels de Vos <ndevos at redhat.com> wrote:

> On Thu, Nov 21, 2019 at 04:01:23PM +0530, Amar Tumballi wrote:
> > Hi All,
> >
> > As per the discussion on https://review.gluster.org/23645, recently we
> > changed the status of gNFS (gluster's native NFSv3 support) feature to
> > 'Deprecated / Orphan' state. (ref:
> > https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189
> ).
> > With this email, I am proposing to change the status again to 'Odd Fixes'
> > (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>
> I'd recommend against resurrecting gNFS. The server is not very
> extensible and adding new features is pretty tricky without breaking
> other (mostly undocumented) use-cases.


I too am against adding features or enhancements to gNFS; it doesn't make
sense. We are removing features from glusterfs itself, so adding features
to gNFS after 3 years wouldn't even be feasible.

I guess you missed the intention of my proposal. It was not about
'resurrecting' gNFS to 'Maintained' or 'Supported' status. It was about
taking it out of 'Orphan' status, because there are still users who are
happy with it. Hence I picked the status 'Odd Fixes': as per the
MAINTAINERS file, there was nothing else that conveys the meaning of
'this feature is still shipped, but we are not adding any features or
actively maintaining it'.
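(For illustration only: assuming the glusterfs MAINTAINERS file keeps its
current kernel-style M:/P:/S:/F: layout, the gNFS entry after such a change
could look roughly like the sketch below. The names are placeholders to be
settled in review; the file path is the usual location of the gNFS xlator.)

    NFS (gluster native NFSv3 server)
    M: <maintainer to be decided in review>
    P: <peer(s)>
    S: Odd Fixes
    F: xlators/nfs/server/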



> Even though NFSv3 is stateless,
> the actual usage of NFSv3, mounting and locking, is definitely not. The
> server keeps track of which clients have an export mounted, and which
> clients received grants for locks. These things are currently not very
> reliable in combination with high-availability. And there is also the
> duplicate-reply-cache (DRC), disabled by default, which has always been
> very buggy (and is not cluster-aware either).
>
> If we enable gNFS by default again, we're sending out an incorrect
> message to our users. gNFS works fine for certain workloads and
> environments, but it should not be advertised as 'clustered NFS'.
>
>
I wasn't talking about, or intending, going this route. I am not even
talking about enabling gNFS by default. That would take the focus away from
glusterfs and the different things we can solve with Gluster alone. I am
not sure why my email was read as proposing a renewed focus on gNFS.


> Instead of going the gNFS route, I suggest making it easier to deploy
> NFS-Ganesha, as it is more featureful, better maintained, and can be
> configured for much more reliable high-availability than gNFS.
>
>
I believe this is critical, and we surely need to work on it. But it
doesn't come in the way of doing 1-2 bug fixes in gNFS (if any) per release.
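(To illustrate what 'easier to deploy' could mean in practice: exporting a
Gluster volume through NFS-Ganesha is mostly a matter of an EXPORT block
using the Gluster FSAL in ganesha.conf. The sketch below is only an example;
the volume name 'gvol0', export path and hostname are placeholders, not
taken from any deployment discussed in this thread.)

    EXPORT {
        Export_Id = 1;                # unique id for this export
        Path = "/gvol0";              # path presented to NFS clients
        Pseudo = "/gvol0";            # NFSv4 pseudo-filesystem path
        Access_Type = RW;
        Protocols = "3", "4";         # serve both NFSv3 and NFSv4
        FSAL {
            Name = GLUSTER;           # use the Gluster FSAL
            Hostname = "localhost";   # any server in the trusted pool
            Volume = "gvol0";         # Gluster volume to export
        }
    }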


> If someone really wants to maintain gNFS, I won't object much, but they
> should know that previous maintainers have had many difficulties just
> keeping it working well while other components evolved. Addressing some
> of the bugs/limitations will be extremely difficult and may require
> large rewrites of parts of gNFS.
>

Yes, that awareness is critical, and it should exist.


> Until now, I have not read convincing arguments in this thread that gNFS
> is stable enough to be consumed by anyone in the community. Users should
> be aware of its limitations and be careful what workloads to run on it.
>

In this thread, Xie mentioned that he has been managing gNFS on 1000+
servers with 2000+ clients (more than 24 Gluster clusters overall) for more
than 2 years now. If that doesn't count as 'stability', I am not sure what
does.

I agree that users should be careful about the proper use cases for gNFS. I
am even open to adding a warning or console message in the gluster CLI when
'gluster volume set <VOL> nfs.disable false' is run, advising users to move
to the NFS-Ganesha based approach and pointing to a URL with more details.
But the whole point is that when we make a release, we should still ship
gNFS, because there are users who are very happy with gNFS and whose use
cases are properly handled by gNFS in its current form. Why make them
unhappy, or push them to other projects?
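(To make that concrete: today gNFS is enabled per volume with the existing
nfs.disable option, as in the first command below. The advisory text is
only a hypothetical sketch of the kind of message the CLI could print, and
'myvol' is a placeholder volume name.)

    # enable gNFS for one volume (gNFS is disabled by default)
    gluster volume set myvol nfs.disable false

    # hypothetical advisory the CLI could print on that command:
    #   gNFS is in 'Odd Fixes' state; NFS-Ganesha is recommended for
    #   new deployments. See <URL> for details.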

At the end of the day, as developers it is our duty to suggest the best
technologies to users, but the intention should always be to solve
problems. If a problem is already solved, why resurface it in the name of
better technology?

So, again, my proposal is to keep gNFS in the codebase (not as Orphan) and
to continue shipping the gNFS binary when we make releases, not to shift
the focus of the project toward working on enhancements to gNFS.

Happy to answer if anyone has further queries.

I have sent a patch for this, https://review.gluster.org/23738, and I see
people are already commenting on it. I agree that Xie's contributions to
Gluster (specifically to the gNFS component) may need to increase before he
is listed as MAINTAINER. I am happy to introduce him as 'Peer' and change
the title later when the time comes. Jiffin, thanks for volunteering to
look at patches when you have time, until glusterfs-8.

Regards,
Amar



> HTH,
> Niels
>
>