[Gluster-Maintainers] [Gluster-devel] Proposal to change gNFS status
Xie Changlong
zgrep at 139.com
Fri Nov 22 06:28:14 UTC 2019
On 2019/11/22 13:39, Yaniv Kaul wrote:
>
>
> On Fri, 22 Nov 2019, 5:03 Xie Changlong <zgrep at 139.com> wrote:
>
>
> On 2019/11/22 5:14, Kaleb Keithley wrote:
>> I personally wouldn't call three years ago — when we started to
>> deprecate it, in glusterfs-3.9 — a recent change.
>>
>> As a community the decision was made to move to NFS-Ganesha as
>> the preferred NFS solution, but it was agreed to keep the old
>> code in the tree for those who wanted it. There have been plans
>> to drop it from the community packages for most of those three
>> years, but we didn't follow through across the board until fairly
>> recently. Perhaps the most telling piece of data is that it's
>> been gone from the packages in the CentOS Storage SIG in
>> glusterfs-4.0, -4.1, -5, -6, and -7 with no complaints ever, that
>> I can recall.
>>
>> Ganesha is a preferable solution because it supports NFSv4,
>> NFSv4.1, NFSv4.2, and pNFS, in addition to legacy NFSv3. More
>> importantly, it is actively developed, maintained, and supported,
>> both in the community and commercially. There are several vendors
>> selling it, or support for it; and there are community packages
>> for it for all the same distributions that Gluster packages are
>> available for.
>>
>> Out in the world, the default these days is NFSv4. Specifically
>> v4.2 or v4.1 depending on how recent your linux kernel is. In the
>> linux kernel, client mounts start negotiating for v4.2 and work
>> down to v4.1, v4.0, and only as a last resort v3. NFSv3 client
>> support in the linux kernel largely exists at this point only
>> because of the large number of legacy servers still running that
>> can't do anything higher than v3. The linux NFS developers would
>> drop the v3 support in a heartbeat if they could.
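>>
>> For illustration (the server and export names below are only
>> placeholders), a Linux client negotiates downward automatically
>> unless the version is pinned explicitly:
>>
>>     mount -t nfs server:/export /mnt             # negotiates v4.2 -> v4.1 -> v4.0 -> v3
>>     mount -t nfs -o vers=3 server:/export /mnt   # forces legacy NFSv3
>>     nfsstat -m                                   # shows which version was actually negotiated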
>>
>> IMO, providing it, and calling it maintained, only encourages
>> people to keep using a dead end solution. Anyone in favor of
>> bringing back NFSv2, SSHv1, or X10R4? No? I didn't think so.
>>
>> The recent issue[1] where someone built gnfs in glusterfs-7.0 on
>> CentOS7 strongly suggests to me that gnfs is not actually working
>> well. Three years of no maintenance seems to have taken its toll.
>>
>> Other people are more than welcome to build their own packages
>> from the src.rpms and/or tarballs that are available from gluster
>> — and support them. It's still in the source and there are no
>> plans to remove it. (Unlike most of the other deprecated features
>> which were recently removed in glusterfs-7.)
>>
>>
>>
>> [1] https://github.com/gluster/glusterfs/issues/764
>>
>
> It seems https://bugzilla.redhat.com/show_bug.cgi?id=1727248 has
> resolved this issue.
>
> Here I'll talk about something from a commercial company's view. For
> security reasons, most government procurement projects only allow
> universal storage protocols (NFS, CIFS, etc.), which means FUSE is
> excluded. Considering performance requirements, the only option
> is NFS.
>
>
> I don't see how NFSv3 is more secure than newer NFS versions.
>
Here I mean FUSE versus NFS. We can't expect to install a FUSE client on
customers' computers.
> NFSv4 is a stateful protocol, but I see no performance improvement.
> Trust me, nfs-ganesha (v3, v4) shows ~30% performance degradation
> versus gNFS for both small- and big-file reads/writes in practice.
> Further, many customers on Windows prefer the NFS client over CIFS
> because of poor CIFS performance, and AFAIK nfs-ganesha does not
> work well with the Windows NFS client.
>
>
> Interesting - we've seen far better performance with Ganesha v4.1 vs.
> gnfs.
> Would be great if you could share the details.
vdbench with a 6/4 (read/write) random workload.
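
A minimal vdbench workload of that shape would look roughly like the
following (the file path, size and runtime here are hypothetical, not
the exact parameter file we used):

    sd=sd1,lun=/mnt/gnfs/vdbench.file,size=10g
    wd=wd1,sd=sd1,xfersize=8k,rdpct=60,seekpct=100
    rd=rd1,wd=wd1,iorate=max,elapsed=600,interval=5

Here rdpct=60 gives the 6/4 read/write split and seekpct=100 makes the
I/O fully random.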
> Same for NFS Ganesha and Windows support.
>
ganesha 2.5.5, glusterfs 3.12.2, Windows Server 2003. We used the
Windows NFSv3 client to mount nfs-ganesha and ran read/write tests with
vdbench 5.04.06. The crash backtrace was attached (as screenshots).
Btw, the environment has been redeployed, so I can't share more.
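
To be concrete about the setup (the server address and drive letter
below are placeholders, since the exact environment is gone): the
Windows side used the built-in NFS client, roughly

    mount -o mtype=hard \\192.168.1.10\export Z:

and vdbench then drove the reads/writes against Z:.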
> It's difficult to counter without referring to specific issues.
> It's even harder to fix them ;-)
>
> gNFS is stable enough: we have about ~1000 servers, 4~24 servers
> per Gluster cluster, and about ~2000 NFS clients; everything has
> worked fine, except for some memleak issues over the last two years.
>
>
> Nice! Would be great for the Gluster community to learn more about the
> use case!
It's my pleasure.
> Y.
>
> Thanks
>
> -Xie
>
>> On Thu, Nov 21, 2019 at 5:31 AM Amar Tumballi <amarts at gmail.com> wrote:
>>
>> Hi All,
>>
>> As per the discussion on https://review.gluster.org/23645, we
>> recently changed the status of the gNFS (Gluster's native NFSv3
>> support) feature to the 'Deprecated / Orphan' state (ref:
>> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
>> With this email, I am proposing to change the status again to
>> 'Odd Fixes' (ref:
>> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>>
>>
>> TL;DR;
>>
>> I understand the current maintainers are not able to focus on
>> maintaining it, as the focus of the project, as described earlier,
>> is the NFS-Ganesha-based integration with glusterfs. But I am
>> volunteering, along with Xie Changlong (currently working at
>> China Mobile), to keep the feature running as it did in previous
>> versions. Hence the proposed status of 'Odd Fixes'.
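>>
>> A hypothetical MAINTAINERS entry of that shape (kernel-style fields;
>> not the exact text of the eventual patch) would look like:
>>
>>     NFS (gnfs)
>>     M: Amar Tumballi <amarts at gmail.com>
>>     M: Xie Changlong <zgrep at 139.com>
>>     S: Odd Fixes
>>     F: xlators/nfs/server/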
>>
>> Before sending the patch to make these changes, I am proposing it
>> here now, as gNFS is not even shipped with the latest glusterfs-7.0
>> releases. I have heard from some users that it was working great for
>> them with earlier releases, as all they wanted was NFSv3 support and
>> not many extra features from gNFS. Also note that, even though the
>> packages are not built, none of the regression tests using gNFS have
>> been stopped on the latest master, so it has kept working the same
>> for at least the last 2 years.
>>
>> Through this email, I request the package maintainers to please add
>> '--with gnfs' (or --enable-gnfs) back to their release scripts, so
>> that users who want gNFS can happily continue to use it. A note to
>> users/admins: the status is 'Odd Fixes', so don't expect any
>> 'enhancements' to the features provided by gNFS.
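>>
>> For illustration only (exact invocations vary per distro's release
>> scripts; the tarball name is a placeholder, and the rpm line assumes
>> the spec file carries a 'gnfs' build conditional):
>>
>>     # source builds: enable gNFS at configure time
>>     ./configure --enable-gnfs
>>
>>     # rpm builds: pass the conditional through rpmbuild
>>     rpmbuild -ta glusterfs-7.0.tar.gz --with gnfs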
>>
>> Happy to hear feedback, if any.
>>
>> Regards,
>> Amar
>>
[Two PNG screenshot attachments were scrubbed from the archive.]