[Gluster-devel] Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.)
Shehjar Tikoo
shehjart at gluster.com
Thu Jan 7 12:25:44 UTC 2010
Gordan Bobic wrote:
> Shehjar Tikoo wrote:
>> Gordan Bobic wrote:
>>> Martin Fick wrote:
>>>> --- On Wed, 1/6/10, Gordan Bobic <gordan at bobich.net> wrote:
>>>>
>>>>>> With native NFS there'll be no need to first mount a glusterFS
>>>>>> FUSE based volume and then export it as NFS. The way it has
>>>>>> been developed is that any glusterfs volume in the volfile can
>>>>>> be exported using NFS by adding an NFS volume over it in the
>>>>>> volfile. This is something that will become clearer from the
>>>>>> sample vol files when 3.0.1 comes out.
>>>>>
>>>>> It may be worth checking the performance of that solution
>>>>> vs the performance of the standalone unfsd unbound to
>>>>> portmap/mountd over mounted glfs volumes, as I discovered
>>>>> today that the performance feels very similar to native
>>>>> knfsd and server-side AFR, but without the fuse.ko
>>>>> complications of the former and the bugginess of the latter
>>>>> (e.g. see bug 186:
>>>>> http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=186
>>>>> - that bug has been driving me nuts since before 2.0.0 was
>>>>> released)
>>>>>
>>>>> I'd hate to see this be another wasted effort like booster
>>>>> when there is a solution that already works.
>>>>
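(Aside: the standalone setup Gordan describes is roughly the
following. The paths are only examples and the unfsd flags are from
my memory of unfs3, so check unfsd(8) before relying on them.)

    # mount the GlusterFS client volume via FUSE
    glusterfs -f /etc/glusterfs/client.vol /mnt/glfs

    # /etc/exports.glfs -- export the FUSE mountpoint, not the bricks
    /mnt/glfs (rw,no_root_squash)

    # run unfsd on that exports file without registering with portmap:
    # -e exports file, -p skip portmap registration, -n/-m pin the ports
    unfsd -e /etc/exports.glfs -p -n 2049 -m 2049

Clients would then mount with explicit port= and mountport= options,
since nothing is registered with the portmapper.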
>>
>> booster was not a wasted effort at all. It has received less
>> attention over the last month or so because of the NFS xlator
>> taking all my time, but before that it provided us, and those
>> who tested it for production systems, with a short-term solution
>> that performed better than unfsd-over-FUSE. I verified that there
>> were clear performance benefits of using unfsd-booster.
>
> A solution so short term that I missed it entirely while still fighting
> stability issues...
>
>>>> I don't think it would be wasted if it includes NLM since unfsd does
>>>> not do locking!
>>>
>> It does not do decent security either. One of our goals is to
>> implement kerberos5 based authentication. We also want
>> to support NFS over RDMA and NFSACLs. For extending to these,
>> unfsd code is highly limiting.
>
> So why exactly use NFS instead of GlusterFS with server-side brick
> assembly? What is the advantage? I cannot see one either in terms of
> performance or functionality. This is what I would be using if I could
> get that setup to work without bugginess (e.g. bug 186) and crashing
> (see other emails on this thread, will try to re-create and provide
> backtraces).
>
If using GlusterFS fits your setup, you should continue to use it,
but there are cases where users are either not *willing* or not
*able* to use GlusterFS. See below for why.
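To make the volfile point from earlier in the thread concrete: you
stack an NFS translator on top of whatever volume you want exported,
in the same volfile. A rough sketch only; the xlator type and layout
here are what I expect, and the sample vol files shipping with 3.0.1
are the authority:

    volume testvol
      type storage/posix
      option directory /data/export
    end-volume

    # the NFS translator exports its subvolume over NFS
    volume nfs
      type nfs/server
      subvolumes testvol
    end-volume

The NFS translator runs inside the same glusterfs process as the
volume it exports, so there is no FUSE mount in the path at all.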
>>> Arguably it just replicates the functionality of assembling volumes
>>> server-side and exporting just the assembled volume.
>>
>> Replication of existing functionality is not such a bad
>> thing when you consider the extended functionality and performance
>> goals we are aiming for with native NFS. We figured the benefits were
>> worth the cost.
>
> What are the performance and functionality benefits over using GlusterFS
> protocol as I described above (e.g. in my case server-side AFR, with
> just the AFR-ed volume exported to the client)?
>
That's not the relevant question here, IMO. Most people who will use
NFS rather than GlusterFS would be those who either do not have a
choice in the matter or do not want the overhead of choosing.
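For comparison, the setup Gordan refers to, server-side AFR with only
the replicated volume exported over the GlusterFS protocol, looks
roughly like this on each server (hostnames and volume names below
are placeholders):

    volume brick
      type storage/posix
      option directory /data/export
    end-volume

    # connection to the same brick on the other AFR server
    volume peer-brick
      type protocol/client
      option transport-type tcp
      option remote-host peer-server
      option remote-subvolume brick
    end-volume

    # replicate between the local and remote brick, on the server side
    volume afr
      type cluster/replicate
      subvolumes brick peer-brick
    end-volume

    # export only the replicated volume to clients
    volume server
      type protocol/server
      option transport-type tcp
      option auth.addr.afr.allow *
      subvolumes afr
    end-volume

Clients then mount just "afr" through a plain protocol/client
volfile, so replication happens between the servers rather than on
the client.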
>>> Whether the end client connects via nfs or glfs is largely immaterial
>>> for the sake of installing an additional package on the client. The
>>> bug mentioned above
>>
>> No, it is not immaterial. The overhead of installing additional
>> packages is a real concern in some of the deployments we're
>> aiming for.
>>
>> It is not immaterial wrt how clients connect either. NFS is a
>> well understood protocol. It gives us all the advantages of
>> supporting a standardized protocol.
>
> I thought you were talking about bolting kerberos authentication onto it
> and running it over RDMA. That doesn't sound very standard.
>
But it is standard. Both kerberos-based authentication and NFS over
RDMA are standardized.
> I'm not criticizing the idea per se, I'm just trying to figure out why
> it is actually useful, and I've not been able to work that out yet from
> what has been said.
>
The answer to that lies in another question: "why would anyone use
a standardized NFS over GlusterFS?"
Here are three points from pnfs.com on why:
1. Ensures Interoperability among vendor solutions
2. Allows Choice of best-of-breed products
3. Eliminates Risks of deploying proprietary technology
I'll add a fourth one:
Familiarity with the protocol is very important, especially
in the storage world, where conservatism is preferred over
fancy technology. NFS has been tried and tested for over two decades.
HTH
-Shehjar
> Gordan
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel