[Gluster-devel] Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.)

Gordan Bobic gordan at bobich.net
Thu Jan 7 09:58:05 UTC 2010


Shehjar Tikoo wrote:
> Gordan Bobic wrote:
>> Martin Fick wrote:
>>> --- On Wed, 1/6/10, Gordan Bobic <gordan at bobich.net> wrote:
>>>
>>>>> With native NFS there'll be no need to first mount a glusterFS
>>>>> FUSE based volume and then export it as NFS. The way it has been
>>>>> developed is that any glusterfs volume in the volfile can be
>>>>> exported using NFS by adding an NFS volume over it in the
>>>>> volfile. This is something that will become clearer from the
>>>>> sample vol files when 3.0.1 comes out.
>>>>
>>>> It may be worth checking the performance of that solution
>>>> vs the performance of the standalone unfsd unbound from
>>>> portmap/mountd over mounted glfs volumes, as I discovered
>>>> today that the performance feels very similar to native
>>>> knfsd and server-side AFR, but without the fuse.ko
>>>> complications of the former and the bugginess of the latter
>>>> (e.g. see bug 186: 
>>>> http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=186
>>>> - that bug has been driving me nuts since before 2.0.0 was
>>>> released)
>>>>
>>>> I'd hate to see this be another wasted effort like booster
>>>> when there is a solution that already works.
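(For the record, the standalone-unfsd setup I was referring to above is 
roughly the following; the flags are from the unfs3 man page as I 
remember them, so please double-check them before copying anything:

   # mount the assembled GlusterFS volume via FUSE
   # (volfile path and mount point are placeholders)
   glusterfs -f /etc/glusterfs/client.vol /mnt/glfs

   # /etc/exports entry for unfsd (standard exports syntax)
   /mnt/glfs  (rw,no_root_squash)

   # run unfsd on fixed ports without registering with portmap;
   # unfs3 lets NFS and MOUNT share a port, if I recall correctly
   unfsd -e /etc/exports -p -n 2049 -m 2049

   # on the client ("nolock" because unfsd has no NLM)
   mount -t nfs -o tcp,port=2049,mountport=2049,nolock \
       server:/mnt/glfs /mnt/nfs
)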
>>>
> 
> booster was not a wasted effort at all. It has received less
> attention over the last month or so because the NFS xlator has been
> taking all my time, but before that it provided us, and those who
> tested it on production systems, with a short-term solution that
> performed better than unfsd-over-FUSE. I verified that there were
> clear performance benefits to using unfsd-booster.

A solution so short-term that I missed it entirely while still fighting 
stability issues...

>>> I don't think it would be wasted if it includes NLM since unfsd does 
>>> not do locking!
>>
> It does not do decent security either. One of our goals is to
> implement Kerberos5-based authentication. We also want to support
> NFS over RDMA and NFSACLs. For extending to these, the unfsd code
> is highly limiting.

So why exactly use NFS instead of GlusterFS with server-side brick 
assembly? What is the advantage? I cannot see one in terms of either 
performance or functionality. Server-side assembly is what I would be 
using if I could get that setup to work without bugginess (e.g. bug 
186) and crashing (see other emails on this thread; I will try to 
reproduce the crashes and provide backtraces).
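
For reference, the kind of setup I mean is roughly the volfile below; 
hostnames, paths and volume names are placeholders, and the option 
names should be sanity-checked against whichever release you run:

   # serverA.vol -- serverB's volfile is the mirror image of this
   volume posix
     type storage/posix
     option directory /data/export
   end-volume

   volume locks
     type features/locks
     subvolumes posix
   end-volume

   # connection to the brick on the other server
   volume remote-brick
     type protocol/client
     option transport-type tcp
     option remote-host serverB
     option remote-subvolume locks
   end-volume

   # server-side replication (AFR)
   volume afr
     type cluster/replicate
     subvolumes locks remote-brick
   end-volume

   # export only the assembled AFR volume to clients
   volume server
     type protocol/server
     option transport-type tcp
     option auth.addr.afr.allow *
     subvolumes afr
   end-volume

The client-side volfile then only needs a protocol/client volume 
pointing at "afr" on either server, so all of the assembly logic stays 
on the servers.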

>> Arguably it just replicated the functionality of server-side volume 
>> assembly, with just the assembled volume being exported.
> 
> Replication of existing functionality is not such a bad
> thing when you consider the extended functionality and performance
> goals we are aiming for with native NFS. We figured the benefits were
> worth the cost.

What are the performance and functionality benefits over using the 
GlusterFS protocol as I described above (e.g. in my case server-side 
AFR, with just the AFR-ed volume exported to the client)?
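
As far as I can tell from the description earlier in the thread, the 
NFS export would just be one more volume stacked over the assembled 
one, along the lines of the sketch below -- the translator type and 
option names are my guess until the 3.0.1 sample volfiles appear:

   volume nfs
     type nfs/server          # guessed type name
     subvolumes afr           # the same assembled AFR volume as above
   end-volume

Which is why I am struggling to see what that buys over simply 
exporting "afr" through protocol/server.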

>> Whether the end client connects via NFS or glfs is largely immaterial 
>> as far as installing an additional package on the client goes. The 
>> bug mentioned above 
> 
> No, it is not immaterial. The overhead of installing additional
> packages is a real concern in some of the deployments we're
> aiming for.
> 
> It is not immaterial with regard to how clients connect either. NFS
> is a well-understood protocol. It gives us all the advantages of
> supporting a standardized protocol.

I thought you were talking about bolting Kerberos authentication onto it 
and running it over RDMA. That doesn't sound very standard.

I'm not criticizing the idea per se; I'm just trying to figure out why 
it is actually useful, and I've not been able to work that out yet from 
what has been said.

Gordan




