[Gluster-users] Protocol stacking: gluster over NFS

Jeff White jaw171 at pitt.edu
Mon Sep 17 13:08:08 UTC 2012


I was under the impression that self-mounting NFS of any kind (mount -t 
nfs localhost...) was a dangerous thing.  When I did that with gNFS I 
could cause a server to crash in no time at all with a simple dd into 
the mount point. I was under the impression that kNFS would have the 
same problem, though I have not tested it myself (this was discussed in 
#gluster on irc.freenode.net some time ago).  I'm guessing this would be 
a bug in the kernel.  Has anyone seen issues or crashes with locally 
mounted NFS (either gNFS or kNFS)?
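
A minimal sketch of that kind of reproduction (the volume name, mount 
point, and sizes here are placeholders, not the ones from my original 
test):

-----------------------------------
# loopback-mount a gNFS export over NFSv3 on the same host
mount -t nfs -o vers=3 localhost:/gvol /mnt/loop

# a large sequential write into the loopback mount was enough
# to bring the server down
dd if=/dev/zero of=/mnt/loop/bigfile bs=1M count=4096
-----------------------------------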

Jeff White - GNU+Linux Systems Administrator
University of Pittsburgh - CSSD

On 09/14/2012 03:22 PM, John Mark Walker wrote:
> A note on recent history:
>
> There were past attempts to export GlusterFS client mounts over NFS, but those used the GlusterFS NFS service. I believe this is the first instance "in the wild" of someone trying this with knfsd.
>
> With the former, while there was increased performance, there would invariably be race conditions that would lock up GlusterFS. See the ominous warnings posted on this Q&A thread: http://community.gluster.org/a/nfs-performance-with-fuse-client-redundancy/
>
> I am curious to see if using knfsd, as opposed to GlusterFS' NFS service, yields a long-term solution for this type of workload. Please do continue to keep us updated.
>
> Thanks,
> JM
>
>
>
> ----- Original Message -----
>> Well, it was too clever for me too :) - someone else suggested it
>> when I was
>> describing some of the options we were facing. I admit to initially
>> thinking
>> that it was silly to expect better performance by stacking protocols,
>> but we
>> tried it and it seems to have worked.
>>
>> To your point:
>>
>> the 'client' is the end node that uses the gluster storage - in our
>> case it's
>> a compute node (w/ limited storage) in a research cluster.
>>
>> the 'server' is the collection of nodes that provides the gluster
>> storage.
>>
>> the client mounts the server with the native gluster client,
>> providing all the
>> gluster advantages of single namespace, scalability, reliability,
>> etc. to
>> 'client:/glmount'
>>
>> the client then exports that gluster fs via NFS to itself, so
>> 'client:/glmount' is listed in '/etc/exports' as rw to itself.
>>
>> the client then mounts itself (innuendo and disturbing mental images
>> notwithstanding) via NFS:
>> 'mount -t nfs localhost:/glmount /glnfs'
>> so that the gluster fs (/glmount) is NFS-loopback-mounted on the
>> client
>> (itself):
>>
>> from our test case, simplified:
>> -----------------------------------
>> hmangala at claw5:~
>> $ cat /etc/mtab  # (all non-gluster-related entries deleted)
>> ...
>> pbs1ib:/gli    /glmount          fuse.glusterfs   \
>>               rw,default_permissions,allow_other,max_read=131072  0 0
>> ...
>> claw5:/glmount      /glnfs   nfs       rw,addr=10.255.78.4        0 0
>> ...
>> -----------------------------------
>>
>> in the above extract, pbs1ib:/gli is the gluster fs that is mounted
>> to
>> 'claw5:/glmount'.
>>
>> claw5 then NFS-mounts claw5:/glmount onto /glnfs which users actually
>> use to
>> read/write.
>>
>> I agree, not very intuitive... but it seems to work.
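>>
>> To summarize the setup as commands (hostnames and volume names follow
>> the example above; the fsid export option is my guess at what knfsd
>> wants for a FUSE-backed export, not something we verified):
>>
>> -----------------------------------
>> # 1. native gluster mount on the client (claw5)
>> mount -t glusterfs pbs1ib:/gli /glmount
>>
>> # 2. export that mount back to the client itself; /etc/exports has:
>> #      /glmount localhost(rw,fsid=1)
>> exportfs -ra
>>
>> # 3. loopback NFS mount that users actually read/write through
>> mount -t nfs -o vers=3 localhost:/glmount /glnfs
>> -----------------------------------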
>>
>> This is with NFS3 clients.  NFS4 may provide an additional perf boost
>> by
>> allowing clients to work out of cache until they're forced to sync, but
>> we
>> haven't tried that yet and the test methodology we used wouldn't show
>> a gain
>> anyway.  I'll have to try to create a more realistic test harness.
>>
>>
>> hjm
>>
>> On Friday, September 14, 2012 01:04:59 PM Whit Blauvelt wrote:
>>> On Fri, Sep 14, 2012 at 09:41:42AM -0700, harry mangalam wrote:
>>>>>> What I mean:
>>>>>> - mounting a gluster fs via the native client,
>>>>>> - then NFS-exporting the gluster fs to the client itself
>>>>>> - then mounting that gluster fs via NFS3 to take advantage of
>>>>>> the
>>>>>> client-side caching.
>>> Harry,
>>>
>>> What is "the client itself" here? I'm having trouble picturing
>>> what's doing
>>> what with what. No doubt because it's too clever for me. Maybe a
>>> bit more
>>> description would clarify it nonetheless.
>>>
>>> Thanks,
>>> Whit
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>> --
>> Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
>> [m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
>> 415 South Circle View Dr, Irvine, CA, 92697 [shipping]
>> MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
>> --
>> What does it say about a society that would rather send its
>> children to kill and die for oil than to get on a bike?
>>
>>



