[Gluster-users] NFS file locking with gluster 3.7.3

Anand Subramanian ansubram at redhat.com
Mon Aug 17 03:54:54 UTC 2015


Hi Thibault,

There are a few OS-side tunables that have helped boost Ganesha
performance, and I suspect they improve performance for several
workloads where Ganesha is concerned. (They don't seem to be necessary
for gluster-nfs at all.)

Adding Manoj here, who may be able to point you to these configurables,
as he has experimented with Ganesha performance.

Thanks,
Anand

On 08/13/2015 07:13 PM, Niels de Vos wrote:
> On Mon, Aug 10, 2015 at 09:19:25AM +0100, Thibault Godouet wrote:
>> Thanks Niels for your helpful answer.
>>
>> Regarding the locking, indeed that solves my issue. Now I'm wondering how
>> to monitor this. The best I have so far is to get the list of RPC binds,
>> and the TCP/UDP port in particular, and then run lsof to find out whether
>> it is Gluster.  That works, but it is a bit indirect. If someone knows a
>> better way I'd be interested to hear it.
> That is almost how I do it as well. Instead of 'lsof' I use 'netstat' or
> 'ss', depending on the Linux distribution.
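>
> Something along these lines should work (PORT is a placeholder for
> whatever port rpcinfo reports for nlockmgr on your system):
>
>     # which port did the NLM implementation register with rpcbind?
>     $ rpcinfo -p | grep nlockmgr
>     # which process owns that port?
>     $ ss -tlnp | grep PORT        # or: lsof -i :PORT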
>
>> As for Ganesha, I saw articles explaining that it effectively removes
>> layers, which is why I thought NFS v3 via Ganesha would be faster than
>> native Gluster NFS.  Given your answer I take it there are other moving
>> parts / differences.  Is there a known general guideline on which is best
>> when?  E.g. does one handle small files better than the other, or
>> something like that?
> I am not aware of any guidelines for this. The difference in performance
> is highly dependent on the workload and use-case. There is little
> difference in the layers between Gluster/NFS and NFS-Ganesha; both are
> userspace NFS-server implementations (neither has the context switches
> through the Linux VFS that FUSE mounts have).
>
> If you need the best performance, you should probably just try both
> configurations, and run your intended workload against the servers.
> Artificial/standard tests most often do not emulate a real workload.
>
> HTH,
> Niels
>
>
>> On 5 Aug 2015 7:06 pm, "Niels de Vos" <ndevos at redhat.com> wrote:
>>
>>> On Wed, Aug 05, 2015 at 04:11:47PM +0100, Thibault Godouet wrote:
>>>> Looking around I get the impression that file locking (NLM) may simply
>>>> not be supported in glusterfs's built-in NFS server.
>>> This is actually supported. But note that you cannot run a userspace
>>> NLM implementation provided by an NFS-server (Gluster/NFS or NFS-Ganesha)
>>> on a system that also acts as an NFS-client. The Linux kernel NFS-client
>>> uses the lockd kernel module, and only one NLM implementation can be
>>> registered at rpcbind. Whichever service (NFS-client or NFS-server)
>>> starts first will be able to register itself; the second one will (mostly
>>> silently) fail.
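>>>
>>> A quick way to see which one won the registration race (just a sketch;
>>> exact output varies per system):
>>>
>>>     # is an NLM service registered with rpcbind, and on which ports?
>>>     $ rpcinfo -p | grep nlockmgr
>>>     # is the kernel lockd module loaded (i.e. kernel NFS in use)?
>>>     $ lsmod | grep lockd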
>>>
>>>> I get the impression that Ganesha is aimed at supporting NFS better, and
>>>> presumably supports locking well, so I should give it a try. (If I
>>>> understand correctly, the performance is also likely to be higher, which
>>>> is a nice bonus!)
>>> NFS-Ganesha offers more features than Gluster/NFS. The performance is
>>> highly dependent on the workload, Gluster/NFS can be faster for many of
>>> them.
>>>
>>> Cheers,
>>> Niels
>>>
>>>
>>>> If someone could confirm this, that would be useful to make sure I'm going
>>>> in the right direction.
>>>>
>>>> Thanks,
>>>> Thibault.
>>>> On 4 Aug 2015 1:23 pm, "Thibault Godouet" <tibo92 at godouet.net> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>>
>>>>>
>>>>> I have a cluster of 2 servers running 3.7.3 with replication, and
>>>>> standard NFS (no Ganesha).  This is on CentOS 6.
>>>>>
>>>>> I use CTDB with 2 virtual IPs (one for each server in a normal
>>>>> situation) to share the volume over NFS and CIFS (samba).
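>>>>>
>>>>> For reference, the virtual IPs are listed in CTDB's public addresses
>>>>> file; the addresses and interface below are placeholders, not my real
>>>>> ones:
>>>>>
>>>>>     # /etc/ctdb/public_addresses -- one "IP/mask interface" per line
>>>>>     10.0.0.10/24 eth0
>>>>>     10.0.0.11/24 eth0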
>>>>>
>>>>>
>>>>>
>>>>> fcntl() file locking doesn't seem to work when the volume is mounted
>>>>> over NFS.
>>>>>
>>>>> This is apparent with an 'svn info' (svn 1.8, if it makes any
>>>>> difference) in a local working copy:
>>>>>
>>>>>
>>>>>
>>>>> $ svn info
>>>>> svn: E200033: Another process is blocking the working copy database, or
>>>>> the underlying filesystem does not support file locking; if the working
>>>>> copy is on a network filesystem, make sure file locking has been
>>>>> enabled on the file server
>>>>> svn: E200033: sqlite[S5]: database is locked, executing statement
>>>>> 'PRAGMA synchronous=OFF;PRAGMA recursive_triggers=ON;PRAGMA
>>>>> foreign_keys=OFF;PRAGMA locking_mode = NORMAL;'
>>>>>
>>>>>
>>>>>
>>>>> Everything seems to work fine on native Gluster (FUSE) mounts: the same
>>>>> 'svn info' works nicely.
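>>>>>
>>>>> For what it's worth, fcntl() locking can also be tested directly,
>>>>> without svn.  A minimal sketch using python's fcntl module (lockf()
>>>>> uses fcntl() F_SETLK underneath; 'locktest' is just an arbitrary file
>>>>> on the mount being tested):
>>>>>
>>>>>     $ cd /path/to/mount
>>>>>     $ python -c 'import fcntl; f = open("locktest", "w"); fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB); print "lock acquired"'
>>>>>
>>>>> If the lock cannot be taken, lockf() raises an error instead of
>>>>> printing.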
>>>>>
>>>>> I can't really use native mounts due to the performance hit (many small
>>>>> files) and the fact that I would need to install the gluster client
>>>>> software on every server.
>>>>>
>>>>>
>>>>>
>>>>> Is fcntl() file locking supported in Gluster NFS mounts?  If so, any
>>>>> idea why it doesn't work for me?
>>>>>
>>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Thibault.
>>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
