[Gluster-users] NFS Mount vs. Gluster mount

prmarino1 at gmail.com
Mon Jul 6 14:41:18 UTC 2015


When using the gluster client (FUSE mount), replication is handled on the client side, so you will see a CPU utilization increase on the client.

If you are using NFS with Gluster you should consider looking into NFS-Ganesha. It is an implementation of NFSv4 that can be layered on top of Gluster; it resolves the locking issues with the built-in Gluster NFS server and works quite well. Red Hat's storage appliance uses it too. It is usually deployed together with CTDB to handle intelligent failover of VIPs.
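As a sketch only: assuming an NFS-Ganesha export of a volume named myvol has already been configured on a host called glusterserver (both names are illustrative, not from the original mails), a client would mount it over NFSv4 roughly like this:

```shell
# Mount an NFS-Ganesha export over NFSv4.1 (hostname, volume name and
# mount point are example values).
# Note: with NFSv4 the export appears under the server's pseudo
# filesystem root, so the path may differ from the volume name
# depending on the Pseudo setting in the Ganesha export block.
mount -t nfs -o vers=4.1 glusterserver:/myvol /mymount
```

With NFSv4 the lock protocol is part of NFS itself rather than a separate NLM service, which is why Ganesha avoids the single-lock-service limitation described below.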

  Original Message  
From: Niels de Vos
Sent: Monday, July 6, 2015 03:24
To: Jordan R. Willis
Cc: Gluster-users at gluster.org
Subject: Re: [Gluster-users] NFS Mount vs. Gluster mount

On Sun, Jul 05, 2015 at 08:05:02PM -0700, Jordan R. Willis wrote:
> Hello,
> 
> 
> I have been using NFS to mount my gluster volumes and they have been
> working pretty well. But I just realized how easy it is to mount
> volumes using glusterfs.
> 
> 
> mount -t glusterfs glusterserver:/myvol /mymount
> 
> 
> I used NFS because I was just so used to it. So I was wondering will I
> have to look out for any performance hits, pitfalls or gotchas if I
> just use a glusterfs mount? I was looking for some documentation
> comparing them but couldn’t find anything except
> (https://joejulian.name/blog/nfs-mount-for-glusterfs-gives-better-read-performance-for-small-files/)

One thing to keep in mind is that when you mount the Gluster Volumes on
the local Gluster Storage Servers, you should normally use the GlusterFS
protocol. With NFSv3 it is only possible to have one service on a server
handling locks. This means that either the Gluster/NFS service can track
locking, or the NFS-client can use locks, but not both.

It is possible to work around this restriction by disabling locking, but
that is not recommended for most users and their use cases.
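For completeness, the workaround mentioned above is the standard NFSv3 mount option that disables the client-side lock manager; the server and volume names here are placeholders:

```shell
# NOT recommended for most use cases: mount an NFSv3 export with
# locking disabled. Applications that rely on fcntl()/flock() locks
# will silently get no lock enforcement.
mount -t nfs -o vers=3,nolock glusterserver:/myvol /mymount
```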

Also note that the FUSE mount has an integrated fail-over mechanism,
whereas that is not the case for NFS. If the NFS-server that was used
for mounting goes down, the IP-address should migrate to another
storage server that is still available (with pacemaker/ctdb/...).
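With the FUSE mount, the server named at mount time is only used to fetch the volume file; after that the client talks to all bricks directly and fails over between replicas on its own. A sketch, with illustrative server names, of making the initial volfile fetch itself redundant:

```shell
# backup-volfile-servers only affects the initial volfile fetch at
# mount time; once mounted, the FUSE client connects to every brick
# and handles replica fail-over internally.
mount -t glusterfs -o backup-volfile-servers=server2:server3 \
    server1:/myvol /mymount
```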

HTH,
Niels

