[Gluster-users] [Gluster-devel] High load on glusterfs!!

Niels de Vos ndevos at redhat.com
Wed Aug 30 09:38:30 UTC 2017


On Wed, Aug 30, 2017 at 01:52:59PM +0530, ABHISHEK PALIWAL wrote:
> What is Gluster/NFS and how can we use it?

Gluster/NFS (or gNFS) is the NFS server that comes with GlusterFS. It is
an NFSv3 server and can only be used to export Gluster volumes.

You can enable it like this (a short example follows the list):
 - install the glusterfs-gnfs RPM (glusterfs >= 3.11)
 - the glusterfs-server RPM might contain the NFS server (glusterfs < 3.11)
 - when building from source, pass "./configure --enable-gnfs"
 - enable it per volume with: gluster volume set $VOLUME nfs.disable false
 - logs are in /var/log/glusterfs/nfs.log
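
For example, the steps on the server and a client could look like the
commands below; the volume name "myvol", the server name "server1" and
the mount point are just placeholders, not taken from this thread:

  # enable the built-in NFS server for one volume
  gluster volume set myvol nfs.disable false

  # check that the NFS server process came up for the volume
  gluster volume status myvol nfs

  # on a client, mount it over NFSv3 (gNFS does not speak NFSv4)
  mount -t nfs -o vers=3 server1:/myvol /mnt/myvol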

But really, NFS-Ganesha is the recommended option. It has many more features
and will receive regular updates and improvements.
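
For comparison, a minimal NFS-Ganesha export of a Gluster volume through
FSAL_GLUSTER looks roughly like the block below in ganesha.conf. This is
only a sketch; "myvol" and "server1" are placeholders and the options are
not exhaustive, so please check the NFS-Ganesha documentation for details:

  EXPORT {
      # any id that is unique among the exports
      Export_Id = 2;
      Path = "/myvol";
      Pseudo = "/myvol";
      Access_Type = RW;
      # the Gluster-specific part: which server and volume to reach via libgfapi
      FSAL {
          Name = GLUSTER;
          Hostname = "server1";
          Volume = "myvol";
      }
  }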

Niels


> 
> On Wed, Aug 30, 2017 at 1:24 PM, Niels de Vos <ndevos at redhat.com> wrote:
> 
> > On Thu, Aug 17, 2017 at 12:03:02PM +0530, ABHISHEK PALIWAL wrote:
> > > Hi Team,
> > >
> > > I have a query regarding the usage of ACLs on a gluster volume. I have
> > > noticed that when we use a normal gluster volume (without ACLs) the CPU
> > > load is low, but when we apply ACLs on the gluster volume, which
> > > internally uses FUSE ACLs, the CPU load increases about 6x.
> > >
> > > Could you please let me know whether this is expected, or whether we can
> > > do some other configuration to reduce this kind of overhead on a gluster
> > > volume with ACLs?
> > >
> > > For clarification: we are using kernel NFS to export the gluster
> > > volume.
> >
> > Exporting Gluster volumes over FUSE and kernel NFS is not something we
> > suggest or test. There are (or at least were) certain limitations in
> > FUSE that prevented good support for this.
> >
> > Please use NFS-Ganesha instead; that is the NFS server we actively
> > develop with. Gluster/NFS is still available too, but it only receives
> > the occasional fix and is only suggested for legacy users who have not
> > moved to NFS-Ganesha yet.
> >
> > HTH,
> > Niels
> >
> 
> 
> 
> -- 
> Regards
> Abhishek Paliwal


More information about the Gluster-users mailing list