[Gluster-users] Configuring legacy Gluster NFS

Olivier Olivier.Nicole at cs.ait.ac.th
Mon May 25 02:49:00 UTC 2020


Strahil Nikolov <hunter86_bg at yahoo.com> writes:

> On May 23, 2020 7:29:23 AM GMT+03:00, Olivier <Olivier.Nicole at cs.ait.ac.th> wrote:
>>Hi,
>>
>>I have been struggling with NFS Ganesha: one gluster node with ganesha
>>serving only one client could not handle the load when dealing with
>>thousands of small files. Legacy gluster NFS works flawlessly with 5
>>or 6 clients.
>>
>>But the documentation for gNFS is scarce; I could not find where to
>>configure the various authorizations, so any pointer is greatly welcome.
>>
>>Best regards,
>>
>>Olivier
>
> Hi Olivier,
>
> Can you give me a hint as to why you are using gluster with a single node in the TSP serving only one client?
> Usually, this is not a typical gluster workload.

Hi Strahil,

Of course I have more than one node; the other nodes hold the bricks
and the data. I am using a node with no data to troubleshoot this NFS
issue. But in my comparison between gNFS and Ganesha, I was using the
same configuration: one node with no brick accessing the other nodes
for the data. So the only change between what is working and what was
not is the NFS server. Besides, I have been using NFS for over 15
years and know that, given my data and type of activity, a single NFS
server should be able to serve 5 to 10 clients without a problem; that
is why I suspected Ganesha from the beginning.

If I cannot configure gNFS, I think I could mount the volume with the
GlusterFS client and re-export it through the native Linux kernel NFS
server, but that would add overhead and leave some features behind;
that is why my focus is primarily on configuring gNFS. Sketches of
both approaches follow.

>
> Also can you specify:
> - Brick block device type and details (raid type, lvm, vdo, etc )

All nodes are VMware virtual machines; RAID is handled at the VMware
level.

> - xfs_info of the brick
> - mount options  for the brick

The bricks are not on separately mounted filesystems.
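(For completeness, this is how one can check which filesystem backs a
brick directory; the path is the one on gluster3000:)

  # Show the mount point and filesystem backing the brick directory
  findmnt -T /gluster1/br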

> - SELINUX/APPARMOR status
> - sysctl tunables (including tuned profile)

All systems are vanilla Ubuntu with no tuning.
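(By vanilla I mean AppArmor left at the Ubuntu defaults, no SELinux,
no tuned profile and no sysctl changes; quick checks:)

  # AppArmor is the Ubuntu default; SELinux is not installed here
  sudo aa-status | head -1

  # No custom sysctl tuning beyond the distribution defaults
  ls /etc/sysctl.d/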

> - gluster volume information and status

sudo gluster volume info gv0

Volume Name: gv0
Type: Distributed-Replicate
Volume ID: cc664830-1dd0-4dd4-9f1c-493578297e79
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gluster3000:/gluster1/br
Brick2: gluster5000:/gluster/br
Brick3: gluster3000:/gluster2/br
Brick4: gluster2000:/gluster/br
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: off
features.cache-invalidation: on

sudo gluster volume status gv0
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster3000:/gluster1/br              49152     0          Y       1473
Brick gluster5000:/gluster/br               49152     0          Y       724
Brick gluster3000:/gluster2/br              49153     0          Y       1549
Brick gluster2000:/gluster/br               49152     0          Y       723
Self-heal Daemon on localhost               N/A       N/A        Y       1571
NFS Server on localhost                     N/A       N/A        N       N/A
Quota Daemon on localhost                   N/A       N/A        Y       1560
Self-heal Daemon on gluster2000.cs.ait.ac.th N/A      N/A        Y       835
NFS Server on gluster2000.cs.ait.ac.th      N/A       N/A        N       N/A
Quota Daemon on gluster2000.cs.ait.ac.th    N/A       N/A        Y       735
Self-heal Daemon on gluster5000.cs.ait.ac.th N/A      N/A        Y       829
NFS Server on gluster5000.cs.ait.ac.th      N/A       N/A        N       N/A
Quota Daemon on gluster5000.cs.ait.ac.th    N/A       N/A        Y       736
Self-heal Daemon on fbsd3500                N/A       N/A        Y       2584
NFS Server on fbsd3500                      2049      0          Y       2671
Quota Daemon on fbsd3500                    N/A       N/A        Y       2571

Task Status of Volume gv0
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 53e7c649-27f0-4da0-90dc-af59f937d01f
Status               : completed

> - ganesha settings

MDCACHE
{
        Attr_Expiration_Time = 600;
        Entries_HWMark = 50000;
        LRU_Run_Interval = 90;
        FD_HWMark_Percent = 60;
        FD_LWMark_Percent = 20;
        FD_Limit_Percent = 90;
}
EXPORT
{
        Export_Id = 2;
        etc.
}

> - Network settings + MTU

MTU 1500 (I think it is my switch that has never worked with jumbo
frames). I have a dedicated VLAN for NFS and gluster traffic and a
separate VLAN for user connections.
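
(If I ever revisit jumbo frames, a simple way to test whether they
survive the path between nodes, using one of my node names:)

  # 9000-byte MTU minus 28 bytes of IP+ICMP headers; -M do forbids
  # fragmentation, so the ping fails if any hop lacks jumbo support
  ping -c 3 -M do -s 8972 gluster5000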

I hope that helps.

Best regards,

Olivier

>
> Best Regards,
> Strahil Nikolov
>
