[Gluster-users] Re: Can't mount NFS, please help!

sz_cuitao at 163.com sz_cuitao at 163.com
Thu Apr 2 01:09:46 UTC 2020


Thanks everyone!

You mean that Ganesha is the newer NFS server solution that replaces gNFS, and in current versions gNFS is no longer the recommended component;
but if I want an NFS server, I should install and configure Ganesha separately. Is that right?
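(So on CentOS that would be something like "yum install nfs-ganesha nfs-ganesha-gluster" and then an export in /etc/ganesha/ganesha.conf? The package names are just my guess.)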





sz_cuitao at 163.com
 
From: Strahil Nikolov
Date: 2020-04-02 00:58
To: Erik Jacobson; sz_cuitao at 163.com
CC: gluster-users
Subject: Re: [Gluster-users] Can't mount NFS, please help!
On April 1, 2020 3:37:35 PM GMT+03:00, Erik Jacobson <erik.jacobson at hpe.com> wrote:
>If you are like me and cannot yet switch to Ganesha (it doesn't work in
>our workload yet; I need to get back to working with the community on
>that...)
>
>What I would have expected in the process list was a glusterfs process
>with
>"nfs" in the name.
>
>here it is from one of my systems:
>
>root     57927     1  0 Mar31 ?        00:00:00 /usr/sbin/glusterfs -s
>localhost --volfile-id gluster/nfs -p /var/run/gluster/nfs/nfs.pid -l
>/var/log/glusterfs/nfs.log -S /var/run/gluster/933ab0ad241fab5f.socket
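>
>(To look for it the same way on your side, something along these lines should
>work - "gluster/nfs" is the volfile-id the gNFS process runs with:)
>
>  ps -ef | grep -E 'glusterfs.*volfile-id gluster/nfs'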
>
>
>My guess - but you'd have to confirm this with the logs - is your
>gluster
>build does not have gnfs built in. Since they wish us to move to
>Ganesha, it is often off by default. For my own builds, I enable it in
>the spec file.
>
>So you should have this installed:
>
>/usr/lib64/glusterfs/7.2/xlator/nfs/server.so
>
>If that isn't there, you likely need to adjust your spec file and
>rebuild.
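>
>(A quick way to check, assuming an RPM-based install - adjust the version
>directory to whatever release you are running:)
>
>  ls /usr/lib64/glusterfs/*/xlator/nfs/server.so
>  rpm -ql glusterfs-server | grep 'xlator/nfs'   # owning package may differ per build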
>
>As others mentioned, the suggestion is to use Ganesha if possible,
>which is a separate project.
>
>I hope this helps!
>
>PS here is a snippet from the spec file I use, with an erikj comment for
>what I adjusted:
>
># gnfs
># if you wish to compile an rpm with the legacy gNFS server xlator
># rpmbuild -ta @PACKAGE_NAME@-@PACKAGE_VERSION@.tar.gz --with gnfs
>%{?_without_gnfs:%global _with_gnfs --disable-gnfs}
>
># erikj force enable
>%global _with_gnfs --enable-gnfs
># end erikj
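>
>(With that change - or just by passing the conditional on the rpmbuild command
>line as the comment above shows - the rebuild is roughly the following; the
>tarball name is only an example:)
>
>  rpmbuild -ta glusterfs-7.2.tar.gz --with gnfs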
>
>
>On Wed, Apr 01, 2020 at 11:57:16AM +0800, sz_cuitao at 163.com wrote:
>> 1. The gluster server has the volume option nfs.disable set to: off
>> 
>> Volume Name: gv0
>> Type: Disperse
>> Volume ID: 429100e4-f56d-4e28-96d0-ee837386aa84
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gfs1:/brick1/gv0
>> Brick2: gfs2:/brick1/gv0
>> Brick3: gfs3:/brick1/gv0
>> Options Reconfigured:
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> nfs.disable: off
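>> 
>> (For reference, gNFS was enabled here with the usual volume-set command,
>> i.e. something like:)
>> 
>>   gluster volume set gv0 nfs.disable off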
>> 
>> 2. The process has started.
>> 
>> [root at gfs1 ~]# ps -ef | grep glustershd
>> root       1117      1  0 10:12 ?        00:00:00 /usr/sbin/glusterfs -s localhost
>>     --volfile-id shd/gv0 -p /var/run/gluster/shd/gv0/gv0-shd.pid
>>     -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ca97b99a29c04606.socket
>>     --xlator-option *replicate*.node-uuid=323075ea-2b38-427c-a9aa-70ce18e94208
>>     --process-name glustershd --client-pid=-6
>> 
>> 
>> 3. But the status of gv0 is not correct: its NFS Server is not online.
>> 
>> [root at gfs1 ~]# gluster volume status gv0
>> Status of volume: gv0
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gfs1:/brick1/gv0                      49154     0          Y       4180
>> Brick gfs2:/brick1/gv0                      49154     0          Y       1222
>> Brick gfs3:/brick1/gv0                      49154     0          Y       1216
>> Self-heal Daemon on localhost               N/A       N/A        Y       1117
>> NFS Server on localhost                     N/A       N/A        N       N/A
>> Self-heal Daemon on gfs2                    N/A       N/A        Y       1138
>> NFS Server on gfs2                          N/A       N/A        N       N/A
>> Self-heal Daemon on gfs3                    N/A       N/A        Y       1131
>> NFS Server on gfs3                          N/A       N/A        N       N/A
>> 
>> Task Status of Volume gv0
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
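>> 
>> (If it helps, the gNFS log on each server should say why it does not start -
>> assuming the default log location:)
>> 
>>   tail -n 50 /var/log/glusterfs/nfs.log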
>> 
>> 4. So I can't mount gv0 on my client.
>> 
>> [root at kvms1 ~]# mount -t nfs  gfs1:/gv0 /mnt/test
>> mount.nfs: Connection refused
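>> 
>> (Note: even once the NFS server comes up, gNFS speaks NFSv3 only, so the
>> mount will likely need something like:)
>> 
>>   mount -t nfs -o vers=3,proto=tcp gfs1:/gv0 /mnt/test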
>> 
>> 
>> Please Help!
>> Thanks!
>> 
>> 
>> 
>> 
>> 
>>
>> sz_cuitao at 163.com
>
>
>
>
>Erik Jacobson
>Software Engineer
>
>erik.jacobson at hpe.com
>+1 612 851 0550 Office
>
>Eagan, MN
>hpe.com
>________
>
>
>
>Community Meeting Calendar:
>
>Schedule -
>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>Bridge: https://bluejeans.com/441850968
>
>Gluster-users mailing list
>Gluster-users at gluster.org
>https://lists.gluster.org/mailman/listinfo/gluster-users
 
Hello All,
 
 
As far as I know, most distributions (CentOS, at least) provide their binaries with gNFS disabled.
Most probably you need to rebuild.
 
You can use Ganesha - it uses libgfapi to connect to the pool.
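
(A minimal export for nfs-ganesha - assuming the nfs-ganesha and
nfs-ganesha-gluster packages are installed - would look roughly like this in
/etc/ganesha/ganesha.conf; the hostname and volume name are taken from this
thread, everything else is illustrative:)

  EXPORT {
      Export_Id = 1;
      Path = "/gv0";
      Pseudo = "/gv0";
      Access_Type = RW;
      Squash = No_root_squash;
      SecType = "sys";
      FSAL {
          Name = GLUSTER;
          Hostname = "gfs1";
          Volume = "gv0";
      }
  }

Then start the service (systemctl enable --now nfs-ganesha) and mount it from
the clients like any other NFS export.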
 
Best Regards,
Strahil Nikolov