[Gluster-users] Re: Re: Can't mount NFS, please help!

sz_cuitao at 163.com sz_cuitao at 163.com
Thu Apr 2 01:36:01 UTC 2020


OK, I see.

Your answer is very clear!

Thanks!



sz_cuitao at 163.com
 
From: Erik Jacobson
Sent: 2020-04-02 09:29
To: sz_cuitao at 163.com
Cc: Strahil Nikolov; Erik Jacobson; gluster-users
Subject: Re: Re: Re: [Gluster-users] Can't mount NFS, please help!
> Thanks everyone!
> 
> You mean that Ganesha is the newer solution for NFS server functionality,
> replacing gNFS, and that in new versions gNFS is no longer the suggested
> component; but if I want to use an NFS server, I should install and
> configure Ganesha separately. Is that right?
 
I would phrase it this way:
- The community is moving to Ganesha to provide NFS services. Ganesha
  supports several storage solutions, including gluster
 
- Therefore, distros and packages tend to disable the gNFS support in
  gluster since they assume people are moving to Ganesha. It would
  otherwise be a competing solution for NFS.
 
- Some people still prefer gNFS and do not want to use Ganesha yet, and
  those people need to re-build their package in some cases, as was
  outlined in this thread. Rebuilding provides the necessary libraries and
  config files to run gNFS (see the sketch after this list)
 
- As far as I have found, gNFS still works well if you build it in
 
- For my use, Ganesha crashes with my "not normal" workload, so
  I can't switch to it yet. I worked with the community some but ran
  out of system time and had to drop the thread. I would like to revisit
  it so that I can run Ganesha too some day. My workload is very far
  from typical.
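
For reference, here is a rough way to check whether an installed build
already has the gNFS pieces (a sketch assuming glusterfs 7.2 on an
RPM-based distro; the exact package carrying the xlator varies):

  # the gNFS xlator only exists when gluster was built with gNFS
  ls /usr/lib64/glusterfs/7.2/xlator/nfs/server.so

  # or search the installed packages for it
  rpm -qal | grep 'xlator/nfs/server.so'

If the file is missing, rebuilding the package with gNFS enabled (as in
the spec snippet quoted further down) is the usual fix.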
 
Erik
 
 
> 
> 
> 
> sz_cuitao at 163.com
> 
>      
>     From: Strahil Nikolov
>     Date: 2020-04-02 00:58
>     To: Erik Jacobson; sz_cuitao at 163.com
>     CC: gluster-users
>     Subject: Re: [Gluster-users] Can't mount NFS, please help!
>     On April 1, 2020 3:37:35 PM GMT+03:00, Erik Jacobson
>     <erik.jacobson at hpe.com> wrote:
>     >If you are like me and cannot yet switch to Ganesha (it doesn't work in
>     >our workload yet; I need to get back to working with the community on
>     >that...)
>     >
>     >What I would have expected in the process list was a glusterfs process
>     >with
>     >"nfs" in the name.
>     >
>     >here it is from one of my systems:
>     >
>     >root     57927     1  0 Mar31 ?        00:00:00 /usr/sbin/glusterfs -s
>     >localhost --volfile-id gluster/nfs -p /var/run/gluster/nfs/nfs.pid -l
>     >/var/log/glusterfs/nfs.log -S /var/run/gluster/933ab0ad241fab5f.socket
>     >
>     >
>     >My guess - but you'd have to confirm this with the logs - is that your
>     >gluster build does not have gnfs built in. Since they wish us to move
>     >to Ganesha, it is often off by default. For my own builds, I enable it
>     >in the spec file.
>     >
>     >So you should have this installed:
>     >
>     >/usr/lib64/glusterfs/7.2/xlator/nfs/server.so
>     >
>     >If that isn't there, you likely need to adjust your spec file and
>     >rebuild.
>     >
>     >As others mentioned, the suggestion is to use Ganesha if possible,
>     >which is a separate project.
>     >
>     >I hope this helps!
>     >
>     >PS: here is a snip from the spec file I use, with an erikj comment for
>     >what I adjusted:
>     >
>     ># gnfs
>     ># if you wish to compile an rpm with the legacy gNFS server xlator
>     ># rpmbuild -ta @PACKAGE_NAME@-@PACKAGE_VERSION@.tar.gz --with gnfs
>     >%{?_without_gnfs:%global _with_gnfs --disable-gnfs}
>     >
>     ># erikj force enable
>     >%global _with_gnfs --enable-gnfs
>     ># end erikj
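>     >
>     >(A rough sketch of the rebuild itself, assuming a glusterfs 7.2 source
>     >tarball and an RPM-based system; the file names below are illustrative:
>     >
>     >  rpmbuild -ta glusterfs-7.2.tar.gz --with gnfs
>     >  yum localinstall ~/rpmbuild/RPMS/x86_64/glusterfs*.rpm
>     >  systemctl restart glusterd
>     >)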
>     >
>     >
>     >On Wed, Apr 01, 2020 at 11:57:16AM +0800, sz_cuitao at 163.com wrote:
>     >> 1. The gluster server has the volume option nfs.disable set to: off
>     >>
>     >> Volume Name: gv0
>     >> Type: Disperse
>     >> Volume ID: 429100e4-f56d-4e28-96d0-ee837386aa84
>     >> Status: Started
>     >> Snapshot Count: 0
>     >> Number of Bricks: 1 x (2 + 1) = 3
>     >> Transport-type: tcp
>     >> Bricks:
>     >> Brick1: gfs1:/brick1/gv0
>     >> Brick2: gfs2:/brick1/gv0
>     >> Brick3: gfs3:/brick1/gv0
>     >> Options Reconfigured:
>     >> transport.address-family: inet
>     >> storage.fips-mode-rchecksum: on
>     >> nfs.disable: off
>     >>
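>     >> (For reference, nfs.disable is a normal volume option set with the
>     >> gluster CLI; a minimal example using the volume name above:
>     >>
>     >>   gluster volume set gv0 nfs.disable off
>     >> )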
>     >> 2. The process has started.
>     >>
>     >> [root@gfs1 ~]# ps -ef | grep glustershd
>     >> root       1117      1  0 10:12 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id shd/gv0 -p /var/run/gluster/shd/gv0/gv0-shd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ca97b99a29c04606.socket --xlator-option *replicate*.node-uuid=323075ea-2b38-427c-a9aa-70ce18e94208 --process-name glustershd --client-pid=-6
>     >>
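>     >> (Note: glustershd is the self-heal daemon, not the NFS server. A rough
>     >> way to check for the gNFS process and its log, assuming the default
>     >> log location shown earlier in the thread:
>     >>
>     >>   ps -ef | grep 'volfile-id gluster/nfs'
>     >>   tail -n 50 /var/log/glusterfs/nfs.log
>     >> )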
>     >>
>     >> 3. But the status of gv0 is not correct, because its NFS Server is
>     >> not online.
>     >>
>     >> [root@gfs1 ~]# gluster volume status gv0
>     >> Status of volume: gv0
>     >> Gluster process                             TCP Port  RDMA Port  Online  Pid
>     >> ------------------------------------------------------------------------------
>     >> Brick gfs1:/brick1/gv0                      49154     0          Y       4180
>     >> Brick gfs2:/brick1/gv0                      49154     0          Y       1222
>     >> Brick gfs3:/brick1/gv0                      49154     0          Y       1216
>     >> Self-heal Daemon on localhost               N/A       N/A        Y       1117
>     >> NFS Server on localhost                     N/A       N/A        N       N/A
>     >> Self-heal Daemon on gfs2                    N/A       N/A        Y       1138
>     >> NFS Server on gfs2                          N/A       N/A        N       N/A
>     >> Self-heal Daemon on gfs3                    N/A       N/A        Y       1131
>     >> NFS Server on gfs3                          N/A       N/A        N       N/A
>     >>
>     >> Task Status of Volume gv0
>     >> ------------------------------------------------------------------------------
>     >> There are no active volume tasks
>     >>
>     >> 4. So, I can't mount gv0 on my client.
>     >>
>     >> [root@kvms1 ~]# mount -t nfs gfs1:/gv0 /mnt/test
>     >> mount.nfs: Connection refused
>     >>
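>     >> (One side note: gluster's built-in NFS server only speaks NFSv3, so
>     >> once it is actually online, the mount typically needs v3 options; an
>     >> illustrative example:
>     >>
>     >>   mount -t nfs -o vers=3,proto=tcp gfs1:/gv0 /mnt/test
>     >> )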
>     >>
>     >> Please Help!
>     >> Thanks!
>     >>
>     >>
>     >>
>     >>
>     >>
>     >>
>     >
>     >> sz_cuitao at 163.com
>     >
>     >
>     >
>     >
>     >Erik Jacobson
>     >Software Engineer
>     >
>     >erik.jacobson at hpe.com
>     >+1 612 851 0550 Office
>     >
>     >Eagan, MN
>     >hpe.com
>      
>     Hello All,
>      
>      
>     As far as I know, most distributions (at least CentOS) provide their
>     binaries with gNFS disabled.
>     Most probably you need to rebuild.
>      
>     You can use Ganesha - it uses libgfapi to connect to the pool.
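>      
>     (A minimal sketch of a ganesha.conf export for a volume like gv0,
>     assuming nfs-ganesha with its Gluster FSAL is installed; the names
>     here are illustrative:
>      
>     EXPORT {
>         Export_Id = 1;
>         Path = "/gv0";
>         Pseudo = "/gv0";
>         Access_Type = RW;
>         FSAL {
>             Name = GLUSTER;
>             Hostname = "gfs1";
>             Volume = "gv0";
>         }
>     }
>     )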
>      
>     Best Regards,
>     Strahil Nikolov
> 
 
 
Erik Jacobson
Software Engineer
 
erik.jacobson at hpe.com
+1 612 851 0550 Office
 
Eagan, MN
hpe.com