[Gluster-users] Unable to start nfs server
David Coulson
david at davidcoulson.net
Mon Mar 5 11:59:00 UTC 2012
Yep.
[root at dresproddns01 ~]# service glusterd stop
Stopping glusterd: [ OK ]
[root at dresproddns01 ~]# ps ax | grep nfs
120494 pts/0 S+ 0:00 grep nfs
2167119 ? S 0:00 [nfsiod]
[root at dresproddns01 ~]# service rpcbind stop
Stopping rpcbind: [ OK ]
[root at dresproddns01 ~]# rpcinfo -p
rpcinfo: can't contact portmapper: RPC: Remote system error - No such
file or directory
[root at dresproddns01 ~]# service rpcbind start
Starting rpcbind: [ OK ]
[root at dresproddns01 ~]# service glusterd start
Starting glusterd: [ OK ]
[root at dresproddns01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
Note that I waited a short while between the last two steps. FYI, this
is RHEL6 (the two systems that do work are also RHEL6, so I doubt the
OS is the difference).
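
Since the NFS translator registers with the portmapper some time after
glusterd comes up (hence the wait above), a one-shot rpcinfo right
after startup can miss it. A quick way to watch for it (a sketch;
38465-38467 are the ports the gluster NFS process listens on below):

    # poll the portmapper every 2s until mountd/nfs appear
    watch -n 2 "rpcinfo -p | egrep 'mountd|nfs'"

    # separately, confirm the gluster NFS ports are listening
    netstat -ntlp | egrep ':3846[5-7]'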
On 3/5/12 3:27 AM, Bryan Whitehead wrote:
> did you start portmap service before you started gluster?
>
> On Sun, Mar 4, 2012 at 11:53 AM, David Coulson <david at davidcoulson.net> wrote:
>
> I have four systems with multiple 4-way replica volumes. I'm
> migrating a number of volumes from FUSE to NFS for performance
> reasons.
>
> My first two hosts seem to work nicely, but the other two won't
> start the NFS services properly. I looked through nfs.log, but it
> gives no indication of why the NFS server failed to register with
> rpcbind. I'm presuming I've got a misconfiguration on two of the
> systems, but there's no clear indication of what is broken.
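>
> One way to probe this directly (a sketch; 100003 and 100005 are the
> standard NFS and mountd RPC program numbers that rpcinfo reports) is
> to ask the portmapper whether the gluster NFS server ever registered,
> and to follow the log during a restart:
>
>     # query the local portmapper for NFS v3 (program 100003) over TCP
>     rpcinfo -t localhost 100003 3
>     # and for mountd (program 100005)
>     rpcinfo -t localhost 100005 3
>     # follow the gluster nfs log while restarting glusterd
>     tail -f /var/log/glusterfs/nfs.log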
>
> Here is an example from a host which does not work:
>
> [root at dresproddns01 ~]# rpcinfo -p
>    program vers proto   port  service
>     100000    4   tcp    111  portmapper
>     100000    3   tcp    111  portmapper
>     100000    2   tcp    111  portmapper
>     100000    4   udp    111  portmapper
>     100000    3   udp    111  portmapper
>     100000    2   udp    111  portmapper
> [root at dresproddns01 ~]# ps ax | grep nfs
> 2167119 ? S 0:00 [nfsiod]
> 2738268 ?  Ssl  0:00 /opt/glusterfs/3.2.5/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
> 2934228 pts/0 S+ 0:00 grep nfs
> [root at dresproddns01 ~]# netstat -ntlp | grep 2738268
> tcp   0   0 0.0.0.0:38465   0.0.0.0:*   LISTEN   2738268/glusterfs
> tcp   0   0 0.0.0.0:38466   0.0.0.0:*   LISTEN   2738268/glusterfs
> tcp   0   0 0.0.0.0:38467   0.0.0.0:*   LISTEN   2738268/glusterfs
>
> [root at dresproddns01 ~]# gluster volume info svn
>
> Volume Name: svn
> Type: Replicate
> Status: Started
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
> Brick1: rhesproddns01:/gluster/svn
> Brick2: rhesproddns02:/gluster/svn
> Brick3: dresproddns01:/gluster/svn
> Brick4: dresproddns02:/gluster/svn
> Options Reconfigured:
> performance.client-io-threads: 1
> performance.flush-behind: on
> network.ping-timeout: 5
> performance.stat-prefetch: 1
> nfs.disable: off
> nfs.register-with-portmap: on
> auth.allow: 10.250.53.*,10.252.248.*,169.254.*,127.0.0.1
> performance.cache-size: 256Mb
> performance.write-behind-window-size: 128Mb
>
> Only obvious difference with a host which does work is this:
>
> [root at rhesproddns01 named]# rpcinfo -p
>    program vers proto   port  service
>     100000    4   tcp    111  portmapper
>     100000    3   tcp    111  portmapper
>     100000    2   tcp    111  portmapper
>     100000    4   udp    111  portmapper
>     100000    3   udp    111  portmapper
>     100000    2   udp    111  portmapper
>     100005    3   tcp  38465  mountd
>     100005    1   tcp  38466  mountd
>     100003    3   tcp  38467  nfs
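>
> As a sanity check against the working host, the export can be mounted
> directly (a sketch; gluster's NFS server only speaks NFSv3 over TCP,
> and /mnt/svn-test is a hypothetical mount point):
>
>     mount -t nfs -o vers=3,tcp rhesproddns01:/svn /mnt/svn-test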
>
>
> Any ideas where to look for errors?
>
>