[Gluster-users] Unable to start nfs server
David Coulson
david at davidcoulson.net
Mon Mar 5 19:36:58 UTC 2012
nfs.log: http://pastie.org/3528212
SELinux/iptables info is below. FYI, I started up the standard Red Hat NFS server (with Gluster shut down), and it started cleanly and correctly bound to portmap/rpcbind; a quick way to double-check it's back out of the way again is sketched after the iptables dump.
[root at dresproddns02 ~]# getenforce
Permissive
[root at dresproddns02 ~]# iptables-save
# Generated by iptables-save v1.4.7 on Mon Mar 5 14:34:27 2012
*filter
:INPUT ACCEPT [51197275114:5822299813362]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [51184812127:5800516863347]
COMMIT
# Completed on Mon Mar 5 14:34:27 2012
# Generated by iptables-save v1.4.7 on Mon Mar 5 14:34:27 2012
*nat
:PREROUTING ACCEPT [663610:53184995]
:POSTROUTING ACCEPT [4471677:292235853]
:OUTPUT ACCEPT [4471677:292235853]
-A PREROUTING -d 172.31.0.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 169.254.202.2
-A PREROUTING -d 172.31.0.2/32 -p udp -m udp --dport 53 -j DNAT --to-destination 169.254.202.2
COMMIT
# Completed on Mon Mar 5 14:34:27 2012
# Generated by iptables-save v1.4.7 on Mon Mar 5 14:34:27 2012
*mangle
:PREROUTING ACCEPT [330516307:119149868984]
:INPUT ACCEPT [330469748:119144996058]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [319783522:99640884653]
:POSTROUTING ACCEPT [320799566:99775260122]
-A PREROUTING -d 172.31.0.0/32 -i bond0 -p tcp -m tcp --dport 5222 -j MARK --set-xmark 0x2bc/0xffffffff
-A PREROUTING -d 172.31.0.0/32 -i bond0 -p tcp -m tcp --dport 5262 -j MARK --set-xmark 0x2bd/0xffffffff
-A PREROUTING -d 172.31.0.0/32 -i bond0 -p tcp -m multiport --dports 9091,9091 -j MARK --set-xmark 0x2be/0xffffffff
-A PREROUTING -d 172.31.0.0/32 -i bond0 -p tcp -m tcp --dport 5222 -j MARK --set-xmark 0x2bc/0xffffffff
-A PREROUTING -d 172.31.0.0/32 -i bond0 -p tcp -m tcp --dport 5262 -j MARK --set-xmark 0x2bd/0xffffffff
-A PREROUTING -d 172.31.0.0/32 -i bond0 -p tcp -m multiport --dports 9091,9091 -j MARK --set-xmark 0x2be/0xffffffff
-A PREROUTING -d 172.31.0.64/32 -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x103/0xffffffff
COMMIT
# Completed on Mon Mar 5 14:34:27 2012
[root at dresproddns02 ~]#
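
For what it's worth, a rough way to make sure the kernel NFS server stays out of Gluster's way afterwards (RHEL6 service names assumed, not verified here):

service nfs stop && chkconfig nfs off   # Gluster's NFS server conflicts with the kernel one on program 100003
rpcinfo -t localhost 100003 3           # probe NFS v3 over TCP; should fail once nothing is registered
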
On Mar 5, 2012, at 2:05 PM, Bryan Whitehead wrote:
> Is SELinux running? iptables?
>
> Can you post the nfs.log from /var/log/glusterfs to http://pastie.org/ ?
>
> On Mon, Mar 5, 2012 at 3:59 AM, David Coulson <david at davidcoulson.net> wrote:
> Yep.
>
> [root at dresproddns01 ~]# service glusterd stop
> Stopping glusterd: [ OK ]
>
> [root at dresproddns01 ~]# ps ax | grep nfs
> 120494 pts/0 S+ 0:00 grep nfs
>
> 2167119 ? S 0:00 [nfsiod]
> [root at dresproddns01 ~]# service rpcbind stop
> Stopping rpcbind: [ OK ]
>
> [root at dresproddns01 ~]# rpcinfo -p
> rpcinfo: can't contact portmapper: RPC: Remote system error - No such file or directory
> [root at dresproddns01 ~]# service rpcbind start
> Starting rpcbind: [ OK ]
> [root at dresproddns01 ~]# service glusterd start
> Starting glusterd: [ OK ]
>
> [root at dresproddns01 ~]# rpcinfo -p
> program vers proto port service
> 100000 4 tcp 111 portmapper
> 100000 3 tcp 111 portmapper
> 100000 2 tcp 111 portmapper
> 100000 4 udp 111 portmapper
> 100000 3 udp 111 portmapper
> 100000 2 udp 111 portmapper
>
> Note that I waited a short while between the last two steps. FYI, this is RHEL6 (the two systems that work are RHEL6 too, so I'm not sure it matters much).
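>
> For reference, the ordering I'd expect, with a rough (unverified) pause for the Gluster NFS server to come up:
>
> service rpcbind start
> service glusterd start
> sleep 30                            # rough guess; give the Gluster NFS server time to register
> rpcinfo -p | grep -E 'nfs|mountd'   # should now list programs 100003 and 100005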
>
>
> On 3/5/12 3:27 AM, Bryan Whitehead wrote:
>>
>> Did you start the portmap service before you started Gluster?
>>
>> On Sun, Mar 4, 2012 at 11:53 AM, David Coulson <david at davidcoulson.net> wrote:
>> I have four systems with multiple 4-way replica volumes. I'm migrating a number of volumes from FUSE to NFS for performance reasons.
>>
>> My first two hosts seem to work nicely, but the other two won't start the NFS services properly. I looked through nfs.log, but it gives no indication of why the server did not register with rpcbind. I'm presuming I have a misconfiguration on two of the systems, but there's no clear indication of what is failing.
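>>
>> (A direct probe that may help, using the standard RPC program numbers - 100003 is nfs, 100005 is mountd:)
>>
>> rpcinfo -t localhost 100005 3   # mountd v3 over TCP; errors out if nothing is registered
>> rpcinfo -t localhost 100003 3   # nfs v3 over TCP; same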
>>
>> Here is an example from a host which does not work:
>>
>> [root at dresproddns01 ~]# rpcinfo -p
>> program vers proto port service
>> 100000 4 tcp 111 portmapper
>> 100000 3 tcp 111 portmapper
>> 100000 2 tcp 111 portmapper
>> 100000 4 udp 111 portmapper
>> 100000 3 udp 111 portmapper
>> 100000 2 udp 111 portmapper
>> [root at dresproddns01 ~]# ps ax | grep nfs
>> 2167119 ? S 0:00 [nfsiod]
>> 2738268 ? Ssl 0:00 /opt/glusterfs/3.2.5/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
>> 2934228 pts/0 S+ 0:00 grep nfs
>> [root at dresproddns01 ~]# netstat -ntlp | grep 2738268
>> tcp 0 0 0.0.0.0:38465 0.0.0.0:* LISTEN 2738268/glusterfs
>> tcp 0 0 0.0.0.0:38466 0.0.0.0:* LISTEN 2738268/glusterfs
>> tcp 0 0 0.0.0.0:38467 0.0.0.0:* LISTEN 2738268/glusterfs
>>
>> [root at dresproddns01 ~]# gluster volume info svn
>>
>> Volume Name: svn
>> Type: Replicate
>> Status: Started
>> Number of Bricks: 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: rhesproddns01:/gluster/svn
>> Brick2: rhesproddns02:/gluster/svn
>> Brick3: dresproddns01:/gluster/svn
>> Brick4: dresproddns02:/gluster/svn
>> Options Reconfigured:
>> performance.client-io-threads: 1
>> performance.flush-behind: on
>> network.ping-timeout: 5
>> performance.stat-prefetch: 1
>> nfs.disable: off
>> nfs.register-with-portmap: on
>> auth.allow: 10.250.53.*,10.252.248.*,169.254.*,127.0.0.1
>> performance.cache-size: 256Mb
>> performance.write-behind-window-size: 128Mb
>>
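>> (For reference, the NFS-related options above would be set with something along these lines - gluster 3.2 syntax:)
>>
>> gluster volume set svn nfs.disable off
>> gluster volume set svn nfs.register-with-portmap on
>>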
>> The only obvious difference with a host which does work is this:
>>
>> [root at rhesproddns01 named]# rpcinfo -p
>> program vers proto port service
>> 100000 4 tcp 111 portmapper
>> 100000 3 tcp 111 portmapper
>> 100000 2 tcp 111 portmapper
>> 100000 4 udp 111 portmapper
>> 100000 3 udp 111 portmapper
>> 100000 2 udp 111 portmapper
>> 100005 3 tcp 38465 mountd
>> 100005 1 tcp 38466 mountd
>> 100003 3 tcp 38467 nfs
>>
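>> (So a quick way to compare hosts is something like:)
>>
>> rpcinfo -p | grep -E 'nfs|mountd'   # three lines on a working host, nothing on a broken one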
>>
>> Any ideas where to look for errors?
>>
>>