[Gluster-users] NFS not start on localhost
Demeter Tibor
tdemeter at itsmart.hu
Sat Oct 18 10:36:36 UTC 2014
Hi,
I've tried the following (roughly the commands sketched just below):
- toggling nfs.disable on and off
- disabling iptables
- stopping and starting the volume
but the result is the same.
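For reference, these are approximately the commands I used (volume name "engine" as below; the firewall step is from memory, so treat it as approximate):

    # toggle the Gluster-internal NFS server off and back on
    gluster volume set engine nfs.disable on
    gluster volume set engine nfs.disable off

    # disable the firewall while testing (CentOS 7 ships firewalld; I stopped the firewall service)
    systemctl stop firewalld

    # stop and start the volume
    gluster volume stop engine
    gluster volume start engine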
When I create a new volume, everything is fine.
After a reboot, however, NFS no longer listens on localhost (only on the server that has brick0).
CentOS 7 with the latest oVirt.
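For completeness, this is a quick way to re-check after a reboot (a sketch, using only the commands already mentioned in this thread):

    # the "NFS Server on localhost" line still shows N/A
    gluster volume status engine

    # look for the reason in the Gluster NFS log, as suggested below
    tail -n 20 /var/log/glusterfs/nfs.log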
Regards,
Tibor
----- Original message -----
> It happens with me sometimes. Try `tail -n 20 /var/log/glusterfs/nfs.log`.
> You will probably find something there that will help your cause. In general,
> if you just wish to start the thing up without going into the why of it, try
> `gluster volume set engine nfs.disable on` followed by `gluster volume set
> engine nfs.disable off`. It does the trick quite often for me because it is
> a polite way to ask mgmt/glusterd to try and respawn the NFS server process
> if need be. But keep in mind that this will cause an (albeit small) service
> interruption for all clients accessing volume engine over NFS.
> Thanks,
> Anirban
> On Saturday, 18 October 2014 1:03 AM, Demeter Tibor <tdemeter at itsmart.hu>
> wrote:
> Hi,
> I have set up a GlusterFS volume with NFS support.
> I don't know why, but after a reboot NFS does not listen on localhost,
> only on gs01.
> [root at node0 ~]# gluster volume info engine
> Volume Name: engine
> Type: Replicate
> Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gs00.itsmart.cloud:/gluster/engine0
> Brick2: gs01.itsmart.cloud:/gluster/engine1
> Options Reconfigured:
> storage.owner-uid: 36
> storage.owner-gid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> auth.allow: *
> nfs.disable: off
> [root at node0 ~]# gluster volume status engine
> Status of volume: engine
> Gluster process                                Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick gs00.itsmart.cloud:/gluster/engine0      50158   Y       3250
> Brick gs01.itsmart.cloud:/gluster/engine1      50158   Y       5518
> NFS Server on localhost                        N/A     N       N/A
> Self-heal Daemon on localhost                  N/A     Y       3261
> NFS Server on gs01.itsmart.cloud               2049    Y       5216
> Self-heal Daemon on gs01.itsmart.cloud         N/A     Y       5223
> Can anybody help me?
> Thanks in advance.
> Tibor