[Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

Yaniv Kaul ykaul at redhat.com
Wed Apr 3 12:12:16 UTC 2019


On Wed, Apr 3, 2019 at 2:53 PM Michael Scherer <mscherer at redhat.com> wrote:

> Le mercredi 03 avril 2019 à 16:30 +0530, Atin Mukherjee a écrit :
> > On Wed, Apr 3, 2019 at 11:56 AM Jiffin Thottan <jthottan at redhat.com>
> > wrote:
> >
> > > Hi,
> > >
> > > is_nfs_export_available is just a wrapper around "showmount"
> > > command AFAIR.
> > > I saw following messages in console output.
> > >  mount.nfs: rpc.statd is not running but is required for remote
> > > locking.
> > > 05:06:55 mount.nfs: Either use '-o nolock' to keep locks local, or
> > > start
> > > statd.
> > > 05:06:55 mount.nfs: an incorrect mount option was specified
> > >
> > > To me it looks like rpcbind may not be running on the machine.
> > > Usually rpcbind starts automatically on boot; I don't know
> > > whether it can fail to come up here.
> > >
> >
> > That's precisely the question: why are we suddenly seeing this
> > happen so frequently? Today I've seen at least 4 to 5 such failures
> > already.
> >
> > Deepshika, can you please help investigate this?
>
> So in the past this kind of thing did happen with IPv6, so this could
> be a change on AWS and/or an upgrade.
>

We need to enable IPv6, for two reasons:
1. IPv6 is common these days; even if we don't test with it, it should be
there.
2. We should actually test with IPv6.
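
Whether a given builder actually has IPv6 available could be checked on the
node with something like the sketch below. The sysctl path is the standard
Linux location; the function name and fallback logic are mine, not from any
Gluster script:

```shell
#!/bin/sh
# Hedged sketch: check whether IPv6 has been disabled on a node.
# The sysctl path is the standard Linux location; the helper name and
# the fallback logic are illustrative, not from nfs.rc or the CI repo.
ipv6_disabled() {
    f=${1:-/proc/sys/net/ipv6/conf/all/disable_ipv6}
    # If the sysctl file is missing entirely, IPv6 was most likely
    # disabled at boot (e.g. ipv6.disable=1 on the kernel command line).
    [ -r "$f" ] || return 0
    [ "$(cat "$f")" = "1" ]
}
```

On a healthy dual-stack node the function returns non-zero; a setup playbook
could assert that before the NFS tests run.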

I'm not sure, but I suspect we do disable IPv6 here and there; see [1] for an example.
Y.

[1]
https://github.com/gluster/centosci/blob/master/jobs/scripts/glusto/setup-glusto.yml
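
On the showmount side, the statd/rpcbind errors quoted above can be narrowed
down by retrying the export listing while the services are checked. Below is a
hedged sketch of a retry wrapper in the spirit of the is_nfs_export_available
helper mentioned in the thread; the retry logic and argument handling are
illustrative, not the actual nfs.rc code:

```shell
#!/bin/sh
# Hedged sketch of a retry wrapper around showmount, in the spirit of
# is_nfs_export_available from nfs.rc; the retry count, argument handling,
# and sleep interval here are illustrative, not the real test-harness code.
is_nfs_export_available() {
    export_dir=$1
    retries=${2:-10}
    i=0
    while [ "$i" -lt "$retries" ]; do
        # showmount queries rpcbind first, so this also fails when
        # rpcbind itself is down, matching the symptoms in this thread.
        if showmount -e localhost 2>/dev/null | grep -q "$export_dir"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

If this times out while rpcbind is reported inactive on the node, the failure
is the service coming up, not the export itself.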

>
> We are currently investigating a set of failures that happen after
> reboot (resulting in partial network bring-up, causing all kinds of
> weird issues), but it takes some time to verify, and since we lost 33%
> of the team with Nigel's departure, things do not move as fast as before.
>
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel

