[Gluster-infra] [Gluster-devel] is_nfs_export_available from nfs.rc failing too often?

Atin Mukherjee amukherj at redhat.com
Wed Apr 3 11:00:42 UTC 2019

On Wed, Apr 3, 2019 at 11:56 AM Jiffin Thottan <jthottan at redhat.com> wrote:

> Hi,
> is_nfs_export_available is just a wrapper around "showmount" command AFAIR.
> I saw following messages in console output.
>  mount.nfs: rpc.statd is not running but is required for remote locking.
> 05:06:55 mount.nfs: Either use '-o nolock' to keep locks local, or start
> statd.
> 05:06:55 mount.nfs: an incorrect mount option was specified
> To me it looks like rpcbind may not be running on the machine.
> Usually rpcbind starts automatically at boot; I don't know whether it
> can fail to start or not.
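For context, the check being discussed can be thought of as a thin wrapper over `showmount -e`. The sketch below is hypothetical and simplified (the real helper lives in tests/nfs.rc and differs in details); `fake_showmount` is a stand-in for the actual `showmount` command so the example is self-contained. The real `showmount` talks to rpc.mountd through rpcbind, which is why a dead rpcbind would make this check fail.

```shell
# Hypothetical sketch of an is_nfs_export_available-style check; the real
# helper is in tests/nfs.rc and this is illustrative only.
# fake_showmount stands in for "showmount -e" so the sketch runs anywhere.
fake_showmount() {
    printf 'Export list for localhost:\n/patchy *\n'
}

# Succeed if the given volume appears in the export list.
is_nfs_export_available() {
    vol=$1
    fake_showmount | grep -qw "/$vol"
}

if is_nfs_export_available patchy; then
    echo "export available"
fi
```

Against a real server, replacing `fake_showmount` with `showmount -e $HOSTNAME` gives the same shape of check; if rpcbind is down, `showmount` itself errors out rather than returning an empty export list.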

That's precisely the question: why are we suddenly seeing this happen so
frequently? Today I've seen at least 4 or 5 such failures already.

Deepshika - Can you please help investigate this?

> Regards,
> Jiffin
> ----- Original Message -----
> From: "Atin Mukherjee" <amukherj at redhat.com>
> To: "gluster-infra" <gluster-infra at gluster.org>, "Gluster Devel" <
> gluster-devel at gluster.org>
> Sent: Wednesday, April 3, 2019 10:46:51 AM
> Subject: [Gluster-devel] is_nfs_export_available from nfs.rc failing too
>       often?
> I'm observing the above test function failing too often, because of which
> the arbiter-mount.t test fails in many regression jobs. Failures weren't
> this frequent earlier. Does anyone know what has changed recently to
> cause these failures in regression? I also hear that when such a failure
> happens a reboot is required; is that true, and if so, why?
> One reference:
> https://build.gluster.org/job/centos7-regression/5340/consoleFull
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
