[Gluster-infra] [Gluster-devel] rebal-all-nodes-migrate.t always fails now

Nithya Balachandran nbalacha at redhat.com
Fri Apr 5 11:25:58 UTC 2019


On Fri, 5 Apr 2019 at 12:16, Michael Scherer <mscherer at redhat.com> wrote:

> > On Thursday, 4 April 2019 at 18:24 +0200, Michael Scherer wrote:
> > > On Thursday, 4 April 2019 at 19:10 +0300, Yaniv Kaul wrote:
> > > I'm not convinced this is solved. Just had what I believe is a
> > > similar
> > > failure:
> > >
> > > *00:12:02.532* A dependency job for rpc-statd.service failed. See 'journalctl -xe' for details.
> > > *00:12:02.532* mount.nfs: rpc.statd is not running but is required for remote locking.
> > > *00:12:02.532* mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
> > > *00:12:02.532* mount.nfs: an incorrect mount option was specified
> > >
> > > (of course, it can always be my patch!)
> > >
> > > https://build.gluster.org/job/centos7-regression/5384/console
> >
> > Same issue, different builder (206). I will check them all, as the
> > issue is more widespread than I expected (or it popped up since the
> > last time I checked).
>
> Deepshika noticed that the issue came back on one server (builder202)
> after a reboot, so the rpcbind issue is not related to the network
> initscript one, and the RCA continues.
>
> We are looking at another workaround that involves fiddling with the
> socket (until we find out why it uses ipv6 at boot, but not afterwards,
> when ipv6 is disabled).
>

Could this be relevant?
https://access.redhat.com/solutions/2798411
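If it is the rpcbind/IPv6 problem described there, one workaround to try
would be a systemd drop-in that strips the IPv6 listeners from
rpcbind.socket. This is only a sketch, untested on our builders; the
drop-in path is an example and the listener list is assumed from a stock
CentOS 7 rpcbind.socket:

    # /etc/systemd/system/rpcbind.socket.d/no-ipv6.conf  (example path)
    [Socket]
    # an empty assignment resets the inherited listener list;
    # re-add only the local socket and the IPv4 listeners
    ListenStream=
    ListenDatagram=
    ListenStream=/var/run/rpcbind.sock
    ListenStream=0.0.0.0:111
    ListenDatagram=0.0.0.0:111

followed by:

    systemctl daemon-reload
    systemctl restart rpcbind.socket rpcbind.service

That might at least let rpc-statd (and the NFS mounts in the regression
run) start while the real root cause is being tracked down.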


>
> Maybe we could run the test suite on a node without all the ipv6
> disabling to see if that causes an issue?
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel

