[Gluster-infra] [Gluster-devel] NetBSD tests not running to completion.

Jeff Darcy jdarcy at redhat.com
Thu Jan 7 13:54:33 UTC 2016


> > I'd prefer a "defined level of effort" approach which *might* reduce the
> > benefit we derive from NetBSD testing but *definitely* keeps the cost
> > under control.
> 
> Did we identify the worst offenders within the spurious failing tests?
> We could ignore their output on NetBSD (this is how I started)

There do seem to be patterns - ironically, NFS-related tests seem to show up a lot - but I haven't studied this enough to give a detailed answer.  More to the point, is there really much difference between running tests all the time and ignoring certain ones, vs. running them nightly/weekly and triaging the results manually?  Besides resource consumption, I mean.

If we find something in a nightly/weekly test that closer inspection leads us to believe is a generic and serious problem, we should be able to create a Linux reproducer or even block merges by fiat.  Then the only difference is whether we default to allowing merges to occur despite NetBSD failures or default to blocking them.  Either way we can make exceptions.
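For what it's worth, the "ignore their output on NetBSD" approach doesn't have to be elaborate.  A rough sketch of the idea (not our actual run-tests.sh logic; the IGNORED_ON_NETBSD list and the test paths below are made up):

    #!/bin/sh
    # Sketch only: downgrade failures of known-spurious tests to warnings
    # when running on NetBSD; everywhere else they stay fatal.
    # IGNORED_ON_NETBSD and the test paths are placeholders.
    IGNORED_ON_NETBSD="tests/known-spurious-1.t tests/known-spurious-2.t"

    run_one_test() {
        t="$1"
        if prove "$t"; then
            return 0
        fi
        if [ "$(uname -s)" = "NetBSD" ] &&
           echo " $IGNORED_ON_NETBSD " | grep -q " $t "; then
            echo "WARNING: $t failed on NetBSD; result ignored"
            return 0
        fi
        echo "FAIL: $t"
        return 1
    }

Either way, the real work is deciding which tests belong on that list, which is the same triage we'd be doing with nightly/weekly runs anyway.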
