[Gluster-devel] Spurious regression of tests/basic/mgmt_v3-locks.t

Atin Mukherjee amukherj at redhat.com
Fri Oct 31 04:47:28 UTC 2014



On 08/24/2014 11:41 PM, Justin Clift wrote:
> On 24/08/2014, at 11:05 AM, Vijay Bellur wrote:
> <snip>
>>
>> On Sat, Aug 23, 2014 at 12:02 PM, Harshavardhana <harsha at harshavardhana.net> wrote:
>> On Fri, Aug 22, 2014 at 10:23 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
>>> IIRC, we were marking the verified vote as +1 in case of a known
>>> spurious failure; can't we continue to do the same for the known
>>> spurious failures, just to unblock patches from getting merged till
>>> the problems are resolved?
>>
>> While it's understood that such is the case, the premise is rather
>> wrong - we should re-run a spurious failure and get the "+1", since
>> we know it only fails spuriously :-). If it fails consistently then
>> there is something odd with the patch. All it requires is another
>> trigger in Jenkins.
>>
>> +1. Providing a manual verified vote for spurious test failures is an interim workaround and should not be relied upon for an extended period of time. That is one of the prime reasons why only a very few folks can provide a +1 verified vote.
>>
>> In addition, we cannot let a test case with spurious failure(s) remain in the repository for long. Carrying such test cases can only confuse those who are not aware of the known spurious failures. We need a better turnaround time for fixing such test cases, or we should temporarily drop them from the repository.
> 
> I'd be kind of concerned about dropping the test case instead of it
> being fixed.  It sort of seems like these last few spurious failures
> may be due to subtle bugs in GlusterFS (my impression :>), so
> probably better to get them fixed. :)

Justin,

For the last three runs, I've observed the same failure. I think it's
really time to debug this without any further delay. Can you please
share a Rackspace machine so that I can debug this issue?

Xavi,

Some of the ec regressions are also failing spuriously [1], though
locally all of them pass.

[1] http://build.gluster.org/job/rackspace-regression-2GB-triggered/2320/consoleFull
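The re-run approach Harshavardhana describes above can be sketched as a small
shell helper: only treat a test as a real failure if it fails on every
attempt, otherwise call it spurious. This is just an illustration, not part
of the Jenkins setup; "run_test" stands in for an actual regression run
(e.g. a single .t file), and the function name and argument handling are my
own invention.

```shell
#!/bin/sh
# Sketch: re-run a possibly-spurious test and treat it as a real
# failure only if it fails consistently. The first argument is the
# command to run (a placeholder for a real regression test); the
# optional second argument is the number of attempts (default 2).

retry_test() {
    cmd=$1
    attempts=${2:-2}
    i=1
    while [ "$i" -le "$attempts" ]; do
        if $cmd; then
            echo "pass on attempt $i"
            return 0    # passed at least once: likely spurious
        fi
        i=$((i + 1))
    done
    echo "failed $attempts consecutive times: likely a real failure"
    return 1
}

retry_test true                              # passes on the first attempt
retry_test false 2 || echo "would block the patch"
```

The point is the same as in the thread: a spurious failure clears itself on
the second trigger, while a patch that genuinely breaks the test keeps
failing and gets flagged.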

~Atin


> 
> Regards and best wishes,
> 
> Justin Clift
> 
> --
> GlusterFS - http://www.gluster.org
> 
> An open source, distributed file system scaling to several
> petabytes, and handling thousands of clients.
> 
> My personal twitter: twitter.com/realjustinclift
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 