[Gluster-devel] Want more spurious regression failure alerts... ?

Sachin Pandit spandit at redhat.com
Tue Jun 17 06:26:38 UTC 2014


One more spurious failure.

./tests/bugs/bug-1038598.t                      (Wstat: 0 Tests: 28 Failed: 1)
  Failed test:  28
Files=237, Tests=4632, 4619 wallclock secs ( 2.13 usr  1.48 sys + 832.41 cusr 697.97 csys = 1533.99 CPU)
Result: FAIL

Patch : http://review.gluster.org/#/c/8060/
Build URL : http://build.gluster.org/job/rackspace-regression-2GB/186/consoleFull
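
For anyone wanting to chase this locally, re-running just the one failing test
under prove (the harness behind the summary line above) is usually the quickest
check.  A minimal sketch, assuming a built glusterfs source tree and root
(most of these tests create and mount volumes); looping it a few times helps
show whether the failure really is spurious:

  # Re-run only the failing test with verbose TAP output.
  cd /path/to/glusterfs          # top of a built source tree (adjust)
  sudo prove -v ./tests/bugs/bug-1038598.t

  # Run it a handful of times to see how often it trips.
  for i in $(seq 1 10); do
      sudo prove ./tests/bugs/bug-1038598.t || echo "failed on run $i"
  done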

~ Sachin.


----- Original Message -----
From: "Justin Clift" <justin at gluster.org>
To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
Cc: "Gluster Devel" <gluster-devel at gluster.org>
Sent: Sunday, June 15, 2014 3:55:05 PM
Subject: Re: [Gluster-devel] Want more spurious regression failure alerts... ?

On 15/06/2014, at 3:36 AM, Pranith Kumar Karampuri wrote:
> On 06/13/2014 06:41 PM, Justin Clift wrote:
>> Hi Pranith,
>> 
>> Do you want me to keep sending you spurious regression failure
>> notifications?
>> 
>> There are a fair few of them, aren't there?
> I am doing one run on my VM and will get back with the ones that fail there. You can also do the same on your machine.

Cool, that should help. :)

These are the spurious failures found when running the rackspace-regression-2GB
tests over Friday and yesterday:

  * bug-859581.t -- SPURIOUS
    * 4846 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140614:14:33:41.tgz
    * 6009 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:20:24:58.tgz
    * 6652 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:22:04:16.tgz
    * 7796 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:14:22:53.tgz
    * 7987 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:15:21:04.tgz
    * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz
    * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
    * 8054 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:13:15:50.tgz
    * 8062 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:13:28:48.tgz

  * mgmt_v3-locks.t -- SPURIOUS
    * 6483 - build.gluster.org -> http://build.gluster.org/job/regression/4847/consoleFull
    * 6630 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140614:15:42:39.tgz
    * 6946 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:20:57:27.tgz
    * 7392 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:13:57:20.tgz
    * 7852 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:19:23:17.tgz
    * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
    * 8015 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:14:26:01.tgz
    * 8048 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:18:13:07.tgz

  * bug-918437-sh-mtime.t -- SPURIOUS
    * 6459 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140614:18:28:43.tgz
    * 7493 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:10:30:16.tgz
    * 7987 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:14:23:02.tgz
    * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz

  * fops-sanity.t -- SPURIOUS
    * 8014 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:18:18:33.tgz
    * 8066 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:21:35:57.tgz

  * bug-857330/xml.t -- SPURIOUS
    * 7523 - logs may be hard to parse, since they also contain data from other failures for this CR
    * 8029 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:16:46:03.tgz

If we resolve these five, our regression testing should be a *lot* more
predictable. :)
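
If anyone wants to dig into one of these, the log bundles above are plain
tarballs.  A rough sketch of pulling one down and skimming it for errors
(the URL is just the first bug-859581 run listed; the " E " pattern matches
the error severity field in glusterfs log lines):

  mkdir bug-859581-logs && cd bug-859581-logs
  curl -O http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140614:14:33:41.tgz
  tar xzf glusterfs-logs-20140614:14:33:41.tgz
  # Skim for error-level messages across whatever the tarball unpacked.
  grep -rn " E " . | less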

The text file attached to this email has the bulk test results.  It was
manually cut-and-pasted from the browser into the text doc, so be wary of
possible typos. ;)


> Give the output of "for i in `cat problematic-ones.txt`; do echo $i $(git log $i | grep Author | tail -1); done"
>> 
>> Maybe we should make 1 BZ for the lot, and attach the logs
>> to that BZ for later analysis?
> I am already using 1092850 for this.

Good info. :)
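
For reference, the one-liner Pranith asked for above unrolls to something
like this (a sketch; it assumes problematic-ones.txt holds test paths
relative to the repo root, and tail -1 picks the oldest Author: line in
each file's history):

  while read -r t; do
      # Print the test path plus the oldest "Author:" line in its git log,
      # i.e. a rough guess at who introduced the test.
      echo "$t $(git log "$t" | grep Author | tail -1)"
  done < problematic-ones.txt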

+ Justin



--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift


_______________________________________________
Gluster-devel mailing list
Gluster-devel at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel

