[Gluster-devel] Want more spurious regression failure alerts... ?

Justin Clift justin at gluster.org
Sun Jun 15 10:25:05 UTC 2014


On 15/06/2014, at 3:36 AM, Pranith Kumar Karampuri wrote:
> On 06/13/2014 06:41 PM, Justin Clift wrote:
>> Hi Pranith,
>> 
>> Do you want me to keep sending you spurious regression failure
>> notification?
>> 
>> There are a fair few of them, aren't there?
> I am doing one run on my VM and will get back with the list of tests that fail there. You can do the same on your machine.

Cool, that should help. :)

These are the spurious failures found when running the rackspace-regression-2G
tests over Friday and yesterday:

  * bug-859581.t -- SPURIOUS
    * 4846 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140614:14:33:41.tgz
    * 6009 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:20:24:58.tgz
    * 6652 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:22:04:16.tgz
    * 7796 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:14:22:53.tgz
    * 7987 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:15:21:04.tgz
    * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz
    * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
    * 8054 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:13:15:50.tgz
    * 8062 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:13:28:48.tgz

  * mgmt_v3-locks.t -- SPURIOUS
    * 6483 - build.gluster.org -> http://build.gluster.org/job/regression/4847/consoleFull
    * 6630 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140614:15:42:39.tgz
    * 6946 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:20:57:27.tgz
    * 7392 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:13:57:20.tgz
    * 7852 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:19:23:17.tgz
    * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
    * 8015 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:14:26:01.tgz
    * 8048 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:18:13:07.tgz

  * bug-918437-sh-mtime.t -- SPURIOUS
    * 6459 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140614:18:28:43.tgz
    * 7493 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:10:30:16.tgz
    * 7987 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:14:23:02.tgz
    * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz

  * fops-sanity.t -- SPURIOUS
    * 8014 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:18:18:33.tgz
    * 8066 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:21:35:57.tgz

  * bug-857330/xml.t -- SPURIOUS
    * 7523 - logs may be hard to parse, as they also contain other failure data for this CR
    * 8029 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:16:46:03.tgz

If we resolve these five, our regression testing should be a *lot* more
predictable. :)

The text file attached to this email has the bulk of the test results.  It was
manually cut-and-pasted from the browser into the text doc, so be wary of
possible typos. ;)


> Give the output of "for i in `cat problematic-ones.txt`; do echo $i $(git log $i | grep Author | tail -1); done"
>> 
>> Maybe we should make 1 BZ for the lot, and attach the logs
>> to that BZ for later analysis?
> I am already using 1092850 for this.

Good info. :)
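As an aside, here is a sketch of that one-liner in a slightly more robust form
(assuming problematic-ones.txt holds one test path per line).  Using
"git log --reverse" with a --format string makes the "earliest author" intent
explicit, rather than relying on tail -1 of the newest-first log, and the
while/read loop copes with any odd characters in file names:

```shell
# Print each problematic test file alongside its earliest recorded author.
# Assumes problematic-ones.txt lists one tracked path per line.
while read -r t; do
    # --reverse puts the oldest commit first; --format emits one
    # "Author: name <email>" line per commit, so head -1 is the
    # first person who touched the file.
    printf '%s %s\n' "$t" \
        "$(git log --reverse --format='Author: %an <%ae>' -- "$t" | head -1)"
done < problematic-ones.txt
```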

+ Justin

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: Gluster spurious failures.txt
URL: <http://supercolony.gluster.org/pipermail/gluster-devel/attachments/20140615/42c61fce/attachment-0001.txt>

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift


