[Gluster-devel] Spurious failure report for master branch - 2015-03-03

Poornima Gurusiddaiah pgurusid at redhat.com
Wed Mar 4 04:54:55 UTC 2015


A few more test cases are causing spurious failures:

./tests/basic/ec/ec-5-1.t
Failed test:  69

./tests/basic/ec/ec-5-2.t
Failed test:  69

./tests/bugs/disperse/bug-1187474.t
Failed tests:  11-12

./tests/basic/ec/nfs.t
Failed test:  9

The above failures were seen on patches that should have had no functional
effect, i.e. the modified code was never executed because it had no callers.
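
A quick way to double-check that a patch really is dead code like that is to
grep the tree for the symbols it touches and confirm nothing outside the patch
references them. A rough sketch (the function name below is only a placeholder
for whatever the patch actually modified):

    # List every reference to the modified function in the sources;
    # if only the patched file shows up, nothing else calls into the change.
    git grep -n 'some_modified_function' -- '*.c' '*.h'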

Regards,
Poornima

----- Original Message -----
> From: "Justin Clift" <justin at gluster.org>
> To: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Wednesday, March 4, 2015 9:57:00 AM
> Subject: [Gluster-devel] Spurious failure report for master branch - 2015-03-03
> 
> Ran 20 x regression tests on our GlusterFS master branch code
> as of a few hours ago, commit 95d5e60afb29aedc29909340e7564d54a6a247c2.
> 
> 5 of them were successful (25%), 15 of them failed in various ways
> (75%).
> 
> We need to get this down to about 5% or less (preferably 0%), as it's
> killing our development iteration speed.  We're wasting huge amounts
> of time working around this. :(
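> 
> For anyone wanting to reproduce a similar sample locally, a loop along
> these lines should do it (rough sketch only; it assumes a built workspace
> with the stock run-tests.sh from the source tree, and that run-tests.sh
> exits non-zero when any test fails):
> 
>     #!/bin/bash
>     # Run the full regression suite N times and tally pass vs fail,
>     # to estimate the overall spurious-failure rate on a commit.
>     RUNS=20
>     pass=0; fail=0
>     for i in $(seq 1 "$RUNS"); do
>         if ./run-tests.sh > "regression-run-$i.log" 2>&1; then
>             pass=$((pass + 1))
>         else
>             fail=$((fail + 1))
>         fi
>     done
>     echo "passed $pass/$RUNS, failed $fail/$RUNS"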
> 
> 
> Spurious failures
> *****************
> 
>   * 5 x tests/bugs/distribute/bug-1117851.t  (Wstat: 0 Tests: 24 Failed: 1)
>     Failed test:  15
> 
>     This one is causing a 25% failure rate all by itself. :(
> 
>     This needs fixing soon. :)  (For anyone measuring it or verifying a
>     fix, there's a per-test re-run loop sketched after this list.)
> 
> 
>   * 3 x tests/bugs/geo-replication/bug-877293.t  (Wstat: 0 Tests: 15 Failed: 1)
>     Failed test:  11
> 
>   * 2 x tests/basic/afr/entry-self-heal.t  (Wstat: 0 Tests: 180 Failed: 2)
>     Failed tests:  127-128
> 
>   * 1 x tests/basic/ec/ec-12-4.t  (Wstat: 0 Tests: 541 Failed: 2)
>     Failed tests:  409, 441
> 
>   * 1 x tests/basic/fops-sanity.t  (Wstat: 0 Tests: 11 Failed: 1)
>     Failed test:  10
> 
>   * 1 x tests/basic/uss.t  (Wstat: 0 Tests: 160 Failed: 1)
>     Failed test:  26
> 
>   * 1 x tests/performance/open-behind.t  (Wstat: 0 Tests: 17 Failed: 1)
>     Failed test:  17
> 
>   * 1 x tests/bugs/distribute/bug-884455.t  (Wstat: 0 Tests: 22 Failed: 1)
>     Failed test:  11
> 
>   * 1 x tests/bugs/fuse/bug-1126048.t  (Wstat: 0 Tests: 12 Failed: 1)
>     Failed test:  10
> 
>   * 1 x tests/bugs/quota/bug-1038598.t  (Wstat: 0 Tests: 28 Failed: 1)
>     Failed test:  28
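> 
> For measuring an individual flaky test (or verifying a fix for one), a
> per-test loop is handy; something like the below (sketch only; the test
> path and run count are just examples, and prove is the same harness that
> run-tests.sh drives):
> 
>     #!/bin/bash
>     # Run a single .t test repeatedly and report how often it fails.
>     TEST=./tests/bugs/distribute/bug-1117851.t
>     RUNS=50
>     fail=0
>     for i in $(seq 1 "$RUNS"); do
>         prove -vf --timer "$TEST" > "run-$i.log" 2>&1 || fail=$((fail + 1))
>     done
>     echo "$TEST failed $fail out of $RUNS runs"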
> 
> 
> 2 x Coredumps
> *************
> 
>   * http://mirror.salasaga.org/gluster/master/2015-03-03/bulk5/
> 
>     IP - 104.130.74.142
> 
>     This coredump run also failed on:
> 
>       * tests/basic/fops-sanity.t  (Wstat: 0 Tests: 11 Failed: 1)
>         Failed test:  10
> 
>       * tests/bugs/glusterfs-server/bug-861542.t  (Wstat: 0 Tests: 13 Failed: 1)
>         Failed test:  10
> 
>       * tests/performance/open-behind.t  (Wstat: 0 Tests: 17 Failed: 1)
>         Failed test:  17
> 
>   * http://mirror.salasaga.org/gluster/master/2015-03-03/bulk8/
> 
>     IP - 104.130.74.143
> 
>     This coredump run also failed on:
> 
>       * tests/basic/afr/entry-self-heal.t  (Wstat: 0 Tests: 180 Failed: 2)
>         Failed tests:  127-128
> 
>       * tests/bugs/glusterfs-server/bug-861542.t  (Wstat: 0 Tests: 13 Failed: 1)
>         Failed test:  10
> 
> Both VMs are also online, in case they're useful to log into
> for investigation (root / the jenkins slave pw).
> 
> If they're not, please let me know so I can blow them away. :)
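> 
> For whoever picks the cores up: the usual first step is to match each
> core to its binary and dump all thread backtraces, roughly like this
> (sketch only; the core and binary paths below are examples, adjust to
> wherever the cores landed on those VMs and whichever daemon crashed):
> 
>     # See which binary produced the core.
>     file /core.12345
> 
>     # Dump full backtraces of every thread to a file for sharing.
>     gdb --batch -ex 'set pagination off' -ex 'thread apply all bt full' \
>         /usr/local/sbin/glusterfsd /core.12345 > backtrace.txt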
> 
> 
> 1 x hung host
> *************
> 
> Hung on tests/bugs/posix/bug-1113960.t
> 
> root  12497  1290  0 Mar03 ?  S  0:00  \_ /bin/bash /opt/qa/regression.sh
> root  12504 12497  0 Mar03 ?  S  0:00      \_ /bin/bash ./run-tests.sh
> root  12519 12504  0 Mar03 ?  S  0:03          \_ /usr/bin/perl /usr/bin/prove -rf --timer ./tests
> root  22018 12519  0 00:17 ?  S  0:00              \_ /bin/bash ./tests/bugs/posix/bug-1113960.t
> root  30002 22018  0 01:57 ?  S  0:00                  \_ mv /mnt/glusterfs/0/longernamedir1/longernamedir2/longernamedir3/
> 
> This VM (23.253.53.111) is still online + untouched (still hung),
> if someone wants to log in to investigate.  (root / the jenkins
> slave pw)
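> 
> If anyone wants to poke at the hang itself, the quickest start is
> probably to see which syscall the stuck mv is blocked in, and then
> re-run only the offending test once the mount is cleaned up (sketch;
> the PID is the one from the listing above and will differ on a fresh run):
> 
>     # Kernel-side stack of the hung mv process (PID from the ps output above).
>     cat /proc/30002/stack
> 
>     # Re-run just the offending test afterwards.
>     prove -vf --timer ./tests/bugs/posix/bug-1113960.t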
> 
> Hope that's helpful. :)
> 
> Regards and best wishes,
> 
> Justin Clift
> 
> --
> GlusterFS - http://www.gluster.org
> 
> An open source, distributed file system scaling to several
> petabytes, and handling thousands of clients.
> 
> My personal twitter: twitter.com/realjustinclift
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 

