[Cinder.glusterfs.ci] [Third-party-announce] Cinder-GlusterFS CI job - recent failures

Deepak C Shetty deepakcs at redhat.com
Wed Apr 8 04:57:03 UTC 2015

On 04/07/2015 09:28 PM, John Griffith wrote:
> On Tue, Apr 7, 2015 at 6:27 AM, Deepak C Shetty <deepakcs at redhat.com 
> <mailto:deepakcs at redhat.com>>wrote:
>     Hi CI'ers :)
>         Just wanted to send a quick update: the glusterfs CI job
>     (check-tempest-dsvm-full-glusterfs-nv) is currently failing on most
>     patches due to the recently enabled test_volume_boot_pattern,
>     which is failing for the glusterfs backend.
>        I have opened LP bug
>     https://bugs.launchpad.net/cinder/+bug/1441050 to track the issue.
>     Bharat (in CC) is actively working on it.
>       I would like to know whether we continue with the status quo or
>     disable this testcase for glusterfs until this bug is fixed?
>     thanx,
>     deepak
>     _______________________________________________
>     Third-party-announce mailing list
>     Third-party-announce at lists.openstack.org
>     <mailto:Third-party-announce at lists.openstack.org>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/third-party-announce
> Seems the trend for Ceph is to add a skip [1].  Personally I'd like to 
> see some more analysis before just skipping, even better actually see 
> the problem fixed.  For the record, I'm not a fan of immediately 
> skipping/disabling for a single backend. We've been pretty hard on 
> Vendors the last few weeks that weren't running all of the same tests 
> as the reference implementation.  But in the case of Ceph and now 
> Gluster it seems we have "different" standards.
> Don't get me wrong, I'm not opposed to this, and I gave my +1 to the 
> Ceph patch (and would give it to the Gluster patch with more info).  
> I'm just saying however that we need to get some consistency here and 
> treat everybody fairly.  I spent "A LOT" of time this release cycle 
> making sure my device and the LVM device worked properly, 
> significantly more on LVM.
> I proposed a temporary skip for LVM once and it was adamantly 
> rejected.  I then proposed a sleep in Nova for the LVM driver, again 
> rejected.  The response has been "The issue needs to be fixed or at 
> least completely understood".  Same holds true here in my opinion.

Thanks, John, for your detailed opinion :) I am with you on this.

My only intent in skipping was to make sure the CI job doesn't add 
unnecessary noise on the patches, and that the community doesn't think 
wrongly of the CI job, just because 1 of the 300+ tests is failing. We 
are actively working on the fix: we have figured out the problem (see 
the LP bug) and are working on the solution.
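For what it's worth, if we did decide to disable the test temporarily, a backend-conditional skip is the usual shape. Below is a minimal sketch using stdlib unittest; in a real tempest job the backend name would come from the test configuration rather than a module-level constant, and the class/constant names here are assumed stand-ins, not the actual tempest code:

```python
import unittest

# Assumed stand-in for a value that tempest would read from its
# configuration; hard-coded here only to make the sketch runnable.
CONFIGURED_BACKEND = "glusterfs"


class TestVolumeBootPattern(unittest.TestCase):
    """Hypothetical stand-in for the tempest scenario test."""

    @unittest.skipIf(CONFIGURED_BACKEND == "glusterfs",
                     "boot-from-volume fails on glusterfs; see LP bug 1441050")
    def test_volume_boot_pattern(self):
        # The real scenario would boot an instance from a volume here;
        # while the skip is in place this body never runs.
        self.fail("should not run while the skip is in place")
```

The advantage of a conditional skip over deleting or commenting out the test is that the skip reason (with the bug number) shows up in the test results, and the test automatically re-enables once the condition is dropped.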


> Thanks,
> John
> [1]: https://review.openstack.org/#/c/170903/
