[Gluster-devel] Release 3.12: Glusto run status
spandura at redhat.com
Mon Aug 28 05:05:51 UTC 2017
I sent a patch to fix this issue last week:
I will send another patch to move all the hard-coded timeouts into one place and make them configurable.
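A minimal sketch of what "configurable timeouts" could look like (the function name and environment-variable scheme here are hypothetical, not the actual glusto-tests API):

```python
import os

def get_timeout(name, default):
    """Return a timeout in seconds, overridable via an environment
    variable, e.g. GLUSTO_TIMEOUT_PROCESS_ONLINE=60 (hypothetical
    naming scheme, not part of glusto-tests)."""
    return int(os.environ.get("GLUSTO_TIMEOUT_" + name.upper(), default))

# Default used when waiting for brick processes to come online,
# instead of a literal sleep/retry count scattered through the tests.
PROCESS_ONLINE_TIMEOUT = get_timeout("process_online", 30)
```

With something like this, a CI job can raise a single timeout without patching test code.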
On Mon, Aug 28, 2017 at 8:57 AM, Nigel Babu <nigelb at redhat.com> wrote:
> Is this timeout configurable? Or is it hard-coded into the glusto-tests?
> On Sat, Aug 26, 2017 at 1:59 AM, Shyam Ranganathan <srangana at redhat.com>
>> Nigel was kind enough to kick off a glusto run on 3.12 head a couple of
>> days back. The status can be seen here.
>> The run failed, but it got further than Glusto does on master (see ).
>> Not that this is a consolation, but just stating the fact.
>> The run failed at:
>> 17:05:57 functional/bvt/test_cvt.py::TestGlusterHealSanity_dispersed_glusterfs::test_self_heal_when_io_in_progress FAILED
>> The test case failed due to,
>> 17:10:28 E AssertionError: ('Volume %s : All process are not online', 'testvol_dispersed')
>> The test case can be seen here, and the failure happened because
>> Glusto did not wait long enough for the down brick to come back up: it
>> waited for 10 seconds, but the brick came up after 12 seconds (or
>> within the same second as the check for it being up). The log snippets
>> pointing to this problem are here. In short, no real bug or issue has
>> been found to cause the failure so far.
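The fix being discussed amounts to replacing a fixed wait with a poll loop bounded by a (larger, configurable) timeout. A hedged sketch, with a hypothetical `check_online` predicate standing in for whatever glusto-tests uses to query brick processes:

```python
import time

def wait_for_online(check_online, timeout=30, interval=2):
    """Poll check_online() until it returns True or the timeout expires.

    Unlike a fixed 10-second sleep, a brick that comes up after 12
    seconds still passes, as long as the timeout is generous enough.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_online():
            return True
        time.sleep(interval)
    # One final check at the deadline, in case the brick came up
    # within the same second as the last poll.
    return check_online()
```

The final check mirrors the race described above, where the brick came up "within the same second as the check for it being up".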
>> Glusto as a gating factor for this release was desirable; even so,
>> having gotten this far on 3.12 does help.
>> @nigel, we could increase the timeout between bringing the brick up
>> and checking whether it is up, and then try another run. Let me know if
>> that works, and what you need from me to get this going.
>>  Glusto 3.12 run: https://ci.centos.org/view/Glu
>>  Glusto on master: https://ci.centos.org/view/Glu
>>  Failed test case: https://ci.centos.org/view/Glu
>>  Log analysis pointing to the failed check:
>> "Releases are made better together"
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org