[Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

Sanju Rakonde srakonde at redhat.com
Fri Sep 28 08:31:39 UTC 2018


On Wed, Sep 26, 2018 at 7:53 PM Shyam Ranganathan <srangana at redhat.com>
wrote:

> Hi,
>
> Updates on the release and a shout-out for help are as follows:
>
> RC0 release packages for testing are available; see the thread at [1]
>
> The following activities need to be completed before we can call the
> release GA (i.e., with no major regressions):
>
> 1. Release notes (Owner: release owner (myself), will send out an
> initial version for review and to solicit inputs today)
>
> 2. Testing dashboard to maintain release health (new, thanks Nigel)
>   - Dashboard at [2]
>   - We already have 3 failures here, as follows, which need attention from
> the appropriate *maintainers*:
>     (a)
>
> https://build.gluster.org/job/regression-test-with-multiplex/871/consoleText
>         - Failed with core:
> ./tests/basic/afr/gfid-mismatch-resolution-with-cli.t
>     (b)
>
> https://build.gluster.org/job/regression-test-with-multiplex/873/consoleText
>         - Failed with core: ./tests/bugs/snapshot/bug-1275616.t
>         - Also, the test ./tests/bugs/glusterd/validating-server-quorum.t
> had to be retried
>

The test case ./tests/bugs/glusterd/validating-server-quorum.t had to be
retried because it timed out on the first run.
I went through the logs of the first run and everything looks fine. Looking
at the timestamps, I found that cluster_brick_up_status took the full 45
seconds (PROCESS_UP_TIMEOUT) most of the times it was used. Since we clubbed
many of the glusterd test cases into a single test case, this test might
simply need more time to execute. If it keeps timing out repeatedly, we will
decide what action needs to be taken.

Definition of cluster_brick_up_status, for reference:
function cluster_brick_up_status {
        local vol=$2
        local host=$3
        local brick=$4
        # Query the brick status through the CLI of cluster node $1 and pull
        # the <status> value (1 = up, 0 = down) out of the XML output.
        eval \$CLI_$1 volume status $vol $host:$brick --xml | sed -ne 's/.*<status>\([01]\)<\/status>/\1/p'
}
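
For context, a typical call site in a .t file looks roughly like the
following (a sketch only; the exact volume/host/brick variables used in
validating-server-quorum.t may differ). EXPECT_WITHIN keeps polling the
given check until it returns the expected value or the timeout expires, so
each such line can consume up to PROCESS_UP_TIMEOUT seconds:

# Illustrative only: wait up to $PROCESS_UP_TIMEOUT seconds for the brick
# on cluster node 1 to report status 1 (up) in the volume status XML.
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" cluster_brick_up_status 1 $V0 $H1 $B1/${V0}1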

>     (c)
> https://build.gluster.org/job/regression-test-burn-in/4109/consoleText
>         - Failed with core: ./tests/basic/mgmt_v3-locks.t
>
> 3. Upgrade testing
>   - Need *volunteers* to do the upgrade testing as described in the 4.1
> upgrade guide [3] and to note any differences or needed changes to it
>   - An explicit call-out on *disperse* volumes: we continue to state that
> online upgrade is not possible. Has this been addressed, and can it be
> tested and the documentation improved accordingly?
>
> 4. Performance testing/benchmarking
>   - I will use smallfile and FIO to baseline 3.12 and 4.1 and to test
> RC0 for any major regressions (an illustrative fio invocation follows
> below this item)
>   - If we already know of any, please shout out so that we are aware of
> the problems and of upcoming fixes for them
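
An illustrative fio invocation for such a baseline run; the mount path and
job parameters below are placeholders, not the actual benchmark setup:

# Hypothetical example: sequential-write job against a FUSE-mounted volume.
fio --name=seq-write --directory=/mnt/glusterfs/0 \
    --rw=write --bs=1M --size=1G --numjobs=4 \
    --ioengine=libaio --direct=1 --group_reporting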
>
> 5. Major testing areas
>   - Py3 support: need *volunteers* here to test out the Py3 support
> around the changed python files, if there is not enough coverage for them
> in the regression test suite (a quick smoke-test sketch follows below)
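
A minimal smoke test along these lines, assuming nothing about the actual
test plan: byte-compiling the changed files with Python 3 catches
syntax-level Py3 breakage; the source path below is a placeholder.

# Hypothetical check: byte-compile every python file in the tree with Python 3.
find /path/to/glusterfs -name '*.py' -exec python3 -m py_compile {} +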
>
> Thanks,
> Shyam
>
> [1] Packages for RC0:
> https://lists.gluster.org/pipermail/maintainers/2018-September/005044.html
>
> [2] Release testing health dashboard:
> https://build.gluster.org/job/nightly-release-5/
>
> [3] 4.1 upgrade guide:
> https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/
>
> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
> > Hi,
> >
> > Release 5 has been branched today. To backport fixes to the upcoming 5.0
> > release, use the tracker bug [1].
> >
> > We intend to roll out the RC0 build by end of tomorrow for testing, unless
> > the set of usual cleanup patches (op-version, some messages, gfapi
> > version) runs into any trouble.
> >
> > RC1 would be around 24th of Sep. with final release tagging around 1st
> > of Oct.
> >
> > I would like to encourage everyone to test out the bits as appropriate
> > and post updates to this thread.
> >
> > Thanks,
> > Shyam
> >
> > [1] 5.0 tracker:
> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.0
> > _______________________________________________
> > maintainers mailing list
> > maintainers at gluster.org
> > https://lists.gluster.org/mailman/listinfo/maintainers
> >
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>


-- 
Thanks,
Sanju