[Gluster-Maintainers] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

Amar Tumballi Suryanarayan atumball at redhat.com
Mon Feb 25 18:11:34 UTC 2019


Hi all,

We invite our users and developers to help validate the
‘glusterfs-6.0rc’ build in their use cases, especially around upgrade,
stability, and performance.

Some of the key highlights of the release are listed in the release-notes draft
<https://github.com/gluster/glusterfs/blob/release-6/doc/release-notes/6.0.md>.
Please note that some features are being dropped in this release, so it is
critical to verify that your setup is not affected. Also, the new default
lru-limit option on FUSE mounts should help control the memory usage of
client processes for inodes. All good reasons to give the release a shot in
your test setup.
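
If you want to exercise the lru-limit option explicitly, a mount sketch
along these lines should work (the server, volume, and mount-point names,
and the limit value, are placeholders for your own setup):

```shell
# "server1", "testvol", "/mnt/testvol", and the limit value are placeholders.
# lru-limit caps the number of inodes the FUSE client keeps cached.
mount -t glusterfs -o lru-limit=65536 server1:/testvol /mnt/testvol

# Watch client memory while running your workload:
top -p "$(pgrep -f 'glusterfs.*testvol')"
```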

If you are a developer using the gfapi interface to integrate with other
projects, note that there are some signature changes, so please make sure
your project works with the latest release. Even if you only use a project
that depends on gfapi, report any errors you hit with the new RPMs. We will
help fix them.
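
As a quick smoke test for the rebuild, a minimal gfapi client like the one
below (the volume and server names are placeholders) can confirm that your
project still compiles, links, and connects against the 6.0rc headers and
libraries; it needs a reachable Gluster volume to succeed:

```c
/* Minimal gfapi smoke test; build with:
 *   gcc gfapi_smoke.c -o gfapi_smoke $(pkg-config --cflags --libs glusterfs-api)
 * "testvol" and "server1" are placeholders for your own volume and host.
 */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    /* Create a handle for the volume and point it at a volfile server. */
    glfs_t *fs = glfs_new("testvol");
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "server1", 24007);

    /* Fetch the volfile and bring up the client graph. */
    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed; is the volume reachable?\n");
        glfs_fini(fs);
        return 1;
    }

    printf("connected to volume\n");
    glfs_fini(fs);
    return 0;
}
```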

As part of the test days, we want to focus on testing the latest upcoming
release, i.e. GlusterFS-6. Gluster volunteers will be on the #gluster
channel on Freenode to assist. Some of the key things we are looking for in
bug reports are:

   - See if the upgrade from your current version to 6.0rc is smooth, and
     works as documented. Report bugs against the process, or against the
     documentation, if you find a mismatch.
   - Functionality is all as expected for your use case, and there are no
     issues with the actual applications you would run in production.
   - Performance has not degraded in your use case.
      - While we have added some performance options to the code, not all
        of them are turned on, as that has to be decided per use case.
      - Make sure the default setup performs at least as well as your
        current version.
      - Try out a few options mentioned in the release notes (especially
        --auto-invalidation=no) and see if they help performance.
   - While doing all of the above, also check the following:
      - See if the log files make sense, and are not flooded with
        “for developers only” type messages.
      - Get ‘profile info’ output from the old version and the new one, and
        see if anything is out of line with normal expectations. Check with
        us on the numbers.
      - Get a ‘statedump’ when you hit issues. Try to make sense of it, and
        raise a bug if you don’t understand it completely.
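
For the ‘profile info’ and ‘statedump’ checks above, the usual CLI flow
looks like this (the volume name is a placeholder; run it once on the old
version and once on 6.0rc so the outputs can be compared):

```shell
# "testvol" is a placeholder volume name.
# Enable profiling, run your workload, then capture the counters:
gluster volume profile testvol start
gluster volume profile testvol info > profile-6.0rc.txt
gluster volume profile testvol stop

# Capture a statedump of the brick processes (files land under
# /var/run/gluster by default):
gluster volume statedump testvol
```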

Process expected on test days
<https://hackmd.io/YB60uRCMQRC90xhNt4r6gA?both#Process-expected-on-test-days>:

   - We have a tracker bug
     <https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0>[0]; we will
     attach all ‘blocker’ bugs to it.
   - Use this link to report bugs, so that we have more metadata around the
     bugzilla entries: Click Here
     <https://bugzilla.redhat.com/enter_bug.cgi?blocked=1672818&bug_severity=high&component=core&priority=high&product=GlusterFS&status_whiteboard=gluster-test-day&version=6>
     [1]
   - The test cases to be run are listed in this sheet
     <https://docs.google.com/spreadsheets/d/1AS-tDiJmAr9skK535MbLJGe_RfqDQ3j1abX1wtjwpL4/edit?usp=sharing>[2];
     please add to it, update it, and keep it current to reduce duplicated
     effort.

Let’s make this release a success together.

Also, check whether we have covered the open issues from the weekly
untriaged bugs list
<https://lists.gluster.org/pipermail/gluster-devel/2019-February/055874.html>
[3].

For details on the build and RPMs, check this email
<https://lists.gluster.org/pipermail/gluster-devel/2019-February/055875.html>
[4].

Finally, the dates :-)

   - Wednesday - Feb 27th, and
   - Thursday - Feb 28th

Note that our goal is to identify as many issues as possible in upgrade and
stability scenarios, and, if any blockers are found, to make sure we
release with fixes for them, so that every Gluster user can feel
comfortable upgrading to 6.0.

Regards,
Gluster Ants.

-- 
Amar Tumballi (amarts)