[Gluster-devel] Defining a "good build"
Nigel Babu
nigelb at redhat.com
Mon Mar 6 15:36:53 UTC 2017
Hello folks,
At some point in the distant future, we want to be able to say definitively
that we have a good Gluster build. This conviction needs to be backed by tests
that we run on our builds to confirm that they are good. This conversation is
meant to tease out a definition of a good build. That definition will help us
decide what tests we need to confirm that a build is indeed good. This is a
very important thing to know pre-release.
I started a conversation at the start of February with a few developers to
define a good build. Now is a good time to take this discussion public so we
can narrow the definition down and use it to focus our testing efforts.
Most people, when they think about this topic, think of performance. We
should test functionality before performance. It makes sense to test
performance once we can confirm that the setup we recommend actually works.
Otherwise, we're just assuming it works until proven otherwise.
A good build, to me, would be one that confirms that
* The packages install and upgrade correctly (packaging bits).
* Mounts and volume types work.
* Integrations that we promise work do work.
* Upgrades work without causing data loss.
* The configurations we focus on work, and we can verify that they actually
  do.
* Performance for these configurations has not degraded from the last release.
Jeff recommended we begin with configurations for these scenarios (there's a
rough sketch of what one of them could look like as a test right after the
list):
* many large files, sequential read (media service)
* many large files, sequential write (video/IoT/log archiving)
* few large files, random read/write (virtual machines)
* many small directories, read/write, snapshots (containers)
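
To make these scenarios a bit more concrete, here's a minimal sketch (plain
Python, not Glusto) of how the "many large files, sequential write" case could
be exercised as a functional check. The mount point, file count, and sizes are
made up for illustration; a real test would create and mount the volume first
and use realistic sizes.

# Rough sketch only, not a Glusto test. Assumes a Gluster volume is already
# FUSE-mounted at MOUNT (hypothetical path). Exercises "many large files,
# sequential write", then reads everything back and verifies checksums.
import hashlib
import os

MOUNT = "/mnt/gluster-archive"   # hypothetical mount point
FILE_COUNT = 10                  # small numbers for illustration only
FILE_SIZE = 64 * 1024 * 1024     # 64 MiB per file
CHUNK = 1024 * 1024

def write_file(path):
    """Write FILE_SIZE random bytes sequentially, return their checksum."""
    digest = hashlib.sha256()
    with open(path, "wb") as f:
        remaining = FILE_SIZE
        while remaining > 0:
            chunk = os.urandom(min(CHUNK, remaining))
            f.write(chunk)
            digest.update(chunk)
            remaining -= len(chunk)
    return digest.hexdigest()

def read_file(path):
    """Read the file back sequentially, return the checksum of what we got."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            digest.update(chunk)
    return digest.hexdigest()

for i in range(FILE_COUNT):
    path = os.path.join(MOUNT, "archive-%04d.bin" % i)
    assert write_file(path) == read_file(path), "data mismatch on " + path
print("sequential write/read check passed")
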
This isn't achievable in a single day. Here's what's good to focus on:
* Package installs we already test; package upgrades we don't test yet. This
  is something we can add easily as part of our Glusto tests. I mean
  public-facing tests here: our users should be able to verify our claim that
  it works. (There's a rough sketch of an install/upgrade smoke test right
  after this list.)
* The mounts and volume types are covered by QE's verification tests. Shwetha
  has done some good work here and we have a decent number of tests that
  confirm everything works. (The second sketch after this list shows the kind
  of volume smoke test I have in mind.)
* I'd say we pick *one* use case and write down our recommended configurations
  for that type of workload. Then we write a test that sets up Gluster in that
  configuration and verifies that everything works. Considering we're still
  figuring out Glusto, this is a good goal to begin with. Shyam and I are
  planning to tackle the video archive workload this cycle.
* Integration testing is a conversation I'd like to start with the GEDI team.
  For projects that we support, we need to confirm that we haven't broken
  anything they depend on. Projects I can think of off the top of my head:
  oVirt, container workloads, and Tendrl integration.
* Upgrades are something we don't test at all. It will be useful to write down
  how we recommend doing upgrades and then how to write those tests. Perhaps
  each scenario's testing needs to cover how it handles upgrades.
* Running real performance testing requires some specialized hardware we don't
  yet have. In the meantime, we can find and fix the memory leaks that the
  Coverity scans report (86 as of this email). We could also build Gluster
  with ASAN and run our test suite to see if that catches any memory issues.
  (The last sketch after this list shows roughly what that would look like.)
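
As a starting point for the upgrade testing mentioned above, here's a rough
sketch of what an install-then-upgrade smoke test could look like. This is
plain Python around yum, not Glusto, and it assumes a yum-based machine where
repos for the previous release and the build under test are already
configured; treat the whole thing as illustrative.

# Rough sketch of an install -> upgrade -> sanity-check flow. Assumes root on
# a yum-based distro with repos for both the previous release and the build
# under test already enabled (that setup is not shown here).
import subprocess

def run(cmd):
    """Run a command and fail loudly if it exits non-zero."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

def installed_version():
    out = subprocess.check_output(["gluster", "--version"])
    return out.decode().splitlines()[0]

# 1. Install the previous release.
run(["yum", "-y", "install", "glusterfs-server"])
before = installed_version()

# 2. Upgrade to the build under test.
run(["yum", "-y", "update", "glusterfs-server"])
after = installed_version()

# 3. The version must have changed and glusterd must still start.
assert before != after, "upgrade did not change the installed version"
run(["systemctl", "start", "glusterd"])
run(["systemctl", "is-active", "glusterd"])
print("install/upgrade smoke test passed: %s -> %s" % (before, after))
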
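In the same spirit, a volume/mount smoke test could look roughly like the
sketch below. Host names, brick paths, and the volume name are all
hypothetical, and a real Glusto test would use the libraries Shwetha has been
building rather than shelling out to the CLI.

# Rough sketch of a create/start/mount/write/cleanup smoke test using the
# gluster CLI directly. Hosts, bricks, and paths are made up.
import subprocess

GLUSTER = ["gluster", "--mode=script"]   # --mode=script skips interactive prompts
VOLNAME = "smoke-vol"
BRICKS = ["server1:/bricks/b1", "server2:/bricks/b2"]   # hypothetical bricks
MOUNT = "/mnt/" + VOLNAME

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Create and start a simple replica 2 volume.
run(GLUSTER + ["volume", "create", VOLNAME, "replica", "2"] + BRICKS)
run(GLUSTER + ["volume", "start", VOLNAME])

# FUSE-mount it and make sure a basic write is visible.
run(["mkdir", "-p", MOUNT])
run(["mount", "-t", "glusterfs", "server1:/" + VOLNAME, MOUNT])
with open(MOUNT + "/hello.txt", "w") as f:
    f.write("hello from the smoke test\n")
with open(MOUNT + "/hello.txt") as f:
    assert f.read().startswith("hello")

# Clean up.
run(["umount", MOUNT])
run(GLUSTER + ["volume", "stop", VOLNAME])
run(GLUSTER + ["volume", "delete", VOLNAME])
print("volume smoke test passed")
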
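And for the ASAN idea, the build itself is just the standard autotools flow
with the sanitizer flags added. Something along these lines (paths and flags
are what I'd try first, not a tested recipe):

# Rough sketch: build GlusterFS with AddressSanitizer and run the existing
# regression suite (run-tests.sh in the source tree). Assumes root and a
# checkout at SRC; the flags are the standard GCC/clang ASAN flags.
import os
import subprocess

SRC = os.path.expanduser("~/glusterfs")   # hypothetical checkout location
ASAN = "-g -O1 -fsanitize=address -fno-omit-frame-pointer"

def run(cmd, **extra_env):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd, cwd=SRC, env={**os.environ, **extra_env})

run(["./autogen.sh"])
run(["./configure"], CFLAGS=ASAN, LDFLAGS="-fsanitize=address")
run(["make", "-j4"])
run(["make", "install"])   # the regression tests expect an installed build
run(["./run-tests.sh"], ASAN_OPTIONS="detect_leaks=1:log_path=/var/log/asan.log")
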
This email opens up several areas for conversation, but please remember the
goal of this thread: we want to define a good build.
--
nigelb