[Gluster-devel] Regression tests and improvement ideas
Anand Nekkunti
anekkunt at redhat.com
Wed Jun 17 16:49:55 UTC 2015
Hi All,
I have a suggestion to improve the regression tests.
Issue: we call the cleanup function at the start and at the end of each
test (i.e. twice per test).
# time prove -vf tests/bugs/glusterd/cleanup.t   # cleanup called 20 times in a loop -- see the attached cleanup.t
real 0m29.140s
Time taken per cleanup call = ~29 sec / 20 = ~1.5 sec
# grep -rn "cleanup" ./tests/ | wc -l
808
So the cleanup function is called 808 times during a regression run.
Time spent in cleanup per regression run = 1.5 * 808 = 1212 sec = ~20.2 min
Total time across both regression runs = 20.2 * 2 = ~40.4 min
My suggestion: call cleanup only at the start of each test case. That saves
40.4 / 2 = 20.2 min of the total regression time (both regressions combined).
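For illustration, a minimal sketch of a typical .t with the trailing cleanup
dropped (the file name, include paths and volume commands are only examples
of the usual test structure, not a specific existing test):

    #!/bin/bash
    # tests/bugs/glusterd/example.t -- hypothetical test
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup;                        # keep the cleanup at the start

    TEST glusterd
    TEST pidof glusterd
    TEST $CLI volume create $V0 $H0:$B0/${V0}1
    TEST $CLI volume start $V0

    ## cleanup;                     # drop the trailing cleanup; the next
                                    # test's leading cleanup does this work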
Note: this analysis was done on the configuration below:
model name : Intel(R) Core(TM) i5-4300U CPU @ 1.90GHz
RAM : 8GB
On 06/17/2015 05:18 PM, Atin Mukherjee wrote:
>
> On 06/17/2015 04:26 PM, Raghavendra Talur wrote:
>> Hi,
>>
>>
>> MSV Bhat and I presented some ideas about improving our testing
>> infrastructure at the Gluster Design Summit.
>>
>> Here is the link to the slides: http://redhat.slides.com/rtalur/distaf#
>>
>> Here are the same suggestions,
>>
>> 1. *A .t file for a bug*
>> When a community user discovers a bug in Gluster, they contact us over
>> IRC or email and eventually end up filing a bug in Bugzilla.
>> It also often happens that we find a bug which we don't know the fix
>> for, or which is not in our module, and we too end up filing a bug in
>> Bugzilla.
>>
>> It would be more helpful if we could instead write a .t test that
>> reproduces the bug and add it to, say, a /tests/bug/yet-to-be-fixed/
>> folder in the gluster repo. As part of bug triage we could try doing
>> the same for bugs filed by community users.
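>> A rough sketch of what such a reproducer could look like (the file
>> name, volume layout and the failing step are all made up for
>> illustration):
>>
>>     #!/bin/bash
>>     # tests/bug/yet-to-be-fixed/bug-NNNNNN.t -- hypothetical reproducer
>>     . $(dirname $0)/../../include.rc
>>     . $(dirname $0)/../../volume.rc
>>
>>     cleanup;
>>     TEST glusterd
>>     TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{1,2}
>>     TEST $CLI volume start $V0
>>
>>     # the step that currently fails; once the bug is fixed the whole
>>     # .t passes and the test can move out of this folder
>>     TEST $CLI volume set $V0 some.option some-value
>>
>>     cleanup;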
>>
>> *What do we get?*
>>
>> a. It becomes very easy for a new developer to pick up that bug and
>> fix it: if the .t passes, the bug is fixed.
>>
>> b. The regression run on daily patch sets would skip this folder, but
>> on a nightly basis we could run the tests in this folder to see if any
>> of these bugs got fixed while we were fixing something else. Yay!
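>> The nightly run could be as simple as the following, assuming the
>> folder name suggested above:
>>
>>     # run only the known-broken reproducers, recursively, once a night
>>     time prove -rv tests/bug/yet-to-be-fixed/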
> Attaching a reproducer in the form of a .t might be difficult,
> especially for race conditions; it might pass both pre- and post-fix.
> So having a .t file *should not* be a mandatory criterion.
>>
>>
>> 2. *New gerrit/review work flow*
>>
>> Our Gerrit setup currently averages about 2 hours per regression run.
>> Due to the long queue of commits, the turnaround time is around 4-6
>> hours.
>>
>> Kaushal has proposed how to reduce the turnaround time further in this
>> thread: http://www.spinics.net/lists/gluster-devel/msg15798.html.
>>
>>
>> 3. *Make sure tests can be done in docker and run in parallel*
>>
>> To reduce the time for one test run from 2 hours, we can look at
>> running tests in parallel. I did a prototype and got the test time down
>> to 40 mins on a VM with 16 GB RAM and 4 cores.
>>
>> Currently blocked on:
>> Some of the tests fail in Docker while they pass in a VM.
>> Note that it is the .t that fails; Gluster itself works fine in Docker.
>> I need some help on this. More on this in a mail I will send later
>> today to gluster-devel.
>>
>>
>> *what do we get?*
>> Running 4 Docker containers on our laptops alone can reduce the time
>> taken by a test run to 90 mins. Running them on more powerful machines
>> brings it down to 40 mins, as seen in the prototype.
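>> A minimal sketch of the idea (the image name and the way the tests are
>> split into buckets are hypothetical; the actual prototype may differ):
>>
>>     #!/bin/bash
>>     # split the .t files into 4 buckets and run each bucket in its own
>>     # privileged container
>>     mapfile -t TESTS < <(find tests -name '*.t' | sort)
>>     for i in 0 1 2 3; do
>>         printf '%s\n' "${TESTS[@]}" | awk -v i=$i 'NR % 4 == i' > bucket-$i.list
>>         docker run -d --privileged --name gluster-regression-$i \
>>             -v "$PWD":/workspace -w /workspace \
>>             gluster-test-image \
>>             bash -c "prove \$(cat bucket-$i.list)"
>>     done
>>     # wait for all four containers to finish
>>     docker wait gluster-regression-{0,1,2,3}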
> What about NetBSD? Yesterday Niels pointed out to me that there is no
> Docker service for NetBSD.
>>
>> 4. *Test definitions for every .t*
>>
>> Maybe the time has come to upgrade our test infra to have tests with
>> test definitions. Every .t file could have a corresponding .def file,
>> a JSON/YAML/XML config that defines the requirements of the test:
>> - Type of volume
>> - Any special brick size requirements
>> - Which repo source folders should trigger this test
>> - Running time
>> - Test RUN level
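>> A hypothetical .def in JSON form might look like this (all field names
>> and values are made up for illustration; the list above only fixes what
>> kind of information we want to capture):
>>
>>     {
>>         "test": "tests/example/dht-rebalance.t",
>>         "volume_type": "distribute",
>>         "bricks": 3,
>>         "special_brick_size": null,
>>         "trigger_paths": ["xlators/cluster/dht/", "libglusterfs/"],
>>         "expected_runtime_seconds": 300,
>>         "run_level": "nightly"
>>     }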
>>
>> *what do we get?*
>> a. Run a partial set of tests on a commit, based on git log and the
>> test definitions, and run the complete regression nightly (see the
>> sketch after this list).
>> b. Order the test run based on run times. Combined with the
>> fail-on-first-failure setting we already have, we will fail as early as
>> possible.
>> c. Order tests based on functionality level, which means a basic
>> mount.t test should run before a complex DHT test that makes use of a
>> FUSE mount. Again, this helps us fail as early as possible in failure
>> scenarios.
>> d. With knowledge of the type of volume and the number of bricks
>> required, we can re-use volumes created by earlier tests for subsequent
>> tests. Even the cleanup() function we have takes time. DiSTAF already
>> has an equivalent function, use_existing_else_create_new.
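>> A rough sketch of how the partial run in (a) could work, assuming the
>> hypothetical .def format above and jq on the build machine:
>>
>>     #!/bin/bash
>>     # files touched by the commit under test
>>     changed=$(git diff --name-only HEAD~1)
>>     selected=()
>>     for def in $(find tests -name '*.def'); do
>>         # pick the test if any of its trigger paths matches a changed file
>>         for path in $(jq -r '.trigger_paths[]' "$def"); do
>>             if grep -q "^$path" <<< "$changed"; then
>>                 selected+=("$(jq -r '.test' "$def")")
>>                 break
>>             fi
>>         done
>>     done
>>     [ ${#selected[@]} -gt 0 ] && prove "${selected[@]}"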
>>
>>
>> 5. *Testing GFAPI*
>> We don't have a good test framework for gfapi as of today.
>>
>> However, with the recent design proposal at
>> https://docs.google.com/document/d/1yuRLRbdccx_0V0UDAxqWbz4g983q5inuINHgM1YO040/edit?usp=sharing
>>
>>
>> and
>>
>> Craig Cabrey from Facebook developing a set of coreutils using
>> GFAPI, as mentioned here:
>> http://www.spinics.net/lists/gluster-devel/msg15753.html,
>>
>> I guess we have it well covered :)
>>
>>
>> Reviews and suggestions welcome!
>>
>> Thanks,
>> Raghavendra Talur
>>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cleanup.t
Type: application/x-perl
Size: 211 bytes
Desc: not available
URL: <http://www.gluster.org/pipermail/gluster-devel/attachments/20150617/415955bf/attachment-0001.pl>