[Gluster-devel] IMPORTANT - Adding further volume types to our smoke tests

Atin Mukherjee amukherj at redhat.com
Thu Nov 13 09:23:26 UTC 2014



On 11/13/2014 02:43 PM, Pranith Kumar Karampuri wrote:
> 
> On 11/13/2014 03:51 AM, Jeff Darcy wrote:
>>> At the moment, our smoke tests in Jenkins only run on a
>>> replicated volume.  Extending that out to other volume types
>>> should (in theory :>) help catch other simple gotchas.
>>>
>>> Xavi has put together a patch for doing just this, which I'd
>>> like to apply and get us running:
>>>
>>>   
>>> https://forge.gluster.org/gluster-patch-acceptance-tests/gluster-patch-acceptance-tests/merge_requests/4
>>>
>>>
>>> What are people's thoughts on the general idea, and on the
>>> above proposed patch?  (The Forge isn't using Gerrit, so
>>> review/comments back here please :>)
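
For anyone who hasn't opened the merge request yet: the gist of it is
simply re-running the same smoke steps once per volume layout. Roughly
along these lines (the brick paths and the smoke_steps function below
are only illustrative, not what the patch actually does):

    #!/bin/bash
    # Illustrative sketch only -- assumes a smoke_steps function that
    # mounts and exercises the volume. Six bricks so that replica 2
    # and disperse (2+1) layouts both divide evenly.
    H0=$(hostname)
    for layout in "" "replica 2" "disperse 3 redundancy 1"; do
        rm -rf /d/brick*            # bricks cannot be reused as-is
        mkdir -p /d/brick{1..6}
        gluster volume create patchy ${layout} \
                ${H0}:/d/brick{1..6} force
        gluster volume start patchy
        smoke_steps patchy || exit 1
        gluster --mode=script volume stop patchy
        gluster --mode=script volume delete patchy
    done
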
>> I'm ambivalent.  On the one hand, I think this is an important
>> step in the right direction.  Sometimes we need to be able to
>> run *all* of our existing tests with some feature enabled, not
>> just a few feature-specific tests.  SSL is an example of this,
>> and transport (or other forms of) multi-threading will be as
>> well.
>>
>> On the other hand, I'm not sure smoke is the place to do this.
>> Smoke is supposed to be a *quick* test to catch *basic* errors
>> (e.g. source fails to build) before we devote hours to a full
>> regression test.  How much does this change throughput on the
>> smoke-test queue?  Should we be doing this in regression
>> instead, or in a third testing tier between the two we have?
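
On Jeff's SSL example: right now flipping that on for every test would
mean wrapping each volume the harness creates with something like the
below (client.ssl/server.ssl and the secure-access file are real
knobs; the GLUSTER_TEST_SSL switch is imaginary):

    # Imaginary harness hook -- GLUSTER_TEST_SSL does not exist today.
    # Assumes certificates are already provisioned on the node.
    if [ -n "${GLUSTER_TEST_SSL}" ]; then
        touch /var/lib/glusterd/secure-access    # management-plane SSL
        gluster volume set "${VOLNAME}" server.ssl on
        gluster volume set "${VOLNAME}" client.ssl on
    fi
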
> That makes sense. Should we have daily regression runs that cover a
> lot more of the things which need testing on a regular basis?
> Running regressions per disk fs type is also something we need to
> do. We can extend them going forward with long-running tests like
> disk replacement, rebalance, and geo-rep tests. Let me know your
> thoughts on this.
> 
> Pranith
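
On the per-disk-fs point, the job could simply reformat the backing
device between passes; a rough sketch, assuming the slave keeps its
bricks on /dev/vdb mounted at /d:

    # Assumption: bricks live on /dev/vdb mounted at /d on the slave.
    for fs in xfs ext4; do
        umount /d
        case ${fs} in
            xfs)  mkfs.xfs -f /dev/vdb ;;
            ext4) mkfs.ext4 -F /dev/vdb ;;
        esac
        mount /dev/vdb /d
        ./run-tests.sh || echo "regression failed on ${fs}"
    done
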
On a different note, I also feel we should have tuning parameters for
running the regression in DEBUG mode; in some cases the INFO log alone
gives no clue for debugging certain spurious failures (e.g.
mgmt_v3_locks.t). If we hit a spurious failure in one run, we could
immediately pick the tuning parameters for the next run. Thoughts?
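
Concretely, I'm thinking of something like this for a re-run (the
diagnostics.* options and glusterd's --log-level flag are real; the
way of locating the test is just an example):

    # Rough sketch: restart glusterd at DEBUG, then re-run the test.
    pkill glusterd
    glusterd --log-level DEBUG
    # For client/brick logs, the per-volume knobs would be:
    #   gluster volume set <vol> diagnostics.brick-log-level DEBUG
    #   gluster volume set <vol> diagnostics.client-log-level DEBUG
    prove -vf "$(find tests -name 'mgmt_v3_locks.t')"
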

Also, perhaps we could schedule a nightly regression run with debug
mode enabled?
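
That could be as small as a second Jenkins job with a cron trigger and
one extra variable (the GLUSTER_LOG_LEVEL variable is imaginary and
would need harness support):

    # Hypothetical nightly job build step; the trigger would be a
    # normal Jenkins "Build periodically" entry such as: H 2 * * *
    export GLUSTER_LOG_LEVEL=DEBUG
    ./run-tests.sh
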

~Atin
>>
>> My gut feel is that we need to think more about how to run
>> a matrix of M tests across N configurations, instead of just
>> putting feature/regression tests and configuration tests into
>> one big bucket.  Or maybe that's a longer-term thing.
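
The M-tests-by-N-configurations idea could start out as nothing more
than nested loops in the scheduler, say (both lists and the CONFIG
variable below are placeholders):

    # Placeholder sketch of an M x N scheduler: every test group runs
    # once per configuration.
    CONFIGS="plain replica disperse ssl"
    TEST_GROUPS="tests/basic tests/bugs"
    for cfg in ${CONFIGS}; do
        for grp in ${TEST_GROUPS}; do
            CONFIG=${cfg} prove -r ${grp} || echo "FAIL: ${grp} (${cfg})"
        done
    done
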

