[Gluster-devel] Gluster builder being hit by too many processes

Atin Mukherjee amukherj at redhat.com
Sat Oct 7 05:00:43 UTC 2017


On Fri, 6 Oct 2017 at 19:05, Michael Scherer <mscherer at redhat.com> wrote:

> On Friday, 6 October 2017 at 16:53 +0530, Gaurav Yadav wrote:
> > The gluster CLI was failing to create a volume when the create request
> > carried a very large number of bricks in a single command, so I added
> > the test https://review.gluster.org/#/c/18271/5/tests/bugs/cli/bug-1490853.t
> > to ensure that gluster is able to parse such a large request.
> > As per the bug https://bugzilla.redhat.com/show_bug.cgi?id=1490853,
> > create was failing, but I had also added a volume start to the test
> > case, which ideally is not required.
> >
> > I have addressed this and updated the patch.
>
> Thanks for the fast fix \o/
>
> But the underlying problem is still here: this failed in a way that our
> test suite didn't recover from. We should have a more robust test suite,
> because errors can happen (and are normal, since we test experimental
> code).


The previous version of the test was trying to create 1000 bricks on a
single volume (not 1000 volumes) and then start the volume. Technically
there was nothing wrong with the test; it's just that spawning 1000 brick
processes without brick multiplexing turned on puts heavy memory pressure
on the node, and that probably caused the abnormality. Ideally the change
didn't need to exercise the volume start path at all, since the fix was
only in the volume create code path, and the latest version of the .t now
does exactly that. So, given that we're no longer spawning any brick
processes, what else do we need to look into? FWIW, the latest regression
run has failed again.


>
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>

-- 
- Atin (atinm)