<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr">On Tue, Jul 10, 2018, 9:30 AM Amar Tumballi &lt;<a href="mailto:atumball@redhat.com">atumball@redhat.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 9, 2018 at 8:10 PM, Nithya Balachandran <span dir="ltr">&lt;<a href="mailto:nbalacha@redhat.com" target="_blank" rel="noreferrer">nbalacha@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">We discussed reducing the number of volumes in the maintainers&#39; meeting.Should we still go ahead and do that?<div><br></div></div><div class="m_-5884681250894484064HOEnZb"><div class="m_-5884681250894484064h5"><div class="gmail_extra"><br></div></div></div></blockquote><div><br></div><div>It would still be a good exercise, IMO. Reducing it to 50-60 volumes from 120 now.</div></div></div></div></blockquote></div></div><div dir="auto">AFAIK, the test case only creates 20 volumes with 6 bricks and hence 120 bricks served from one brick process. This results in 1000+ threads and 14g VIRT 4-5g RES.</div><div dir="auto"><br></div><div dir="auto">Regards,</div><div dir="auto">Poornima</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="m_-5884681250894484064HOEnZb"><div class="m_-5884681250894484064h5"><div class="gmail_extra"><div class="gmail_quote">On 9 July 2018 at 15:45, Xavi Hernandez <span dir="ltr">&lt;<a href="mailto:jahernan@redhat.com" target="_blank" rel="noreferrer">jahernan@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><span><div dir="ltr">On Mon, Jul 9, 2018 at 11:14 AM Karthik Subrahmanya &lt;<a href="mailto:ksubrahm@redhat.com" target="_blank" rel="noreferrer">ksubrahm@redhat.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Deepshikha,<div><br></div><div>Are you looking into this failure? I can still see this happening for all the regression runs.</div></div></blockquote><div><br></div></span><div>I&#39;ve executed the failing script on my laptop and all tests finish relatively fast. What seems to take time is the final cleanup. I can see &#39;semanage&#39; taking some CPU during destruction of volumes. The test required 350 seconds to finish successfully.</div><div><br></div><div>Not sure what caused the cleanup time to increase, but I&#39;ve created a bug [1] to track this and a patch [2] to give more time to this test. This should allow all blocked regressions to complete successfully.</div><div><br></div><div>Xavi</div><div><br></div><div>[1] <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1599250" target="_blank" rel="noreferrer">https://bugzilla.redhat.com/show_bug.cgi?id=1599250</a></div>[2] <a href="https://review.gluster.org/20482" target="_blank" rel="noreferrer">https://review.gluster.org/20482</a></div><div><div class="m_-5884681250894484064m_3170782629372755415h5"><div class="gmail_quote"><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>Thanks &amp; Regards,</div><div>Karthik</div></div><br><div class="gmail_quote"><div dir="ltr">On Sun, Jul 8, 2018 at 7:18 AM Atin Mukherjee &lt;<a href="mailto:amukherj@redhat.com" target="_blank" rel="noreferrer">amukherj@redhat.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><a href="https://build.gluster.org/job/regression-test-with-multiplex/794/display/redirect" target="_blank" rel="noreferrer">https://build.gluster.org/job/regression-test-with-multiplex/794/display/redirect</a> has the same test failing. Is the reason of the failure different given this is on jenkins?</div></div><div><br><div class="gmail_quote"><div dir="ltr">On Sat, 7 Jul 2018 at 19:12, Deepshikha Khandelwal &lt;<a href="mailto:dkhandel@redhat.com" target="_blank" rel="noreferrer">dkhandel@redhat.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi folks,<br>
<br>
The issue[1] has been resolved. Now the softserve instance will be<br>
having 2GB RAM i.e. same as that of the Jenkins builder&#39;s sizing<br>
configurations.<br>
<br>
[1] <a href="https://github.com/gluster/softserve/issues/40" rel="noreferrer noreferrer" target="_blank">https://github.com/gluster/softserve/issues/40</a><br>
<br>
Thanks,<br>
Deepshikha Khandelwal<br>
<br>
On Fri, Jul 6, 2018 at 6:14 PM, Karthik Subrahmanya &lt;<a href="mailto:ksubrahm@redhat.com" target="_blank" rel="noreferrer">ksubrahm@redhat.com</a>&gt; wrote:<br>
&gt;<br>
&gt;<br>
&gt; On Fri 6 Jul, 2018, 5:18 PM Deepshikha Khandelwal, &lt;<a href="mailto:dkhandel@redhat.com" target="_blank" rel="noreferrer">dkhandel@redhat.com</a>&gt;<br>
&gt; wrote:<br>
&gt;&gt;<br>
&gt;&gt; Hi Poornima/Karthik,<br>
&gt;&gt;<br>
&gt;&gt; We&#39;ve looked into the memory error that this softserve instance have<br>
&gt;&gt; showed up. These machine instances have 1GB RAM which is not in the<br>
&gt;&gt; case with the Jenkins builder. It&#39;s 2GB RAM there.<br>
&gt;&gt;<br>
&gt;&gt; We&#39;ve created the issue [1] and will solve it sooner.<br>
&gt;<br>
&gt; Great. Thanks for the update.<br>
&gt;&gt;<br>
&gt;&gt;<br>
&gt;&gt; Sorry for the inconvenience.<br>
&gt;&gt;<br>
&gt;&gt; [1] <a href="https://github.com/gluster/softserve/issues/40" rel="noreferrer noreferrer" target="_blank">https://github.com/gluster/softserve/issues/40</a><br>
&gt;&gt;<br>
&gt;&gt; Thanks,<br>
&gt;&gt; Deepshikha Khandelwal<br>
&gt;&gt;<br>
&gt;&gt; On Fri, Jul 6, 2018 at 3:44 PM, Karthik Subrahmanya &lt;<a href="mailto:ksubrahm@redhat.com" target="_blank" rel="noreferrer">ksubrahm@redhat.com</a>&gt;<br>
&gt;&gt; wrote:<br>
&gt;&gt; &gt; Thanks Poornima for the analysis.<br>
&gt;&gt; &gt; Can someone work on fixing this please?<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; ~Karthik<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; On Fri, Jul 6, 2018 at 3:17 PM Poornima Gurusiddaiah<br>
&gt;&gt; &gt; &lt;<a href="mailto:pgurusid@redhat.com" target="_blank" rel="noreferrer">pgurusid@redhat.com</a>&gt;<br>
&gt;&gt; &gt; wrote:<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; The same test case is failing for my patch as well [1]. I requested for<br>
&gt;&gt; &gt;&gt; a<br>
&gt;&gt; &gt;&gt; regression system and tried to reproduce it.<br>
&gt;&gt; &gt;&gt; From my analysis, the brick process (mutiplexed) is consuming a lot of<br>
&gt;&gt; &gt;&gt; memory, and is being OOM killed. The regression has 1GB ram and the<br>
&gt;&gt; &gt;&gt; process<br>
&gt;&gt; &gt;&gt; is consuming more than 1GB. 1GB for 120 bricks is acceptable<br>
&gt;&gt; &gt;&gt; considering<br>
&gt;&gt; &gt;&gt; there is 1000 threads in that brick process.<br>
&gt;&gt; &gt;&gt; Ways to fix:<br>
&gt;&gt; &gt;&gt; - Increase the regression system RAM size OR<br>
&gt;&gt; &gt;&gt; - Decrease the number of volumes in the test case.<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; But what is strange is why the test passes sometimes for some patches.<br>
&gt;&gt; &gt;&gt; There could be some bug/? in memory consumption.<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; Regards,<br>
&gt;&gt; &gt;&gt; Poornima<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; On Fri, Jul 6, 2018 at 2:11 PM, Karthik Subrahmanya<br>
&gt;&gt; &gt;&gt; &lt;<a href="mailto:ksubrahm@redhat.com" target="_blank" rel="noreferrer">ksubrahm@redhat.com</a>&gt;<br>
&gt;&gt; &gt;&gt; wrote:<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; Hi,<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; $subject is failing on centos regression for most of the patches with<br>
&gt;&gt; &gt;&gt;&gt; timeout error.<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; 07:32:34<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; ================================================================================<br>
&gt;&gt; &gt;&gt;&gt; 07:32:34 [07:33:05] Running tests in file<br>
&gt;&gt; &gt;&gt;&gt; ./tests/bugs/core/bug-1432542-mpx-restart-crash.t<br>
&gt;&gt; &gt;&gt;&gt; 07:32:34 Timeout set is 300, default 200<br>
&gt;&gt; &gt;&gt;&gt; 07:37:34 ./tests/bugs/core/bug-1432542-mpx-restart-crash.t timed out<br>
&gt;&gt; &gt;&gt;&gt; after 300 seconds<br>
&gt;&gt; &gt;&gt;&gt; 07:37:34 ./tests/bugs/core/bug-1432542-mpx-restart-crash.t: bad status<br>
&gt;&gt; &gt;&gt;&gt; 124<br>
&gt;&gt; &gt;&gt;&gt; 07:37:34<br>
&gt;&gt; &gt;&gt;&gt; 07:37:34        *********************************<br>
&gt;&gt; &gt;&gt;&gt; 07:37:34        *       REGRESSION FAILED       *<br>
&gt;&gt; &gt;&gt;&gt; 07:37:34        * Retrying failed tests in case *<br>
&gt;&gt; &gt;&gt;&gt; 07:37:34        * we got some spurious failures *<br>
&gt;&gt; &gt;&gt;&gt; 07:37:34        *********************************<br>
&gt;&gt; &gt;&gt;&gt; 07:37:34<br>
&gt;&gt; &gt;&gt;&gt; 07:42:34 ./tests/bugs/core/bug-1432542-mpx-restart-crash.t timed out<br>
&gt;&gt; &gt;&gt;&gt; after 300 seconds<br>
&gt;&gt; &gt;&gt;&gt; 07:42:34 End of test ./tests/bugs/core/bug-1432542-mpx-restart-crash.t<br>
&gt;&gt; &gt;&gt;&gt; 07:42:34<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; ================================================================================<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; Can anyone take a look?<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; Thanks,<br>
&gt;&gt; &gt;&gt;&gt; Karthik<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; _______________________________________________<br>
&gt;&gt; &gt;&gt;&gt; Gluster-devel mailing list<br>
&gt;&gt; &gt;&gt;&gt; <a href="mailto:Gluster-devel@gluster.org" target="_blank" rel="noreferrer">Gluster-devel@gluster.org</a><br>
&gt;&gt; &gt;&gt;&gt; <a href="https://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-devel</a><br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; _______________________________________________<br>
&gt;&gt; &gt; Gluster-infra mailing list<br>
&gt;&gt; &gt; <a href="mailto:Gluster-infra@gluster.org" target="_blank" rel="noreferrer">Gluster-infra@gluster.org</a><br>
&gt;&gt; &gt; <a href="https://lists.gluster.org/mailman/listinfo/gluster-infra" rel="noreferrer noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-infra</a><br>
_______________________________________________<br>
Gluster-devel mailing list<br>
<a href="mailto:Gluster-devel@gluster.org" target="_blank" rel="noreferrer">Gluster-devel@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-devel</a><br>
</blockquote></div></div>-- <br><div dir="ltr" class="m_-5884681250894484064m_3170782629372755415m_-7009236854911016900gmail-m_8038808095630511528m_-4859419650416721006gmail_signature">- Atin (atinm)</div>
</blockquote></div>
_______________________________________________<br>
Gluster-infra mailing list<br>
<a href="mailto:Gluster-infra@gluster.org" target="_blank" rel="noreferrer">Gluster-infra@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-infra" rel="noreferrer noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-infra</a></blockquote></div></div></div></div>
<br>_______________________________________________<br>
Gluster-devel mailing list<br>
<a href="mailto:Gluster-devel@gluster.org" target="_blank" rel="noreferrer">Gluster-devel@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-devel</a><br></blockquote></div><br></div>
</div></div><br>_______________________________________________<br>
Gluster-infra mailing list<br>
<a href="mailto:Gluster-infra@gluster.org" target="_blank" rel="noreferrer">Gluster-infra@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-infra" rel="noreferrer noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-infra</a><br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="m_-5884681250894484064gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Amar Tumballi (amarts)<br></div></div></div></div></div>
</div></div>
_______________________________________________<br>
Gluster-devel mailing list<br>
<a href="mailto:Gluster-devel@gluster.org" target="_blank" rel="noreferrer">Gluster-devel@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-devel</a></blockquote></div></div></div>