<div><br><div class="gmail_quote"><div dir="auto">On Wed, 30 Aug 2017 at 00:23, Shwetha Panduranga <<a href="mailto:spandura@redhat.com">spandura@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>Hi Shyam, we are already doing this: we wait for the rebalance status to be complete. We loop, checking whether the status is complete, for 20 minutes or so. </div></div></blockquote><div dir="auto"><br></div><div dir="auto">Are you saying that in this test the rebalance status command was executed multiple times until it succeeded? If so, the test shouldn't have failed. Can I get access to the complete set of logs?</div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><br></div><div>-Shwetha<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Aug 29, 2017 at 7:04 PM, Shyam Ranganathan <span><<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 08/29/2017 09:31 AM, Atin Mukherjee wrote:<span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
On Tue, Aug 29, 2017 at 4:13 AM, Shyam Ranganathan <<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>> wrote:<br>
<br>
Nigel, Shwetha,<br>
<br>
The latest Glusto run [a], started by Nigel after fixing the prior<br>
timeout issue, failed again (much later in the run, though).<br>
<br>
I took a look at the logs and my analysis is here [b]<br>
<br>
@atin, @kaushal, @ppai can you take a look and see if the analysis<br>
is correct?<br>
<br>
<br>
I took a look at the logs and here is my theory:<br>
<br>
glusterd starts the rebalance daemon through the runner framework in nowait mode, which essentially means that even though glusterd reports success back to the CLI for rebalance start, one of the nodes might take some additional time to start the rebalance process and establish the RPC connection. In this case we hit a race: while one of the nodes was still trying to start the rebalance process, a rebalance status command was triggered, which failed on that node because the RPC connection wasn't yet established, and the originator glusterd's commit op failed with "Received commit RJT from uuid: 6f9524e6-9f9e-44aa-b2f4-393404adfd9d". Technically, to avoid all these spurious timeout issues, we check the status in a loop until a certain timeout. Isn't that the case in Glusto? If my analysis is correct, you shouldn't be seeing this failure on the 2nd attempt, as it's a race.<br>
</blockquote>
<br></span>
Thanks Atin.<br>
<br>
In this case there is no second check or timed check (no sleep or EXPECT_WITHIN-like constructs).<br>
<br>
@Shwetha, can we fix up this test and give it another go?<br>
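The retry loop Atin and Shwetha describe could be sketched as below. This is a minimal illustration, not Glusto's actual helper: the function name, the injected <tt>get_status</tt> callable, and the timings are assumptions. <tt>get_status()</tt> stands in for running <tt>gluster volume rebalance VOLNAME status</tt> and returning True only once every node reports completed; a transient error (such as the commit RJT from a node whose rebalance daemon has not yet established its RPC connection) should make it return False rather than raise, so the loop retries across the race instead of failing on the first attempt.<br>

```python
import time

def wait_for_rebalance_complete(get_status, timeout=1200, interval=10):
    """Poll get_status() until it returns True or `timeout` seconds pass.

    get_status is expected to run 'gluster volume rebalance VOL status'
    and return True only when all nodes report 'completed'; transient
    failures (e.g. a commit RJT while one node's rebalance daemon is
    still coming up) should be reported as False so they are retried.
    """
    deadline = time.time() + timeout
    while True:
        if get_status():
            return True
        if time.time() >= deadline:
            return False
        time.sleep(interval)
```

With something like this wrapped around the status check, a single racy failure is absorbed by the next poll rather than failing the test.<br>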
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="m_5450831027571291893h5">
<br>
<br>
In short, glusterd got an error when checking the rebalance status<br>
from one of the nodes:<br>
"Received commit RJT from uuid: 6f9524e6-9f9e-44aa-b2f4-393404adfd9d"<br>
<br>
and the rebalance daemon on the node with that UUID was not really<br>
ready to serve requests when this was called, hence I am assuming<br>
this is causing the error. But it needs a once-over by one of you folks.<br>
<br>
@Shwetha, can we add a further timeout between rebalance start and<br>
checking the status, just so that we avoid this timing issue on<br>
these nodes?<br>
<br>
Thanks,<br>
Shyam<br>
<br>
[a] glusto run:<br>
<a href="https://ci.centos.org/view/Gluster/job/gluster_glusto/377/" rel="noreferrer" target="_blank">https://ci.centos.org/view/Gluster/job/gluster_glusto/377/</a><br>
<br>
[b] analysis of the failure:<br>
<a href="https://paste.fedoraproject.org/paste/mk6ynJ0B9AH6H9ncbyru5w" rel="noreferrer" target="_blank">https://paste.fedoraproject.org/paste/mk6ynJ0B9AH6H9ncbyru5w</a><br>
<br>
On 08/25/2017 04:29 PM, Shyam Ranganathan wrote:<br>
<br>
Nigel was kind enough to kick off a glusto run on 3.12 head a<br>
couple of days back. The status can be seen here [1].<br>
<br>
The run failed, but managed to get past what Glusto does on<br>
master (see [2]). Not that this is a consolation, but just<br>
stating the fact.<br>
<br>
The run [1] failed at,<br>
17:05:57<br>
functional/bvt/test_cvt.py::TestGlusterHealSanity_dispersed_glusterfs::test_self_heal_when_io_in_progress<br>
FAILED<br>
<br>
The test case failed due to,<br>
17:10:28 E AssertionError: ('Volume %s : All process are<br>
not online', 'testvol_dispersed')<br>
<br>
The test case can be seen here [3], and the reason for failure<br>
is that Glusto did not wait long enough for the down brick to<br>
come up (it waited for 10 seconds, but the brick came up after<br>
12 seconds, within the same second as the check for it being<br>
up). The log snippets pointing to this problem are here [4]. In<br>
short, there was no real bug or issue that caused the failure as yet.<br>
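One way to make that check robust is to poll until every process reports online, rather than checking once after a fixed sleep. The sketch below parses the kind of per-node status that <tt>gluster volume status VOLNAME --xml</tt> emits; the exact XML layout (node elements carrying a numeric <tt>status</tt>, with 1 meaning online) is an assumption to verify against the gluster version in use, and <tt>all_processes_online</tt> is a hypothetical helper, not a Glusto API.<br>

```python
import xml.etree.ElementTree as ET

def all_processes_online(status_xml):
    """Return True if every per-node <status> in the XML is '1' (online).

    Assumes the layout of 'gluster volume status VOLNAME --xml' output;
    confirm the element names for the gluster version in use.
    """
    root = ET.fromstring(status_xml)
    statuses = [node.findtext("status") for node in root.iter("node")]
    return bool(statuses) and all(s == "1" for s in statuses)

# Illustrative sample in the assumed layout: one brick still offline.
SAMPLE = """<cliOutput><volStatus><volumes><volume>
  <node><hostname>n1</hostname><status>1</status></node>
  <node><hostname>n2</hostname><status>0</status></node>
</volume></volumes></volStatus></cliOutput>"""
```

Polled in a loop with a generous deadline (say, 30 seconds rather than 10), this would not fail when a brick comes up a couple of seconds late, as happened here.<br>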
<br>
Glusto as a gating factor for this release was desirable, but<br>
having got this far on 3.12 does help.<br>
<br>
@nigel, we could increase the timeout between bringing the brick<br>
up and checking whether it is up, and try another run. Let me<br>
know if that works, and what is needed from me to get this going.<br>
<br>
Shyam<br>
<br>
[1] Glusto 3.12 run:<br>
<a href="https://ci.centos.org/view/Gluster/job/gluster_glusto/365/" rel="noreferrer" target="_blank">https://ci.centos.org/view/Gluster/job/gluster_glusto/365/</a><br>
<br>
[2] Glusto on master:<br>
<a href="https://ci.centos.org/view/Gluster/job/gluster_glusto/360/testReport/functional.bvt.test_cvt/" rel="noreferrer" target="_blank">https://ci.centos.org/view/Gluster/job/gluster_glusto/360/testReport/functional.bvt.test_cvt/</a><br>
<br>
<br>
[3] Failed test case:<br>
<a href="https://ci.centos.org/view/Gluster/job/gluster_glusto/365/testReport/functional.bvt.test_cvt/TestGlusterHealSanity_dispersed_glusterfs/test_self_heal_when_io_in_progress/" rel="noreferrer" target="_blank">https://ci.centos.org/view/Gluster/job/gluster_glusto/365/testReport/functional.bvt.test_cvt/TestGlusterHealSanity_dispersed_glusterfs/test_self_heal_when_io_in_progress/</a><br>
<br>
<br>
[4] Log analysis pointing to the failed check:<br>
<a href="https://paste.fedoraproject.org/paste/znTPiFLrc2~vsWuoYRToZA" rel="noreferrer" target="_blank">https://paste.fedoraproject.org/paste/znTPiFLrc2~vsWuoYRToZA</a><br></div></div>
<span><br>
<br>
"Releases are made better together"<br>
_______________________________________________<br>
Gluster-devel mailing list<br></span>
<a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-devel</a><span><br>
<br>
_______________________________________________<br>
Gluster-devel mailing list<br></span>
<a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-devel</a><br>
<br>
<br>
</blockquote>
</blockquote></div><br></div>
</blockquote></div></div><div dir="ltr">-- <br></div><div class="gmail_signature" data-smartmail="gmail_signature">- Atin (atinm)</div>