<div dir="ltr">My take is, lets disable sdfs for 6.1 (we also have issues with its performance anyways). We will fix it properly by 6.2 or 7.0. Continue with marking sdfs-sanity.t tests as bad in that case.<div><br></div><div>-Amar</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 17, 2019 at 8:04 AM Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 17, 2019 at 12:33 AM Pranith Kumar Karampuri <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 16, 2019 at 10:27 PM Atin Mukherjee <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 16, 2019 at 9:19 PM Atin Mukherjee <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 16, 2019 at 7:24 PM Shyam Ranganathan <<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Status: Tagging pending<br>
<br>
Waiting on patches:<br>
(Kotresh/Atin) - glusterd: fix loading ctime in client graph logic<br>
<a href="https://review.gluster.org/c/glusterfs/+/22579" rel="noreferrer" target="_blank">https://review.gluster.org/c/glusterfs/+/22579</a></blockquote><div><br></div><div>The regression doesn't pass for the mainline patch. I believe master is broken now. With latest master sdfs-sanity.t always fail. We either need to fix it or mark it as bad test.<br></div></div></div></div></blockquote><div><br></div><div>commit 3883887427a7f2dc458a9773e05f7c8ce8e62301 (HEAD)<br>Author: Pranith Kumar K <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>><br>Date: Mon Apr 1 11:14:56 2019 +0530<br><br> features/locks: error-out {inode,entry}lk fops with all-zero lk-owner<br><br> Problem:<br> Sometimes we find that developers forget to assign lk-owner for an<br> inodelk/entrylk/lk before writing code to wind these fops. locks<br> xlator at the moment allows this operation. This leads to multiple<br> threads in the same client being able to get locks on the inode<br> because lk-owner is same and transport is same. So isolation<br> with locks can't be achieved.<br><br> Fix:<br> Disallow locks with lk-owner zero.<br><br> fixes bz#1624701<br> Change-Id: I1c816280cffd150ebb392e3dcd4d21007cdd767f<br> Signed-off-by: Pranith Kumar K <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>><br></div><div><br></div><div>With the above commit sdfs-sanity.t started failing. But when I looked at the last regression vote at <a href="https://build.gluster.org/job/centos7-regression/5568/consoleFull" target="_blank">https://build.gluster.org/job/centos7-regression/5568/consoleFull</a> I saw it voted back positive but the bell rang when I saw the overall regression took less than 2 hours and when I opened the regression link I saw the test actually failed but still this job voted back +1 at gerrit. </div><div><br></div><div><b>Deepshika</b> - <b>This is a bad CI bug we have now and have to be addressed at earliest. Please take a look at <a href="https://build.gluster.org/job/centos7-regression/5568/consoleFull" target="_blank">https://build.gluster.org/job/centos7-regression/5568/consoleFull</a> and investigate why the regression vote wasn't negative.</b></div><div><br></div><div>Pranith - I request you to investigate on the sdfs-sanity.t failure because of this patch.</div></div></div></div></div></div></blockquote><div><br></div><div>sdfs is supposed to serialize entry fops by doing entrylk, but all the
locks carry an all-zero lk-owner. In essence, sdfs doesn't achieve its goal of mutual exclusion when conflicting operations are executed by the same client, because two locks on the same entry with the same all-zero owner will both be granted. The patch that led to the sdfs-sanity.t failure treats inodelk/entrylk/lk fops with an all-zero lk-owner as invalid requests, precisely to prevent this kind of bug. So it exposed the bug in sdfs. I sent a fix for sdfs @ <a href="https://review.gluster.org/#/c/glusterfs/+/22582" target="_blank">https://review.gluster.org/#/c/glusterfs/+/22582</a></div></div></div></div></blockquote><div><br></div><div>Since this patch hasn't passed the regression, and now that I see tests/bugs/replicate/bug-1386188-sbrain-fav-child.t hanging and timing out in the latest nightly regression runs because of the above commit (tested locally and confirmed), I still request that we first revert this commit, get master back to stable, and then put back the required fixes.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div><br></div><div><b>@Maintainers - Please open up every regression link to see the actual status of the job, and don't blindly trust the +1 vote at gerrit till this is addressed.</b></div><div><b><br></b></div><div>As per the policy, I'm going to revert this commit; watch out for the patch. I request that it be pushed directly without waiting for the regression vote, as we have done before for such breakages. Amar/Shyam - I believe you have this permission? <br></div></div></div></div></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div></div><div><br></div><div>root@a5f81bd447c2:/home/glusterfs# prove -vf tests/basic/sdfs-sanity.t <br>tests/basic/sdfs-sanity.t .. <br>1..7<br>ok 1, LINENUM:8<br>ok 2, LINENUM:9<br>ok 3, LINENUM:11<br>ok 4, LINENUM:12<br>ok 5, LINENUM:13<br>ok 6, LINENUM:16<br>mkdir: cannot create directory ‘/mnt/glusterfs/1/coverage’: Invalid argument<br>stat: cannot stat '/mnt/glusterfs/1/coverage/dir': Invalid argument<br>tests/basic/rpc-coverage.sh: line 61: test: ==: unary operator expected<br>not ok 7 , LINENUM:20<br>FAILED COMMAND: tests/basic/rpc-coverage.sh /mnt/glusterfs/1<br>Failed 1/7 subtests <br><br>Test Summary Report<br>-------------------<br>tests/basic/sdfs-sanity.t (Wstat: 0 Tests: 7 Failed: 1)<br> Failed test: 7<br>Files=1, Tests=7, 14 wallclock secs ( 0.02 usr 0.00 sys + 0.58 cusr 0.67 csys = 1.27 CPU)<br>Result: FAIL</div><div><br></div>
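<div>To make the failure mode concrete, here is a minimal, self-contained sketch of the two pieces under discussion: the server-side rejection of an all-zero lk-owner, and the caller-side stamping of a real owner that sdfs is missing. The type and function names below are illustrative stand-ins, not the actual libglusterfs/locks-xlator symbols:</div><div><br></div><pre>
/* Illustrative sketch only -- not the actual GlusterFS code. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define OWNER_MAX 1024

typedef struct {
    int  len;
    char data[OWNER_MAX];
} lk_owner_t;

/* An owner is "null" when it has no bytes or every byte is zero. */
static int
lk_owner_is_null(const lk_owner_t *owner)
{
    if (owner->len <= 0)
        return 1;
    for (int i = 0; i < owner->len; i++)
        if (owner->data[i] != 0)
            return 0;
    return 1;
}

/* Server side: refuse entrylk/inodelk/lk fops that carry a zero owner,
 * so two threads of the same client can no longer both be granted the
 * "same" lock and silently defeat mutual exclusion. */
static int
entrylk_check_owner(const lk_owner_t *owner)
{
    if (lk_owner_is_null(owner))
        return -EINVAL; /* surfaces as "Invalid argument" on the mount */
    return 0;
}

/* Caller side (what the sdfs fix has to do): stamp a non-zero owner,
 * e.g. derived from the call frame's address, before winding the fop. */
static void
lk_owner_from_ptr(lk_owner_t *owner, const void *ptr)
{
    uintptr_t value = (uintptr_t)ptr;
    owner->len = (int)sizeof(value);
    memcpy(owner->data, &value, sizeof(value));
}

int
main(void)
{
    lk_owner_t owner = {0};

    /* Unset owner: the request is now rejected instead of granted. */
    printf("zero owner    -> %d\n", entrylk_check_owner(&owner));

    /* Stamped owner: the request passes validation. */
    lk_owner_from_ptr(&owner, &owner);
    printf("stamped owner -> %d\n", entrylk_check_owner(&owner));
    return 0;
}
</pre><div><br></div><div>With a check like this in place, a forgotten lk-owner surfaces immediately as EINVAL (which is why mkdir on the sdfs mount above fails with "Invalid argument") instead of being silently granted.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>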
<br>
The following patches will not be taken in if CentOS regression does not<br>
pass by tomorrow morning Eastern TZ,<br>
(Pranith/KingLongMee) - cluster-syncop: avoid duplicate unlock of<br>
inodelk/entrylk<br>
<a href="https://review.gluster.org/c/glusterfs/+/22385" rel="noreferrer" target="_blank">https://review.gluster.org/c/glusterfs/+/22385</a><br>
(Aravinda) - geo-rep: IPv6 support<br>
<a href="https://review.gluster.org/c/glusterfs/+/22488" rel="noreferrer" target="_blank">https://review.gluster.org/c/glusterfs/+/22488</a><br>
(Aravinda) - geo-rep: fix integer config validation<br>
<a href="https://review.gluster.org/c/glusterfs/+/22489" rel="noreferrer" target="_blank">https://review.gluster.org/c/glusterfs/+/22489</a><br>
<br>
Tracker bug status:<br>
(Ravi) - Bug 1693155 - Excessive AFR messages from gluster showing in<br>
RHGSWA.<br>
All patches are merged, but none of them adds the "Fixes" keyword; I<br>
assume this is an oversight and that the bug is fixed in this release.<br>
<br>
(Atin) - Bug 1698131 - multiple glusterfsd processes being launched for<br>
the same brick, causing transport endpoint not connected<br>
No work has occurred since the logs were uploaded to the bug; restarting<br>
the bricks (and possibly glusterd) is the existing workaround when the bug<br>
is hit. Moving this out of the tracker for 6.1.<br>
<br>
(Xavi) - Bug 1699917 - I/O error on writes to a disperse volume when<br>
replace-brick is executed<br>
This is a very recent bug (15th April); it does not seem to involve any<br>
critical data corruption or service availability issues, so we plan not<br>
to wait for the fix in 6.1.<br>
<br>
- Shyam<br>
On 4/6/19 4:38 AM, Atin Mukherjee wrote:<br>
> Hi Mohit,<br>
> <br>
> <a href="https://review.gluster.org/22495" rel="noreferrer" target="_blank">https://review.gluster.org/22495</a> should get into 6.1 as it’s a<br>
> regression. Can you please attach the respective bug to the tracker Ravi<br>
> pointed out?<br>
> <br>
> <br>
> On Sat, 6 Apr 2019 at 12:00, Ravishankar N <<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>> wrote:<br>
> <br>
> Tracker bug is <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1692394" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1692394</a>, in<br>
> case anyone wants to add blocker bugs.<br>
> <br>
> <br>
> On 05/04/19 8:03 PM, Shyam Ranganathan wrote:<br>
> > Hi,<br>
> ><br>
> > Expected tagging date for release-6.1 is on April, 10th, 2019.<br>
> ><br>
> > Please ensure required patches are backported and also are passing<br>
> > regressions and are appropriately reviewed for easy merging and<br>
> tagging<br>
> > on the date.<br>
> ><br>
> > Thanks,<br>
> > Shyam<br>
> <br>
> <br>
> -- <br>
> - Atin (atinm)<br>
> <br>
> <br>
_______________________________________________<br>
Gluster-devel mailing list<br>
<a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-devel</a></blockquote></div></div></div>
</blockquote></div></div></div></div></div>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail-m_-2092600727912839467gmail-m_5614842088724992572gmail-m_3842161542096429310m_2156697411616619764gmail_signature"><div dir="ltr">Pranith<br></div></div></div></div>
</blockquote></div></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Amar Tumballi (amarts)<br></div></div></div></div></div>