[Gluster-Maintainers] [Gluster-devel] Release 6.1: Expected tagging on April 10th
Sankarshan Mukhopadhyay
sankarshan.mukhopadhyay at gmail.com
Tue Apr 16 17:50:45 UTC 2019
On Tue, Apr 16, 2019 at 10:27 PM Atin Mukherjee <amukherj at redhat.com> wrote:
> On Tue, Apr 16, 2019 at 9:19 PM Atin Mukherjee <amukherj at redhat.com> wrote:
>> On Tue, Apr 16, 2019 at 7:24 PM Shyam Ranganathan <srangana at redhat.com> wrote:
>>>
>>> Status: Tagging pending
>>>
>>> Waiting on patches:
>>> (Kotresh/Atin) - glusterd: fix loading ctime in client graph logic
>>> https://review.gluster.org/c/glusterfs/+/22579
>>
>>
>> The regression doesn't pass for the mainline patch. I believe master is broken now: with the latest master, sdfs-sanity.t always fails. We either need to fix it or mark it as a bad test.
>
>
> commit 3883887427a7f2dc458a9773e05f7c8ce8e62301 (HEAD)
> Author: Pranith Kumar K <pkarampu at redhat.com>
> Date: Mon Apr 1 11:14:56 2019 +0530
>
> features/locks: error-out {inode,entry}lk fops with all-zero lk-owner
>
> Problem:
> Sometimes developers forget to assign an lk-owner for an
> inodelk/entrylk/lk fop before winding it. The locks xlator
> currently allows this, which lets multiple threads in the same
> client acquire locks on the inode, because the lk-owner and the
> transport are the same. Isolation through locks therefore can't
> be achieved.
>
> Fix:
> Disallow locks with lk-owner zero.
>
> fixes bz#1624701
> Change-Id: I1c816280cffd150ebb392e3dcd4d21007cdd767f
> Signed-off-by: Pranith Kumar K <pkarampu at redhat.com>
>
> With the above commit, sdfs-sanity.t started failing. But when I looked at the last regression vote at https://build.gluster.org/job/centos7-regression/5568/consoleFull, I saw it had voted back positive. Alarm bells rang when I noticed the overall regression took less than 2 hours, and when I opened the regression link I saw the test had actually failed, yet the job still voted +1 at Gerrit.
>
> Deepshika - This is a serious CI bug and it has to be addressed at the earliest. Please take a look at https://build.gluster.org/job/centos7-regression/5568/consoleFull and investigate why the regression vote wasn't negative.
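
A minimal sketch of the kind of guard being asked for here, assuming (hypothetically, this is not the actual Gluster CI code) that the vote step can see both the test runner's exit status and the console log, and that failed .t tests leave a "Result: FAIL" line in that log:

#!/usr/bin/env python3
# Hypothetical sketch, not the actual Gluster CI code: decide the Gerrit
# vote from both the runner's exit status and the console log, so that a
# lost exit status cannot silently turn a failed run into a +1.
# Assumed (not from this thread): failed .t tests leave a line such as
# "Result: FAIL" in the console log.

import re
import sys


def tests_failed_in_log(console_log: str) -> bool:
    """Return True if the (assumed) failure marker appears in the log."""
    return bool(re.search(r"^Result:\s*FAIL", console_log, re.MULTILINE))


def decide_vote(runner_exit_code: int, console_log: str) -> int:
    """+1 only when the exit status and the log agree that the run passed."""
    if runner_exit_code != 0:
        return -1
    if tests_failed_in_log(console_log):
        # The runner reported success but the log shows a failed test:
        # exactly the mismatch seen in job 5568, so refuse to vote +1.
        return -1
    return 1


if __name__ == "__main__":
    log_path, exit_code = sys.argv[1], int(sys.argv[2])
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        vote = decide_vote(exit_code, fh.read())
    print(f"gerrit vote: {vote:+d}")
    sys.exit(0 if vote > 0 else 1)

The design point is simply that the vote is derived from two independent signals, so an exit status swallowed by a wrapper cannot on its own produce a +1.
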
Atin, we (Deepshikha and I) agree with your assessment. This is the
kind of situation that erodes trust in our build pipeline. It is the
result of a minor change introduced to fix the recurring non-voting
issue we had been observing, and it should not have slipped through.
We will review a random sample of jobs to make sure we catch any such
incidents that reduce the value of the pipeline, and we will review
the change to the scripts, which have since also received the fix for
the issue that led to this situation in the first place.
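
As a rough illustration of the sampling spot-check described above, here is a sketch under stated assumptions: the standard Jenkins JSON API on build.gluster.org is reachable anonymously and failed .t tests are marked "Result: FAIL" in the console text. The job URL is taken from the thread; everything else is hypothetical and not the actual audit tooling.

#!/usr/bin/env python3
# Hypothetical sketch of a spot-check: sample a few recent builds of the
# regression job and flag any build recorded as SUCCESS whose console log
# still contains test-failure markers. Assumes the standard Jenkins JSON
# API and the "Result: FAIL" marker; not the real audit tooling.

import random
import re

import requests

JOB_URL = "https://build.gluster.org/job/centos7-regression"
FAILURE_MARKER = re.compile(r"^Result:\s*FAIL", re.MULTILINE)


def sample_builds(sample_size=5):
    """Pick a random sample of recent builds via the Jenkins JSON API."""
    api = requests.get(f"{JOB_URL}/api/json",
                       params={"tree": "builds[number,result,url]"},
                       timeout=30).json()
    builds = api.get("builds", [])
    return random.sample(builds, min(sample_size, len(builds)))


def audit(sample_size=5):
    """Print every sampled build whose verdict disagrees with its log."""
    for build in sample_builds(sample_size):
        console = requests.get(f"{build['url']}consoleText", timeout=60).text
        if build["result"] == "SUCCESS" and FAILURE_MARKER.search(console):
            print(f"MISMATCH: build #{build['number']} recorded SUCCESS "
                  f"but its log shows failed tests: {build['url']}")


if __name__ == "__main__":
    audit()

Run periodically, a mismatch report of this sort would have flagged job 5568 without anyone having to notice the unusually short runtime by hand.
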