[Gluster-devel] Revert of 56e5fdae (SSL change) - why?

Joe Julian joe at julianfamily.org
Mon Jan 8 03:19:33 UTC 2018


The point is, I believe, that one shouldn't have to go digging through 
external resources to find out why a commit exists. Please ensure the 
commit message contains adequate, accurate information.


On 01/07/2018 07:11 PM, Atin Mukherjee wrote:
> Also, please refer to 
> http://lists.gluster.org/pipermail/gluster-devel/2017-December/054103.html 
> . Some tests, like ssl-cipher.t and trash.t, were failing frequently 
> in brick-multiplexing-enabled regression jobs. When I reverted this 
> patch, I could no longer reproduce any of those test failures.
>
> On Mon, Jan 8, 2018 at 8:36 AM, Nithya Balachandran 
> <nbalacha at redhat.com> wrote:
>
>
>
>     On 7 January 2018 at 18:54, Jeff Darcy <jeff at pl.atyp.us> wrote:
>
>         There's no explanation, or reference to one, in the commit
>         message. In the comments, there's a claim that seems a bit
>         exaggerated.
>
>         > This is causing almost all the regressions to fail.
>         durability-off.t is the most affected test.
>
>
>     This patch does seem to be the cause. Running this test in a loop
>     on my local system with the patch applied caused it to fail several
>     times (it is an intermittent failure). When I tried the same after
>     reverting the patch, the test passed every time. When it fails, it
>     is because the mount process does not connect to all bricks.
>
>
>         This patch was merged on December 13. Regressions have passed
>         many times since then. If almost all regressions have started
>         failing recently, I suggest we look for a more recent cause.
>         For example, if this was collateral damage from debugging the
>         dict-change issue, then the patch should be reinstated (which
>         I see has not been done). Alternatively, is the above supposed
>         to mean that this patch has been observed to cause
>         *occasional* failures in many other tests? If so, which tests
>         and when? There's no way to search for these in Gerrit or
>         Jenkins. If specific logs or core-dump analyses point toward
>         this conclusion and the subsequent action, then it would be
>         very helpful for those to be brought forward so we can debug
>         the underlying problem. That's likely to be hard enough
>         without trying to do it blind.
>         _______________________________________________
>         Gluster-devel mailing list
>         Gluster-devel at gluster.org
>         http://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>
>
>

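The test loop Nithya describes can be sketched roughly as follows (a minimal, generic sketch; CMD is a placeholder, not the actual invocation — in a glusterfs tree it would be something like running `prove` on the affected `.t` file):

```shell
# Generic sketch of running a test repeatedly to expose an intermittent
# failure. CMD is a placeholder (an assumption); substitute the real test
# command, e.g. prove on the relevant .t file in a glusterfs checkout.
CMD="${CMD:-true}"
failures=0
for i in $(seq 1 20); do
    if ! $CMD >/dev/null 2>&1; then
        failures=$((failures + 1))
        echo "run $i: FAIL"
    fi
done
echo "failures in 20 runs: $failures"
```

Any failing run is reported individually, so the failure rate of an intermittent test becomes visible across many iterations.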
