[Gluster-devel] Tests that fail with multiplexing turned on
Atin Mukherjee
amukherj at redhat.com
Tue May 2 16:58:47 UTC 2017
On Tue, May 2, 2017 at 2:36 AM, Jeff Darcy <jeff at pl.atyp.us> wrote:
> Since the vast majority of our tests run without multiplexing, I'm going
> to start running regular runs of all tests with multiplexing turned on.
> You can see the patch here:
>
> https://review.gluster.org/#/c/17145/
>
> There are currently two tests that fail with multiplexing. Note that
> these are all tests that passed as of when multiplexing was introduced.
> I don't know about these specific tests, but most tests had passed with
> multiplexing turned on *many times* - sometimes literally over a hundred,
> because I did more runs than that during development. These are tests
> that have been broken since then, because without regular tests the
> people making changes could not have known how their changes interact
> with multiplexing.
>
> 19:14:41 ./tests/bugs/glusterd/bug-1367478-volume-start-validation-after-glusterd-restart.t ..
> 19:14:41 not ok 17 Got "0" instead of "1", LINENUM:37
> 19:14:41 FAILED COMMAND: 1 brick_up_status_1 patchy1 127.1.1.2 /d/backends/2/patchy12
>
This is one of the problems we are trying to address through
https://review.gluster.org/#/c/17101, and this test was broken by
https://review.gluster.org/16866.
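
For anyone picking this up, the assertion that fails at LINENUM:37 is an
EXPECT_WITHIN (or plain EXPECT) on brick_up_status_1. Roughly - this is my
approximation of the helper in tests/volume.rc and of the test's variables,
not the exact code - the check looks like this:

    # Print 1 once <host>:<brick> of <vol> reports online, queried via the
    # first peer's CLI. "gluster volume status <vol> <brick> detail" has an
    # "Online : Y/N" field per brick; grep -cw prints 1 for Y, 0 otherwise.
    function brick_up_status_1 {
            local vol=$1
            local host=$2
            local brick=$3
            $CLI_1 volume status $vol $host:$brick detail |
                    grep '^Online' | grep -cw 'Y'
    }

    # The test restarts glusterd on the second peer and then expects the
    # brick to come back online within the usual timeout:
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status_1 $V1 $H2 $B2/${V1}2

With multiplexing on, the brick ends up reported as not online after the
glusterd restart, which is the sort of behaviour 17101 is trying to address.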
> 20:52:10 ./tests/features/trash.t ..
> 20:52:10 not ok 53 Got "2" instead of "1", LINENUM:221
> 20:52:10 FAILED COMMAND: 1 online_brick_count
> 20:52:10 ok 54, LINENUM:223
> 20:52:10 ok 55, LINENUM:226
> 20:52:10 not ok 56 Got "3" instead of "2", LINENUM:227
> 20:52:10 FAILED COMMAND: 2 online_brick_count
> 20:52:10 ok 57, LINENUM:228
> 20:52:10 ok 58, LINENUM:233
> 20:52:10 ok 59, LINENUM:236
> 20:52:10 ok 60, LINENUM:237
> 20:52:10 not ok 61 , LINENUM:238
> 20:52:10 FAILED COMMAND: [ -e /mnt/glusterfs/0/abc -a ! -e /mnt/glusterfs/0/.trashcan ]
>
IMO, nothing specific to brick-mux here. The online_brick_count function has
a flaw: it looks at the pidfiles of all the processes instead of only the
bricks. In this test one of the volumes is replicate, so shd was up and you'd
see one additional pidfile in place. This was actually caught by Mohit while
we were (and still are) working on patch 17101. The last failure (test 61)
still needs to be looked at.
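
To be concrete about the online_brick_count flaw: going by the description
above, the helper effectively counts every pidfile glusterd has written, so
on a replicate volume glustershd's pidfile bumps the count by one (hence the
off-by-one "2" instead of "1" and "3" instead of "2"). A rough sketch of the
current behaviour and of one possible fix (my approximation of the workdir
layout and of a direction 17101 could take, not the actual diff):

    # Current behaviour (approximate): count every pidfile under the
    # glusterd working directory - bricks, glustershd, nfs, quotad, ...
    function online_brick_count {
            find $GLUSTERD_WORKDIR -name '*.pid' | wc -l
    }

    # One possible fix: count only the pidfiles that belong to bricks,
    # which glusterd keeps under vols/<volname>/run/; glustershd's pidfile
    # lives elsewhere, so it would no longer inflate the count.
    function online_brick_count {
            find $GLUSTERD_WORKDIR/vols/*/run -name '*.pid' 2>/dev/null | wc -l
    }

Even that only counts pidfiles rather than live bricks, so whatever 17101
ends up doing will still need to check that the processes behind them are
actually up.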
>
> Do we have any volunteers to look into these? I looked at the first one
> a bit and didn't find any obvious clues; I haven't looked at the second.