[Gluster-devel] Release 3.10 spurious(?) regression failures in the past week
Shyam
srangana at redhat.com
Mon Jan 30 20:00:47 UTC 2017
Hi,
The following is a list of spurious(?) regression failures on the 3.10
branch over the past week (from fstat.gluster.org).
Request component owners or other devs to take a look at the failures
and weed out any real issues.
Regression failures 3.10:
Summary:
1) https://build.gluster.org/job/centos6-regression/2960/consoleFull
./tests/basic/ec/ec-background-heals.t
2) https://build.gluster.org/job/centos6-regression/2963/consoleFull
<glusterd Core dumped>
./tests/basic/volume-snapshot.t
3) https://build.gluster.org/job/netbsd7-regression/2694/consoleFull
./tests/basic/afr/self-heald.t
4) https://build.gluster.org/job/centos6-regression/2954/consoleFull
./tests/basic/tier/legacy-many.t
5) https://build.gluster.org/job/centos6-regression/2858/consoleFull
./tests/bugs/bitrot/bug-1245981.t
6) https://build.gluster.org/job/netbsd7-regression/2637/consoleFull
./tests/basic/afr/self-heal.t
7) https://build.gluster.org/job/netbsd7-regression/2624/consoleFull
./tests/encryption/crypt.t
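In case it helps to reproduce any of these locally, a single .t can
usually be re-run directly from a built source tree with prove (a
minimal sketch; adjust the tree location and the test path to the
failure you are chasing):

    # re-run one regression test from the top of a built glusterfs tree
    cd /path/to/glusterfs
    prove -vf ./tests/basic/ec/ec-background-heals.t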
Thanks,
Shyam
Some details from the test logs follow, which may help accelerate
analysis for those familiar with these tests:
1) https://build.gluster.org/job/centos6-regression/2960/consoleFull
./tests/basic/ec/ec-background-heals.t
23:36:37 ok 42, LINENUM:61
23:36:37 not ok 43 Got "0" instead of "2", LINENUM:63
23:36:37 FAILED COMMAND: 2 get_pending_heal_count patchy
23:36:37 ok 44, LINENUM:64
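(The failing check above expects the pending heal count on the volume
to reach 2, but it reported 0. A rough manual equivalent of that check,
assuming the usual "Number of entries: N" lines in the heal info output,
would be something like:

    # sum pending heal entries across bricks for volume "patchy"
    gluster volume heal patchy info |
        awk '/Number of entries/ {sum += $NF} END {print sum + 0}'
)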
2) https://build.gluster.org/job/centos6-regression/2963/consoleFull
<glusterd Core dumped>
03:02:17 [11:02:17] Running tests in file ./tests/basic/volume-snapshot.t
03:02:38 allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
03:02:38 Falling back to native LVM signature detection.
03:02:40 allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
03:02:40 Falling back to native LVM signature detection.
03:02:42 allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
03:02:42 Falling back to native LVM signature detection.
03:02:56 umount2: Invalid argument
03:02:56 umount: /mnt/glusterfs/0: not mounted
03:02:57 umount2: Invalid argument
03:02:57 umount: /mnt/glusterfs/0: not mounted
03:02:58 umount2: Invalid argument
03:02:58 umount: /mnt/glusterfs/0: not mounted
03:02:59 umount2: Invalid argument
03:02:59 umount: /mnt/glusterfs/0: not mounted
03:03:00 umount2: Invalid argument
03:03:00 umount: /mnt/glusterfs/0: not mounted
03:03:03 -:1: parser error : Start tag expected, '<' not found
03:03:03 Connection failed. Please check if gluster daemon is operational.
03:03:03 ^
03:03:03 -:1: parser error : Start tag expected, '<' not found
03:03:03 Connection failed. Please check if gluster daemon is operational.
03:03:03 ^
03:03:04 -:1: parser error : Start tag expected, '<' not found
03:03:04 Connection failed. Please check if gluster daemon is operational.
03:03:04 ^
03:03:05 -:1: parser error : Start tag expected, '<' not found
03:03:05 Connection failed. Please check if gluster daemon is operational.
03:03:05 ^
03:03:10 -:1: parser error : Start tag expected, '<' not found
03:03:10 Connection failed. Please check if gluster daemon is operational.
03:03:10 ^
03:03:10 -:1: parser error : Start tag expected, '<' not found
03:03:10 Connection failed. Please check if gluster daemon is operational.
03:03:10 ^
03:03:10 volume delete: patchy2: failed: Cannot delete Volume patchy2 ,as it has 1 snapshots. To delete the volume, first delete all the snapshots under it.
03:03:23 ./tests/basic/volume-snapshot.t ..
03:03:23 1..49
03:03:23 ok 1, LINENUM:82
...
03:03:23 ok 17, LINENUM:110
03:03:23 Connection failed. Please check if gluster daemon is operational.
03:03:23 Connection failed. Please check if gluster daemon is operational.
03:03:23 not ok 18 Got "" instead of "Stopped", LINENUM:114
03:03:23 FAILED COMMAND: Stopped snapshot_status patchy_snap
03:03:23 not ok 19 Got "" instead of "Stopped", LINENUM:115
03:03:23 FAILED COMMAND: Stopped snapshot_status patchy2_snap
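(Since this run left a glusterd core, a quick first pass is usually to
pull the thread backtraces from it. A sketch, with placeholder paths
for the glusterd binary and the core file from this build:

    # dump all thread backtraces from the core for a first look
    gdb -batch -ex 'thread apply all bt' \
        /path/to/sbin/glusterd /path/to/glusterd-core > glusterd-bt.txt
)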
3) https://build.gluster.org/job/netbsd7-regression/2694/consoleFull
./tests/basic/afr/self-heald.t
20:34:28 ok 37, LINENUM:17
20:34:28 not ok 38 Got "2" instead of "1", LINENUM:136
20:34:28 FAILED COMMAND: 1 get_pending_heal_count patchy
...
20:34:28 ok 52, LINENUM:150
20:34:28 not ok 53 Got "0" instead of "1", LINENUM:151
20:34:28 FAILED COMMAND: 1 get_pending_heal_count patchy
20:34:28 ok 54, LINENUM:152
...
20:34:28 ok 67, LINENUM:164
20:34:28 not ok 68 , LINENUM:168
20:34:28 FAILED COMMAND: test 0 -eq 2 -o 0 -eq 4
20:34:28 ok 69, LINENUM:169
4) https://build.gluster.org/job/centos6-regression/2954/consoleFull
./tests/basic/tier/legacy-many.t
21:20:10 ok 18, LINENUM:76
21:20:10 not ok 19 Got "1" instead of "0", LINENUM:80
21:20:10 FAILED COMMAND: 0 check_counters 15 0
5) https://build.gluster.org/job/centos6-regression/2858/consoleFull
./tests/bugs/bitrot/bug-1245981.t
11:07:55 getfattr: Removing leading '/' from absolute path names
11:07:55 /d/backends/patchy0/filezero: trusted.bit-rot.signature: No such attribute
11:07:55 getfattr: Removing leading '/' from absolute path names
11:07:55 getfattr: Removing leading '/' from absolute path names
11:07:55 ./tests/bugs/bitrot/bug-1245981.t ..
...
11:07:55 ok 9, LINENUM:33
11:07:55 not ok 10 , LINENUM:50
11:07:55 FAILED COMMAND: getfattr -m . -n trusted.bit-rot.signature /d/backends/patchy0/filezero
11:07:55 ok 11, LINENUM:53
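(The failure here is the trusted.bit-rot.signature xattr not yet being
present on the brick file when the check ran. Since the signer stamps
files asynchronously, a manual check would typically poll for the xattr
for a while before concluding it is missing; an illustrative sketch:

    # poll up to ~30s for the signature xattr on the brick file
    for i in $(seq 1 30); do
        getfattr -n trusted.bit-rot.signature -e hex \
            /d/backends/patchy0/filezero 2>/dev/null && break
        sleep 1
    done
)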
6) https://build.gluster.org/job/netbsd7-regression/2637/consoleFull
./tests/basic/afr/self-heal.t
07:46:29 mkdir: /mnt/glusterfs/0/jkl/mno: Socket is not connected
07:46:30 chown: /mnt/glusterfs/0/def/ghi/file2.txt: Socket is not connected
07:46:40 ls: /d/backends/brick0/def/ghi/file1.txt: No such file or directory
07:46:40 ls: /d/backends/brick0/def/ghi/file2.txt: No such file or directory
07:46:40 ls: /d/backends/brick0/jkl/mno/file.txt: No such file or directory
07:48:33 getfattr: Removing leading '/' from absolute path names
07:48:33 getfattr: Removing leading '/' from absolute path names
07:48:33 ./tests/basic/afr/self-heal.t ..
07:48:33 1..145
07:48:33 ok 1, LINENUM:18
...
07:48:33 not ok 18 , LINENUM:44
07:48:33 FAILED COMMAND: mkdir -p /mnt/glusterfs/0/def/ghi /mnt/glusterfs/0/jkl/mno
07:48:33 not ok 19 , LINENUM:45
07:48:33 FAILED COMMAND: dd if=/dev/urandom of=/mnt/glusterfs/0/def/ghi/file1.txt bs=1024k count=2
07:48:33 not ok 20 , LINENUM:46
07:48:33 FAILED COMMAND: dd if=/dev/urandom of=/mnt/glusterfs/0/def/ghi/file2.txt bs=1024k count=3
07:48:33 not ok 21 , LINENUM:47
07:48:33 FAILED COMMAND: dd if=/dev/urandom of=/mnt/glusterfs/0/jkl/mno/file.txt bs=1024k count=4
07:48:33 not ok 22 , LINENUM:48
07:48:33 FAILED COMMAND: chown 36:36 /mnt/glusterfs/0/def/ghi/file2.txt
...
07:48:33 ok 29, LINENUM:56
07:48:33 not ok 30 , LINENUM:60
07:48:33 FAILED COMMAND: ls /d/backends/brick0/def/ghi/file1.txt
07:48:33 not ok 31 , LINENUM:61
07:48:33 FAILED COMMAND: ls /d/backends/brick0/def/ghi/file2.txt
07:48:33 not ok 32 , LINENUM:62
07:48:33 FAILED COMMAND: ls /d/backends/brick0/jkl/mno/file.txt
07:48:33 ok 33, LINENUM:63
...
07:48:33 ./tests/basic/afr/self-heal.t: 1 new core files
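(The "Socket is not connected" errors on the mount suggest a client-side
disconnect before the later checks ran, so the mount log is probably the
first place to look. A sketch; the log path derived from the mount point
is an assumption about this setup:

    # look for disconnect messages around the failure window
    grep -nE 'disconnect|Transport endpoint is not connected' \
        /var/log/glusterfs/mnt-glusterfs-0.log | tail -n 20

This run also reported a new core file for the test, so the same gdb
pass as in 2) applies here as well.)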
7) https://build.gluster.org/job/netbsd7-regression/2624/consoleFull
./tests/encryption/crypt.t
22:05:09 ok 18, LINENUM:42
22:05:09 not ok 19 , LINENUM:48
22:05:09 FAILED COMMAND: ./tests/encryption/frag /mnt/glusterfs/0/testfile /tmp/patchy-goodfile 262144 500
22:05:09 ok 20, LINENUM:52