[Gluster-devel] NetBSD tests not running to completion.

Pranith Kumar Karampuri pkarampu at redhat.com
Sun Jan 10 10:53:41 UTC 2016



On 01/10/2016 02:04 PM, Pranith Kumar Karampuri wrote:
>
>
> On 01/10/2016 11:08 AM, Emmanuel Dreyfus wrote:
>> Pranith Kumar Karampuri <pkarampu at redhat.com> wrote:
>>
>>> tests/basic/afr/arbiter-statfs.t
>> I posted patches to fix this one (but it seems Jenkins is down? No
>> regression is running)
>>
>>> tests/basic/afr/self-heal.t
> It seems that in this run, self-heal.t and quota.t were running at the 
> same time. I am not sure how that can happen, so I am not going to 
> investigate this one further for now.
> [2016-01-08 07:58:55.6N]:++++++++++ 
> G_LOG:./tests/basic/afr/self-heal.t: TEST: 88 88 test -d 
> /d/backends/brick0/file ++++++++++
> [2016-01-08 07:58:55.6N]:++++++++++ 
> G_LOG:./tests/basic/afr/self-heal.t: TEST: 89 89 diff /dev/fd/63 
> /dev/fd/62 ++++++++++
> [2016-01-08 07:58:55.6N]:++++++++++ G_LOG:./tests/basic/quota.t: TEST: 
> 124 124 gluster --mode=script --wignore volume quota patchy 
> limit-usage /addbricktest/dir8 100MB ++++++++++
> [2016-01-08 07:58:55.6N]:++++++++++ 
> G_LOG:./tests/basic/afr/self-heal.t: TEST: 92 92 rm -rf 
> /mnt/glusterfs/0/addbricktest ++++++++++
> [2016-01-08 07:58:55.6N]:++++++++++ G_LOG:./tests/basic/quota.t: TEST: 
> 124 124 gluster --mode=script --wignore volume quota patchy 
> limit-usage /addbricktest/dir9 100MB ++++++++++
>
>>> tests/basic/afr/entry-self-heal.t
> This seems to have a bit of history. We have more data points showing it 
> keeps failing once in a while, and Michael posted a patch: 
> http://review.gluster.org/12938

I tried to look into 3 instances of this failure:
1) 
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12574/consoleFull

Same issue as above: two tests are running in parallel.
[2015-12-10 07:03:52.6N]:++++++++++ 
G_LOG:./tests/basic/afr/arbiter-statfs.t: TEST: 27 27 gluster 
--mode=script --wignore volume start patchy ++++++++++
[2015-12-10 07:03:06.6N]:++++++++++ 
G_LOG:./tests/basic/glusterd/heald.t: TEST: 58 58 [0-9][0-9]* 
get_shd_process_pid ++++++++++
[2015-12-10 07:03:58.047476]  : volume start patchy : SUCCESS
[2015-12-10 07:03:58.6N]:++++++++++ 
G_LOG:./tests/basic/afr/arbiter-statfs.t: TEST: 28 28 glusterfs 
--volfile-server=nbslave74.cloud.gluster.org --volfile-id=patchy 
/mnt/glusterfs/0 ++++++++++

2) 
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12569/consoleFull

Same issue: self-heald.t and entry-self-heal.t are executing in parallel:
[2015-12-10 05:00:05.6N]:++++++++++ 
G_LOG:./tests/basic/afr/entry-self-heal.t: TEST: 167 167 1 
afr_child_up_status patchy 0 ++++++++++
[2015-12-10 05:00:07.6N]:++++++++++ 
G_LOG:./tests/basic/afr/self-heald.t: TEST: 30 30 1 
afr_child_up_status_in_shd patchy 4 ++++++++++
[2015-12-10 05:00:08.401698] I [rpc-clnt.c:1834:rpc_clnt_reconfig] 
0-patchy-client-0: changing port to 49152 (from 0)
[2015-12-10 05:00:08.403526] I [MSGID: 114057] 
[client-handshake.c:1421:select_server_supported_programs] 
0-patchy-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)

3) 
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13285/consoleFull

Looks like the same issue again: quota.t and entry-self-heal.t are 
executing at the same time.

[2016-01-08 07:58:07.6N]:++++++++++ G_LOG:./tests/basic/quota.t: TEST: 
75 75 8.0MB quotausage /test_dir ++++++++++
[2016-01-08 07:58:08.294126] I [MSGID: 108006] 
[afr-common.c:4136:afr_local_init] 0-patchy-replicate-0: no subvolumes up
[2016-01-08 07:58:08.6N]:++++++++++ 
G_LOG:./tests/basic/afr/entry-self-heal.t: TEST: 280 280 rm -rf 
/d/backends/patchy0/.glusterfs/indices/xattrop/29bc252c-3f32-4e3e-b3a9-31478c04bb7f 
/d/backends/patchy0/.glusterfs/indices/xattrop/50adf186-8323-4f01-98fb-5621b8d9edee 
/d/backends/patchy0/.glusterfs/indices/xattrop/690c83b4-3e17-4558-a025-d08775742814 
/d/backends/patchy0/.glusterfs/indices/xattrop/952e518c-aaa3-4697-a2a7-a25c906635bc 
/d/backends/patchy0/.glusterfs/indices/xattrop/be2d1bee-a81c-4c63-8fcc-f06f0fc40e9b 
/d/backends/patchy0/.glusterfs/indices/xattrop/dfa00115-a11a-4b6e-93cd-b03e02ac8727 
/d/backends/patchy0/.glusterfs/indices/xattrop/e25b0f17-aac0-4f0f-b2d5-23f3a6493c0d 
/d/backends/patchy0/.glusterfs/indices/xattrop/fb2b4f42-fe9f-48dc-a8d3-1c4419166bf0 
/d/backends/patchy0/.glusterfs/indices/xattrop/fc89b498-fb47-4218-8304-693bbdc6bfc6 
/d/backends/patchy0/.glusterfs/indices/xattrop/xattrop-10fe0390-68cf-42f6-9838-ca243fe26635 
/d/backends/patchy0/.glusterfs/indices/xattrop/xattrop-f4d7f633-fec7-4cbc-829b-5e54c66f60b1 
/d/backends/patchy1/.glusterfs/indices/xattrop/1ed0b466-4f82-4e89-8aa0-d33f3cbec8bf 
/d/backends/patchy1/.glusterfs/indices/xattrop/29bc252c-3f32-4e3e-b3a9-31478c04bb7f 
/d/backends/patchy1/.glusterfs/indices/xattrop/338a302d-8e5a-4276-966d-3479aa3051ed 
/d/backends/patchy1/.glusterfs/indices/xattrop/4291d3cb-7c96-41d9-8cb7-25360398590b 
/d/backends/patchy1/.glusterfs/indices/xattrop/48f788c0-48b1-4072-97aa-e136c97c1d88 
/d/backends/patchy1/.glusterfs/indices/xattrop/50adf186-8323-4f01-98fb-5621b8d9edee 
/d/backends/patchy1/.glusterfs/indices/xattrop/592847dd-2592-4fab-bc6a-25a771b89e98 
/d/backends/patchy1/.glusterfs/indices/xattrop/773f509d-ba4e-47fc-869a-88b3946637ee 
/d/backends/patchy1/.glusterfs/indices/xattrop/8259e98d-e4d7-4e34-ad18-5ceb4990635e 
/d/backends/patchy1/.glusterfs/indices/xattrop/9062e2c0-3ca7-4b0b-8039-2fbc92c024f0 
/d/backends/patchy1/.glusterfs/indices/xattrop/abe439f0-6439-43aa-bca0-c8884b3b0903 
/d/backends/patchy1/.glusterfs/indices/xattrop/b2cb3267-8c48-4278-be43-846606d6884e 
/d/backends/patchy1/.glusterfs/indices/xattrop/b34b929d-9b73-4777-b04f-7a6afdd96675 
/d/backends/patchy1/.glusterfs/indices/xattrop/b86c5c5d-7a56-4cdd-9c48-c300915d42a4 
/d/backends/patchy1/.glusterfs/indices/xattrop/be2d1bee-a81c-4c63-8fcc-f06f0fc40e9b 
/d/backends/patchy1/.glusterfs/indices/xattrop/c92066bb-65e9-44ca-a72b-2015b5a8391a 
/d/backends/patchy1/.glusterfs/indices/xattrop/d7505495-502d-41f2-b238-846ca94dbc23 
/d/backends/patchy1/.glusterfs/indices/xattrop/dfa00115-a11a-4b6e-93cd-b03e02ac8727 
/d/backends/patchy1/.glusterfs/indices/xattrop/fee367e1-9f35-4c37-98f0-b6677f3fae76 
/d/backends/patchy1/.glusterfs/indices/xattrop/xattrop-17767941-d24e-4855-af93-e2d38cccfb2b 
/d/backends/patchy1/.glusterfs/indices/xattrop/xattrop-cc232f0f-b751-4ee7-9a14-680af97858a1 
/d/backends/patchy1/.glusterfs/indices/xattrop/xattrop-d7cd2726-3ddc-4fb6-830c-75979d385a8e 
++++++++++
[2016-01-08 07:58:10.6N]:++++++++++ 
G_LOG:./tests/basic/afr/entry-self-heal.t: TEST: 283 283 1 
afr_child_up_status patchy 1 ++++++++++
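For what it's worth, rather than eyeballing the console output, we could
confirm such interleavings mechanically by scanning the G_LOG lines and
flagging whenever the log switches back to a .t file it had already moved
away from. A rough sketch (the `find_interleavings` helper and the exact
regex are my own, not part of the test harness):

```python
import re

# Matches the test-script path in lines like:
# [2016-01-08 07:58:55.6N]:++++++++++ G_LOG:./tests/basic/quota.t: TEST: 124 ...
G_LOG_RE = re.compile(r"G_LOG:(\S+?\.t):")

def find_interleavings(lines):
    """Return (current_test, resumed_test) pairs where the log switches
    back to a .t file it had previously left, i.e. two tests alternate
    in the same log and were therefore running in parallel."""
    seen_done = set()   # tests the log already switched away from
    current = None
    overlaps = []
    for line in lines:
        m = G_LOG_RE.search(line)
        if not m:
            continue
        test = m.group(1)
        if test != current:
            if test in seen_done:
                overlaps.append((current, test))
            if current is not None:
                seen_done.add(current)
            current = test
    return overlaps
```

Running this over the excerpt from failure (1) or (3) above would report
the quota.t / self-heal.t alternation directly, which might make it easier
to grep many console logs for this pattern at once.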

Pranith
>
> Will be looking into this more now.
>> Those two are still to be investigated, and it seems
>> tests/basic/afr/split-brain-resolution.t is now reliably broken as
>> well.
> Will take a look at this today after entry-self-heal.t
>
> Pranith
>>
>>> tests/basic/quota-nfs.t
>> That one is marked as bad test and should not cause harm on spurious
>> failure as its result is ignored.
>>
>> I am trying to reproduce a spurious VM reboot during tests by looping on
>> the whole test suite on nbslave70, with reboot on panic disabled (it
>> will drop into the kernel debugger instead). No result so far.
>>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


