[Gluster-devel] netbsd regression update : cdc.t

Atin Mukherjee amukherj at redhat.com
Mon May 4 03:50:45 UTC 2015



On 05/03/2015 11:26 PM, Atin Mukherjee wrote:
> 
> 
> On 05/02/2015 08:52 PM, Emmanuel Dreyfus wrote:
>> Atin Mukherjee <amukherj at redhat.com> wrote:
>>
>>> Although I couldn't reproduce the cdc.t failure, georep-setup.t failed
>>> consistently, and the glusterd backtrace showed that it hangs on gverify.sh.
>>
>> That suggests the script itself blocks forever. Running it with -x may
>> be insightful.
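>> For example (a sketch; the path is the script's source-tree location
>> and the arguments are placeholders):
>>
>>   nbslave78# sh -x ./geo-replication/src/gverify.sh <args...> \
>>       2> /tmp/gverify.trace
>>
>> Alternatively, add "set -x" near the top of gverify.sh so the trace is
>> captured even when glusterd invokes it.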
>>
>>> If you happen to see the cdc.t failure again, please ring a bell :)
>>
>> It is 100% reproducible on nbslave78:
>>
>> nbslave78# cd /autobuild//glusterfs/
>> nbslave78# ./run-tests.sh  -f ./tests/basic/cdc.t
> On nbslave78, the test case gets stuck in volume stop's
> gd_syncop_mgmt_brick_op () with the following bt:
> 
> (gdb) t a a bt
> 
> Thread 6 (LWP 2):
> #0  0xbb35d7d7 in _sys___nanosleep50 () from /usr/lib/libc.so.12
> #1  0xbb688aa7 in __nanosleep50 () from /usr/lib/libpthread.so.1
> #2  0xbb75f422 in gf_timer_proc () from
> /autobuild/install/lib/libglusterfs.so.0
> #3  0xbb68cbca in ?? () from /usr/lib/libpthread.so.1
> #4  0xbb3acbb0 in __mknod50 () from /usr/lib/libc.so.12
> #5  0xbb192000 in ?? ()
> Backtrace stopped: previous frame identical to this frame (corrupt stack?)
> 
> Thread 5 (LWP 3):
> #0  0xbb3ac8b7 in ____sigtimedwait50 () from /usr/lib/libc.so.12
> 
> Thread 4 (LWP 4):
> #0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
> 
> Thread 3 (LWP 5):
> #0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
> 
> Thread 2 (LWP 6):
> #0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
> 
> Thread 1 (LWP 1):
> #0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
> (gdb) t 3
> [Switching to thread 3 (LWP 5)]
> #0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
> (gdb) bt
> #0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
> 
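> For reference, symbols for the truncated frames might be recoverable by
> pointing gdb at the autobuild libraries (a sketch, assuming they were
> built with debug info):
> 
>   (gdb) set solib-search-path /autobuild/install/lib
>   (gdb) sharedlibrary
>   (gdb) thread apply all bt full
> 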
> Surprisingly, if I take a different commit as head in the same VM
> (/home/jenkins/root/workspace/rackspace-netbsd7-regression-triggered),
> it is not reproducible. So my initial suspicion is the delta between
> these two heads.
> 
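> One way to pin the delta down (a sketch; the commit ids are
> placeholders, and it assumes run-tests.sh exits non-zero on failure):
> 
>   nbslave78# git bisect start <bad-head> <good-head>
>   nbslave78# git bisect run ./run-tests.sh -f ./tests/basic/cdc.t
> 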
> CCing Joseph to cross-check, as I see some of his patches in the delta.
I see the following log from the brick process:

[2015-05-04 03:43:50.309769] E [socket.c:823:__socket_server_bind]
4-tcp.patchy-server: binding to  failed: Address already in use
[2015-05-04 03:43:50.309815] E [socket.c:826:__socket_server_bind]
4-tcp.patchy-server: Port is already in use
[2015-05-04 03:43:50.309871] W [rpcsvc.c:1602:rpcsvc_transport_create]
4-rpc-service: listening on transport failed
[2015-05-04 03:43:50.309921] W [MSGID: 115045] [server.c:1001:init]
4-patchy-server: creation of listener failed
[2015-05-04 03:43:50.309972] E [xlator.c:426:xlator_init]
0-patchy-server: Initialization of volume 'patchy-server' failed, review
your volfile again
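A quick way to confirm which process is still holding the brick port
(sketch; the port number is a placeholder, take the real one from the
log or from gluster volume status):

  nbslave78# netstat -an | grep LISTEN | grep 49152
  nbslave78# fstat | grep 49152
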
>>
>> ... GlusterFS Test Framework ...
>>
>>
>> The following required tools are missing:
>>
>>   * dbench
>>
>> Running tests in file ./tests/basic/cdc.t
>> [15:17:13] ./tests/basic/cdc.t .. 52/55 
>> not ok 52 
>> not ok 53 Got "Started" instead of "Stopped"
>> volume delete: patchy: failed: Another transaction is in progress for
>> patchy. Please try again after sometime.
>> not ok 54 
>> [15:17:13] ./tests/basic/cdc.t .. 55/55 not ok 55 
>> [15:17:13] ./tests/basic/cdc.t .. Failed 4/55 subtests 
>> [15:19:50]
>>
>> Test Summary Report
>> -------------------
>> ./tests/basic/cdc.t (Wstat: 0 Tests: 55 Failed: 4)
>>   Failed tests:  52-55
>> Files=1, Tests=55, 157 wallclock secs ( 0.03 usr  0.05 sys +  4.01 cusr
>> 11.98 csys = 16.07 CPU)
>> Result: FAIL
>> Failed tests ./tests/basic/cdc.t
>>
> 

-- 
~Atin

