[Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

Raghavendra Talur raghavendra.talur at gmail.com
Wed Jul 1 13:12:07 UTC 2015


On Wed, Jul 1, 2015 at 3:18 PM, Joseph Fernandes <josferna at redhat.com>
wrote:

> Hi All,
>
> TESTs 4 and 5 are failing, i.e. the following:
>
> TEST $CLI volume start $V0
> TEST $CLI volume attach-tier $V0 replica 2 $H0:$B0/${V0}$CACHE_BRICK_FIRST
> $H0:$B0/${V0}$CACHE_BRICK_LAST
>
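(Side note for anyone trying to reproduce: the test can be run on its own
from a glusterfs source tree, e.g.

    # from the source root; if I remember right, run-tests.sh accepts
    # individual test files:
    ./run-tests.sh tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t
    # or directly:
    prove -vf tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

assuming the machine is set up like the regression slaves.)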
> Glusterd Logs say:
> [2015-07-01 07:33:25.053412] I [rpc-clnt.c:965:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
> [2015-07-01 07:33:25.053851]  [run.c:190:runner_log] (-->
> /build/install/lib/libglusterfs.so.0(_gf_log_callingfn+0x240)[0x7fe8349bfb82]
> (--> /build/install/lib/libglusterfs.so.0(runner_log+0x192)[0x7fe834a29426]
> (-->
> /build/install/lib/glusterfs/3.8dev/xlator/mgmt/glusterd.so(glusterd_volume_start_glusterfs+0xae7)[0x7fe829e475d7]
> (-->
> /build/install/lib/glusterfs/3.8dev/xlator/mgmt/glusterd.so(glusterd_brick_start+0x151)[0x7fe829e514e3]
> (-->
> /build/install/lib/glusterfs/3.8dev/xlator/mgmt/glusterd.so(glusterd_start_volume+0xba)[0x7fe829ebd534]
> ))))) 0-: Starting GlusterFS: /build/install/sbin/glusterfsd -s
> slave26.cloud.gluster.org --volfile-id
> patchy.slave26.cloud.gluster.org.d-backends-patchy3 -p
> /var/lib/glusterd/vols/patchy/run/slave26.cloud.gluster.org-d-backends-patchy3.pid
> -S /var/run/gluster/e511d04af0bd91bfc3b030969b789d95.socket --brick-name
> /d/backends/patchy3 -l /var/log/glusterfs/bricks/d-backends-patchy3.log
> --xlator-option *-posix.glusterd-uuid=aff38c34-7744-4cc0-9aa4-a9fab5a71b2f
> --brick-port 49172 --xlator-option patchy-server.listen-port=49172
> [2015-07-01 07:33:25.070284] I [MSGID: 106144]
> [glusterd-pmap.c:269:pmap_registry_remove] 0-pmap: removing brick (null) on
> port 49172
> [2015-07-01 07:33:25.071022] E [MSGID: 106005]
> [glusterd-utils.c:4448:glusterd_brick_start] 0-management: Unable to start
> brick slave26.cloud.gluster.org:/d/backends/patchy3
> [2015-07-01 07:33:25.071053] E [MSGID: 106123]
> [glusterd-syncop.c:1416:gd_commit_op_phase] 0-management: Commit of
> operation 'Volume Start' failed on localhost
>
>
> The volume is 2x2:
> LAST_BRICK=3
> TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0..$LAST_BRICK}
>
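(On that create line: plain bash would leave {0..$LAST_BRICK} unexpanded,
since brace expansion happens before parameter expansion; it works here
because, if I remember the harness right, the TEST wrapper evals its
arguments. With LAST_BRICK=3 and V0=patchy it effectively runs:

    gluster volume create patchy replica 2 \
        $H0:$B0/patchy0 $H0:$B0/patchy1 \
        $H0:$B0/patchy2 $H0:$B0/patchy3

so bricks 0+1 form one replica pair, 2+3 the other, and patchy3 is the
fourth brick whose log is quoted below.)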
> Looking into the brick logs, the first 3 bricks are fine, but the 4th
> brick's log shows:
>
> [2015-07-01 07:33:25.056463] I [MSGID: 100030] [glusterfsd.c:2296:main]
> 0-/build/install/sbin/glusterfsd: Started running
> /build/install/sbin/glusterfsd version 3.8dev (args:
> /build/install/sbin/glusterfsd -s slave26.cloud.gluster.org --volfile-id
> patchy.slave26.cloud.gluster.org.d-backends-patchy3 -p
> /var/lib/glusterd/vols/patchy/run/slave26.cloud.gluster.org-d-backends-patchy3.pid
> -S /var/run/gluster/e511d04af0bd91bfc3b030969b789d95.socket --brick-name
> /d/backends/patchy3 -l /var/log/glusterfs/bricks/d-backends-patchy3.log
> --xlator-option *-posix.glusterd-uuid=aff38c34-7744-4cc0-9aa4-a9fab5a71b2f
> --brick-port 49172 --xlator-option patchy-server.listen-port=49172)
> [2015-07-01 07:33:25.064879] I [MSGID: 101190]
> [event-epoll.c:627:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2015-07-01 07:33:25.068992] I [MSGID: 101173]
> [graph.c:268:gf_add_cmdline_options] 0-patchy-server: adding option
> 'listen-port' for volume 'patchy-server' with value '49172'
> [2015-07-01 07:33:25.069034] I [MSGID: 101173]
> [graph.c:268:gf_add_cmdline_options] 0-patchy-posix: adding option
> 'glusterd-uuid' for volume 'patchy-posix' with value
> 'aff38c34-7744-4cc0-9aa4-a9fab5a71b2f'
> [2015-07-01 07:33:25.069313] I [MSGID: 115034]
> [server.c:392:_check_for_auth_option] 0-/d/backends/patchy3: skip format
> check for non-addr auth option auth.login./d/backends/patchy3.allow
> [2015-07-01 07:33:25.069316] I [MSGID: 101190]
> [event-epoll.c:627:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 2
> [2015-07-01 07:33:25.069330] I [MSGID: 115034]
> [server.c:392:_check_for_auth_option] 0-/d/backends/patchy3: skip format
> check for non-addr auth option
> auth.login.18b50c0d-38fb-4b49-bb5e-b203f4217223.password
> [2015-07-01 07:33:25.069580] I
> [rpcsvc.c:2210:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured
> rpc.outstanding-rpc-limit with value 64
> [2015-07-01 07:33:25.069647] W [MSGID: 101002]
> [options.c:952:xl_opt_validate] 0-patchy-server: option 'listen-port' is
> deprecated, preferred is 'transport.socket.listen-port', continuing with
> correction
> [2015-07-01 07:33:25.069736] E [socket.c:818:__socket_server_bind]
> 0-tcp.patchy-server: binding to  failed: Address already in use
> [2015-07-01 07:33:25.069750] E [socket.c:821:__socket_server_bind]
> 0-tcp.patchy-server: Port is already in use
> [2015-07-01 07:33:25.069763] W [rpcsvc.c:1599:rpcsvc_transport_create]
> 0-rpc-service: listening on transport failed
> [2015-07-01 07:33:25.069774] W [MSGID: 115045] [server.c:996:init]
> 0-patchy-server: creation of listener failed
> [2015-07-01 07:33:25.069788] E [MSGID: 101019] [xlator.c:423:xlator_init]
> 0-patchy-server: Initialization of volume 'patchy-server' failed, review
> your volfile again
> [2015-07-01 07:33:25.069798] E [MSGID: 101066]
> [graph.c:323:glusterfs_graph_init] 0-patchy-server: initializing translator
> failed
> [2015-07-01 07:33:25.069808] E [MSGID: 101176]
> [graph.c:669:glusterfs_graph_activate] 0-graph: init failed
> [2015-07-01 07:33:25.070183] W [glusterfsd.c:1214:cleanup_and_exit] (-->
> 0-: received signum (0), shutting down
>
>
> Looks like the brick was assigned a port which is already in use.
>

I saw the same error in another test failure, on a different patch set.
Here is the link:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11740/consoleFull

A port assigned by Glusterd to a brick turns out to be already in use when
the brick tries to bind to it. Have there been any recent changes in
Glusterd that could cause this?

Or is it a test infra problem?
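One way to narrow it down, next time this is caught live on a slave, would
be to check what is actually holding the port (a sketch, assuming the
usual tools are installed on the slaves):

    # who owns tcp port 49172 right now?
    netstat -tlnp 2>/dev/null | grep 49172
    # or equivalently:
    lsof -i tcp:49172

If it turns out to be a stale glusterfsd left behind by an earlier test,
that points at the infra/cleanup; if it is another brick of this very
volume, that points at glusterd handing out the same port twice. The
"removing brick (null) on port 49172" pmap message in the glusterd log
above makes the second theory look plausible.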



>
> The status of the volume in glusterd is not 'Started'; as a result the
> attach-tier command fails, i.e. the tiering rebalancer cannot run.
>
> [2015-07-01 07:33:25.275092] E [MSGID: 106301]
> [glusterd-op-sm.c:4086:glusterd_op_ac_send_stage_op] 0-management: Staging
> of operation 'Volume Rebalance' failed on localhost : Volume patchy needs
> to be started to perform rebalance
>
> But the volume is running in a crippled mode (the other three bricks are
> up), so mounting works fine,
>
> i.e. TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 $M0 works fine.
>
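(Which fits: with three of the four bricks up, the client graph can still
be served even though glusterd marked the start as failed. Since "gluster
volume status" will likely refuse while glusterd thinks the volume is not
started, a quick sanity check on the slave, sketched:

    # only three brick processes should be up for the volume;
    # the [g] keeps grep from matching itself
    ps aux | grep '[g]lusterfsd' | grep patchy

should show /d/backends/patchy0..2 but no process for patchy3.)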
> TESTs 9-12 failed because the attach-tier failed.
>
>
> Regards,
> Joe
>
> ----- Original Message -----
> From: "Joseph Fernandes" <josferna at redhat.com>
> To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Wednesday, July 1, 2015 1:59:41 PM
> Subject: Re: [Gluster-devel] spurious failures
> tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t
>
> Yep, will have a look.
>
> ----- Original Message -----
> From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> To: "Joseph Fernandes" <josferna at redhat.com>, "Gluster Devel" <
> gluster-devel at gluster.org>
> Sent: Wednesday, July 1, 2015 1:44:44 PM
> Subject: spurious failures
> tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t
>
> hi,
>
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/11757/consoleFull
> has the logs. Could you please look into it?
>
> Pranith
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra Talur