[Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
Niels de Vos
ndevos at redhat.com
Thu Jul 10 12:41:20 UTC 2014
On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
> On 07/08/2014 01:54 PM, Avra Sengupta wrote:
> >In the test case, we are checking gluster snap status to see if all the
> >bricks are alive. One of the snap bricks fails to start up, and hence we
> >see the failure. The brick fails to bind, with an "Address already in
> >use" error. But looking closely, the same log also says "binding to
> >failed", with the address missing. So it might be trying to bind to the
> >wrong (or an empty) address.
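> >
> >For instance, one could confirm the port conflict on the regression
> >machine with a quick check like this (a diagnostic sketch only; none of
> >it is taken from this run):
> >
> >    # list listening TCP sockets with the processes that own them
> >    netstat -ntlp | grep gluster
> >    # cross-check the ports glusterd assigned to the running bricks
> >    gluster volume status all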
> >
> >Following are the brick logs for the same:
> >
> >[2014-07-07 11:20:15.662573] I
> >[rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service:
> >Configured rpc.outstanding-rpc-limit with value 64
> >[2014-07-07 11:20:15.662634] W [options.c:898:xl_opt_validate]
> >0-ad94478591fc41648c9674b10143e3d2-server: option 'listen-port' is
> >deprecated, preferred is 'transport.socket.listen-port', continuing with
> >correction
> >[2014-07-07 11:20:15.662758] E [socket.c:710:__socket_server_bind]
> >0-tcp.ad94478591fc41648c9674b10143e3d2-server: binding to failed:
> >Address already in use
> >[2014-07-07 11:20:15.662776] E [socket.c:713:__socket_server_bind]
> >0-tcp.ad94478591fc41648c9674b10143e3d2-server: Port is already in use
> >[2014-07-07 11:20:15.662795] W [rpcsvc.c:1531:rpcsvc_transport_create]
> >0-rpc-service: listening on transport failed
> >[2014-07-07 11:20:15.662810] W [server.c:920:init]
> >0-ad94478591fc41648c9674b10143e3d2-server: creation of listener failed
> >[2014-07-07 11:20:15.662821] E [xlator.c:425:xlator_init]
> >0-ad94478591fc41648c9674b10143e3d2-server: Initialization of volume
> >'ad94478591fc41648c9674b10143e3d2-server' failed, review your volfile again
> >[2014-07-07 11:20:15.662836] E [graph.c:322:glusterfs_graph_init]
> >0-ad94478591fc41648c9674b10143e3d2-server: initializing translator failed
> >[2014-07-07 11:20:15.662847] E [graph.c:525:glusterfs_graph_activate]
> >0-graph: init failed
> >[2014-07-07 11:20:15.664283] W [glusterfsd.c:1182:cleanup_and_exit] (-->
> >0-: received signum (0), shutting down
> >
> >Regards,
> >Avra
> >
> >On 07/08/2014 11:28 AM, Joseph Fernandes wrote:
> >>Hi Pranith,
> >>
> >>I am looking into this issue. Will keep you posted on the progress by EOD.
> >>
> >>Regards,
> >>~Joe
> >>
> >>----- Original Message -----
> >>From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> >>To: josferna at redhat.com
> >>Cc: "Gluster Devel" <gluster-devel at gluster.org>, "Rajesh Joseph"
> >><rjoseph at redhat.com>, "Sachin Pandit" <spandit at redhat.com>,
> >>asengupt at redhat.com
> >>Sent: Monday, July 7, 2014 8:42:24 PM
> >>Subject: Re: [Gluster-devel] regarding spurious failure
> >>tests/bugs/bug-1112559.t
> >>
> >>
> >>On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:
> >>>Joseph,
> >>> Any updates on this? It failed 5 regressions today.
> >>>http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
> >>>http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
> >>>http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
> >>>http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
> >>>http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull
> >>>
> >>One more:
> >>http://build.gluster.org/job/rackspace-regression-2GB/543/console
> >>
> >>Pranith
> >>
> >>>CC'ing some more folks who work on snapshots.
> >>>
>
> A lot of regression runs are failing because of this test.
> Given that feature freeze is around the corner, shall we provide a +1
> verified manually for those patchsets that fail this test?
I don't think that is easily possible. We would also need to remove the
-1 verified that the "Gluster Build System" sets, and I'm not sure how
to do that. Maybe it's better to disable (parts of) the test-case?
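
Perhaps resetting the vote works through the standard Gerrit ssh
interface, if permissions allow (untested; the change number and
patchset below are placeholders):

    # reset the Verified vote on a change via the Gerrit CLI
    ssh -p 29418 USERNAME@review.gluster.org \
        gerrit review --verified 0 CHANGE,PATCHSET

As for disabling the test, a sketch only (I have not checked
tests/bugs/bug-1112559.t for the exact assertion; the helper name in
the second snippet is illustrative):

    #!/bin/bash
    # at the top of tests/bugs/bug-1112559.t, right after sourcing the
    # harness, skip everything until the race is fixed -- assuming
    # include.rc provides the SKIP_TESTS helper that other tests use:
    . $(dirname $0)/../include.rc
    SKIP_TESTS
    exit 0

    # alternatively, comment out only the snap-status check that races
    # with the brick startup, i.e. a line of this shape:
    # EXPECT_WITHIN 20 "Y" some_snap_brick_online_check $V0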
Niels