[Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
Avra Sengupta
asengupt at redhat.com
Tue Jul 8 08:26:11 UTC 2014
Adding rhs-gabbar
On 07/08/2014 01:54 PM, Avra Sengupta wrote:
> In the test case, we check gluster snapshot status to see whether all
> the bricks are alive. One of the snapshot bricks fails to start up, and
> hence we see the failure. The brick fails to bind, with an "Address
> already in use" error. But if we look closely, the same log says
> "binding to  failed", with the address missing. So it might be trying
> to bind to a wrong (or empty) address.
>
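> As an illustration, here is a minimal standalone sketch (not the
> GlusterFS source; the port number, the log-format string and the empty
> identifier are assumptions for illustration only) of how binding a
> second listener to an occupied port fails with EADDRINUSE, and how an
> empty peer-identifier string handed to the log call produces exactly
> the truncated "binding to  failed" message seen in the logs below:
>
>     #include <stdio.h>
>     #include <string.h>
>     #include <errno.h>
>     #include <unistd.h>
>     #include <arpa/inet.h>
>     #include <sys/socket.h>
>
>     int main (void)
>     {
>             struct sockaddr_in addr;
>
>             memset (&addr, 0, sizeof (addr));
>             addr.sin_family      = AF_INET;
>             addr.sin_addr.s_addr = htonl (INADDR_LOOPBACK);
>             addr.sin_port        = htons (49152); /* arbitrary test port */
>
>             int s1 = socket (AF_INET, SOCK_STREAM, 0);
>             int s2 = socket (AF_INET, SOCK_STREAM, 0);
>
>             /* first listener takes the port */
>             bind (s1, (struct sockaddr *) &addr, sizeof (addr));
>             listen (s1, 1);
>
>             /* hypothetical: an identifier that was never filled in */
>             const char *identifier = "";
>
>             /* second bind to the same port fails with EADDRINUSE, and
>              * the empty identifier prints "binding to  failed: ..." */
>             if (bind (s2, (struct sockaddr *) &addr, sizeof (addr)) == -1)
>                     fprintf (stderr, "binding to %s failed: %s\n",
>                              identifier, strerror (errno));
>
>             close (s1);
>             close (s2);
>             return 0;
>     }
>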
> Following are the brick logs for the same:
>
> [2014-07-07 11:20:15.662573] I
> [rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service:
> Configured rpc.outstanding-rpc-limit with value 64
> [2014-07-07 11:20:15.662634] W [options.c:898:xl_opt_validate]
> 0-ad94478591fc41648c9674b10143e3d2-server: option 'listen-port' is
> deprecated, preferred is 'transport.socket.listen-port', continuing
> with correction
> [2014-07-07 11:20:15.662758] E [socket.c:710:__socket_server_bind]
> 0-tcp.ad94478591fc41648c9674b10143e3d2-server: binding to failed:
> Address already in use
> [2014-07-07 11:20:15.662776] E [socket.c:713:__socket_server_bind]
> 0-tcp.ad94478591fc41648c9674b10143e3d2-server: Port is already in use
> [2014-07-07 11:20:15.662795] W [rpcsvc.c:1531:rpcsvc_transport_create]
> 0-rpc-service: listening on transport failed
> [2014-07-07 11:20:15.662810] W [server.c:920:init]
> 0-ad94478591fc41648c9674b10143e3d2-server: creation of listener failed
> [2014-07-07 11:20:15.662821] E [xlator.c:425:xlator_init]
> 0-ad94478591fc41648c9674b10143e3d2-server: Initialization of volume
> 'ad94478591fc41648c9674b10143e3d2-server' failed, review your volfile
> again
> [2014-07-07 11:20:15.662836] E [graph.c:322:glusterfs_graph_init]
> 0-ad94478591fc41648c9674b10143e3d2-server: initializing translator failed
> [2014-07-07 11:20:15.662847] E [graph.c:525:glusterfs_graph_activate]
> 0-graph: init failed
> [2014-07-07 11:20:15.664283] W [glusterfsd.c:1182:cleanup_and_exit]
> (--> 0-: received signum (0), shutting down
>
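> Not necessarily what is happening here, but for reference: if the port
> were merely lingering in TIME_WAIT from a previous brick process,
> setting SO_REUSEADDR before bind() would be the usual remedy; a clash
> with a live listener (as a port-allocation race between bricks would
> produce) still fails with EADDRINUSE either way. A minimal sketch:
>
>     #include <sys/socket.h>
>
>     /* allow rebinding a local port that is stuck in TIME_WAIT */
>     static int set_reuseaddr (int sock)
>     {
>             int one = 1;
>
>             return setsockopt (sock, SOL_SOCKET, SO_REUSEADDR,
>                                &one, sizeof (one));
>     }
>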
> Regards,
> Avra
>
> On 07/08/2014 11:28 AM, Joseph Fernandes wrote:
>> Hi Pranith,
>>
>> I am looking into this issue. Will keep you posted on the progress by EOD.
>>
>> Regards,
>> ~Joe
>>
>> ----- Original Message -----
>> From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
>> To: josferna at redhat.com
>> Cc: "Gluster Devel" <gluster-devel at gluster.org>, "Rajesh Joseph"
>> <rjoseph at redhat.com>, "Sachin Pandit" <spandit at redhat.com>,
>> asengupt at redhat.com
>> Sent: Monday, July 7, 2014 8:42:24 PM
>> Subject: Re: [Gluster-devel] regarding spurious failure
>> tests/bugs/bug-1112559.t
>>
>>
>> On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:
>>> Joseph,
>>> Any updates on this? It failed 5 regressions today.
>>> http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
>>> http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
>>> http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
>>> http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
>>> http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull
>>>
>> One more:
>> http://build.gluster.org/job/rackspace-regression-2GB/543/console
>>
>> Pranith
>>
>>> CC some more folks who work on snapshot.
>>>
>>> Pranith
>>>
>>> On 07/05/2014 11:19 AM, Pranith Kumar Karampuri wrote:
>>>> hi Joseph,
>>>> The test above failed on a documentation patch, so it has got to
>>>> be a spurious failure.
>>>> Check
>>>> http://build.gluster.org/job/rackspace-regression-2GB-triggered/150/consoleFull
>>>> for more information.
>>>>
>>>> Pranith
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel