[Gluster-Maintainers] [Gluster-devel] Master branch lock down status
Ashish Pandey
aspandey at redhat.com
Wed Aug 8 07:01:20 UTC 2018
I think the problem behind this failure is the same one Shyam suspected for the other EC failure:
connections to the bricks are not being set up after killing the bricks and starting the volume using force.
./tests/basic/ec/ec-1468261.t
-----------------------------
Failure reported -
23:03:05 ok 34, LINENUM:79
23:03:05 not ok 35 Got "5" instead of "6", LINENUM:80
23:03:05 FAILED COMMAND: 6 ec_child_up_count patchy 0
23:03:05 not ok 36 Got "1298" instead of "^0$", LINENUM:83
23:03:05 FAILED COMMAND: ^0$ get_pending_heal_count patchy
23:03:05 ok 37, LINENUM:86
23:03:05 ok 38, LINENUM:87
23:03:05 not ok 39 Got "3" instead of "4", LINENUM:88
----------------------------
Looking at the glustershd log, I can see that there is an issue when the volume is started with force to restart the killed bricks.
The bricks are not getting connected.
I am seeing the following logs in glustershd:
==================================
[2018-08-06 23:05:45.077699] I [MSGID: 101016] [glusterfs3.h:739:dict_to_xdr] 0-dict: key 'trusted.ec.size' is would not be sent on wire in future [Invalid argument]
[2018-08-06 23:05:45.077724] I [MSGID: 101016] [glusterfs3.h:739:dict_to_xdr] 0-dict: key 'trusted.ec.dirty' is would not be sent on wire in future [Invalid argument]
[2018-08-06 23:05:45.077744] I [MSGID: 101016] [glusterfs3.h:739:dict_to_xdr] 0-dict: key 'trusted.ec.version' is would not be sent on wire in future [Invalid argument]
[2018-08-06 23:05:46.695719] I [rpc-clnt.c:2087:rpc_clnt_reconfig] 0-patchy-client-1: changing port to 49152 (from 0)
[2018-08-06 23:05:46.699766] W [MSGID: 114043] [client-handshake.c:1061:client_setvolume_cbk] 0-patchy-client-1: failed to set the volume [Resource temporarily unavailable]
[2018-08-06 23:05:46.699809] W [MSGID: 114007] [client-handshake.c:1090:client_setvolume_cbk] 0-patchy-client-1: failed to get 'process-uuid' from reply dict [Invalid argument]
[2018-08-06 23:05:46.699833] E [MSGID: 114044] [client-handshake.c:1096:client_setvolume_cbk] 0-patchy-client-1: SETVOLUME on remote-host failed: cleanup flag is set for xlator. Try again later [Resource temporarily unavailable]
[2018-08-06 23:05:46.699855] I [MSGID: 114051] [client-handshake.c:1201:client_setvolume_cbk] 0-patchy-client-1: sending CHILD_CONNECTING event
[2018-08-06 23:05:46.699920] I [MSGID: 114018] [client.c:2255:client_rpc_notify] 0-patchy-client-1: disconnected from patchy-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2018-08-06 23:05:50.702806] I [rpc-clnt.c:2087:rpc_clnt_reconfig] 0-patchy-client-1: changing port to 49152 (from 0)
[2018-08-06 23:05:50.706726] W [MSGID: 114043] [client-handshake.c:1061:client_setvolume_cbk] 0-patchy-client-1: failed to set the volume [Resource temporarily unavailable]
[2018-08-06 23:05:50.706783] W [MSGID: 114007] [client-handshake.c:1090:client_setvolume_cbk] 0-patchy-client-1: failed to get 'process-uuid' from reply dict [Invalid argument]
[2018-08-06 23:05:50.706808] E [MSGID: 114044] [client-handshake.c:1096:client_setvolume_cbk] 0-patchy-client-1: SETVOLUME on remote-host failed: cleanup flag is set for xlator. Try again later [Resource temporarily unavailable]
[2018-08-06 23:05:50.706831] I [MSGID: 114051] [client-handshake.c:1201:client_setvolume_cbk] 0-patchy-client-1: sending CHILD_CONNECTING event
[2018-08-06 23:05:50.706904] I [MSGID: 114018] [client.c:2255:client_rpc_notify] 0-patchy-client-1: disconnected from patchy-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2018-08-06 23:05:54.713490] I [rpc-clnt.c:2087:rpc_clnt_reconfig] 0-patchy-client-1: changing port to 49152 (from 0)
[2018-08-06 23:05:54.717417] W [MSGID: 114043] [client-handshake.c:1061:client_setvolume_cbk] 0-patchy-client-1: failed to set the volume [Resource temporarily unavailable]
[2018-08-06 23:05:54.717483] W [MSGID: 114007] [client-handshake.c:1090:client_setvolume_cbk] 0-patchy-client-1: failed to get 'process-uuid' from reply dict [Invalid argument]
[2018-08-06 23:05:54.717508] E [MSGID: 114044] [client-handshake.c:1096:client_setvolume_cbk] 0-patchy-client-1: SETVOLUME on remote-host failed: cleanup flag is set for xlator. Try again later [Resource temporarily unavailable]
[2018-08-06 23:05:54.717530] I [MSGID: 114051] [client-handshake.c:1201:client_setvolume_cbk] 0-patchy-client-1: sending CHILD_CONNECTING event
[2018-08-06 23:05:54.717605] I [MSGID: 114018] [client.c:2255:client_rpc_notify] 0-patchy-client-1: disconnected from patchy-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2018-08-06 23:05:58.204494]:++++++++++ G_LOG:./tests/basic/ec/ec-1468261.t: TEST: 83 ^0$ get_pending_heal_count patchy ++++++++++
There are many more such log entries in this time window.
============================
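By the way, the repeated SETVOLUME failures above ("cleanup flag is set for xlator. Try again later") are easy to spot by grepping the shd log. A quick sketch, assuming the default glustershd log location on the regression machines:

    # how many handshake attempts were rejected while the brick xlator was still
    # being cleaned up, and what the last few attempts looked like
    grep -c "cleanup flag is set" /var/log/glusterfs/glustershd.log
    grep "client_setvolume_cbk" /var/log/glusterfs/glustershd.log | tail -n 5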
Time at which the test at line no. 80 started:
[2018-08-06 23:05:38.652297]:++++++++++ G_LOG:./tests/basic/ec/ec-1468261.t: TEST: 80 6 ec_child_up_count patchy 0 ++++++++++
Time at which the next test, which checks the pending heal count, started:
[2018-08-06 23:05:58.204494]:++++++++++ G_LOG:./tests/basic/ec/ec-1468261.t: TEST: 83 ^0$ get_pending_heal_count patchy ++++++++++
The time gap is ~20 seconds, which is exactly how long we wait for all the connections to come up in our tests.
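To spell it out (rough arithmetic from the two G_LOG timestamps above):

    23:05:58.204494 - 23:05:38.652297 = 19.552 s  (~20 s)

which matches CHILD_UP_TIMEOUT (20 seconds, if I remember the default in tests/include.rc correctly), i.e. the EXPECT_WITHIN at line 80 simply timed out without the sixth child ever coming up.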
78 #Bring up the down bricks
79 TEST $CLI volume start $V0 force
80 EXPECT_WITHIN $CHILD_UP_TIMEOUT "6" ec_child_up_count $V0 0
81
82 #Wait for heal to complete
83 EXPECT_WITHIN $HEAL_TIMEOUT "^0$" get_pending_heal_count $V0
In those 20 seconds the bricks did not come up, which is why the test at line no. 80 failed. And because the bricks did not come up, the heal could not happen either, which led to the failure of the test at line no. 83.
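For completeness, this is roughly the manual equivalent of what lines 79-83 check, outside the test harness (just a sketch; the volume name and the log path are assumptions, not taken from the run):

    vol=patchy
    gluster volume start $vol force        # line 79: restart the killed bricks
    gluster volume status $vol             # every brick should show Online 'Y' with a port
    # once the handshake succeeds, the shd log gets "Connected to <vol>-client-N" entries;
    # in the failed run above we only see the CHILD_CONNECTING/disconnect loop instead
    grep "Connected to" /var/log/glusterfs/glustershd.log | tail -n 6
    gluster volume heal $vol info          # line 83 expects the pending entry count to drain to 0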
---
Ashish
----- Original Message -----
From: "Shyam Ranganathan" <srangana at redhat.com>
To: "GlusterFS Maintainers" <maintainers at gluster.org>, "Gluster Devel" <gluster-devel at gluster.org>
Sent: Wednesday, August 8, 2018 5:07:36 AM
Subject: [Gluster-devel] Master branch lock down status