[Gluster-users] gluster 3.7.11 qemu+libgfapi problem
Dmitry Melekhov
dm at belkam.com
Tue Apr 26 14:41:16 UTC 2016
On 26.04.2016 18:37, FNU Raghavendra Manjunath wrote:
>
> Hi,
>
> Can you please check if glusterd on the node "192.168.22.28" is running?
>
> "service glusterd status" or "ps aux | grep glusterd".
>
> Regards,
> Raghavendra
>
Hello!
It is definitely not running - as I said, I turned off the link to this node
on purpose, to test the failure scenario, and
it looks like the test did not pass...
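
In case it helps to reproduce it, this is roughly how I run the test; the
interface name (eth0) is only an example, on my setup the link is actually
cut by shutting the switch port:

    # on the node to be isolated (or shut its switch port instead)
    ip link set eth0 down

    # on one of the remaining nodes, confirm the isolated peer is seen
    # as disconnected
    gluster peer status
    gluster volume status pool

After that, starting the qemu guest that uses the volume fails with the log
quoted below.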
>
> On Tue, Apr 26, 2016 at 7:26 AM, Dmitry Melekhov <dm at belkam.com> wrote:
>
> Hello!
>
> I have 3 servers setup- centos 7 and gluster 3.7.11
> and I don't know whether it worked with previous versions or not...
>
>
> Volume is replicated 3.
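>
> For reference, the volume was created roughly like this (hostnames are
> placeholders; the brick path is the one that shows up in the logs below):
>
>     gluster volume create pool replica 3 \
>         server1:/wall/pool/brick \
>         server2:/wall/pool/brick \
>         server3:/wall/pool/brick
>     gluster volume start pool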
>
> If I shut down the switch port for one of the nodes, then qemu can't
> start, because it can't connect to gluster:
>
>
>
> [2016-04-26 10:51:53.881654] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-pool-client-7: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
> [2016-04-26 10:51:53.882271] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-pool-client-7:
> Connected to pool-client-7, attached to remote volume
> '/wall/pool/brick'.
> [2016-04-26 10:51:53.882299] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-pool-client-7:
> Server and Client lk-version numbers are not same, reopening the fds
> [2016-04-26 10:51:53.882620] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk]
> 0-pool-client-7: Server lk version = 1
> [2016-04-26 10:51:55.373983] E
> [socket.c:2279:socket_connect_finish] 0-pool-client-8: connection
> to 192.168.22.28:24007 failed (No
> route to host)
> [2016-04-26 10:51:55.416522] I [MSGID: 108031]
> [afr-common.c:1900:afr_local_discovery_cbk] 0-pool-replicate-0:
> selecting local read_child pool-client-6
> [2016-04-26 10:51:55.416919] I [MSGID: 104041]
> [glfs-resolve.c:869:__glfs_active_subvol] 0-pool: switched to
> graph 66617468-6572-2d35-3334-372d32303136 (0)
> qemu: terminating on signal 15 from pid 9767
> [2016-04-26 10:53:36.418693] I [MSGID: 114021]
> [client.c:2115:notify] 0-pool-client-6: current graph is no longer
> active, destroying rpc_client
> [2016-04-26 10:53:36.418802] I [MSGID: 114021]
> [client.c:2115:notify] 0-pool-client-7: current graph is no longer
> active, destroying rpc_client
> [2016-04-26 10:53:36.418840] I [MSGID: 114021]
> [client.c:2115:notify] 0-pool-client-8: current graph is no longer
> active, destroying rpc_client
> [2016-04-26 10:53:36.418870] I [MSGID: 114018]
> [client.c:2030:client_rpc_notify] 0-pool-client-6: disconnected
> from pool-client-6. Client process will keep trying to connect to
> glusterd until brick's port is available
> [2016-04-26 10:53:36.418880] I [MSGID: 114018]
> [client.c:2030:client_rpc_notify] 0-pool-client-7: disconnected
> from pool-client-7. Client process will keep trying to connect to
> glusterd until brick's port is available
> [2016-04-26 10:53:36.418949] W [MSGID: 108001]
> [afr-common.c:4090:afr_notify] 0-pool-replicate-0: Client-quorum
> is not met
> [2016-04-26 10:53:36.419002] E [MSGID: 108006]
> [afr-common.c:4046:afr_notify] 0-pool-replicate-0: All subvolumes
> are down. Going offline until atleast one of them comes back up.
>
>
> 192.168.22.28 is the node which is not available.
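>
> For context, the guest disk is attached through libgfapi with a gluster://
> URL, roughly like this (image name and server are placeholders; the server
> named in the URL is the volfile server that qemu contacts on port 24007):
>
>     qemu-system-x86_64 ... \
>         -drive file=gluster://server1/pool/vm1.img,format=qcow2,if=virtio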
>
> I don't see any errors in bricks logs, only
> [2016-04-26 10:53:41.807032] I [dict.c:473:dict_get]
> (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac)
> [0x7f7405415cbc]
> -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
> [0x7f73f59da917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
> [0x7f74054060fc] ) 0-dict: !this || key=() [Invalid argument]
>
> But I guess it is not related.
>
>
> Could you tell me what can cause this problem?
>
> Thank you!
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>