[Gluster-users] geo-replication
Rosemond, Sonny
sonny at lanl.gov
Tue Jan 27 14:53:28 UTC 2015
I have set up the passwordless SSH RSA keys and tested them successfully. I have also run
"gluster system:: execute gsec_create" and then issued the create push-pem command, which fails as well, so I can never get to the point where I can actually issue the start command.
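For reference, the full sequence I have been attempting from one of the master nodes is roughly the following (volume1 is the master volume, gfs7::geo1 the slave):

gluster system:: execute gsec_create
gluster volume geo-replication volume1 gfs7::geo1 create push-pem
gluster volume geo-replication volume1 gfs7::geo1 start

The create step is where it fails: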
gluster volume geo-replication volume1 gfs7::geo1 create push-pem
geo-replication command failed
tail -100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
[2015-01-27 14:42:39.283376] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2015-01-27 14:42:39.283444] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2015-01-27 14:42:39.283502] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2015-01-27 14:42:39.288607] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2015-01-27 14:42:39.292514] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2015-01-27 14:42:39.296403] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2015-01-27 14:42:39.300274] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2015-01-27 14:42:39.304174] I [glusterd-store.c:3501:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
[2015-01-27 14:42:39.305438] I [glusterd.c:146:glusterd_uuid_init] 0-management: retrieved UUID: b5977829-dd93-46e7-bf85-26bd4426d28c
Final graph:
+------------------------------------------------------------------------------+
1: volume management
2: type mgmt/glusterd
3: option rpc-auth.auth-glusterfs on
4: option rpc-auth.auth-unix on
5: option rpc-auth.auth-null on
6: option transport.socket.listen-backlog 128
7: option ping-timeout 30
8: option transport.socket.read-fail-log off
9: option transport.socket.keepalive-interval 2
10: option transport.socket.keepalive-time 10
11: option transport-type rdma
12: option working-directory /var/lib/glusterd
13: end-volume
14:
+------------------------------------------------------------------------------+
[2015-01-27 14:42:39.306445] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /brick12 on port 49153
[2015-01-27 14:42:39.309511] I [glusterd-handshake.c:1061:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30501
[2015-01-27 14:42:39.312927] I [glusterd-rpc-ops.c:436:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37, host: gfs7, port: 0
[2015-01-27 14:42:39.314506] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2015-01-27 14:42:40.318334] I [glusterd-utils.c:6267:glusterd_nfs_pmap_deregister] 0-: De-registered MOUNTV3 successfully
[2015-01-27 14:42:40.318807] I [glusterd-utils.c:6272:glusterd_nfs_pmap_deregister] 0-: De-registered MOUNTV1 successfully
[2015-01-27 14:42:40.319189] I [glusterd-utils.c:6277:glusterd_nfs_pmap_deregister] 0-: De-registered NFSV3 successfully
[2015-01-27 14:42:40.319573] I [glusterd-utils.c:6282:glusterd_nfs_pmap_deregister] 0-: De-registered NLM v4 successfully
[2015-01-27 14:42:40.319956] I [glusterd-utils.c:6287:glusterd_nfs_pmap_deregister] 0-: De-registered NLM v1 successfully
[2015-01-27 14:42:40.320289] I [glusterd-utils.c:6292:glusterd_nfs_pmap_deregister] 0-: De-registered ACL v3 successfully
[2015-01-27 14:42:40.323761] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2015-01-27 14:42:40.323981] W [socket.c:2992:socket_connect] 0-management: Ignore failed connection attempt on , (No such file or directory)
[2015-01-27 14:42:41.328151] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2015-01-27 14:42:41.328378] W [socket.c:2992:socket_connect] 0-management: Ignore failed connection attempt on , (No such file or directory)
[2015-01-27 14:42:41.330849] I [glusterd-handshake.c:1061:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30501
[2015-01-27 14:42:41.344200] I [glusterd-handler.c:2216:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: e682b121-92f6-4b70-8131-cbda3d467bb6
[2015-01-27 14:42:41.344336] I [glusterd-handler.c:3334:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gfs10 (0), ret: 0
[2015-01-27 14:42:41.347457] I [glusterd-handshake.c:1061:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30501
[2015-01-27 14:42:41.348833] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37
[2015-01-27 14:42:41.348871] I [glusterd-rpc-ops.c:436:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 9c1306c9-e86c-452b-9d4f-d99a98bf51ce, host: gfs9, port: 0
[2015-01-27 14:42:41.351174] I [glusterd-handler.c:2373:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 9c1306c9-e86c-452b-9d4f-d99a98bf51ce
[2015-01-27 14:42:41.351215] I [glusterd-handler.c:2416:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2015-01-27 14:42:41.351281] I [glusterd-rpc-ops.c:436:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: e8840179-74ae-450e-b31a-24da9df005d0, host: gfs11, port: 0
[2015-01-27 14:42:41.352828] I [glusterd-rpc-ops.c:436:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: e682b121-92f6-4b70-8131-cbda3d467bb6, host: gfs10, port: 0
[2015-01-27 14:42:41.354176] I [glusterd-rpc-ops.c:436:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 6185d0b4-8297-4d1d-8d66-b48a9d91543e, host: gfs8, port: 0
[2015-01-27 14:42:41.355654] I [glusterd-handler.c:2373:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: e682b121-92f6-4b70-8131-cbda3d467bb6
[2015-01-27 14:42:41.355695] I [glusterd-handler.c:2416:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2015-01-27 14:42:41.357233] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/9843a94b3cf9f35a106b698a4f381ce5.socket failed (Invalid argument)
[2015-01-27 14:42:41.357271] I [MSGID: 106006] [glusterd-handler.c:4257:__glusterd_nodesvc_rpc_notify] 0-management: nfs has disconnected from glusterd.
[2015-01-27 14:42:41.357311] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/977065d1f56211919ec151c343f47b17.socket failed (Invalid argument)
[2015-01-27 14:42:41.357330] I [MSGID: 106006] [glusterd-handler.c:4257:__glusterd_nodesvc_rpc_notify] 0-management: glustershd has disconnected from glusterd.
[2015-01-27 14:42:41.357397] I [glusterd-handler.c:2373:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: e8840179-74ae-450e-b31a-24da9df005d0
[2015-01-27 14:42:41.357427] I [glusterd-handler.c:2416:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2015-01-27 14:42:41.357482] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 9c1306c9-e86c-452b-9d4f-d99a98bf51ce
[2015-01-27 14:42:41.357511] I [glusterd-handler.c:2216:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 9c1306c9-e86c-452b-9d4f-d99a98bf51ce
[2015-01-27 14:42:41.357605] I [glusterd-handler.c:3334:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gfs9 (0), ret: 0
[2015-01-27 14:42:41.360301] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e8840179-74ae-450e-b31a-24da9df005d0
[2015-01-27 14:42:41.360338] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 6185d0b4-8297-4d1d-8d66-b48a9d91543e
[2015-01-27 14:42:41.360443] I [glusterd-handler.c:2216:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: e8840179-74ae-450e-b31a-24da9df005d0
[2015-01-27 14:42:41.360551] I [glusterd-handler.c:3334:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gfs11 (0), ret: 0
[2015-01-27 14:42:41.363272] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 6185d0b4-8297-4d1d-8d66-b48a9d91543e
[2015-01-27 14:42:41.363498] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37
[2015-01-27 14:42:41.363554] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 6185d0b4-8297-4d1d-8d66-b48a9d91543e
[2015-01-27 14:42:41.363580] I [glusterd-handler.c:2373:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37
[2015-01-27 14:42:41.363603] I [glusterd-handler.c:2416:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2015-01-27 14:42:41.363671] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37
[2015-01-27 14:42:41.364300] I [glusterd-handler.c:2373:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 6185d0b4-8297-4d1d-8d66-b48a9d91543e
[2015-01-27 14:42:41.364327] I [glusterd-handler.c:2416:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2015-01-27 14:42:41.365251] I [glusterd-handshake.c:1061:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30501
[2015-01-27 14:42:41.372951] I [glusterd-handshake.c:1061:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30501
[2015-01-27 14:42:41.377523] I [glusterd-handler.c:2216:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37
[2015-01-27 14:42:41.377659] I [glusterd-handler.c:3334:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gfs7 (0), ret: 0
[2015-01-27 14:42:41.380502] I [glusterd-handler.c:2216:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 6185d0b4-8297-4d1d-8d66-b48a9d91543e
[2015-01-27 14:42:41.380649] I [glusterd-handler.c:3334:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gfs8 (0), ret: 0
[2015-01-27 14:42:41.383212] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37
[2015-01-27 14:42:41.383247] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 6185d0b4-8297-4d1d-8d66-b48a9d91543e
[2015-01-27 14:42:41.383278] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37
[2015-01-27 14:42:41.383295] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 6185d0b4-8297-4d1d-8d66-b48a9d91543e
[2015-01-27 14:42:41.384272] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 9c1306c9-e86c-452b-9d4f-d99a98bf51ce
[2015-01-27 14:42:41.384326] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 9c1306c9-e86c-452b-9d4f-d99a98bf51ce
[2015-01-27 14:42:41.384378] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 9c1306c9-e86c-452b-9d4f-d99a98bf51ce
[2015-01-27 14:42:41.384469] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 9c1306c9-e86c-452b-9d4f-d99a98bf51ce
[2015-01-27 14:42:41.401008] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e682b121-92f6-4b70-8131-cbda3d467bb6
[2015-01-27 14:42:41.401101] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e682b121-92f6-4b70-8131-cbda3d467bb6
[2015-01-27 14:42:41.401243] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e682b121-92f6-4b70-8131-cbda3d467bb6
[2015-01-27 14:42:41.401305] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e682b121-92f6-4b70-8131-cbda3d467bb6
[2015-01-27 14:42:41.401367] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e682b121-92f6-4b70-8131-cbda3d467bb6
[2015-01-27 14:42:41.413272] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e8840179-74ae-450e-b31a-24da9df005d0
[2015-01-27 14:42:41.414746] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e8840179-74ae-450e-b31a-24da9df005d0
[2015-01-27 14:42:41.414797] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e8840179-74ae-450e-b31a-24da9df005d0
[2015-01-27 14:42:41.414865] I [glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e8840179-74ae-450e-b31a-24da9df005d0
[2015-01-27 14:44:03.848119] W [glusterd-op-sm.c:4021:glusterd_op_modify_op_ctx] 0-management: op_ctx modification failed
[2015-01-27 14:44:03.849829] I [glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management: Received status volume req for volume geo1
[2015-01-27 14:44:03.854705] I [glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management: Received status volume req for volume volume1
tail -100 /var/log/glusterfs/geo-replication-slaves/slave.log
22: option send-gids true
23: end-volume
24:
25: volume volume1-replicate-0
26: type cluster/replicate
27: subvolumes volume1-client-0 volume1-client-1
28: end-volume
29:
30: volume volume1-client-2
31: type protocol/client
32: option ping-timeout 42
33: option remote-host gfs10
34: option remote-subvolume /brick10
35: option transport-type socket
36: option username 1337bfad-e466-432b-b8f0-b60947c35cc9
37: option password 1040a923-4937-441f-a43e-7b7170b632de
38: option transport.socket.ssl-enabled off
39: option send-gids true
40: end-volume
41:
42: volume volume1-client-3
43: type protocol/client
44: option ping-timeout 42
45: option remote-host gfs9
46: option remote-subvolume /brick9
47: option transport-type socket
48: option username 1337bfad-e466-432b-b8f0-b60947c35cc9
49: option password 1040a923-4937-441f-a43e-7b7170b632de
50: option transport.socket.ssl-enabled off
51: option send-gids true
52: end-volume
53:
54: volume volume1-replicate-1
55: type cluster/replicate
56: subvolumes volume1-client-2 volume1-client-3
57: end-volume
58:
59: volume volume1-dht
60: type cluster/distribute
61: option lookup-unhashed off
62: subvolumes volume1-replicate-0 volume1-replicate-1
63: end-volume
64:
65: volume volume1-write-behind
66: type performance/write-behind
67: subvolumes volume1-dht
68: end-volume
69:
70: volume volume1-read-ahead
71: type performance/read-ahead
72: subvolumes volume1-write-behind
73: end-volume
74:
75: volume volume1-io-cache
76: type performance/io-cache
77: subvolumes volume1-read-ahead
78: end-volume
79:
80: volume volume1-quick-read
81: type performance/quick-read
82: subvolumes volume1-io-cache
83: end-volume
84:
85: volume volume1-open-behind
86: type performance/open-behind
87: subvolumes volume1-quick-read
88: end-volume
89:
90: volume volume1-md-cache
91: type performance/md-cache
92: subvolumes volume1-open-behind
93: end-volume
94:
95: volume volume1
96: type debug/io-stats
97: option latency-measurement off
98: option count-fop-hits off
99: subvolumes volume1-md-cache
100: end-volume
101:
102: volume meta-autoload
103: type meta
104: subvolumes volume1
105: end-volume
106:
+------------------------------------------------------------------------------+
[2015-01-27 14:44:40.701171] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-volume1-client-0: changing port to 49153 (from 0)
[2015-01-27 14:44:40.705323] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-volume1-client-1: changing port to 49154 (from 0)
[2015-01-27 14:44:40.705385] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-volume1-client-2: changing port to 49152 (from 0)
[2015-01-27 14:44:40.705414] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-volume1-client-3: changing port to 49152 (from 0)
[2015-01-27 14:44:40.717072] I [client-handshake.c:1415:select_server_supported_programs] 0-volume1-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-01-27 14:44:40.717168] I [client-handshake.c:1415:select_server_supported_programs] 0-volume1-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-01-27 14:44:40.717517] I [client-handshake.c:1200:client_setvolume_cbk] 0-volume1-client-1: Connected to volume1-client-1, attached to remote volume '/brick11'.
[2015-01-27 14:44:40.717544] I [client-handshake.c:1212:client_setvolume_cbk] 0-volume1-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-01-27 14:44:40.717610] I [MSGID: 108005] [afr-common.c:3553:afr_notify] 0-volume1-replicate-0: Subvolume 'volume1-client-1' came back up; going online.
[2015-01-27 14:44:40.717646] I [client-handshake.c:1200:client_setvolume_cbk] 0-volume1-client-2: Connected to volume1-client-2, attached to remote volume '/brick10'.
[2015-01-27 14:44:40.717656] I [client-handshake.c:1212:client_setvolume_cbk] 0-volume1-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2015-01-27 14:44:40.717689] I [MSGID: 108005] [afr-common.c:3553:afr_notify] 0-volume1-replicate-1: Subvolume 'volume1-client-2' came back up; going online.
[2015-01-27 14:44:40.717743] I [client-handshake.c:188:client_set_lk_version_cbk] 0-volume1-client-1: Server lk version = 1
[2015-01-27 14:44:40.717873] I [client-handshake.c:188:client_set_lk_version_cbk] 0-volume1-client-2: Server lk version = 1
From: M S Vishwanath Bhat <vbhat at redhat.com>
Date: Tue, 27 Jan 2015 14:41:33 +0530
To: LANL User <sonny at lanl.gov>, "gluster-users at gluster.org" <gluster-users at gluster.org>
Subject: Re: [Gluster-users] geo-replication
On 26/01/15 21:35, Rosemond, Sonny wrote:
I have a RHEL7 testing environment consisting of 6 nodes total, all running Gluster 3.6.1. The master volume is distributed/replicated, and the slave volume is distributed. Firewalls and SELinux have been disabled for testing purposes. Passwordless SSH has been established and tested successfully, however when I try to start geo-replication, the process churns for a bit and then drops me back to the command prompt with the message, “geo-replication command failed”.
What should I look for? What am I missing?
Have you created the geo-rep session between the master and the slave? If yes, I assume you have run geo-rep create with push-pem? Before that you need to collect the pem keys using "gluster system:: execute gsec_create". Another thing to note is that the passwordless ssh needs to be established between the master node where you run the "geo-rep create" command and the slave node specified in that command.
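For example, assuming volume1 is the master volume, gfs7::geo1 is the slave, and you are using root-to-root ssh (with a non-root slave user the check would change accordingly), a quick sanity check from the master node where you run "geo-rep create" would be roughly:

ssh root@gfs7 hostname

That should print the slave's hostname without prompting for a password; if it prompts, the create push-pem step will not go through.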
If you have done all of the above properly and it is still failing, please share the glusterd log files from the node where you are running geo-rep start and from the slave node specified in the geo-rep start command.
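For instance, something like the following on both ends (the glusterd log path here matches what you already posted, though it can differ between installs):

# on the master node where you run the geo-rep commands
tail -100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
# on the slave node specified in the geo-rep command (gfs7 in your case)
tail -100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log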
Best Regards,
Vishwanath
~Sonny
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users