[Gluster-users] Geo-Replication push-pem actually doesn't append common_secret_pub.pem to authorized_keys file
PEPONNET, Cyril N (Cyril)
cyril.peponnet at alcatel-lucent.com
Mon Feb 2 16:49:03 UTC 2015
Every node is connected:
[root@nodeA geo-replication]# gluster peer status
Number of Peers: 2
Hostname: nodeB
Uuid: 6a9da7fc-70ec-4302-8152-0e61929a7c8b
State: Peer in Cluster (Connected)
Hostname: nodeC
Uuid: c12353b5-f41a-4911-9329-fee6a8d529de
State: Peer in Cluster (Connected)
[root@nodeB ~]# gluster peer status
Number of Peers: 2
Hostname: nodeC
Uuid: c12353b5-f41a-4911-9329-fee6a8d529de
State: Peer in Cluster (Connected)
Hostname: nodeA
Uuid: 2ac172bb-a2d0-44f1-9e09-6b054dbf8980
State: Peer is connected and Accepted (Connected)
[root@nodeC geo-replication]# gluster peer status
Number of Peers: 2
Hostname: nodeA
Uuid: 2ac172bb-a2d0-44f1-9e09-6b054dbf8980
State: Peer in Cluster (Connected)
Hostname: nodeB
Uuid: 6a9da7fc-70ec-4302-8152-0e61929a7c8b
State: Peer in Cluster (Connected)
The only difference is the state "Peer is connected and Accepted (Connected)" reported by nodeB about nodeA.
When I execute gluster system from nodeA or nodeC, the common pem file contains the keys of all 3 nodes. But from nodeB, I only get the keys for nodeB and nodeC. This is unfortunate, as I am trying to launch the geo-replication job from nodeB (master).
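On a healthy 3-node master, common_secret.pem.pub should hold one public-key line per node, so a quick line count flags the node whose key failed to collect. A minimal sketch; the /tmp path and sample keys are stand-ins for the real /var/lib/glusterd/geo-replication/common_secret.pem.pub:

```shell
# Sketch: common_secret.pem.pub should carry one pubkey line per master node.
# /tmp/gsec_demo stands in for /var/lib/glusterd/geo-replication here.
mkdir -p /tmp/gsec_demo
printf '%s\n' \
    'ssh-rsa AAAA... root@nodeA' \
    'ssh-rsa BBBB... root@nodeB' \
    'ssh-rsa CCCC... root@nodeC' \
    > /tmp/gsec_demo/common_secret.pem.pub

# On a 3-node master this should print 3; fewer means gsec_create
# failed to collect a key from some peer (nodeA's key missing on nodeB here).
wc -l < /tmp/gsec_demo/common_secret.pem.pub
```

Running the count on each node quickly shows which peer the keys were not gathered from.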
--
Cyril Peponnet
On Feb 2, 2015, at 2:07 AM, Aravinda <avishwan at redhat.com<mailto:avishwan at redhat.com>> wrote:
Looks like node C is in disconnected state. Please let us know the output of `gluster peer status` from all the master nodes and slave nodes.
--
regards
Aravinda
On 01/22/2015 12:27 AM, PEPONNET, Cyril N (Cyril) wrote:
So,
On master node of my 3 node setup:
1) gluster system:: execute gsec_create
in /var/lib/glusterd/geo-replication/common_secret.pub I have the pem pub keys from master node A and node B (not node C).
On node C I don't have anything in /v/l/g/geo/ except the gsync template config.
So here I have an issue.
The only error I saw on node C is:
[2015-01-21 18:36:41.179601] E [rpc-clnt.c:208:call_bail] 0-management: bailing out frame type(Peer mgmt) op(--(2)) xid = 0x23 sent = 2015-01-21 18:26:33.031937. timeout = 600 for xx.xx.xx.xx:24007
On node A, the cli.log looks like:
[2015-01-21 18:49:49.878905] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled
[2015-01-21 18:49:49.878947] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread
[2015-01-21 18:49:49.879085] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled
[2015-01-21 18:49:49.879095] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread
[2015-01-21 18:49:49.951835] I [socket.c:2238:socket_event_handler] 0-transport: disconnecting now
[2015-01-21 18:49:49.972143] I [input.c:36:cli_batch] 0-: Exiting with: 0
If I run gluster system:: execute gsec_create on node C or node B, the common pem key file contains my 3 nodes' pem pub keys. So in some way node A is unable to get the key from node C.
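The call_bail entry above is the symptom to grep for when chasing this: it means an RPC from glusterd to a peer timed out. A sketch of that check, using a sample line as a stand-in for the real /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

```shell
# Sketch: scan a glusterd log for bailed-out peer-mgmt frames.
# A sample entry stands in for /var/log/glusterfs/etc-glusterfs-glusterd.vol.log.
LOG=/tmp/glusterd-demo.log
cat > "$LOG" <<'EOF'
[2015-01-21 18:36:41.179601] E [rpc-clnt.c:208:call_bail] 0-management: bailing out frame type(Peer mgmt) op(--(2)) xid = 0x23 sent = 2015-01-21 18:26:33.031937. timeout = 600 for xx.xx.xx.xx:24007
EOF

# Any hits mean an RPC to a peer timed out around that timestamp.
grep -c 'call_bail' "$LOG"
```

Correlating the timestamp and peer address in the matches against `gluster peer status` output narrows down which link is failing.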
So let’s try to fix this one before going further.
--
Cyril Peponnet
On Jan 20, 2015, at 9:38 PM, Aravinda <avishwan at redhat.com<mailto:avishwan at redhat.com> <mailto:avishwan at redhat.com>> wrote:
On 01/20/2015 11:01 PM, PEPONNET, Cyril N (Cyril) wrote:
Hi,
I'm ready for new testing. I deleted the geo-rep session between master and slave and removed the lines in the authorized_keys file on the slave.
I also removed the common secret pem from the slave and from the master. There is only the gsyncd_template.conf in /var/lib/gluster now.
Here is our setup:
Site A: gluster 3 nodes
Site B: gluster 1 node (for now, a second will come).
I can issue
gluster system:: execute gsec_create
what to check?
common_secret.pem.pub is created in /var/lib/glusterd/geo-replication/common_secret.pem.pub, and should contain the public keys from all master nodes (Site A). It should match the contents of /var/lib/glusterd/geo-replication/secret.pem.pub and /var/lib/glusterd/geo-replication/tar_ssh.pem.pub from each node.
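That containment check can be scripted: each node's secret.pem.pub line must appear verbatim in the collected common_secret.pem.pub. A sketch with stand-in files under /tmp (substitute the real /var/lib/glusterd/geo-replication/ paths):

```shell
# Sketch: this node's secret.pem.pub must appear in common_secret.pem.pub.
# Stand-in files under /tmp; substitute the real geo-replication dir.
DIR=/tmp/pem_check; mkdir -p "$DIR"
echo 'ssh-rsa AAAA... root@nodeA' > "$DIR/secret.pem.pub"
printf 'ssh-rsa AAAA... root@nodeA\nssh-rsa BBBB... root@nodeB\n' \
    > "$DIR/common_secret.pem.pub"

# -F: fixed string, -x: match the whole line exactly.
if grep -qxF "$(cat "$DIR/secret.pem.pub")" "$DIR/common_secret.pem.pub"; then
    echo "this node's key is present"
else
    echo "this node's key is MISSING"
fi
```

Run on each master node, this immediately identifies the node whose key gsec_create dropped.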
then
gluster volume geo-replication geo_test slave::geo_test create push-pem force (force is needed because the slave volume is smaller than the master volume).
What to check ?
Check for any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log (RPM installation) or in /var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log (source installation). In case of any errors related to hook execution, run the hook command copied from the log directly. From your previous mail I understand there is some issue while executing the hook script. I will look into the issue in the hook script.
I want to use change_detector changelog and not rsync btw.
change_detector is the crawl mechanism. Available options are changelog and xsync; xsync is FS crawl.
The sync mechanisms available are rsync and tarssh.
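Both knobs are set per-session with the geo-replication `config` subcommand. A sketch, assuming the session names used earlier in this thread; the option names (change_detector, use_tarssh) are the ones documented for gluster 3.x, so verify them against `config` output on your build:

```shell
# Sketch: select the changelog crawler for the geo_test session,
# and optionally tar+ssh instead of rsync as the sync engine.
gluster volume geo-replication geo_test slave::geo_test config change_detector changelog
gluster volume geo-replication geo_test slave::geo_test config use_tarssh true

# Inspect the resulting session configuration:
gluster volume geo-replication geo_test slave::geo_test config
```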
Can you guide me to setup this but also debug why it’s not working out of the box ?
If needed I can get in touch with you through IRC.
Sure. IRC nickname is aravindavk.
Thanks for your help.
--
regards
Aravinda
http://aravindavk.in