[Bugs] [Bug 1224199] non-root geo-replication session goes to faulty state, when the session is started

bugzilla at redhat.com bugzilla at redhat.com
Mon Jun 22 11:42:11 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1224199

Arthy Loganathan <aloganat at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |aloganat at redhat.com



--- Comment #4 from Arthy Loganathan <aloganat at redhat.com> ---
I have seen the same behaviour with build glusterfs-3.7.1-4.el6rhs.x86_64,
when I tried creating a non-root geo-replication session from the Console.
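
For context, the Console automates roughly the following CLI steps for a
non-root session (a sketch only, reusing the volume, user and host names from
the status output below; the slave-side mountbroker configuration for
geoaccount is assumed to already be in place and is not shown in this report):

    # on a master node
    gluster system:: execute gsec_create
    gluster volume geo-replication vol1 geoaccount@10.70.46.83::vol1_slave create push-pem

    # on the slave node, as root, to install the pem keys for the unprivileged
    # user (script path and arguments as per the RHGS non-root geo-rep docs;
    # treat them as an assumption here)
    /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount vol1 vol1_slave

    # back on the master
    gluster volume geo-replication vol1 geoaccount@10.70.46.83::vol1_slave start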

[root@node1 ~]# gluster volume geo-replication vol1 geoaccount@10.70.46.83::vol1_slave status

MASTER NODE         MASTER VOL    MASTER BRICK           SLAVE USER    SLAVE                                 SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------------------
node1.redhat.com    vol1          /rhgs/brick1/brick1    geoaccount    geoaccount@10.70.46.83::vol1_slave    N/A           Faulty    N/A             N/A
node1.redhat.com    vol1          /rhgs/brick2/brick2    geoaccount    geoaccount@10.70.46.83::vol1_slave    N/A           Faulty    N/A             N/A
node2.redhat.com    vol1          /rhgs/brick1/brick1    geoaccount    geoaccount@10.70.46.83::vol1_slave    N/A           Faulty    N/A             N/A
node2.redhat.com    vol1          /rhgs/brick2/brick2    geoaccount    geoaccount@10.70.46.83::vol1_slave    N/A           Faulty    N/A             N/A
[root@node1 ~]#

Log snippet:

[2015-06-22 15:11:09.842159] I [monitor(monitor):222:monitor] Monitor: starting gsyncd worker
[2015-06-22 15:11:10.507] I [gsyncd(/rhgs/brick1/brick1):649:main_i] <top>: syncing: gluster://localhost:vol1 -> ssh://geoaccount@10.70.46.83:gluster://localhost:vol1_slave
[2015-06-22 15:11:10.6486] I [changelogagent(agent):75:__init__] ChangelogAgent: Agent listining...
[2015-06-22 15:11:10.133467] E [syncdutils(/rhgs/brick1/brick1):252:log_raise_exception] <top>: connection to peer is broken
[2015-06-22 15:11:10.134097] E [resource(/rhgs/brick1/brick1):222:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-e3_3vT/8272e7becbd9f781e1fd12e650628809.sock geoaccount@10.70.46.83 /nonexistent/gsyncd --session-owner 5accd3b1-f9ac-4445-90b1-8c8363185894 -N --listen --timeout 120 gluster://localhost:vol1_slave" returned with 255, saying:
[2015-06-22 15:11:10.134476] E [resource(/rhgs/brick1/brick1):226:logerr] Popen: ssh> Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
[2015-06-22 15:11:10.135073] I [syncdutils(/rhgs/brick1/brick1):220:finalize] <top>: exiting.
[2015-06-22 15:11:10.138586] I [monitor(monitor):274:monitor] Monitor: worker(/rhgs/brick1/brick1) died before establishing connection
[2015-06-22 15:11:10.139452] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
[2015-06-22 15:11:10.140661] I [syncdutils(agent):220:finalize] <top>: exiting.
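
The 255 exit status comes from the ssh command quoted in the Popen error
above. One way to check that ssh leg by hand, independently of gsyncd, is to
reuse the same identity file and slave user from the log (a sketch; the
trailing "true" is only a placeholder, and if the key is installed with a
forced command on the slave that command runs instead, but the authentication
outcome is what matters here):

    [root@node1 ~]# ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no \
          -i /var/lib/glusterd/geo-replication/secret.pem \
          geoaccount@10.70.46.83 true

If this also ends in "Permission denied (publickey,...)", the master's pem key
is most likely missing from ~geoaccount/.ssh/authorized_keys on the slave,
i.e. the key distribution step for the non-root user did not take effect.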
