[Gluster-users] Inconsistent slave status output
Michael Watters
wattersm at watters.ws
Thu Oct 5 12:47:33 UTC 2017
Hello,
I have a gluster volume set up with geo-replication to two slaves;
however, I'm seeing inconsistent status output on the slave nodes.
Here is the output of gluster volume geo-replication status on each
node:
[root@foo-gluster-srv3 ~]# gluster volume geo-replication status

MASTER NODE         MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                               SLAVE NODE          STATUS     CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
foo-gluster-srv1    gv0           /var/mnt/gluster/brick2    root          ssh://foo-gluster-srv3::slavevol    foo-gluster-srv3    Active     Changelog Crawl    2017-10-04 11:04:27
foo-gluster-srv2    gv0           /var/mnt/gluster/brick     root          ssh://foo-gluster-srv3::slavevol    foo-gluster-srv3    Passive    N/A                N/A
foo-gluster-srv1    gv0           /var/mnt/gluster/brick2    root          ssh://foo-gluster-srv4::slavevol    foo-gluster-srv4    Active     Changelog Crawl    2017-10-04 11:04:27
foo-gluster-srv2    gv0           /var/mnt/gluster/brick     root          ssh://foo-gluster-srv4::slavevol    foo-gluster-srv4    Passive    N/A                N/A
[root@foo-gluster-srv4 ~]# gluster volume geo-replication status
No active geo-replication sessions
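For reference, the per-session form of the status command (syntax as I
understand it from the gluster CLI help; the volume and slave names are
taken from the output above) would be:

[root@foo-gluster-srv4 ~]# gluster volume geo-replication gv0 foo-gluster-srv4::slavevol status detail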
Replication to foo-gluster-srv4 *is* working despite what the status on
that node shows, and the geo-replication logs on that host are not
showing any errors either.
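The logs I'm referring to are the ones in the default locations, which
as far as I know are these (master side on srv1/srv2, slave side on
srv3/srv4):

[root@foo-gluster-srv1 ~]# ls /var/log/glusterfs/geo-replication/
[root@foo-gluster-srv4 ~]# ls /var/log/glusterfs/geo-replication-slaves/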
Does anybody know what would cause this or how to fix it?
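By "working" I mean that files written to the master volume do show up
on the slave volume; roughly this kind of check, where the mount points
are just examples rather than my actual paths:

[root@foo-gluster-srv1 ~]# touch /mnt/gv0/georep-canary
[root@foo-gluster-srv4 ~]# mkdir -p /mnt/slavevol-check
[root@foo-gluster-srv4 ~]# mount -t glusterfs localhost:/slavevol /mnt/slavevol-check
[root@foo-gluster-srv4 ~]# ls -l /mnt/slavevol-check/georep-canary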