[Bugs] [Bug 1457976] New: Geo-replication status goes faulty after rebooting one source node
bugzilla at redhat.com
Thu Jun 1 16:23:26 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1457976
Bug ID: 1457976
Summary: Geo-replication status goes faulty after rebooting one source node
Product: GlusterFS
Version: 3.8
Component: geo-replication
Severity: medium
Assignee: bugs at gluster.org
Reporter: deligatedgeek at yahoo.com
CC: bugs at gluster.org
Description of problem:
Environment: 4 CentOS 7.2 servers, two in the UK and one each in New York and Sydney.
A replica 2 GlusterFS source volume was created using one brick from each of the two UK servers; this worked great.
Geo-replication was configured from this source volume to a destination volume in New York and another in Sydney; this also worked great.
The geo-replication status output showed the first source node as Active and the second source node as Passive.
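For reference, the setup was created roughly along these lines. This is only a sketch; the volume, brick and host names (srcvol, dstvol, uk1, uk2, ny1, syd1) are placeholders, not the exact ones used, and the usual geo-replication prerequisites (passwordless SSH to the destination, destination volumes already created and started) are assumed:

    # On one of the UK servers: create and start the replica 2 source volume
    gluster volume create srcvol replica 2 uk1:/bricks/srcvol/brick uk2:/bricks/srcvol/brick
    gluster volume start srcvol

    # Create and start a geo-replication session to each remote site
    gluster volume geo-replication srcvol ny1::dstvol create push-pem
    gluster volume geo-replication srcvol ny1::dstvol start
    gluster volume geo-replication srcvol syd1::dstvol create push-pem
    gluster volume geo-replication srcvol syd1::dstvol start

    # Check which source node is Active and which is Passive
    gluster volume geo-replication srcvol ny1::dstvol status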
The first source node was shut down and, after a short time, the second source node became Active; replication continued.
The first source node was then started and its brick was added back into the source volume, but its geo-replication status became Faulty while the second node showed Passive. Shutting the first node down again made the second node's status Active, but as soon as the first node was started its status went Faulty again.
Searching for the error suggested that it may be related to the index being rotated, so the first node had lost track of where it was, but I found no instructions on how to fix this.
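For anyone hitting the same problem, the Faulty worker's error can usually be seen in the session status and in the geo-replication logs on the source node. Again a sketch with the placeholder names from above; exact log file names vary by session:

    # Detailed session status, including crawl status and last-synced time
    gluster volume geo-replication srcvol ny1::dstvol status detail

    # Worker logs on the faulty source node
    ls /var/log/glusterfs/geo-replication/srcvol/
    tail -n 100 /var/log/glusterfs/geo-replication/srcvol/*.log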
I had to delete the geo-replication session and the destination volume, then recreate both to fix the issue.
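The recovery was roughly the following, shown here with the same placeholder names; the destination volume was deleted and recreated on the remote site between the delete and the create steps:

    # Stop and delete the faulty geo-replication session
    gluster volume geo-replication srcvol ny1::dstvol stop
    gluster volume geo-replication srcvol ny1::dstvol delete

    # After recreating the destination volume on the remote site,
    # recreate and restart the session
    gluster volume geo-replication srcvol ny1::dstvol create push-pem force
    gluster volume geo-replication srcvol ny1::dstvol start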
Version-Release number of selected component (if applicable):
3.8.5
How reproducible:
Twice so far.
Steps to Reproduce:
1. Create the environment described above.
2. Shut down one source node.
3. Start that source node again (a sketch of the corresponding commands is under Additional info below).
Actual results:
Geo-replication status for the restarted node goes Faulty.
Expected results:
The restarted source node goes Active and replication continues.
Additional info:
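A rough sketch of the commands corresponding to the steps to reproduce, using the same placeholder host and volume names as the earlier sketches:

    # On the source node to be rebooted (e.g. uk1)
    shutdown -h now

    # On the surviving source node, confirm it takes over as Active
    gluster volume geo-replication srcvol ny1::dstvol status

    # Power the first node back on, wait for glusterd and its brick to come up,
    # then re-check: the restarted node now shows Faulty
    gluster volume status srcvol
    gluster volume geo-replication srcvol ny1::dstvol status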