[Gluster-users] geo-replication fails after upgrade to gluster 3.10
Michael Watters
wattersm at watters.ws
Fri Apr 28 19:08:21 UTC 2017
I've just upgraded my Gluster hosts from 3.8 to 3.10, and it appears that
geo-replication on my volume is now broken. Here are the log entries from
the master. I've tried restarting the geo-replication process several
times, but that did not help. Is there any way to resolve this?
[2017-04-28 19:03:56.477895] I [monitor(monitor):275:monitor] Monitor:
starting gsyncd worker(/var/mnt/gluster/brick2). Slave node:
ssh://root@mdct-gluster-srv3:gluster://localhost:slavevol
[2017-04-28 19:03:56.616667] I
[changelogagent(/var/mnt/gluster/brick2):73:__init__] ChangelogAgent:
Agent listining...
[2017-04-28 19:04:07.780885] I
[master(/var/mnt/gluster/brick2):1328:register] _GMaster: Working dir:
/var/lib/misc/glusterfsd/gv0/ssh%3A%2F%2Froot%4010.112.215.10%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol/920a96ad4a5f9c0c2bdbd24a14eeb1af
[2017-04-28 19:04:07.781146] I
[resource(/var/mnt/gluster/brick2):1604:service_loop] GLUSTER: Register
time: 1493406247
[2017-04-28 19:04:08.143078] I
[gsyncdstatus(/var/mnt/gluster/brick2):272:set_active] GeorepStatus:
Worker Status: Active
[2017-04-28 19:04:08.254343] I
[gsyncdstatus(/var/mnt/gluster/brick2):245:set_worker_crawl_status]
GeorepStatus: Crawl Status: History Crawl
[2017-04-28 19:04:08.254739] I
[master(/var/mnt/gluster/brick2):1244:crawl] _GMaster: starting history
crawl... turns: 1, stime: (1493382958, 0), etime: 1493406248,
entry_stime: (1493382958, 0)
[2017-04-28 19:04:09.256428] I
[master(/var/mnt/gluster/brick2):1272:crawl] _GMaster: slave's time:
(1493382958, 0)
[2017-04-28 19:04:09.381602] E
[syncdutils(/var/mnt/gluster/brick2):297:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 780, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1610, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 600, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1281, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1184, in changelogs_batch_process
    self.process(batch)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1039, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 969, in process_change
    entry_stime_to_update[0])
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncdstatus.py", line 200, in set_field
    return self._update(merger)
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncdstatus.py", line 161, in _update
    data = mergerfunc(data)
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncdstatus.py", line 194, in merger
    if data[key] == value:
KeyError: 'last_synced_entry'
[2017-04-28 19:04:09.383002] I
[syncdutils(/var/mnt/gluster/brick2):238:finalize] <top>: exiting.
[2017-04-28 19:04:09.387280] I
[repce(/var/mnt/gluster/brick2):92:service_loop] RepceServer:
terminating on reaching EOF.
[2017-04-28 19:04:09.387507] I
[syncdutils(/var/mnt/gluster/brick2):238:finalize] <top>: exiting.
[2017-04-28 19:04:09.764077] I [monitor(monitor):357:monitor] Monitor:
worker(/var/mnt/gluster/brick2) died in startup phase
[2017-04-28 19:04:09.768179] I
[gsyncdstatus(monitor):241:set_worker_status] GeorepStatus: Worker
Status: Faulty
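
For what it's worth, the KeyError at the bottom of that traceback looks
like the per-brick status file left behind by 3.8 (under the working dir
shown above) does not contain the 'last_synced_entry' field that the 3.10
worker's set_field()/merger path expects. The following is only a rough
sketch of what I think is going on; the JSON layout, the other field
names, and the timestamp value are my assumptions, not the actual
gsyncdstatus internals:

import json

# Assumption: the old worker wrote a status dict without
# 'last_synced_entry'; values here are illustrative only.
OLD_STATUS_FILE_CONTENT = '{"worker_status": "Active", "crawl_status": "History Crawl"}'

def merger(data, key="last_synced_entry", value=1493382958):
    # Mirrors the failing comparison in the traceback: data[key]
    # raises KeyError when the key is absent from the pre-upgrade file.
    if data[key] == value:
        return data
    data[key] = value
    return data

data = json.loads(OLD_STATUS_FILE_CONTENT)
try:
    merger(data)
except KeyError as err:
    # Prints: status file is missing the field: 'last_synced_entry'
    print("status file is missing the field:", err)

# A tolerant variant using dict.get() would not raise; shown only to
# illustrate the failure mode, not as the upstream fix.
def merger_tolerant(data, key="last_synced_entry", value=1493382958):
    if data.get(key) == value:
        return data
    data[key] = value
    return data

If that is indeed the cause, my question becomes whether there is a
supported way to regenerate or migrate the old status file after the
upgrade rather than working around the missing key by hand.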