[Gluster-users] geo replication
Kotresh Hiremath Ravishankar
khiremat at redhat.com
Wed Mar 7 04:56:24 UTC 2018
Hi,
Geo-replication is failing to get the value of the virtual xattr
"trusted.glusterfs.volume-mark" at the master volume root.
Could you share the geo-replication logs under
/var/log/glusterfs/geo-replication/*.gluster.log?
If the errors are transient, stopping geo-replication and then restarting the
master volume should fix it.
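For example, with the session names from your setup, something like:

gluster volume geo-replication testtomcat stogfstest11::testtomcat stop
gluster volume stop testtomcat
gluster volume start testtomcat
gluster volume geo-replication testtomcat stogfstest11::testtomcat start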
Thanks,
Kotresh HR
On Tue, Mar 6, 2018 at 2:28 PM, Curt Lestrup <curt at lestrup.se> wrote:
> Hi,
>
> I'm having problems with geo-replication on GlusterFS 3.12.6 / Ubuntu 16.04.
>
> I see a “master volinfo unavailable” error in the master log file.
>
> Any ideas?
>
> Master:
>
> Status of volume: testtomcat
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gfstest07:/gfs/testtomcat/mount       49153     0          Y       326
> Brick gfstest05:/gfs/testtomcat/mount       49153     0          Y       326
> Brick gfstest01:/gfs/testtomcat/mount       49153     0          Y       335
> Self-heal Daemon on localhost               N/A       N/A        Y       1134
> Self-heal Daemon on gfstest07               N/A       N/A        Y       564
> Self-heal Daemon on gfstest05               N/A       N/A        Y       1038
>
> Slave:
>
> Status of volume: testtomcat
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick stogfstest11:/gfs/testtomcat/mount    49152     0          Y       294
>
> Created & started the session with:
>
> gluster volume geo-replication testtomcat stogfstest11::testtomcat create no-verify
> gluster volume geo-replication testtomcat stogfstest11::testtomcat start
>
> I'm getting the following logs:
>
> master:
>
> [2018-03-06 08:32:46.767544] I [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
>
> [2018-03-06 08:32:46.872857] I [monitor(monitor):280:monitor] Monitor: starting gsyncd worker brick=/gfs/testtomcat/mount slave_node=ssh://root@stogfstest11:gluster://localhost:testtomcat
>
> [2018-03-06 08:32:46.961122] I [changelogagent(/gfs/testtomcat/mount):73:__init__] ChangelogAgent: Agent listining...
>
> [2018-03-06 08:32:46.962470] I [resource(/gfs/testtomcat/mount):1771:connect_remote] SSH: Initializing SSH connection between master and slave...
>
> [2018-03-06 08:32:48.515974] I [resource(/gfs/testtomcat/mount):1778:connect_remote] SSH: SSH connection between master and slave established. duration=1.5530
>
> [2018-03-06 08:32:48.516247] I [resource(/gfs/testtomcat/mount):1493:connect] GLUSTER: Mounting gluster volume locally...
>
> [2018-03-06 08:32:49.739631] I [resource(/gfs/testtomcat/mount):1506:connect] GLUSTER: Mounted gluster volume duration=1.2232
>
> [2018-03-06 08:32:49.739870] I [gsyncd(/gfs/testtomcat/mount):799:main_i] <top>: Closing feedback fd, waking up the monitor
>
> [2018-03-06 08:32:51.872872] I [master(/gfs/testtomcat/mount):1518:register] _GMaster: Working dir path=/var/lib/misc/glusterfsd/testtomcat/ssh%3A%2F%2Froot%40172.16.81.101%3Agluster%3A%2F%2F127.0.0.1%3Atesttomcat/b6a7905143e15d9b079b804c0a8ebf42
>
> [2018-03-06 08:32:51.873176] I [resource(/gfs/testtomcat/mount):1653:service_loop] GLUSTER: Register time time=1520325171
>
> [2018-03-06 08:32:51.926801] E [syncdutils(/gfs/testtomcat/mount):299:log_raise_exception] <top>: master volinfo unavailable
>
> [2018-03-06 08:32:51.936203] I [syncdutils(/gfs/testtomcat/mount):271:finalize] <top>: exiting.
>
> [2018-03-06 08:32:51.938469] I [repce(/gfs/testtomcat/mount):92:service_loop] RepceServer: terminating on reaching EOF.
>
> [2018-03-06 08:32:51.938776] I [syncdutils(/gfs/testtomcat/mount):271:finalize] <top>: exiting.
>
> [2018-03-06 08:32:52.743696] I [monitor(monitor):363:monitor] Monitor: worker died in startup phase brick=/gfs/testtomcat/mount
>
> [2018-03-06 08:32:52.763276] I [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
>
> slave:
>
> [2018-03-06 08:32:47.434591] I [resource(slave):1502:connect] GLUSTER: Mounting gluster volume locally...
>
> [2018-03-06 08:32:48.490775] I [resource(slave):1515:connect] GLUSTER: Mounted gluster volume duration=1.0557
>
> [2018-03-06 08:32:48.493134] I [resource(slave):1012:service_loop] GLUSTER: slave listening
>
> [2018-03-06 08:32:51.942531] I [repce(slave):92:service_loop] RepceServer: terminating on reaching EOF.
>
> [2018-03-06 08:32:51.955379] I [syncdutils(slave):271:finalize] <top>: exiting.
>
> /Curt
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
--
Thanks and Regards,
Kotresh H R