[Gluster-users] Geo-replication started but not replicating
Aravinda
avishwan at redhat.com
Wed Nov 18 05:17:04 UTC 2015
Looks like an I/O error on the slave while doing keep_alive. We can get
more useful information about it from the slave log files.
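
As a quick sanity check, it is worth confirming first that the slave
volume accepts writes at all; if a plain write on a slave mount also
fails with EIO, the problem is in the slave volume itself rather than in
geo-replication. (The mount point and file name below are just
placeholders.)

    # On a slave node (xfs1 or xfs2)
    mkdir -p /mnt/xvol-check
    mount -t glusterfs localhost:/xvol /mnt/xvol-check
    touch /mnt/xvol-check/geo-rep-probe && echo "write OK"
    umount /mnt/xvol-check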
On the slave nodes, look for errors in
/var/log/glusterfs/geo-replication-slaves/*.log and
/var/log/glusterfs/geo-replication-slaves/*.gluster.log
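
For example, to pull out just the error and warning lines (the pattern
matches the " E [" / " W [" level markers; the *.log glob already covers
the *.gluster.log files):

    grep -E ' (E|W) \[' /var/log/glusterfs/geo-replication-slaves/*.log
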
regards
Aravinda
On 11/17/2015 10:02 PM, Deepak Ravi wrote:
> I also noted that the second master, gfs2, alternates between Passive and
> Faulty. Not sure if this matters, but I have changed the /etc/hosts file
> so that gfs1 (and the other hostnames) no longer resolve to 127.0.0.1,
> because otherwise my node would not reach the Peer in Cluster state.
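>
> For reference, the entries now look roughly like this (the addresses are
> placeholders, not the real ones):
>
>     # /etc/hosts on gfs1 -- hypothetical addresses
>     10.0.0.1    gfs1
>     10.0.0.2    gfs2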
>
> Gluster version : 3.7.6-1
> OS: RHEL 7
>
>
> [root at gfs1 ~]# cat
> /var/log/glusterfs/geo-replication/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol.log
> [2015-11-17 10:30:30.244277] I [monitor(monitor):362:distribute] <top>:
> slave bricks: [{'host': 'xfs1', 'dir': '/data/brick/xvol'}, {'host':
> 'xfs2', 'dir': '/data/brick/xvol'}]
> [2015-11-17 10:30:30.245239] I [monitor(monitor):383:distribute] <top>:
> worker specs: [('/data/brick/gvol', 'ssh://root@xfs2:gluster://localhost:xvol',
> 1)]
> [2015-11-17 10:30:30.433696] I [monitor(monitor):221:monitor] Monitor:
> ------------------------------------------------------------
> [2015-11-17 10:30:30.433882] I [monitor(monitor):222:monitor] Monitor:
> starting gsyncd worker
> [2015-11-17 10:30:30.561599] I [gsyncd(/data/brick/gvol):650:main_i] <top>:
> syncing: gluster://localhost:gvol -> ssh://root@xfs2
> :gluster://localhost:xvol
> [2015-11-17 10:30:30.573781] I [changelogagent(agent):75:__init__]
> ChangelogAgent: Agent listining...
> [2015-11-17 10:30:34.26421] I [master(/data/brick/gvol):83:gmaster_builder]
> <top>: setting up xsync change detection mode
> [2015-11-17 10:30:34.26695] I [master(/data/brick/gvol):404:__init__]
> _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:34.27324] I [master(/data/brick/gvol):83:gmaster_builder]
> <top>: setting up changelog change detection mode
> [2015-11-17 10:30:34.27477] I [master(/data/brick/gvol):404:__init__]
> _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:34.27873] I [master(/data/brick/gvol):83:gmaster_builder]
> <top>: setting up changeloghistory change detection mode
> [2015-11-17 10:30:34.28048] I [master(/data/brick/gvol):404:__init__]
> _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:36.40117] I [master(/data/brick/gvol):1229:register]
> _GMaster: xsync temp directory:
> /var/lib/misc/glusterfsd/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol/0c4166e49b1b516d061ed475806364b9/xsync
> [2015-11-17 10:30:36.40409] I
> [resource(/data/brick/gvol):1432:service_loop] GLUSTER: Register time:
> 1447774236
> [2015-11-17 10:30:36.65299] I [master(/data/brick/gvol):530:crawlwrap]
> _GMaster: primary master with volume id
> f77a024e-a865-493e-9ce2-d7dbe99ee6d5 ...
> [2015-11-17 10:30:36.67856] I [master(/data/brick/gvol):539:crawlwrap]
> _GMaster: crawl interval: 1 seconds
> [2015-11-17 10:31:36.185137] I [master(/data/brick/gvol):552:crawlwrap]
> _GMaster: 0 crawls, 0 turns
> [2015-11-17 10:32:36.315582] I [master(/data/brick/gvol):552:crawlwrap]
> _GMaster: 0 crawls, 0 turns
> [2015-11-17 10:33:36.438072] I [master(/data/brick/gvol):552:crawlwrap]
> _GMaster: 0 crawls, 0 turns
>
>
> [root at gfs2 ~]#cat
> /var/log/glusterfs/geo-replication/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol.log
> | less
> [2015-11-17 10:30:30.498424] I [monitor(monitor):362:distribute] <top>:
> slave bricks: [{'host': 'xfs1', 'dir': '/data/brick/xvol'}, {'host':
> 'xfs2', 'dir': '/data/brick/xvol'}]
> [2015-11-17 10:30:30.499473] I [monitor(monitor):383:distribute] <top>:
> worker specs: [('/data/brick/gvol', 'ssh://root@xfs1:gluster://localhost:xvol',
> 1)]
> [2015-11-17 10:30:30.679028] I [monitor(monitor):221:monitor] Monitor:
> ------------------------------------------------------------
> [2015-11-17 10:30:30.679259] I [monitor(monitor):222:monitor] Monitor:
> starting gsyncd worker
> [2015-11-17 10:30:30.807980] I [gsyncd(/data/brick/gvol):650:main_i] <top>:
> syncing: gluster://localhost:gvol -> ssh://root@xfs1
> :gluster://localhost:xvol
> [2015-11-17 10:30:30.820440] I [changelogagent(agent):75:__init__]
> ChangelogAgent: Agent listining...
> [2015-11-17 10:30:34.358032] I
> [master(/data/brick/gvol):83:gmaster_builder] <top>: setting up xsync
> change detection mode
> [2015-11-17 10:30:34.358304] I [master(/data/brick/gvol):404:__init__]
> _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:34.359335] I
> [master(/data/brick/gvol):83:gmaster_builder] <top>: setting up changelog
> change detection mode
> [2015-11-17 10:30:34.359496] I [master(/data/brick/gvol):404:__init__]
> _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:34.359890] I
> [master(/data/brick/gvol):83:gmaster_builder] <top>: setting up
> changeloghistory change detection mode
> [2015-11-17 10:30:34.360044] I [master(/data/brick/gvol):404:__init__]
> _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:36.371203] I [master(/data/brick/gvol):1229:register]
> _GMaster: xsync temp directory:
> /var/lib/misc/glusterfsd/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol/0c4166e49b1b516d061ed475806364b9/xsync
> [2015-11-17 10:30:36.371514] I
> [resource(/data/brick/gvol):1432:service_loop] GLUSTER: Register time:
> 1447774236
> [2015-11-17 10:30:36.383291] I [master(/data/brick/gvol):530:crawlwrap]
> _GMaster: primary master with volume id
> f77a024e-a865-493e-9ce2-d7dbe99ee6d5 ...
> [2015-11-17 10:30:36.386276] I [master(/data/brick/gvol):539:crawlwrap]
> _GMaster: crawl interval: 1 seconds
> [2015-11-17 10:30:46.558255] E [repce(/data/brick/gvol):207:__call__]
> RepceClient: call 29036:140624661567232:1447774246.47 (keep_alive) failed
> on peer with OSError
> [2015-11-17 10:30:46.558463] E
> [syncdutils(/data/brick/gvol):276:log_raise_exception] <top>: FAIL:
> Traceback (most recent call last):
> File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 306,
> in twrap
> tf(*aa)
> File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 438, in
> keep_alive
> cls.slave.server.keep_alive(vi)
> File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in
> __call__
> return self.ins(self.meth, *a)
> File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in
> __call__
> raise res
> OSError: [Errno 5] Input/output error
>
>
>
>
> -----------
>
> [root at gfs1 ~]# ps aux | grep gsyncd
> root 15837 0.0 1.0 368584 11148 ? Ssl 11:08 0:00
> /usr/bin/python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py
> --path=/data/brick/gvol --monitor -c
> /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf --iprefix=/var
> :gvol --glusterd-uuid=c6e8cdef-bc46-4684-9c75-fc348fefb95e xfs1::xvol
> root 15867 0.0 1.7 884044 18064 ? Ssl 11:08 0:00 python
> /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol
> -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf
> --iprefix=/var :gvol --glusterd-uuid=c6e8cdef-bc46-4684-9c75-fc348fefb95e
> xfs1::xvol -N -p --slave-id ff6d57c8-cfb5-40b3-843f-bcd79cdd6164
> --local-path /data/brick/gvol --agent --rpc-fd 7,10,9,8
> root 15868 0.0 1.7 847644 17292 ? Sl 11:08 0:00 python
> /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol
> -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf
> --iprefix=/var :gvol --glusterd-uuid=c6e8cdef-bc46-4684-9c75-fc348fefb95e
> xfs1::xvol -N -p --slave-id ff6d57c8-cfb5-40b3-843f-bcd79cdd6164
> --feedback-fd 12 --local-path /data/brick/gvol --local-id
> .%2Fdata%2Fbrick%2Fgvol --rpc-fd 9,8,7,10 --subvol-num 1 --resource-remote
> ssh://root@xfs2:gluster://localhost:xvol
> root 15879 0.0 0.4 80384 4244 ? S 11:08 0:00 ssh
> -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S
> /tmp/gsyncd-aux-ssh-5bwc6n/21cd0d364db39da791c9bc6dcf62c55b.sock root at xfs2
> /nonexistent/gsyncd --session-owner f77a024e-a865-493e-9ce2-d7dbe99ee6d5 -N
> --listen --timeout 120 gluster://localhost:xvol
> root 15887 0.1 3.9 630404 40476 ? Ssl 11:08 0:02
> /usr/sbin/glusterfs --aux-gfid-mount --acl
> --log-file=/var/log/glusterfs/geo-replication/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol.%2Fdata%2Fbrick%2Fgvol.gluster.log
> --volfile-server=localhost --volfile-id=gvol --client-pid=-1
> /tmp/gsyncd-aux-mount-IOxY7_
> root 16540 0.0 0.0 112640 956 pts/0 R+ 11:26 0:00 grep
> --color=auto gsyncd
> --------------
> [root at gfs2 ec2-user]# ps aux | grep gsyncd
> root 3099 0.0 1.3 368488 13568 ? Ssl 11:08 0:00
> /usr/bin/python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py
> --path=/data/brick/gvol --monitor -c
> /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf --iprefix=/var
> :gvol --glusterd-uuid=449f6672-fdcd-480b-870d-51e1ed92236c xfs1::xvol
> root 6618 1.0 1.9 883944 19872 ? Ssl 11:27 0:00 python
> /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol
> -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf
> --iprefix=/var :gvol --glusterd-uuid=449f6672-fdcd-480b-870d-51e1ed92236c
> xfs1::xvol -N -p --slave-id ff6d57c8-cfb5-40b3-843f-bcd79cdd6164
> --local-path /data/brick/gvol --agent --rpc-fd 8,11,10,9
> root 6619 1.1 1.4 847548 15004 ? Sl 11:27 0:00 python
> /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol
> -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf
> --iprefix=/var :gvol --glusterd-uuid=449f6672-fdcd-480b-870d-51e1ed92236c
> xfs1::xvol -N -p --slave-id ff6d57c8-cfb5-40b3-843f-bcd79cdd6164
> --feedback-fd 13 --local-path /data/brick/gvol --local-id
> .%2Fdata%2Fbrick%2Fgvol --rpc-fd 10,9,8,11 --subvol-num 1 --resource-remote
> ssh://root@xfs1:gluster://localhost:xvol
> root 6631 0.3 0.4 80384 4240 ? S 11:27 0:00 ssh
> -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S
> /tmp/gsyncd-aux-ssh-WIfjHQ/25f2a0dc75697352a40d6471e241edf7.sock root at xfs1
> /usr/libexec/glusterfs/gsyncd --session-owner
> f77a024e-a865-493e-9ce2-d7dbe99ee6d5 -N --listen --timeout 120
> gluster://localhost:xvol
> root 6638 1.0 3.2 630408 33416 ? Ssl 11:27 0:00
> /usr/sbin/glusterfs --aux-gfid-mount --acl
> --log-file=/var/log/glusterfs/geo-replication/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol.%2Fdata%2Fbrick%2Fgvol.gluster.log
> --volfile-server=localhost --volfile-id=gvol --client-pid=-1
> /tmp/gsyncd-aux-mount-o44DsN
> root 6692 0.0 0.0 112640 960 pts/0 R+ 11:28 0:00 grep
> --color=auto gsyncd
> ---------------------
>
> [root at xfs1 ~]# ps aux | grep gsyncd
> root 2753 0.5 1.2 585232 12576 ? Ssl 11:28 0:00
> /usr/bin/python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py
> --session-owner f77a024e-a865-493e-9ce2-d7dbe99ee6d5 -N --listen --timeout
> 120 gluster://localhost:xvol -c
> /var/lib/glusterd/geo-replication/gsyncd_template.conf
> root 2773 0.3 3.4 630412 34728 ? Ssl 11:28 0:00
> /usr/sbin/glusterfs --aux-gfid-mount --acl
> --log-file=/var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2-d7dbe99ee6d5:gluster%3A%2F%2F127.0.0.1%3Axvol.gluster.log
> --volfile-server=localhost --volfile-id=xvol --client-pid=-1
> /tmp/gsyncd-aux-mount-une5yr
> root 2793 0.0 0.0 112640 956 pts/0 R+ 11:28 0:00 grep
> --color=auto gsyncd
> [root at xfs1 ~]#
>
> -----------------------
>
> [root at xfs2 ec2-user]# ps aux | grep gsyncd
> root 28921 0.0 1.2 585236 12668 ? Ssl 11:08 0:00
> /usr/bin/python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py
> --session-owner f77a024e-a865-493e-9ce2-d7dbe99ee6d5 -N --listen --timeout
> 120 gluster://localhost:xvol -c
> /var/lib/glusterd/geo-replication/gsyncd_template.conf
> root 28941 0.2 3.7 630412 38280 ? Ssl 11:08 0:02
> /usr/sbin/glusterfs --aux-gfid-mount --acl
> --log-file=/var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2-d7dbe99ee6d5:gluster%3A%2F%2F127.0.0.1%3Axvol.gluster.log
> --volfile-server=localhost --volfile-id=xvol --client-pid=-1
> /tmp/gsyncd-aux-mount-cZvAEH
> root 29029 0.0 0.0 112640 956 pts/0 R+ 11:29 0:00 grep
> --color=auto gsyncd
> [root at xfs2 ec2-user]#
>
>
>
>
> On Tue, Nov 17, 2015 at 12:39 AM, Aravinda <avishwan at redhat.com> wrote:
>
>> One status row should show Active and the other should show Passive.
>> Please provide the logs from the gfs1 and gfs2 nodes
>> (/var/log/glusterfs/geo-replication/gvol/*.log); one way to collect them
>> is sketched after the list below.
>>
>> Also, please let us know:
>> 1. Gluster version and OS
>> 2. Output of `ps aux | grep gsyncd` from the master nodes and slave nodes
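>>
>> (A minimal collection sketch for the logs -- the archive name is
>> arbitrary:)
>>
>>     tar czf geo-rep-logs-$(hostname).tar.gz \
>>         /var/log/glusterfs/geo-replication/gvol/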
>>
>> regards
>> Aravinda
>>
>> On 11/17/2015 02:09 AM, Deepak Ravi wrote:
>>
>> Hi all
>>
>> I'm working on a Geo-replication setup that I'm having issues with.
>>
>> Situation:
>>
>> - In the east region of AWS, I created a replicated volume between 2
>> nodes; let's call this volume gvol.
>> - In the west region of AWS, I created another replicated volume between
>> 2 nodes; let's call this volume xvol.
>> - Geo-replication was created and started successfully (roughly along
>> the lines of the sketch below).
>>
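>> A minimal sketch of how such a session is typically created on 3.7
>> (passwordless-SSH/pem-key preparation omitted; not necessarily the exact
>> command history):
>>
>>     gluster volume geo-replication gvol xfs1::xvol create push-pem
>>     gluster volume geo-replication gvol xfs1::xvol start
>>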
>> [root at gfs1 mnt]# gluster volume geo-replication gvol xfs1::xvol status
>>
>> MASTER NODE    MASTER VOL    MASTER BRICK        SLAVE USER    SLAVE         SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
>> -----------------------------------------------------------------------------------------------------------------------------------
>> gfs1           gvol          /data/brick/gvol    root          xfs1::xvol    N/A           Passive    N/A             N/A
>> gfs2           gvol          /data/brick/gvol    root          xfs1::xvol    N/A           Passive    N/A             N/A
>>
>> The data on the master nodes (gfs1 and gfs2) was not being replicated to
>> xfs1 at all. I tried restarting the services and it still didn't help.
>> Looking at the log files didn't help me much, because I didn't know what
>> I should be looking for.
>>
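>> ("Restarting the services" was along these lines; the exact invocations
>> may have differed:)
>>
>>     gluster volume geo-replication gvol xfs1::xvol stop
>>     gluster volume geo-replication gvol xfs1::xvol start
>>     systemctl restart glusterd    # on each node
>>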
>> Can someone point me in the right direction?
>>
>> Thanks
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>