[Gluster-users] geo replication issue
Krishna Verma
kverma at cadence.com
Thu Oct 25 03:33:47 UTC 2018
Hi Sunny,
Thanks for your response. Yes, '/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py' was missing on the slave.
I installed the "glusterfs-geo-replication.x86_64" RPM and the session is now Active.
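In case it helps anyone hitting the same error, this is roughly what I ran on each slave node; the status check at the end is only to confirm the session comes up (volume and slave host names are the same as below):

# yum install -y glusterfs-geo-replication
# gluster volume geo-replication gv1 sj-gluster01::gv1 status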
But now I am struggling with an indexing issue. Files larger than 5 GB in the master volume are not getting synced to the slave. I have to delete the geo-replication session and erase the indexing as shown below; only after creating a new session do the large files start syncing to the slave.
How can we avoid this Gluster behavior in geo-replication? Also, can we monitor the real-time data sync between master and slave with any GUI method?
I was also searching for implementation docs on "geo replication over the internet for a distributed volume", but I cannot find any. Do you have one?
I appreciate any help.
# gluster volume geo-replication gv1 sj-gluster01::gv1 delete
Deleting geo-replication session between gv1 & sj-gluster01::gv1 has been successful
# gluster volume set gv1 geo-replication.indexing off
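For reference, after clearing the indexing I recreate and restart the session roughly like this ("force" on create is just what worked for me, not necessarily the recommended way). "status detail" is the closest thing to real-time monitoring I have found so far, since it also reports per-brick ENTRY/DATA/META/FAILURES counters. And since geo-replication tunnels everything over SSH, I am assuming the over-the-internet case is the same setup with SSH reachable between the sites, but please correct me if that is wrong.

# gluster volume geo-replication gv1 sj-gluster01::gv1 create push-pem force
# gluster volume geo-replication gv1 sj-gluster01::gv1 start
# gluster volume geo-replication gv1 sj-gluster01::gv1 status detail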
/Krishna
-----Original Message-----
From: Sunny Kumar <sunkumar at redhat.com>
Sent: Wednesday, October 24, 2018 6:33 PM
To: Krishna Verma <kverma at cadence.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] geo replication issue
Hi Krishna,
Please check whether this file exists on the slave:
'/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py'
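Something like the below on the slave node will confirm whether it is present; on RHEL/CentOS that file normally ships with the glusterfs-geo-replication package:

# ls -l /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py
# rpm -q glusterfs-geo-replication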
- Sunny
On Wed, Oct 24, 2018 at 4:36 PM Krishna Verma <kverma at cadence.com> wrote:
>
> Hi Everyone,
>
> I have created a 4*4 distributed Gluster setup, but when I start the geo-replication session it fails with the errors below.
>
> [2018-10-24 10:02:03.857861] I [gsyncdstatus(monitor):245:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
>
> [2018-10-24 10:02:03.858133] I [monitor(monitor):155:monitor] Monitor: starting gsyncd worker brick=/gfs1/brick1/gv1 slave_node=sj-gluster02
>
> [2018-10-24 10:02:03.954746] I [gsyncd(agent /gfs1/brick1/gv1):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/gv1_sj-gluster01_gv1/gsyncd.conf
>
> [2018-10-24 10:02:03.956724] I [changelogagent(agent /gfs1/brick1/gv1):72:__init__] ChangelogAgent: Agent listining...
>
> [2018-10-24 10:02:03.958110] I [gsyncd(worker /gfs1/brick1/gv1):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/gv1_sj-gluster01_gv1/gsyncd.conf
>
> [2018-10-24 10:02:03.975778] I [resource(worker /gfs1/brick1/gv1):1377:connect_remote] SSH: Initializing SSH connection between master and slave...
>
> [2018-10-24 10:02:07.413379] E [syncdutils(worker /gfs1/brick1/gv1):305:log_raise_exception] <top>: connection to peer is broken
>
> [2018-10-24 10:02:07.414144] E [syncdutils(worker /gfs1/brick1/gv1):801:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-OE_W1C/cf9a66dce686717c4a5ef9a7c3a7f8be.sock sj-gluster01 /nonexistent/gsyncd slave gv1 sj-gluster01::gv1 --master-node noida-gluster01 --master-node-id 08925454-9fea-4b24-8f82-9d7ad917b870 --master-brick /gfs1/brick1/gv1 --local-node sj-gluster02 --local-node-id f592c041-dcae-493c-b5a0-31e376a5be34 --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/local/sbin/ error=2
>
> [2018-10-24 10:02:07.414386] E [syncdutils(worker /gfs1/brick1/gv1):805:logerr] Popen: ssh> /usr/bin/python2: can't open file '/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py': [Errno 2] No such file or directory
>
> [2018-10-24 10:02:07.422688] I [repce(agent /gfs1/brick1/gv1):80:service_loop] RepceServer: terminating on reaching EOF.
>
> [2018-10-24 10:02:07.422842] I [monitor(monitor):266:monitor] Monitor: worker died before establishing connection brick=/gfs1/brick1/gv1
>
> [2018-10-24 10:02:07.435054] I [gsyncdstatus(monitor):245:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
>
> MASTER NODE          MASTER VOL    MASTER BRICK        SLAVE USER    SLAVE                SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> --------------------------------------------------------------------------------------------------------------------------------------------
> noida-gluster01      gv1           /gfs1/brick1/gv1    root          sj-gluster01::gv1    N/A           Faulty    N/A             N/A
> noida-gluster02      gv1           /gfs1/brick1/gv1    root          sj-gluster01::gv1    N/A           Faulty    N/A             N/A
> gluster-poc-noida    gv1           /gfs1/brick1/gv1    root          sj-gluster01::gv1    N/A           Faulty    N/A             N/A
> noi-poc-gluster      gv1           /gfs1/brick1/gv1    root          sj-gluster01::gv1    N/A           Faulty    N/A             N/A
>
> Could someone please help?
>
> /Krishna
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users