[Gluster-users] Unable to setup geo replication
Kotresh Hiremath Ravishankar
khiremat at redhat.com
Sun Dec 1 18:36:32 UTC 2019
Hi,
Please try disabling xattr sync and see if geo-rep works fine:
gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config sync_xattrs false
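The only failure in the output below is rsync trying to set the
security.selinux xattr on the slave, which is what makes rsync exit with
code 23; with sync_xattrs disabled, gsyncd should invoke rsync without
--xattrs. For example, with hypothetical names (volume "gv0" on both ends,
slave host "slavehost"; substitute your own):

gluster volume geo-replication gv0 slavehost::gv0 config sync_xattrs false
gluster volume geo-replication gv0 slavehost::gv0 config sync_xattrs

The second command should just print the current value back, confirming
the setting took effect.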
On Thu, Nov 28, 2019 at 1:29 PM Tan, Jian Chern <jian.chern.tan at intel.com>
wrote:
> Alright, so it seems to work with some errors, and this is the output I’m
> getting.
>
> [root at jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> root at pgsotc10.png.intel.com:/mnt/
>
> rsync: rsync_xal_set: lsetxattr("/mnt/file1","security.selinux") failed:
> Operation not supported (95)
>
>
>
> Number of files: 1 (reg: 1)
>
> Number of created files: 0
>
> Number of deleted files: 0
>
> Number of regular files transferred: 1
>
> Total file size: 9 bytes
>
> Total transferred file size: 9 bytes
>
> Literal data: 9 bytes
>
> Matched data: 0 bytes
>
> File list size: 0
>
> File list generation time: 0.003 seconds
>
> File list transfer time: 0.000 seconds
>
> Total bytes sent: 152
>
> Total bytes received: 141
>
>
>
> sent 152 bytes received 141 bytes 65.11 bytes/sec
>
> total size is 9 speedup is 0.03
>
> rsync error: some files/attrs were not transferred (see previous errors)
> (code 23) at main.c(1189) [sender=3.1.3]
>
>
>
> The data is synced over to the other machine, as I can see when I view
> the file there:
>
> [root at pgsotc10 mnt]# cat file1
>
> testdata
>
> [root at pgsotc10 mnt]#
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Wednesday, November 27, 2019 5:25 PM
> *To:* Tan, Jian Chern <jian.chern.tan at intel.com>
> *Cc:* gluster-users at gluster.org
> *Subject:* Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Oh, forgot about that. Just set up passwordless ssh to that particular
> node and try with the default ssh pem key, i.e. remove the
> '-i /var/lib/glusterd/geo-replication/secret.pem' option from the command
> line. (That key is restricted to running gsyncd on the slave, which is
> why the manual rsync over it is disallowed.)
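> For example (a minimal sketch, run from the master node; assumes root
> ssh login is allowed on the slave):
>
> # ssh-keygen
> # ssh-copy-id root@pgsotc11.png.intel.com
> # ssh root@pgsotc11.png.intel.com hostname
>
> ssh-keygen creates the default key pair (~/.ssh/id_rsa), ssh-copy-id
> installs the public key on the slave, and the final ssh should print the
> slave hostname without asking for a password.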
>
>
>
> On Wed, Nov 27, 2019 at 12:43 PM Tan, Jian Chern <jian.chern.tan at intel.com>
> wrote:
>
> I’m getting this when I run that command, so something’s wrong somewhere,
> I guess.
>
>
>
> [root at jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> root at pgsotc11.png.intel.com:/mnt/
>
> gsyncd sibling not found
>
> disallowed rsync invocation
>
> rsync: connection unexpectedly closed (0 bytes received so far) [sender]
>
> rsync error: error in rsync protocol data stream (code 12) at io.c(226)
> [sender=3.1.3]
>
> [root at jfsotc22 mnt]#
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Tuesday, November 26, 2019 7:22 PM
> *To:* Tan, Jian Chern <jian.chern.tan at intel.com>
> *Cc:* gluster-users at gluster.org
> *Subject:* Re: [Gluster-users] Unable to setup geo replication
>
>
>
> OK, then it should work.
> Could you confirm that rsync runs successfully when executed manually, as
> below?
>
>
>
> 1. On the master node:
> a) # mkdir /mastermnt
> b) Mount the master volume on /mastermnt
> c) # echo "test data" > /mastermnt/file1
>
> 2. On the slave node:
> a) # mkdir /slavemnt
> b) Mount the slave volume on /slavemnt
>
> c) # touch /slavemnt/file1
>
> 3. On the master node:
> a) # cd /mastermnt
>
> b) # rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> root at pgsotc11.png.intel.com:/slavemnt/
>
> 4. On the slave node, check that the content synced:
>
> a) # cat /slavemnt/file1
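>
> 5. Optionally, compare extended attributes on both ends as well (the
> rsync above passes --xattrs); a sketch, assuming getfattr from the attr
> package is installed on both nodes:
>
> a) # getfattr -d -m . /mastermnt/file1
> b) # getfattr -d -m . /slavemnt/file1
>
> The two dumps should match, apart from attributes the slave filesystem
> does not accept.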
>
>
>
> On Tue, Nov 26, 2019 at 1:19 PM Tan, Jian Chern <jian.chern.tan at intel.com>
> wrote:
>
> Rsync on both the slave and master is version 3.1.3, protocol version 31,
> so both are up to date as far as I know.
>
> The Gluster version on both machines is glusterfs 5.10.
>
> The OS on both machines is Fedora 29 Server Edition.
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Tuesday, November 26, 2019 3:04 PM
> *To:* Tan, Jian Chern <jian.chern.tan at intel.com>
> *Cc:* gluster-users at gluster.org
> *Subject:* Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Error code 14 is an IPC error in the rsync code, raised when a pipe or
> fork call fails. Please upgrade rsync if you have not already, and also
> check that the rsync versions on the master and slave are the same.
>
> Which version of gluster are you using?
> What's the host OS?
>
> What's the rsync version?
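>
> For a quick check on each node, e.g.:
>
> # rsync --version | head -1
> # gluster --version | head -1
>
> (head -1 just trims the output to the version banner line.)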
>
>
>
> On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern <jian.chern.tan at intel.com>
> wrote:
>
> I’m new to GlusterFS and trying to set up geo-replication, with a master
> volume being mirrored to a slave volume on another machine. However, I
> just can’t seem to get it to work: after starting the geo-replication
> session, the logs show rsync failing with error code 14. I can’t seem to
> find any info about this online, so any help would be much appreciated.
>
>
>
> [2019-11-26 05:46:31.24706] I
> [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
> Change status=Initializing...
>
> [2019-11-26 05:46:31.24891] I [monitor(monitor):157:monitor] Monitor:
> starting gsyncd worker brick=/data/glusterimagebrick/jfsotc22-gv0
> slave_node=pgsotc11.png.intel.com
>
> [2019-11-26 05:46:31.90935] I [gsyncd(agent
> /data/glusterimagebrick/jfsotc22-gv0):308:main] <top>: Using session config
> file
> path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
>
> [2019-11-26 05:46:31.92105] I [changelogagent(agent
> /data/glusterimagebrick/jfsotc22-gv0):72:__init__] ChangelogAgent: Agent
> listining...
>
> [2019-11-26 05:46:31.93148] I [gsyncd(worker
> /data/glusterimagebrick/jfsotc22-gv0):308:main] <top>: Using session config
> file
> path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
>
> [2019-11-26 05:46:31.102422] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1366:connect_remote] SSH:
> Initializing SSH connection between master and slave...
>
> [2019-11-26 05:46:50.355233] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1413:connect_remote] SSH: SSH
> connection between master and slave established. duration=19.2526
>
> [2019-11-26 05:46:50.355583] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1085:connect] GLUSTER: Mounting
> gluster volume locally...
>
> [2019-11-26 05:46:51.404998] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1108:connect] GLUSTER: Mounted
> gluster volume duration=1.0492
>
> [2019-11-26 05:46:51.405363] I [subcmds(worker
> /data/glusterimagebrick/jfsotc22-gv0):80:subcmd_worker] <top>: Worker spawn
> successful. Acknowledging back to monitor
>
> [2019-11-26 05:46:53.431502] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1603:register] _GMaster: Working
> dir
> path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/data-glusterimagebrick-jfsotc22-gv0
>
> [2019-11-26 05:46:53.431846] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1271:service_loop] GLUSTER: Register
> time time=1574747213
>
> [2019-11-26 05:46:53.445589] I [gsyncdstatus(worker
> /data/glusterimagebrick/jfsotc22-gv0):281:set_active] GeorepStatus: Worker
> Status Change status=Active
>
> [2019-11-26 05:46:53.446184] I [gsyncdstatus(worker
> /data/glusterimagebrick/jfsotc22-gv0):253:set_worker_crawl_status]
> GeorepStatus: Crawl Status Change status=History Crawl
>
> [2019-11-26 05:46:53.446367] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1517:crawl] _GMaster: starting
> history crawl turns=1 stime=(1574669325, 0)
> etime=1574747213 entry_stime=None
>
> [2019-11-26 05:46:54.448994] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1546:crawl] _GMaster: slave's time
> stime=(1574669325, 0)
>
> [2019-11-26 05:46:54.928395] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1954:syncjob] Syncer: Sync Time
> Taken job=1 num_files=1 return_code=14 duration=0.0162
>
> [2019-11-26 05:46:54.928607] E [syncdutils(worker
> /data/glusterimagebrick/jfsotc22-gv0):809:errlog] Popen: command returned
> error cmd=rsync -aR0 --inplace --files-from=- --super --stats
> --numeric-ids --no-implied-dirs --existing --xattrs --acls
> --ignore-missing-args . -e ssh -oPasswordAuthentication=no
> -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
> -p 22 -oControlMaster=auto -S
> /tmp/gsyncd-aux-ssh-rgpu74f3/de0855b3336b4c3233934fcbeeb3674c.sock
> pgsotc11.png.intel.com:/proc/29549/cwd error=14
>
> [2019-11-26 05:46:54.935529] I [repce(agent
> /data/glusterimagebrick/jfsotc22-gv0):97:service_loop] RepceServer:
> terminating on reaching EOF.
>
> [2019-11-26 05:46:55.410444] I [monitor(monitor):278:monitor] Monitor:
> worker died in startup phase brick=/data/glusterimagebrick/jfsotc22-gv0
>
> [2019-11-26 05:46:55.412591] I
> [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
> Change status=Faulty
>
> [2019-11-26 05:47:05.631944] I
> [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
> Change status=Initializing...
>
> ….
>
>
>
> Thanks!
>
> Jian Chern
>
--
Thanks and Regards,
Kotresh H R