<html><head></head><body>Fwiw, rsync error 3 is:<br>
&quot;Errors selecting input/output files, dirs&quot;<br><br><div class="gmail_quote">On January 19, 2018 7:36:18 AM PST, Dietmar Putz &lt;dietmar.putz@3qsdn.com&gt; wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<pre class="k9mail">Dear All,<br><br>We are running a distributed replicated volume on 4 nodes, including <br>geo-replication to another location.<br>The geo-replication had been running fine for months.<br>Since 18 Jan the geo-replication has been faulty. The geo-rep log on the <br>master shows the following error in a loop, while the logs on the slave <br>show only 'I' (informational) messages...<br>Somewhat suspicious are the frequent 'shutting down connection' messages <br>in the brick log while geo-replication is running; they stop the moment <br>the geo-replication is stopped.<br>Unfortunately I did not find any hint on the mailing list or elsewhere <br>to solve this issue.<br>Has anybody seen such an error before, or can you give me some hints on <br>how to proceed?<br>Any help is appreciated.<br><br>Best regards<br>Dietmar<br><br><br><br>[2018-01-19 14:23:20.141123] I [monitor(monitor):267:monitor] Monitor: <br><hr><br>[2018-01-19 14:23:20.141457] I [monitor(monitor):268:monitor] Monitor: <br>starting gsyncd worker<br>[2018-01-19 14:23:20.227952] I [gsyncd(/brick1/mvol1):733:main_i] &lt;top&gt;: <br>syncing: gluster://localhost:mvol1 -&gt; <br>ssh://root@gl-slave-01-int:gluster://localhost:svol1<br>[2018-01-19 14:23:20.235563] I [changelogagent(agent):73:__init__] <br>ChangelogAgent: Agent listining...<br>[2018-01-19 14:23:23.55553] I [master(/brick1/mvol1):83:gmaster_builder] <br>&lt;top&gt;: setting up xsync change detection mode<br>[2018-01-19 14:23:23.56019] I [master(/brick1/mvol1):367:__init__] <br>_GMaster: using 'rsync' as the sync engine<br>[2018-01-19 14:23:23.56989] I [master(/brick1/mvol1):83:gmaster_builder] <br>&lt;top&gt;: setting up changelog change detection mode<br>[2018-01-19 14:23:23.57260] I [master(/brick1/mvol1):367:__init__] <br>_GMaster: using 'rsync' as the sync engine<br>[2018-01-19 14:23:23.58098] I [master(/brick1/mvol1):83:gmaster_builder] <br>&lt;top&gt;: setting up changeloghistory change detection mode<br>[2018-01-19 14:23:23.58454] I 
[master(/brick1/mvol1):367:__init__] <br>_GMaster: using 'rsync' as the sync engine<br>[2018-01-19 14:23:25.123959] I [master(/brick1/mvol1):1249:register] <br>_GMaster: xsync temp directory: <br>/var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1/0a6056eb995956f1dc84f32256dae472/xsync<br>[2018-01-19 14:23:25.124351] I <br>[resource(/brick1/mvol1):1528:service_loop] GLUSTER: Register time: <br>1516371805<br>[2018-01-19 14:23:25.127505] I [master(/brick1/mvol1):510:crawlwrap] <br>_GMaster: primary master with volume id <br>2f5de6e4-66de-40a7-9f24-4762aad3ca96 ...<br>[2018-01-19 14:23:25.130393] I [master(/brick1/mvol1):519:crawlwrap] <br>_GMaster: crawl interval: 1 seconds<br>[2018-01-19 14:23:25.134413] I [master(/brick1/mvol1):466:mgmt_lock] <br>_GMaster: Got lock : /brick1/mvol1 : Becoming ACTIVE<br>[2018-01-19 14:23:25.136784] I [master(/brick1/mvol1):1163:crawl] <br>_GMaster: starting history crawl... turns: 1, stime: (1516248272, 0), <br>etime: 1516371805<br>[2018-01-19 14:23:25.139033] I [master(/brick1/mvol1):1192:crawl] <br>_GMaster: slave's time: (1516248272, 0)<br><br>[2018-01-19 14:23:27.157931] E [resource(/brick1/mvol1):234:errlog] <br>Popen: command "rsync -aR0 --inplace --files-from=- --super --stats <br>--numeric-ids --no-implied-dirs --xattrs --acls . 
-e ssh <br>-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i <br>/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto <br>-S /tmp/gsyncd-aux-ssh-o2j6UA/db73a3bfe7357366aff777392fc60a7e.sock <br>--compress root@gl-slave-01-int:/proc/398/cwd" returned with 3<br><br>[2018-01-19 14:23:27.158600] I [syncdutils(/brick1/mvol1):220:finalize] <br>&lt;top&gt;: exiting.<br>[2018-01-19 14:23:27.162561] I [repce(agent):92:service_loop] <br>RepceServer: terminating on reaching EOF.<br>[2018-01-19 14:23:27.163053] I [syncdutils(agent):220:finalize] &lt;top&gt;: <br>exiting.<br>[2018-01-19 14:23:28.61029] I [monitor(monitor):344:monitor] Monitor: <br>worker(/brick1/mvol1) died in startup phase<br><br><br>/var/log/glusterfs/bricks/brick1-mvol1.log<br><br>[2018-01-19 14:23:18.264649] I [login.c:81:gf_auth] 0-auth/login: <br>allowed user names: 2bc51718-940f-4a9c-9106-eb8404b95622<br>[2018-01-19 14:23:18.264689] I [MSGID: 115029] <br>[server-handshake.c:690:server_setvolume] 0-mvol1-server: accepted <br>client from <br>gl-master-04-8871-2018/01/19-14:23:18:129523-mvol1-client-0-0-0 <br>(version: 3.7.18)<br>[2018-01-19 14:23:21.995012] I [login.c:81:gf_auth] 0-auth/login: <br>allowed user names: 2bc51718-940f-4a9c-9106-eb8404b95622<br>[2018-01-19 14:23:21.995049] I [MSGID: 115029] <br>[server-handshake.c:690:server_setvolume] 0-mvol1-server: accepted <br>client from <br>gl-master-01-22759-2018/01/19-14:23:21:928705-mvol1-client-0-0-0 <br>(version: 3.7.18)<br>[2018-01-19 14:23:23.392692] I [MSGID: 115036] <br>[server.c:552:server_rpc_notify] 0-mvol1-server: disconnecting <br>connection from <br>gl-master-04-8871-2018/01/19-14:23:18:129523-mvol1-client-0-0-0<br>[2018-01-19 14:23:23.392746] I [MSGID: 101055] <br>[client_t.c:420:gf_client_unref] 0-mvol1-server: Shutting down <br>connection gl-master-04-8871-2018/01/19-14:23:18:129523-mvol1-client-0-0-0<br>[2018-01-19 14:23:25.322559] I [login.c:81:gf_auth] 0-auth/login: <br>allowed user names: 
2bc51718-940f-4a9c-9106-eb8404b95622<br>[2018-01-19 14:23:25.322591] I [MSGID: 115029] <br>[server-handshake.c:690:server_setvolume] 0-mvol1-server: accepted <br>client from <br>gl-master-03-17451-2018/01/19-14:23:25:261540-mvol1-client-0-0-0 <br>(version: 3.7.18)<br>[2018-01-19 14:23:27.164568] W [socket.c:596:__socket_rwv] <br>0-mvol1-changelog: readv on <br>/var/run/gluster/.0a6056eb995956f1dc84f32256dae47222743.sock failed (No <br>data available)<br>[2018-01-19 14:23:27.164621] I [MSGID: 101053] <br>[mem-pool.c:640:mem_pool_destroy] 0-mvol1-changelog: size=588 max=0 total=0<br>[2018-01-19 14:23:27.164641] I [MSGID: 101053] <br>[mem-pool.c:640:mem_pool_destroy] 0-mvol1-changelog: size=124 max=0 total=0<br>[2018-01-19 14:23:27.168989] I [MSGID: 115036] <br>[server.c:552:server_rpc_notify] 0-mvol1-server: disconnecting <br>connection from <br>gl-master-01-22759-2018/01/19-14:23:21:928705-mvol1-client-0-0-0<br>[2018-01-19 14:23:27.169030] I [MSGID: 101055] <br>[client_t.c:420:gf_client_unref] 0-mvol1-server: Shutting down <br>connection gl-master-01-22759-2018/01/19-14:23:21:928705-mvol1-client-0-0-0<br>[2018-01-19 14:23:28.636402] I [login.c:81:gf_auth] 0-auth/login: <br>allowed user names: 2bc51718-940f-4a9c-9106-eb8404b95622<br>[2018-01-19 14:23:28.636443] I [MSGID: 115029] <br>[server-handshake.c:690:server_setvolume] 0-mvol1-server: accepted <br>client from <br>gl-master-02-17275-2018/01/19-14:23:28:429242-mvol1-client-0-0-0 <br>(version: 3.7.18)<br>[2018-01-19 14:23:31.728022] I [MSGID: 115036] <br>[server.c:552:server_rpc_notify] 0-mvol1-server: disconnecting <br>connection from <br>gl-master-03-17451-2018/01/19-14:23:25:261540-mvol1-client-0-0-0<br>[2018-01-19 14:23:31.728086] I [MSGID: 101055] <br>[client_t.c:420:gf_client_unref] 0-mvol1-server: Shutting down <br>connection gl-master-03-17451-2018/01/19-14:23:25:261540-mvol1-client-0-0-0<br><br>on all gluster nodes :<br><br>rsync&nbsp; version 3.1.1&nbsp; protocol version 31<br>glusterfs 3.7.18<br>ubuntu 
16.04.3<br><br>[ 14:22:43 ] - root@gl-master-01&nbsp; ~/tmp $gluster volume geo-replication <br>mvol1 gl-slave-01-int::svol1 config<br>special_sync_mode: partial<br>gluster_log_file: <br>/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1.gluster.log<br>ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no <br>-i /var/lib/glusterd/geo-replication/secret.pem<br>change_detector: changelog<br>use_meta_volume: true<br>session_owner: 2f5de6e4-66de-40a7-9f24-4762aad3ca96<br>state_file: <br>/var/lib/glusterd/geo-replication/mvol1_gl-slave-01-int_svol1/monitor.status<br>gluster_params: aux-gfid-mount acl<br>remote_gsyncd: /nonexistent/gsyncd<br>working_dir: <br>/var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1<br>state_detail_file: <br>/var/lib/glusterd/geo-replication/mvol1_gl-slave-01-int_svol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1-detail.status<br>gluster_command_dir: /usr/sbin/<br>pid_file: <br>/var/lib/glusterd/geo-replication/mvol1_gl-slave-01-int_svol1/monitor.pid<br>georep_session_working_dir: <br>/var/lib/glusterd/geo-replication/mvol1_gl-slave-01-int_svol1/<br>ssh_command_tar: ssh -oPasswordAuthentication=no <br>-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem<br>master.stime_xattr_name: <br>trusted.glusterfs.2f5de6e4-66de-40a7-9f24-4762aad3ca96.256628ab-57c2-44a4-9367-59e1939ade64.stime<br>changelog_log_file: <br>/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1-changes.log<br>socketdir: /var/run/gluster<br>volume_id: 2f5de6e4-66de-40a7-9f24-4762aad3ca96<br>ignore_deletes: false<br>state_socket_unencoded: <br>/var/lib/glusterd/geo-replication/mvol1_gl-slave-01-int_svol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1.socket<br>log_file: 
<br>/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1.log<br>[ 14:22:46 ] - root@gl-master-01&nbsp; ~/tmp $<br><br><hr><br>Gluster-users mailing list<br>Gluster-users@gluster.org<br><a href="http://lists.gluster.org/mailman/listinfo/gluster-users">http://lists.gluster.org/mailman/listinfo/gluster-users</a></pre></blockquote></div><br>
-- <br>
Sent from my Android device with K-9 Mail. Please excuse my brevity.</body></html>