<div dir="ltr"><div dir="ltr">Hi Dietmar,<br><br></div><div dir="ltr">batch-fsync-delay-usec was already set to 0 and I increased the
sync_jobs from 3 to 6. The moment I increased sync_jobs, the
following error appeared in gsyncd.log:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<p>[2021-03-03 23:17:46.59727] E [syncdutils(worker
/brick1/mvol1):312:log_raise_exception] <top>: connection to
peer is broken<br>
[2021-03-03 23:17:46.59912] E [syncdutils(worker
/brick2/mvol1):312:log_raise_exception] <top>: connection to
peer is broken</p>
<p></p></blockquote><div>If the geo-rep session is not currently in a Faulty state, we need not be bothered by this log message. It is normal: when the config is updated, geo-rep restarts and the above message pops up.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><p>Passive nodes became active and the content in
&lt;brick&gt;/.processing was removed. Currently, new changelog
files are being created in this directory. Shortly before I changed the sync_jobs I checked the
&lt;brick&gt;/.processing directory on the master nodes. The
result was the same for every master node.</p></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<p>Since the last error, about 12 hours ago, nearly 2400 changelog
files were created on each node, but it looks like none of them
were consumed.</p></blockquote><div>Changelogs that have been processed and synced are archived under the &lt;brick&gt;/.processed directory. Verify whether the latest files are being created there.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
<p>At the moment I'm not sure what is right and what is wrong... Should
at least the oldest changelog files in this directory have
been processed gradually?</p></div></blockquote><div>You can also set the log level to DEBUG for a while, check the logs to get a better picture of the scenario, and then set it back to INFO to avoid flooding the logs:</div><div>#gluster volume geo-replication &lt;primary&gt; &lt;ip&gt;::&lt;secondary&gt; config log-level DEBUG<br>#gluster volume geo-replication &lt;primary&gt; &lt;ip&gt;::&lt;secondary&gt; config log-level INFO<br><br>Regards,<br>Shwetha<br><br></div></div></div>