[Gluster-users] no progress in geo-replication

Shwetha Acharya sacharya at redhat.com
Thu Mar 4 07:48:16 UTC 2021


Hi Dietmar,

> batch-fsync-delay-usec was already set to 0, and I increased sync_jobs
> from 3 to 6. At the moment I increased sync_jobs, the following error
> appeared in gsyncd.log:

> [2021-03-03 23:17:46.59727] E [syncdutils(worker
> /brick1/mvol1):312:log_raise_exception] <top>: connection to peer is broken
> [2021-03-03 23:17:46.59912] E [syncdutils(worker
> /brick2/mvol1):312:log_raise_exception] <top>: connection to peer is broken
>
If the geo-rep session is currently not in a Faulty state, we need not be
bothered about this log message. It is normal: when the config is updated, a
geo-rep restart occurs and the above message pops up.
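
For reference, the config changes in question can be applied with commands of
this form (a sketch only; substitute your actual volume names and secondary
host, and note that some releases spell the option sync-jobs rather than
sync_jobs):

#gluster volume geo-replication <primary> <ip>::<secondary> config batch-fsync-delay-usec 0
#gluster volume geo-replication <primary> <ip>::<secondary> config sync_jobs 6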

> passive nodes became active and the content in <brick>/.processing was
> removed. Currently, new changelog files are created in this directory.
> Shortly before I changed the sync_jobs, I checked the <brick>/.processing
> directory on the master nodes; the result was the same for every master
> node.
>
> Since the last error about 12 hours ago, nearly 2400 changelog files were
> created on each node, but it looks like none of them were consumed.
>
Changelogs that have been processed and synced are archived under the
<brick>/.processed directory. Verify whether the latest files are being
created there.
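
For example, a quick way to check whether new archives are landing there
(path as you described it; adjust to the session's actual working directory):

#ls -lt <brick>/.processed | head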

> At the moment I'm not sure what is right and what is wrong... should at
> least the oldest changelog files in this directory have been processed
> gradually?
>
You can also set the log-level to DEBUG for a while, check the logs to get a
better picture of the scenario, and then set it back to INFO (to avoid
flooding the logs):

#gluster volume geo-replication <primary> <ip>::<secondary> config log-level DEBUG
#gluster volume geo-replication <primary> <ip>::<secondary> config log-level INFO
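
Since a config change restarts the geo-rep session, you can also confirm that
the workers come back to Active/Passive afterwards with, for example:

#gluster volume geo-replication <primary> <ip>::<secondary> status detail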

Regards,
Shwetha