<div dir="ltr"><div><div><div><div><div>Hi Mark,<br><br></div>Few questions.<br><br></div>1. Is this trace back consistently hit? I just wanted to confirm whether it's transient which occurs once in a while and gets back to normal?<br></div>2. Please upload the complete geo-rep logs from both master and slave.<br><br></div>Thanks,<br></div>Kotresh HR<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jun 6, 2018 at 7:10 PM, Mark Betham <span dir="ltr"><<a href="mailto:mark.betham@performancehorizon.com" target="_blank">mark.betham@performancehorizon.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Dear Gluster-Users,<br clear="all"><div><br></div><div>I have geo-replication setup and configured between 2 Gluster pools located at different sites. What I am seeing is an error being reported within the geo-replication slave log as follows;</div><div><br></div><div><div><i><font face="monospace, monospace">[2018-06-05 12:05:26.767615] E [syncdutils(slave):331:log_<wbr>raise_exception] <top>: FAIL: </font></i></div><div><i><font face="monospace, monospace">Traceback (most recent call last):</font></i></div><div><i><font face="monospace, monospace"> File "/usr/libexec/glusterfs/<wbr>python/syncdaemon/syncdutils.<wbr>py", line 361, in twrap</font></i></div><div><i><font face="monospace, monospace"> tf(*aa)</font></i></div><div><i><font face="monospace, monospace"> File "/usr/libexec/glusterfs/<wbr>python/syncdaemon/resource.py"<wbr>, line 1009, in <lambda></font></i></div><div><i><font face="monospace, monospace"> t = syncdutils.Thread(target=<wbr>lambda: (repce.service_loop(),</font></i></div><div><i><font face="monospace, monospace"> File "/usr/libexec/glusterfs/<wbr>python/syncdaemon/repce.py", line 90, in service_loop</font></i></div><div><i><font face="monospace, monospace"> self.q.put(recv(self.inf))</font></i></div><div><i><font face="monospace, monospace"> File "/usr/libexec/glusterfs/<wbr>python/syncdaemon/repce.py", line 61, in recv</font></i></div><div><i><font face="monospace, monospace"> return pickle.load(inf)</font></i></div><div><i><font face="monospace, monospace">ImportError: No module named h_2013-04-26-04:02:49-2013-04-<wbr>26_11:02:53.gz.15WBuUh</font></i></div><div><i><font face="monospace, monospace">[2018-06-05 12:05:26.768085] E [repce(slave):117:worker] <top>: call failed: </font></i></div><div><i><font face="monospace, monospace">Traceback (most recent call last):</font></i></div><div><i><font face="monospace, monospace"> File "/usr/libexec/glusterfs/<wbr>python/syncdaemon/repce.py", line 113, in worker</font></i></div><div><i><font face="monospace, monospace"> res = getattr(self.obj, rmeth)(*in_data[2:])</font></i></div><div><i><font face="monospace, monospace">TypeError: getattr(): attribute name must be string</font></i></div></div><div><font face="arial, helvetica, sans-serif"><br></font></div><div><font face="arial, helvetica, sans-serif">From this point in time the slave server begins to consume all of its available RAM until it becomes non-responsive. Eventually the gluster service seems to kill off the offending process and the memory is returned to the system. 
From this point in time the slave server begins to consume all of its available RAM until it becomes unresponsive. Eventually the Gluster service seems to kill off the offending process and the memory is returned to the system. Once the memory has been returned to the remote slave system, geo-replication often recovers and data transfer resumes.
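In case it helps to line the memory growth up with the timestamps in the log, something along the lines of the script below could be left running on the slave to sample the resident memory of the gsyncd workers. This is only a rough sketch of mine, not anything shipped with Gluster; the match on "gsyncd" and the 60-second interval are assumptions:

#!/usr/bin/env python
# Rough sketch: sample the combined RSS of all gsyncd processes once a
# minute so the growth can be correlated with the geo-rep log entries.
import os
import time
import datetime

def gsyncd_rss_kb():
    total = 0
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/cmdline' % pid, 'rb') as f:
                if b'gsyncd' not in f.read():
                    continue
            with open('/proc/%s/status' % pid) as f:
                for line in f:
                    if line.startswith('VmRSS:'):
                        total += int(line.split()[1])  # reported in kB
        except (IOError, OSError):  # process exited while being read
            continue
    return total

while True:
    print('%s gsyncd RSS: %d kB'
          % (datetime.datetime.now().isoformat(), gsyncd_rss_kb()))
    time.sleep(60)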
I have attached the full geo-replication slave log containing the error shown above. I have also attached an image file showing the memory usage of the affected storage server.

We are currently running Gluster version 3.12.9 on top of CentOS 7.5 x86_64. The system has been fully patched and is running the latest software, except for glibc, which had to be downgraded to get geo-replication working.

The Gluster volume runs on a dedicated partition using the XFS filesystem, which in turn sits on an LVM thin volume. The physical storage is presented as a single drive because the underlying disks are part of a RAID 10 array.

The master volume being replicated holds a total of 2.2 TB of data. The total size of the volume fluctuates very little, as the data being removed roughly equals the new data coming in. The data is made up of many thousands of files spread across many separate directories, with file sizes ranging from very small (<1 KB) to large (>1 GB). The Gluster service itself runs a single volume in a replicated configuration across 3 bricks at each of the sites. The delta changes being replicated average around 100 GB per day, covering file creation, deletion and modification.

The config for the geo-replication session is as follows, taken from the current source server:

special_sync_mode: partial
gluster_log_file: /var/log/glusterfs/geo-replication/glustervol0/ssh%3A%2F%2Froot%40storage-server.local%3Agluster%3A%2F%2F127.0.0.1%3Aglustervol1.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
change_detector: changelog
session_owner: 40e9e77a-034c-44a2-896e-59eec47e8a84
state_file: /var/lib/glusterd/geo-replication/glustervol0_storage-server.local_glustervol1/monitor.status
gluster_params: aux-gfid-mount acl
log_rsync_performance: true
remote_gsyncd: /nonexistent/gsyncd
working_dir: /var/lib/misc/glusterfsd/glustervol0/ssh%3A%2F%2Froot%40storage-server.local%3Agluster%3A%2F%2F127.0.0.1%3Aglustervol1
state_detail_file: /var/lib/glusterd/geo-replication/glustervol0_storage-server.local_glustervol1/ssh%3A%2F%2Froot%40storage-server.local%3Agluster%3A%2F%2F127.0.0.1%3Aglustervol1-detail.status
gluster_command_dir: /usr/sbin/
pid_file: /var/lib/glusterd/geo-replication/glustervol0_storage-server.local_glustervol1/monitor.pid
georep_session_working_dir: /var/lib/glusterd/geo-replication/glustervol0_storage-server.local_glustervol1/
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
master.stime_xattr_name: trusted.glusterfs.40e9e77a-034c-44a2-896e-59eec47e8a84.ccfaed9b-ff4b-4a55-acfa-03f092cdf460.stime
changelog_log_file: /var/log/glusterfs/geo-replication/glustervol0/ssh%3A%2F%2Froot%40storage-server.local%3Agluster%3A%2F%2F127.0.0.1%3Aglustervol1-changes.log
socketdir: /var/run/gluster
volume_id: 40e9e77a-034c-44a2-896e-59eec47e8a84
ignore_deletes: false
state_socket_unencoded: /var/lib/glusterd/geo-replication/glustervol0_storage-server.local_glustervol1/ssh%3A%2F%2Froot%40storage-server.local%3Agluster%3A%2F%2F127.0.0.1%3Aglustervol1.socket
log_file: /var/log/glusterfs/geo-replication/glustervol0/ssh%3A%2F%2Froot%40storage-server.local%3Agluster%3A%2F%2F127.0.0.1%3Aglustervol1.log

If any further information is required in order to troubleshoot this issue, please let me know.

I would be very grateful for any help or guidance received.

Many thanks,

Mark Betham.
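As a quick way of telling whether the traceback above recurs regularly or only once in a while, the FAIL entries in the slave log can be counted per day; a rough sketch, with a placeholder log path that would need pointing at the real slave-side geo-replication log:

# Count how many times the geo-rep FAIL traceback is logged per day.
# The path below is a placeholder; point it at the actual slave log.
import collections
import re

LOG = '/path/to/slave/geo-replication.log'  # placeholder path

pattern = re.compile(r'\[(\d{4}-\d{2}-\d{2}) .*log_raise_exception\] <top>: FAIL')
hits = collections.Counter()
with open(LOG) as f:
    for line in f:
        m = pattern.match(line)
        if m:
            hits[m.group(1)] += 1

for day, count in sorted(hits.items()):
    print(day, count)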
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

--
Thanks and Regards,
Kotresh H R