<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none"><!--P{margin-top:0;margin-bottom:0;} --></style>
</head>
<body dir="ltr" style="font-size:12pt;color:#000000;background-color:#FFFFFF;font-family:Calibri,Arial,Helvetica,sans-serif;">
<p>Hi Kotresh,</p>
<p>I have been running 4.1.3 since the end of August.</p>
<p>Since then, data has been synced to the geo side at a rate of a couple of hundred GB per 24 hours, even with the errors I have reported in this thread.</p>
<p><br>
</p>
<p>Four days ago all data transfer to the geo side stopped, and the logs repeat the same error over and over again (see below).</p>
<p>Both nodes toggle between Active and Faulty status.<br>
</p>
<p><br>
</p>
<p>Thanks a lot!</p>
<p><br>
</p>
<p>Best regards</p>
<p>Marcus <br>
</p>
<p><br>
</p>
<p>One master node, gsyncd.log:</p>
<p>[2018-09-10 10:53:38.409709] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Faulty<br>
[2018-09-10 10:53:47.783914] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-09-10 10:53:47.852792] I [gsyncd(status):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-09-10 10:53:48.421061] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker&nbsp;&nbsp; brick=/urd-gds/gluster&nbsp; slave_node=urd-gds-geo-000<br>
[2018-09-10 10:53:48.462655] I [gsyncd(agent /urd-gds/gluster):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-09-10 10:53:48.463366] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...<br>
[2018-09-10 10:53:48.465905] I [gsyncd(worker /urd-gds/gluster):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-09-10 10:53:48.474558] I [resource(worker /urd-gds/gluster):1377:connect_remote] SSH: Initializing SSH connection between master and slave...<br>
[2018-09-10 10:53:50.70219] I [resource(worker /urd-gds/gluster):1424:connect_remote] SSH: SSH connection between master and slave established. duration=1.5954<br>
[2018-09-10 10:53:50.70777] I [resource(worker /urd-gds/gluster):1096:connect] GLUSTER: Mounting gluster volume locally...<br>
[2018-09-10 10:53:51.170597] I [resource(worker /urd-gds/gluster):1119:connect] GLUSTER: Mounted gluster volume duration=1.0994<br>
[2018-09-10 10:53:51.171158] I [subcmds(worker /urd-gds/gluster):70:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor<br>
[2018-09-10 10:53:51.696057] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-09-10 10:53:51.764605] I [gsyncd(status):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-09-10 10:53:53.210553] I [master(worker /urd-gds/gluster):1593:register] _GMaster: Working dir&nbsp;&nbsp;&nbsp; path=/var/lib/misc/gluster/gsyncd/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/urd-gds-gluster<br>
[2018-09-10 10:53:53.211148] I [resource(worker /urd-gds/gluster):1282:service_loop] GLUSTER: Register time&nbsp;&nbsp;&nbsp;&nbsp; time=1536576833<br>
[2018-09-10 10:53:53.230945] I [gsyncdstatus(worker /urd-gds/gluster):277:set_active] GeorepStatus: Worker Status Change&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; status=Active<br>
[2018-09-10 10:53:53.233444] I [gsyncdstatus(worker /urd-gds/gluster):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change&nbsp;&nbsp;&nbsp; status=History Crawl<br>
[2018-09-10 10:53:53.233632] I [master(worker /urd-gds/gluster):1507:crawl] _GMaster: starting history crawl&nbsp;&nbsp;&nbsp; turns=1 stime=(1524272046, 0)&nbsp;&nbsp; entry_stime=(1524271940, 0)&nbsp;&nbsp;&nbsp;&nbsp; etime=1536576833<br>
[2018-09-10 10:53:53.234951] I [master(worker /urd-gds/gluster):1536:crawl] _GMaster: slave's time&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stime=(1524272046, 0)<br>
[2018-09-10 10:53:53.762105] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0856 num_files=1&nbsp;&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=0<br>
[2018-09-10 10:53:54.437858] I [master(worker /urd-gds/gluster):1374:process] _GMaster: Entry Time Taken&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MKD=0&nbsp;&nbsp; MKN=0&nbsp;&nbsp; LIN=0&nbsp;&nbsp; SYM=0&nbsp;&nbsp; REN=0&nbsp;&nbsp; RMD=0&nbsp;&nbsp; CRE=0&nbsp;&nbsp; duration=0.0000 UNL=0<br>
[2018-09-10 10:53:54.437973] I [master(worker /urd-gds/gluster):1384:process] _GMaster: Data/Metadata Time Taken&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SETA=0&nbsp; SETX=0&nbsp; meta_duration=0.0000&nbsp;&nbsp;&nbsp; data_duration=1.1979&nbsp;&nbsp;&nbsp; DATA=1&nbsp; XATT=0<br>
[2018-09-10 10:53:54.438153] I [master(worker /urd-gds/gluster):1394:process] _GMaster: Batch Completed changelog_end=1524272047&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; entry_stime=(1524271940, 0)&nbsp;&nbsp;&nbsp;&nbsp; changelog_start=1524272047&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stime=(1524272046, 0)&nbsp;&nbsp; duration=1.2029 num_changelogs=1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; mode=history_changelog<br>
[2018-09-10 10:53:54.482408] I [master(worker /urd-gds/gluster):1536:crawl] _GMaster: slave's time&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stime=(1524272046, 0)<br>
[2018-09-10 10:53:54.583467] E [repce(worker /urd-gds/gluster):197:__call__] RepceClient: call failed&nbsp;&nbsp; call=1844:139973681694528:1536576834.54 method=entry_ops&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; error=GsyncdError<br>
[2018-09-10 10:53:54.583585] E [syncdutils(worker /urd-gds/gluster):300:log_raise_exception] &lt;top&gt;: execution of &quot;gluster&quot; failed with ENOENT (No such file or directory)<br>
[2018-09-10 10:53:54.600353] I [repce(agent /urd-gds/gluster):80:service_loop] RepceServer: terminating on reaching EOF.<br>
[2018-09-10 10:53:55.175978] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase&nbsp;&nbsp;&nbsp;&nbsp; brick=/urd-gds/gluster<br>
[2018-09-10 10:53:55.182988] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Faulty<br>
[2018-09-10 10:53:56.24414] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
</p>
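<p>If I understand the ENOENT correctly, gsyncd shells out to the gluster CLI during entry_ops and the binary cannot be found in that environment. A quick way to check whether the CLI resolves (just an illustration in plain Python, not gsyncd code):</p>

```python
import shutil

# Resolve the gluster CLI the same way a subprocess call would;
# ENOENT from gsyncd likely means this lookup (or the exec) failed.
path = shutil.which("gluster")
print(path if path else "gluster not found on PATH")
```

<p>If this prints "not found" on the geo node, the PATH of the non-interactive SSH session is probably the culprit.</p>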
<p><br>
</p>
<p><br>
</p>
<p>Other master node, gsyncd.log:</p>
<p>[2018-09-10 11:10:43.10458] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change&nbsp; status=Faulty<br>
[2018-09-10 11:10:53.28702] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker&nbsp;&nbsp;&nbsp; brick=/urd-gds/gluster&nbsp; slave_node=urd-gds-geo-000<br>
[2018-09-10 11:10:53.69638] I [gsyncd(agent /urd-gds/gluster):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-09-10 11:10:53.70264] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...<br>
[2018-09-10 11:10:53.71902] I [gsyncd(worker /urd-gds/gluster):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-09-10 11:10:53.80737] I [resource(worker /urd-gds/gluster):1377:connect_remote] SSH: Initializing SSH connection between master and slave...<br>
[2018-09-10 11:10:54.621948] I [resource(worker /urd-gds/gluster):1424:connect_remote] SSH: SSH connection between master and slave established.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; duration=1.5410<br>
[2018-09-10 11:10:54.622504] I [resource(worker /urd-gds/gluster):1096:connect] GLUSTER: Mounting gluster volume locally...<br>
[2018-09-10 11:10:55.721349] I [resource(worker /urd-gds/gluster):1119:connect] GLUSTER: Mounted gluster volume duration=1.0984<br>
[2018-09-10 11:10:55.721913] I [subcmds(worker /urd-gds/gluster):70:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor<br>
[2018-09-10 11:10:58.543606] I [master(worker /urd-gds/gluster):1593:register] _GMaster: Working dir&nbsp;&nbsp;&nbsp; path=/var/lib/misc/gluster/gsyncd/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/urd-gds-gluster<br>
[2018-09-10 11:10:58.545701] I [resource(worker /urd-gds/gluster):1282:service_loop] GLUSTER: Register time&nbsp;&nbsp;&nbsp;&nbsp; time=1536577858<br>
[2018-09-10 11:10:58.564208] I [gsyncdstatus(worker /urd-gds/gluster):277:set_active] GeorepStatus: Worker Status Change&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; status=Active<br>
[2018-09-10 11:10:58.565689] I [gsyncdstatus(worker /urd-gds/gluster):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change&nbsp;&nbsp;&nbsp; status=History Crawl<br>
[2018-09-10 11:10:58.565876] I [master(worker /urd-gds/gluster):1507:crawl] _GMaster: starting history crawl&nbsp;&nbsp;&nbsp; turns=1 stime=(1527128725, 0)&nbsp;&nbsp; entry_stime=(1527128815, 0)&nbsp;&nbsp;&nbsp;&nbsp; etime=1536577858<br>
[2018-09-10 11:10:59.593652] I [master(worker /urd-gds/gluster):1536:crawl] _GMaster: slave's time&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stime=(1527128725, 0)<br>
[2018-09-10 11:11:01.755116] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.5233 num_files=103&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=0<br>
[2018-09-10 11:11:02.897665] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.6648 num_files=116&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=0<br>
[2018-09-10 11:11:03.98150] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp;&nbsp; duration=0.2003 num_files=59&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-09-10 11:11:03.219059] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.1207 num_files=16&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=0<br>
[2018-09-10 11:11:03.841105] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.1212 num_files=32&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-09-10 11:11:04.951658] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.2160 num_files=24&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=0<br>
[2018-09-10 11:11:05.2938] E [repce(worker /urd-gds/gluster):197:__call__] RepceClient: call failed&nbsp;&nbsp;&nbsp;&nbsp; call=2935:140696531339072:1536577864.67 method=entry_ops&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; error=GsyncdError<br>
[2018-09-10 11:11:05.3125] E [syncdutils(worker /urd-gds/gluster):300:log_raise_exception] &lt;top&gt;: execution of &quot;gluster&quot; failed with ENOENT (No such file or directory)<br>
[2018-09-10 11:11:05.17061] I [repce(agent /urd-gds/gluster):80:service_loop] RepceServer: terminating on reaching EOF.<br>
[2018-09-10 11:11:05.733716] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase&nbsp;&nbsp;&nbsp;&nbsp; brick=/urd-gds/gluster<br>
[2018-09-10 11:11:05.768186] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Faulty<br>
[2018-09-10 11:11:15.788830] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker&nbsp;&nbsp; brick=/urd-gds/gluster&nbsp; slave_node=urd-gds-geo-000<br>
[2018-09-10 11:11:15.829871] I [gsyncd(agent /urd-gds/gluster):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf</p>
<p><br>
</p>
<p><br>
</p>
<p><br>
</p>
<div style="color: rgb(33, 33, 33);">
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> Kotresh Hiremath Ravishankar &lt;khiremat@redhat.com&gt;<br>
<b>Sent:</b> 3 September 2018 07:58<br>
<b>To:</b> Marcus Pedersén<br>
<b>Cc:</b> gluster-users@gluster.org<br>
<b>Subject:</b> Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty</font>
<div>&nbsp;</div>
</div>
<div>
<div dir="ltr">
<div>
<div>
<div>Hi Marcus,<br>
<br>
</div>
Geo-rep had few important fixes in 4.1.3. Is it possible to upgrade and check whether the issue is still seen?<br>
<br>
</div>
Thanks,<br>
</div>
Kotresh HR<br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Sat, Sep 1, 2018 at 5:08 PM, Marcus Pedersén <span dir="ltr">
&lt;<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex; border-left:1px #ccc solid; padding-left:1ex">
<div dir="ltr" style="font-size:12pt; color:#000000; background-color:#ffffff; font-family:Calibri,Arial,Helvetica,sans-serif">
<p>Hi again,</p>
<p>I found another problem on the other master node.</p>
<p>The node toggles between Active and Faulty, with the same error over and over again.</p>
<p><br>
</p>
<p>[2018-09-01 11:23:02.94080] E [repce(worker /urd-gds/gluster):197:__call__<wbr>] RepceClient: call failed&nbsp;&nbsp;&nbsp; call=1226:139955262510912:<wbr>1535800981.24 method=entry_ops&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; error=GsyncdError<br>
[2018-09-01 11:23:02.94214] E [syncdutils(worker /urd-gds/gluster):300:log_<wbr>raise_exception] &lt;top&gt;: execution of &quot;gluster&quot; failed with ENOENT (No such file or directory)<br>
[2018-09-01 11:23:02.106194] I [repce(agent /urd-gds/gluster):80:service_<wbr>loop] RepceServer: terminating on reaching EOF.<br>
[2018-09-01 11:23:02.124444] I [gsyncdstatus(monitor):244:<wbr>set_worker_status] GeorepStatus: Worker Status Change status=Faulty</p>
<p><br>
</p>
<p>I have also found a Python error, though I have only seen it once.</p>
<p><br>
</p>
<p>[2018-09-01 11:16:45.907660] I [master(worker /urd-gds/gluster):1536:crawl] _GMaster: slave's time&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stime=(1524101534, 0)<br>
[2018-09-01 11:16:47.364109] E [syncdutils(worker /urd-gds/gluster):332:log_<wbr>raise_exception] &lt;top&gt;: FAIL:<span class=""><br>
Traceback (most recent call last):<br>
</span>&nbsp; File &quot;/usr/libexec/glusterfs/<wbr>python/syncdaemon/syncdutils.<wbr>py&quot;, line 362, in twrap<br>
&nbsp;&nbsp;&nbsp; tf(*aargs)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/<wbr>python/syncdaemon/master.py&quot;, line 1939, in syncjob<br>
&nbsp;&nbsp;&nbsp; po = self.sync_engine(pb, self.log_err)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/<wbr>python/syncdaemon/resource.py&quot;<wbr>, line 1442, in rsync<br>
&nbsp;&nbsp;&nbsp; rconf.ssh_ctl_args &#43; \<br>
AttributeError: 'NoneType' object has no attribute 'split'<br>
[2018-09-01 11:16:47.384531] I [repce(agent /urd-gds/gluster):80:service_<wbr>loop] RepceServer: terminating on reaching EOF.<br>
[2018-09-01 11:16:48.362987] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase&nbsp;&nbsp;&nbsp;&nbsp; brick=/urd-gds/gluster<br>
[2018-09-01 11:16:48.370701] I [gsyncdstatus(monitor):244:<wbr>set_worker_status] GeorepStatus: Worker Status Change status=Faulty<br>
[2018-09-01 11:16:58.390548] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker&nbsp;&nbsp; brick=/urd-gds/gluster&nbsp; slave_node=urd-gds-geo-000</p>
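<p>For what it's worth, that AttributeError is the pattern you get when a config value is left unset: rconf.ssh_ctl_args being None and then having a string method called on it. A minimal illustration (hypothetical names, not the actual gsyncd code):</p>

```python
# Hypothetical illustration: an option left as None raises the same
# AttributeError as in the traceback when .split() is called on it.
class RConf:
    ssh_ctl_args = None  # would normally hold a string of ssh options

rconf = RConf()
try:
    rconf.ssh_ctl_args.split()
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'split'
```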
<p><br>
</p>
<p>I attach the logs as well.</p>
<p><br>
</p>
<p>Many thanks!</p>
<p><br>
</p>
<p>Best regards</p>
<p>Marcus Pedersén</p>
<p><br>
</p>
<p><br>
</p>
<p><br>
</p>
<div dir="ltr" style="font-size:12pt; color:#000000; background-color:#ffffff; font-family:Calibri,Arial,Helvetica,sans-serif">
<hr style="display:inline-block; width:98%">
<div id="m_1594857026570226030divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b>
<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a> &lt;<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a>&gt; on behalf of Marcus Pedersén &lt;<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>&gt;<br>
<b>Sent:</b> 31 August 2018 16:09<br>
<b>To:</b> <a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a>
<div>
<div class="h5"><br>
<b>Cc:</b> <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
<b>Subject:</b> Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty</div>
</div>
</font>
<div>&nbsp;</div>
</div>
<div>
<div class="h5">
<div>
<p>I really apologize, third try to make the mail smaller.</p>
<p><br>
</p>
<p>/Marcus </p>
<p><br>
</p>
<div dir="ltr" style="font-size:12pt; color:#000000; background-color:#ffffff; font-family:Calibri,Arial,Helvetica,sans-serif">
<hr style="display:inline-block; width:98%">
<div id="m_1594857026570226030divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> Marcus Pedersén<br>
<b>Sent:</b> 31 August 2018 16:03<br>
<b>To:</b> Kotresh Hiremath Ravishankar<br>
<b>Cc:</b> <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
<b>Subject:</b> Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty</font>
<div>&nbsp;</div>
</div>
<div>
<p>Sorry, resending; the previous mail was too large.</p>
<p><br>
</p>
<p>/Marcus<br>
</p>
<div dir="ltr" style="font-size:12pt; color:#000000; background-color:#ffffff; font-family:Calibri,Arial,Helvetica,sans-serif">
<hr style="display:inline-block; width:98%">
<div id="m_1594857026570226030divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> Marcus Pedersén<br>
<b>Sent:</b> 31 August 2018 15:19<br>
<b>To:</b> Kotresh Hiremath Ravishankar<br>
<b>Cc:</b> <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
<b>Subject:</b> Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty</font>
<div>&nbsp;</div>
</div>
<div>
<p>Hi Kotresh,</p>
<p>Please find the logs attached; they only cover today.</p>
<p>The Python error was repeated over and over again until I disabled SELinux.</p>
<p>After that the node became active again.</p>
<p>Return code 23 seems to be repeated over and over again.<br>
</p>
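<p>For reference, return code 23 comes from rsync itself, not Gluster; rsync(1) documents it as a partial transfer. My own summary of the relevant codes:</p>

```python
# Selected rsync exit codes as documented in rsync(1).
RSYNC_EXIT = {
    0: "success",
    12: "error in rsync protocol data stream",
    23: "partial transfer due to error",
    24: "partial transfer due to vanished source files",
}

print(RSYNC_EXIT[23])  # partial transfer due to error
```

<p>So code 23 means some individual files failed (often permission or attribute errors) while the rest of the batch still synced.</p>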
<p><br>
</p>
<p>rsync version 3.1.2</p>
<p><br>
</p>
<p>Thanks a lot!</p>
<p><br>
</p>
<p>Best regards</p>
<p>Marcus<br>
</p>
<p><br>
</p>
<div style="color:rgb(33,33,33)">
<hr style="display:inline-block; width:98%">
<div id="m_1594857026570226030divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> Kotresh Hiremath Ravishankar &lt;<a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a>&gt;<br>
<b>Sent:</b> 31 August 2018 11:09<br>
<b>To:</b> Marcus Pedersén<br>
<b>Cc:</b> <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
<b>Subject:</b> Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty</font>
<div>&nbsp;</div>
</div>
<div>
<div dir="ltr">
<div>
<div>
<div>
<div>Hi Marcus,<br>
<br>
</div>
Could you attach full logs? Is the same trace back happening repeatedly? It will be helpful you attach the corresponding mount log as well.<br>
</div>
What's the rsync version, you are using?<br>
<br>
</div>
Thanks,<br>
</div>
Kotresh HR<br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Fri, Aug 31, 2018 at 12:16 PM, Marcus Pedersén <span dir="ltr">
&lt;<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex; border-left:1px #ccc solid; padding-left:1ex">
<div dir="ltr" style="font-size:12pt; color:#000000; background-color:#ffffff; font-family:Calibri,Arial,Helvetica,sans-serif">
<p>Hi all,</p>
<p>I had problems with sync stopping after the upgrade to 4.1.2.</p>
<p>I upgraded to 4.1.3 and it ran fine for one day, but now one of the master nodes shows Faulty.</p>
<p>Most of the sync jobs have return code 23; how do I resolve this?</p>
<p>I see messages like: </p>
<p>_GMaster: Sucessfully fixed all entry ops with gfid mismatch</p>
<p>Will this resolve error code 23?<br>
</p>
<p>There is also a Python error.</p>
<p>The Python error was an SELinux problem; turning off SELinux made the node go Active again.<br>
</p>
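<p>Since turning SELinux off changed the behaviour, checking the enforcement state first may help confirm this on other nodes. A small sketch of mine, assuming selinuxfs is mounted at /sys/fs/selinux:</p>

```python
from pathlib import Path

def selinux_mode(enforce_file: str = "/sys/fs/selinux/enforce") -> str:
    """Return 'enforcing', 'permissive' or 'disabled' based on selinuxfs."""
    p = Path(enforce_file)
    if not p.exists():
        return "disabled"
    return "enforcing" if p.read_text().strip() == "1" else "permissive"

print(selinux_mode())
```

<p>The Errno 13 Permission denied from entry_ops under enforcing mode would then point at a missing or denied SELinux policy rule rather than ordinary file permissions.</p>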
<p>See log below.<br>
</p>
<p><br>
</p>
<p>CentOS 7, installed through the CentOS Storage SIG Gluster packages (OS updated to latest at the same time)<br>
</p>
<p>Master cluster: 2 x (2 &#43; 1) distributed, replicated</p>
<p>Client cluster: 1 x (2 &#43; 1) replicated</p>
<p><br>
</p>
<p>Many thanks in advance!</p>
<p><br>
</p>
<p>Best regards</p>
<p>Marcus Pedersén<br>
</p>
<p><br>
</p>
<p><br>
</p>
<p>gsyncd.log from Faulty node:</p>
<p>[2018-08-31 06:25:51.375267] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.8099 num_files=57&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:25:51.465895] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0904 num_files=3&nbsp;&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:25:52.562107] E [repce(worker /urd-gds/gluster):197:__call__<wbr>] RepceClient: call failed&nbsp;&nbsp; call=30069:139655665837888:153<wbr>5696752.35&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; method=entry_ops&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; error=OSError<br>
[2018-08-31 06:25:52.562346] E [syncdutils(worker /urd-gds/gluster):332:log_rais<wbr>e_exception] &lt;top&gt;: FAIL:<br>
Traceback (most recent call last):<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/gsyncd.py&quot;, line 311, in main<br>
&nbsp;&nbsp;&nbsp; func(args)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/subcmds.py&quot;, line 72, in subcmd_worker<br>
&nbsp;&nbsp;&nbsp; local.service_loop(remote)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/resource.py&quot;, line 1288, in service_loop<br>
&nbsp;&nbsp;&nbsp; g3.crawlwrap(oneshot=True)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/master.py&quot;, line 615, in crawlwrap<br>
&nbsp;&nbsp;&nbsp; self.crawl()<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/master.py&quot;, line 1545, in crawl<br>
&nbsp;&nbsp;&nbsp; self.changelogs_batch_process(<wbr>changes)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/master.py&quot;, line 1445, in changelogs_batch_process<br>
&nbsp;&nbsp;&nbsp; self.process(batch)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/master.py&quot;, line 1280, in process<br>
&nbsp;&nbsp;&nbsp; self.process_change(change, done, retry)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/master.py&quot;, line 1179, in process_change<br>
&nbsp;&nbsp;&nbsp; failures = self.slave.server.entry_ops(en<wbr>tries)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/repce.py&quot;, line 216, in __call__<br>
&nbsp;&nbsp;&nbsp; return self.ins(self.meth, *a)<br>
&nbsp; File &quot;/usr/libexec/glusterfs/python<wbr>/syncdaemon/repce.py&quot;, line 198, in __call__<br>
&nbsp;&nbsp;&nbsp; raise res<br>
OSError: [Errno 13] Permission denied<br>
[2018-08-31 06:25:52.578367] I [repce(agent /urd-gds/gluster):80:service_l<wbr>oop] RepceServer: terminating on reaching EOF.<br>
[2018-08-31 06:25:53.558765] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase&nbsp;&nbsp;&nbsp;&nbsp; brick=/urd-gds/gluster<br>
[2018-08-31 06:25:53.569777] I [gsyncdstatus(monitor):244:set<wbr>_worker_status] GeorepStatus: Worker Status Change status=Faulty<br>
[2018-08-31 06:26:03.593161] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker&nbsp;&nbsp; brick=/urd-gds/gluster&nbsp; slave_node=urd-gds-geo-000<br>
[2018-08-31 06:26:03.636452] I [gsyncd(agent /urd-gds/gluster):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-rep<wbr>lication/urd-gds-volume_urd-<wbr>gds-geo-001_urd-gds-volume/<wbr>gsyncd.conf<br>
[2018-08-31 06:26:03.636810] I [gsyncd(worker /urd-gds/gluster):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-rep<wbr>lication/urd-gds-volume_urd-<wbr>gds-geo-001_urd-gds-volume/<wbr>gsyncd.conf<br>
[2018-08-31 06:26:03.637486] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...<br>
[2018-08-31 06:26:03.650330] I [resource(worker /urd-gds/gluster):1377:connect<wbr>_remote] SSH: Initializing SSH connection between master and slave...<br>
[2018-08-31 06:26:05.296473] I [resource(worker /urd-gds/gluster):1424:connect<wbr>_remote] SSH: SSH connection between master and slave established.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; duration=1.6457<br>
[2018-08-31 06:26:05.297904] I [resource(worker /urd-gds/gluster):1096:connect<wbr>] GLUSTER: Mounting gluster volume locally...<br>
[2018-08-31 06:26:06.396939] I [resource(worker /urd-gds/gluster):1119:connect<wbr>] GLUSTER: Mounted gluster volume duration=1.0985<br>
[2018-08-31 06:26:06.397691] I [subcmds(worker /urd-gds/gluster):70:subcmd_wo<wbr>rker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor<br>
[2018-08-31 06:26:16.815566] I [master(worker /urd-gds/gluster):1593:registe<wbr>r] _GMaster: Working dir&nbsp;&nbsp;&nbsp; path=/var/lib/misc/gluster/gsy<wbr>ncd/urd-gds-volume_urd-gds-geo<wbr>-001_urd-gds-volume/urd-gds-<wbr>gluster<br>
[2018-08-31 06:26:16.816423] I [resource(worker /urd-gds/gluster):1282:service<wbr>_loop] GLUSTER: Register time&nbsp;&nbsp;&nbsp;&nbsp; time=1535696776<br>
[2018-08-31 06:26:16.888772] I [gsyncdstatus(worker /urd-gds/gluster):277:set_acti<wbr>ve] GeorepStatus: Worker Status Change&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; status=Active<br>
[2018-08-31 06:26:16.892049] I [gsyncdstatus(worker /urd-gds/gluster):249:set_work<wbr>er_crawl_status] GeorepStatus: Crawl Status Change&nbsp;&nbsp;&nbsp; status=History Crawl<br>
[2018-08-31 06:26:16.892703] I [master(worker /urd-gds/gluster):1507:crawl] _GMaster: starting history crawl&nbsp;&nbsp;&nbsp; turns=1 stime=(1525739167, 0)&nbsp;&nbsp; entry_stime=(1525740143, 0)&nbsp;&nbsp;&nbsp;&nbsp; etime=1535696776<br>
[2018-08-31 06:26:17.914803] I [master(worker /urd-gds/gluster):1536:crawl] _GMaster: slave's time&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stime=(1525739167, 0)<br>
[2018-08-31 06:26:18.521718] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.1063 num_files=17&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:19.260137] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.3441 num_files=34&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:19.615191] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0923 num_files=7&nbsp;&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:19.891227] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.1302 num_files=12&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:19.922700] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.5024 num_files=50&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23</p>
<p>[2018-08-31 06:26:21.639342] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=1.5233 num_files=5&nbsp;&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:22.12726] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp;&nbsp; duration=0.1191 num_files=7&nbsp;&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:22.86136] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp;&nbsp; duration=0.0731 num_files=4&nbsp;&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:22.503290] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0779 num_files=15&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:23.214704] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0738 num_files=9&nbsp;&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:23.251876] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.2478 num_files=33&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:23.802699] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0873 num_files=9&nbsp;&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:23.828176] I [master(worker /urd-gds/gluster):1944:syncjob<wbr>] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0758 num_files=3&nbsp;&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:23.854063] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.2662 num_files=34&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:24.403228] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0997 num_files=30&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:25.526] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; duration=0.0965 num_files=8&nbsp;&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:25.438527] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0832 num_files=9&nbsp;&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:25.447256] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.6180 num_files=86&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:25.571913] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0706 num_files=2&nbsp;&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=0<br>
[2018-08-31 06:26:27.21325] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp;&nbsp; duration=0.0814 num_files=1&nbsp;&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:27.615520] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0933 num_files=13&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:27.668323] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.2190 num_files=95&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:27.740139] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0716 num_files=11&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:28.191068] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.1167 num_files=38&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:28.268213] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0768 num_files=7&nbsp;&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:28.317909] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0770 num_files=4&nbsp;&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:28.710064] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0932 num_files=23&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:28.907250] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0886 num_files=26&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:28.976679] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0692 num_files=4&nbsp;&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:29.55774] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp;&nbsp; duration=0.0788 num_files=9&nbsp;&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:29.295576] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0847 num_files=16&nbsp;&nbsp;&nbsp; job=1&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:29.665076] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.1087 num_files=25&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:30.277998] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.1122 num_files=40&nbsp;&nbsp;&nbsp; job=2&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:31.153105] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.3822 num_files=74&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:31.227639] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0743 num_files=18&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23<br>
[2018-08-31 06:26:31.302660] I [master(worker /urd-gds/gluster):1944:syncjob] Syncer: Sync Time Taken&nbsp;&nbsp; duration=0.0748 num_files=18&nbsp;&nbsp;&nbsp; job=3&nbsp;&nbsp; return_code=23</p>
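<p>For context on the lines above: the return_code field in these Syncer messages is the exit status of the rsync run that geo-replication invoked, and per the rsync(1) EXIT VALUES section, 23 means "Partial transfer due to error". A minimal, hypothetical helper (the function name and code set are my own, not part of gsyncd) to decode the common values:</p>

```shell
#!/bin/sh
# Map common rsync exit codes to their documented meanings (rsync(1) EXIT VALUES),
# to help read the return_code field in the gsyncd Syncer log lines.
rsync_exit_meaning() {
  case "$1" in
    0)  echo "Success" ;;
    23) echo "Partial transfer due to error" ;;
    24) echo "Partial transfer due to vanished source files" ;;
    *)  echo "Other error (see rsync(1) EXIT VALUES)" ;;
  esac
}

rsync_exit_meaning 23
```

<p>So almost every sync job in the excerpt completed only partially; code 23 typically points at permission problems, unreadable files, or files that could not be created on the slave side.</p>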
<p><br>
</p>
<p>---<br>
När du skickar e-post till SLU så innebär detta att SLU behandlar dina personuppgifter. För att läsa mer om hur detta går till, klicka
<a href="https://www.slu.se/om-slu/kontakta-slu/personuppgifter/" target="_blank">
här </a><br>
E-mailing SLU will result in SLU processing your personal data. For more information on how this is done, click
<a href="https://www.slu.se/en/about-slu/contact-slu/personal-data/" target="_blank">
here </a></p>
</div>
<br>
______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mail<wbr>man/listinfo/gluster-users</a><br>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div>
<div class="h5">
</div>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div class="gmail_signature">
<div dir="ltr">
<div>Thanks and Regards,<br>
</div>
Kotresh H R<br>
</div>
</div>
</div>
</div>
</div>
</body>
</html>