<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0cm;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:blue;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:purple;
        text-decoration:underline;}
p.msonormal0, li.msonormal0, div.msonormal0
        {mso-style-name:msonormal;
        mso-margin-top-alt:auto;
        margin-right:0cm;
        mso-margin-bottom-alt:auto;
        margin-left:0cm;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
span.EmailStyle18
        {mso-style-type:personal-reply;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;}
@page WordSection1
        {size:612.0pt 792.0pt;
        margin:70.85pt 70.85pt 2.0cm 70.85pt;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="DE" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span lang="EN-GB">Ok. It happens on all slave nodes (and on the interimmaster as well).<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB"><o:p>&nbsp;</o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB">It’s as I assumed. These are the logs of one of the slaves:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB"><o:p>&nbsp;</o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB">gsyncd.log<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:25.418382] I [repce(slave slave/bricks/brick1/brick):80:service_loop] RepceServer: terminating on reaching EOF.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:37.95297] W [gsyncd(slave slave/bricks/brick1/brick):293:main] &lt;top&gt;: Session config file not exists, using the default config&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/glustervol1_slave_glustervol1/gsyncd.conf<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:37.109643] I [resource(slave slave/bricks/brick1/brick):1096:connect] GLUSTER: Mounting gluster volume locally...<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:38.303920] I [resource(slave slave/bricks/brick1/brick):1119:connect] GLUSTER: Mounted gluster volume&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; duration=1.1941<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:38.304771] I [resource(slave slave/bricks/brick1/brick):1146:service_loop] GLUSTER: slave listening<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:41.981554] I [resource(slave slave/bricks/brick1/brick):598:entry_ops] &lt;top&gt;: Special case: rename on mkdir&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; gfid=29d1d60d-1ad6-45fc-87e0-93d478f7331e&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; entry='.gfid/6b97b987-8aef-46c3-af27-20d3aa883016/New
 folder'<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:42.45641] E [repce(slave slave/bricks/brick1/brick):105:worker] &lt;top&gt;: call failed:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">Traceback (most recent call last):<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/repce.py&quot;, line 101, in worker<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; res = getattr(self.obj, rmeth)(*in_data[2:])<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/resource.py&quot;, line 599, in entry_ops<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; src_entry = get_slv_dir_path(slv_host, slv_volume, gfid)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py&quot;, line 682, in get_slv_dir_path<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; [ENOENT], [ESTALE])<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py&quot;, line 540, in errno_wrap<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; return call(*arg)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">OSError: [Errno 13] Permission denied: '/bricks/brick1/brick/.glusterfs/29/d1/29d1d60d-1ad6-45fc-87e0-93d478f7331e'<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:42.81794] I [repce(slave slave/bricks/brick1/brick):80:service_loop] RepceServer: terminating on reaching EOF.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:53.459676] W [gsyncd(slave slave/bricks/brick1/brick):293:main] &lt;top&gt;: Session config file not exists, using the default config&nbsp;&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/glustervol1_slave_glustervol1/gsyncd.conf<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:53.473500] I [resource(slave slave/bricks/brick1/brick):1096:connect] GLUSTER: Mounting gluster volume locally...<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:54.659044] I [resource(slave slave/bricks/brick1/brick):1119:connect] GLUSTER: Mounted gluster volume&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;duration=1.1854<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:52:54.659837] I [resource(slave slave/bricks/brick1/brick):1146:service_loop] GLUSTER: slave listening<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB"><o:p>&nbsp;</o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB">The folder “New folder” was created via Samba and renamed by my colleague right after creation.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[root@slave glustervol1_slave_glustervol1]# ls /bricks/brick1/brick/.glusterfs/29/d1/29d1d60d-1ad6-45fc-87e0-93d478f7331e/<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[root@slave glustervol1_slave_glustervol1]# ls /bricks/brick1/brick/.glusterfs/29/d1/ -al<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">total 0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">drwx--S---&#43;&nbsp; 2 root AD&#43;group 50 Sep 21 09:39 .<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">drwx--S---&#43; 11 root AD&#43;group 96 Sep 21 09:39 ..<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">lrwxrwxrwx.&nbsp; 1 root AD&#43;group 75 Sep 21 09:39 29d1d60d-1ad6-45fc-87e0-93d478f7331e -&gt; ../../6b/97/6b97b987-8aef-46c3-af27-20d3aa883016/vRealize Operation Manager<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB"><o:p>&nbsp;</o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB">I tried creating the folder in /bricks/brick1/brick/.glusterfs/6b/97/6b97b987-8aef-46c3-af27-20d3aa883016/, but it didn’t change anything.<o:p></o:p></span></p>
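For reference, a minimal sketch (assuming the standard .glusterfs backend layout; this is not the actual gluster code) of how the gfid from the traceback maps to the on-brick path that fails with Errno 13:

```python
def gfid_backend_path(brick_root, gfid):
    # Standard backend scheme: .glusterfs/<first 2 hex chars>/<next 2>/<full gfid>.
    # For directories, that entry is a symlink into the parent gfid's directory,
    # which is why get_slv_dir_path has to traverse it (and hits EACCES here).
    return "{0}/.glusterfs/{1}/{2}/{3}".format(brick_root, gfid[0:2], gfid[2:4], gfid)

print(gfid_backend_path("/bricks/brick1/brick",
                        "29d1d60d-1ad6-45fc-87e0-93d478f7331e"))
# -> /bricks/brick1/brick/.glusterfs/29/d1/29d1d60d-1ad6-45fc-87e0-93d478f7331e
```

That resolved path is exactly the one in the OSError above, and the `ls -al` output shows it sitting under directories with mode drwx--S---, which the unprivileged geoaccount user cannot traverse.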
<p class="MsoNormal"><span lang="EN-GB"><o:p>&nbsp;</o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB">mnt-slave-bricks-brick1-brick.log<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.625723] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glustervol1-client-0: error returned while attempting to connect to host:(null), port:0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.626092] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glustervol1-client-0: error returned while attempting to connect to host:(null), port:0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.626181] I [rpc-clnt.c:2105:rpc_clnt_reconfig] 0-glustervol1-client-0: changing port to 49152 (from 0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.643111] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glustervol1-client-0: error returned while attempting to connect to host:(null), port:0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.643489] W [dict.c:923:str_to_data] (--&gt;/usr/lib64/glusterfs/4.1.3/xlator/protocol/client.so(&#43;0x4131a) [0x7fafb023831a] --&gt;/lib64/libglusterfs.so.0(dict_set_str&#43;0x16)
 [0x7fafbdb83266] --&gt;/lib64/libglusterfs.so.0(str_to_data&#43;0x91) [0x7fafbdb7fea1] ) 0-dict: value is NULL [Invalid argument]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.643507] I [MSGID: 114006] [client-handshake.c:1308:client_setvolume] 0-glustervol1-client-0: failed to set process-name in handshake msg<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.643541] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glustervol1-client-0: error returned while attempting to connect to host:(null), port:0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.671460] I [MSGID: 114046] [client-handshake.c:1176:client_setvolume_cbk] 0-glustervol1-client-0: Connected to glustervol1-client-0, attached to remote volume '/bricks/brick1/brick'.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.672694] I [fuse-bridge.c:4294:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.672715] I [fuse-bridge.c:4927:fuse_graph_sync] 0-fuse: switched to graph 0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:10.673329] I [MSGID: 109005] [dht-selfheal.c:2342:dht_selfheal_directory] 0-glustervol1-dht: Directory selfheal failed: Unable to form layout for directory /<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:16.116458] I [fuse-bridge.c:5199:fuse_thread_proc] 0-fuse: initating unmount of /var/mountbroker-root/user1300/mtpt-geoaccount-ARDW1E<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:16.116595] W [glusterfsd.c:1514:cleanup_and_exit] (--&gt;/lib64/libpthread.so.0(&#43;0x7e25) [0x7fafbc9eee25] --&gt;/usr/sbin/glusterfs(glusterfs_sigwaiter&#43;0xe5) [0x55d5dac5dd65]
 --&gt;/usr/sbin/glusterfs(cleanup_and_exit&#43;0x6b) [0x55d5dac5db8b] ) 0-: received signum (15), shutting down<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:16.116616] I [fuse-bridge.c:5981:fini] 0-fuse: Unmounting '/var/mountbroker-root/user1300/mtpt-geoaccount-ARDW1E'.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-24 13:51:16.116625] I [fuse-bridge.c:5986:fini] 0-fuse: Closing fuse connection to '/var/mountbroker-root/user1300/mtpt-geoaccount-ARDW1E'.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB"><o:p>&nbsp;</o:p></span></p>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="color:black">Regards,<o:p></o:p></span></p>
</div>
<p class="MsoNormal"><span lang="EN-GB" style="color:black">Christian</span><span lang="EN-GB"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB"><o:p>&nbsp;</o:p></span></p>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal"><b><span lang="EN-US" style="font-size:12.0pt;color:black">From:
</span></b><span lang="EN-US" style="font-size:12.0pt;color:black">Kotresh Hiremath Ravishankar &lt;khiremat@redhat.com&gt;<br>
<b>Date: </b>Saturday, 22. September 2018 at 06:52<br>
</span><b><span style="font-size:12.0pt;color:black">To: </span></b><span style="font-size:12.0pt;color:black">&quot;Kotte, Christian (Ext)&quot; &lt;christian.kotte@novartis.com&gt;<br>
<b>Cc: </b>Gluster Users &lt;gluster-users@gluster.org&gt;<br>
<b>Subject: </b>Re: [Gluster-users] [geo-rep] Replication faulty - gsyncd.log OSError: [Errno 13] Permission denied<o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><o:p>&nbsp;</o:p></p>
</div>
<div>
<p class="MsoNormal">The problem occurred on the slave side, and its error is propagated to the master. Almost any traceback involving repce points to a problem on the slave. Check a few lines above in the log to find the slave node that the crashed worker is connected to, and get the geo-replication logs from that node to debug further. <o:p></o:p></p>
</div>
<p class="MsoNormal"><o:p>&nbsp;</o:p></p>
<div>
<div>
<p class="MsoNormal">On Fri, 21 Sep 2018, 20:10 Kotte, Christian (Ext), &lt;<a href="mailto:christian.kotte@novartis.com">christian.kotte@novartis.com</a>&gt; wrote:<o:p></o:p></p>
</div>
<blockquote style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-right:0cm">
<div>
<div>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">Hi,</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">Any idea how to troubleshoot this?</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">New folders and files were created on the master via Samba, and the replication went faulty.
</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">Version: GlusterFS 4.1.3</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[root@master]# gluster volume geo-replication status</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">MASTER NODE&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MASTER VOL&nbsp;&nbsp;&nbsp;&nbsp; MASTER BRICK&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SLAVE USER&nbsp;&nbsp;&nbsp; SLAVE&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 SLAVE NODE&nbsp;&nbsp;&nbsp; STATUS&nbsp;&nbsp;&nbsp; CRAWL STATUS&nbsp;&nbsp;&nbsp; LAST_SYNCED</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">master&nbsp;&nbsp;&nbsp; glustervol1&nbsp;&nbsp;&nbsp; /bricks/brick1/brick&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; ssh://geoaccount@slave_1::glustervol1&nbsp;&nbsp;&nbsp; &nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
 N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">master&nbsp;&nbsp;&nbsp; glustervol1&nbsp;&nbsp;&nbsp; /bricks/brick1/brick&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; ssh://geoaccount@slave_2::glustervol1&nbsp;&nbsp;&nbsp; &nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
 N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">master&nbsp;&nbsp;&nbsp; glustervol1&nbsp;&nbsp;&nbsp; /bricks/brick1/brick&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; ssh://geoaccount@interimmaster::glustervol1&nbsp;&nbsp; N/A &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Faulty&nbsp;&nbsp;&nbsp;
 N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">The following error is repeatedly logged in the gsyncd.logs:</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:38.611479] I [repce(agent /bricks/brick1/brick):80:service_loop] RepceServer: terminating on reaching EOF.</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:39.211527] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase&nbsp;&nbsp;&nbsp;&nbsp; brick=/bricks/brick1/brick</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:39.214322] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Faulty</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:49.318953] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker&nbsp;&nbsp; brick=/bricks/brick1/brick&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 slave_node=<a href="http://nrchbs-slp2020.nibr.novartis.net" target="_blank">nrchbs-slp2020.nibr.novartis.net</a></span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:49.471532] I [gsyncd(agent /bricks/brick1/brick):297:main] &lt;top&gt;: Using session config file&nbsp;&nbsp; path=/var/lib/glusterd/geo-replication/glustervol1_nrchbs-slp2020.nibr.novartis.net_glustervol1/gsyncd.conf</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:49.473917] I [changelogagent(agent /bricks/brick1/brick):72:__init__] ChangelogAgent: Agent listining...</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:49.491359] I [gsyncd(worker /bricks/brick1/brick):297:main] &lt;top&gt;: Using session config file&nbsp; path=/var/lib/glusterd/geo-replication/glustervol1_nrchbs-slp2020.nibr.novartis.net_glustervol1/gsyncd.conf</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:49.538049] I [resource(worker /bricks/brick1/brick):1377:connect_remote] SSH: Initializing SSH connection
 between master and slave...</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:53.5017] I [resource(worker /bricks/brick1/brick):1424:connect_remote] SSH: SSH connection between master
 and slave established.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; duration=3.4665</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:53.5419] I [resource(worker /bricks/brick1/brick):1096:connect] GLUSTER: Mounting gluster volume locally...</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:54.120374] I [resource(worker /bricks/brick1/brick):1119:connect] GLUSTER: Mounted gluster volume&nbsp;&nbsp;&nbsp;&nbsp; duration=1.1146</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:54.121012] I [subcmds(worker /bricks/brick1/brick):70:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging
 back to monitor</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.144460] I [master(worker /bricks/brick1/brick):1593:register] _GMaster: Working dir&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; path=/var/lib/misc/gluster/gsyncd/glustervol1_nrchbs-slp2020.nibr.novartis.net_glustervol1/bricks-brick1-brick</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.145145] I [resource(worker /bricks/brick1/brick):1282:service_loop] GLUSTER: Register time time=1537540016</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.160064] I [gsyncdstatus(worker /bricks/brick1/brick):277:set_active] GeorepStatus: Worker Status Change&nbsp;&nbsp;&nbsp;
 status=Active</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.161175] I [gsyncdstatus(worker /bricks/brick1/brick):249:set_worker_crawl_status] GeorepStatus: Crawl Status
 Change&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; status=History Crawl</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.161536] I [master(worker /bricks/brick1/brick):1507:crawl] _GMaster: starting history crawl&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; turns=1
 stime=(1537522637, 0)&nbsp;&nbsp; entry_stime=(1537537141, 0)&nbsp;&nbsp;&nbsp;&nbsp; etime=1537540016</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.164277] I [master(worker /bricks/brick1/brick):1536:crawl] _GMaster: slave's time&nbsp; stime=(1537522637, 0)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.197065] I [master(worker /bricks/brick1/brick):1360:process] _GMaster: Skipping already processed entry
 ops&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; to_changelog=1537522638 num_changelogs=1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; from_changelog=1537522638</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.197402] I [master(worker /bricks/brick1/brick):1374:process] _GMaster: Entry Time Taken&nbsp;&nbsp;&nbsp; MKD=0&nbsp;&nbsp; MKN=0&nbsp;&nbsp;
 LIN=0&nbsp;&nbsp; SYM=0&nbsp;&nbsp; REN=0&nbsp;&nbsp; RMD=0&nbsp;&nbsp; CRE=0&nbsp;&nbsp; duration=0.0000 UNL=1</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.197623] I [master(worker /bricks/brick1/brick):1384:process] _GMaster: Data/Metadata Time Taken&nbsp;&nbsp;&nbsp; SETA=0&nbsp;
 SETX=0&nbsp; meta_duration=0.0000&nbsp;&nbsp;&nbsp; data_duration=0.0284&nbsp;&nbsp;&nbsp; DATA=0&nbsp; XATT=0</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:56.198230] I [master(worker /bricks/brick1/brick):1394:process] _GMaster: Batch Completed&nbsp;&nbsp;&nbsp;&nbsp; changelog_end=1537522638&nbsp;&nbsp;&nbsp;&nbsp;
 &nbsp;&nbsp;&nbsp;entry_stime=(1537537141, 0)&nbsp;&nbsp;&nbsp;&nbsp; changelog_start=1537522638&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stime=(1537522637, 0)&nbsp;&nbsp; duration=0.0333 num_changelogs=1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; mode=history_changelog</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:57.200436] I [master(worker /bricks/brick1/brick):1536:crawl] _GMaster: slave's time&nbsp; stime=(1537522637, 0)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:57.528625] E [repce(worker /bricks/brick1/brick):197:__call__] RepceClient: call failed&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; call=17209:140650361157440:1537540017.21&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 method=entry_ops&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; error=OSError</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[2018-09-21 14:26:57.529371] E [syncdutils(worker /bricks/brick1/brick):332:log_raise_exception] &lt;top&gt;: FAIL:</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">Traceback (most recent call last):</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py&quot;, line 311, in main</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; func(args)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/subcmds.py&quot;, line 72, in subcmd_worker</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; local.service_loop(remote)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/resource.py&quot;, line 1288, in service_loop</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; g3.crawlwrap(oneshot=True)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/master.py&quot;, line 615, in crawlwrap</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; self.crawl()</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/master.py&quot;, line 1545, in crawl</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; self.changelogs_batch_process(changes)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/master.py&quot;, line 1445, in changelogs_batch_process</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; self.process(batch)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/master.py&quot;, line 1280, in process</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; self.process_change(change, done, retry)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/master.py&quot;, line 1179, in process_change</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; failures = self.slave.server.entry_ops(entries)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/repce.py&quot;, line 216, in __call__</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; return self.ins(self.meth, *a)</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp; File &quot;/usr/libexec/glusterfs/python/syncdaemon/repce.py&quot;, line 198, in __call__</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; raise res</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">OSError: [Errno 13] Permission denied: '/bricks/brick1/brick/.glusterfs/29/d1/29d1d60d-1ad6-45fc-87e0-93d478f7331e'</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">The permissions look fine. Geo-replication runs as the geo user rather than root, so it should be able to read the path, but I’m not sure whether the syncdaemon actually runs under the geoaccount user?</span><o:p></o:p></p>
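<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">For what it’s worth, a quick way to check which account the worker actually runs as (assuming the usual gsyncd process name; “geoaccount” is just the unprivileged user from this setup):</span><o:p></o:p></p>

```shell
# List the owning user of any running geo-replication worker processes.
# "gsyncd" is the usual daemon name; adjust the pattern if yours differs.
ps -e -o user=,cmd= | grep '[g]syncd' || echo "no gsyncd worker found"
```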
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[root@master vRealize Operation Manager]# ll /bricks/brick1/brick/.glusterfs/29/d1/29d1d60d-1ad6-45fc-87e0-93d478f7331e</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">lrwxrwxrwx. 1 root root 75 Sep 21 09:39 /bricks/brick1/brick/.glusterfs/29/d1/29d1d60d-1ad6-45fc-87e0-93d478f7331e -&gt; ../../6b/97/6b97b987-8aef-46c3-af27-20d3aa883016/vRealize Operation Manager</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[root@master vRealize Operation Manager]# ll /bricks/brick1/brick/.glusterfs/29/d1/29d1d60d-1ad6-45fc-87e0-93d478f7331e/</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">total 4</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">drwxrwxr-x. 2 AD&#43;user AD&#43;group&nbsp; 131 Sep 21 10:14 6.7</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">drwxrwxr-x. 2 AD&#43;user AD&#43;group 4096 Sep 21 09:43 7.0</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">drwxrwxr-x. 2 AD&#43;user AD&#43;group&nbsp;&nbsp; 57 Sep 21 10:28 7.5</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="font-family:&quot;Courier New&quot;">[root@master vRealize Operation Manager]#</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">It’s possible that the folder was renamed. I’ve had three similar issues since migrating to GlusterFS 4.x but couldn’t investigate much; I had to completely wipe GlusterFS and the geo-replication setup to get rid of this error…</span><o:p></o:p></p>
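<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">One more thing that may help whoever looks at this: EACCES on a path whose final entry looks fine is often a missing x (search) bit on one of the parent directories, or on the symlink’s target path, rather than on the entry itself. A small sketch (similar to what <span style="font-family:&quot;Courier New&quot;">namei -l</span> shows) that prints the mode of every path component, using the gfid path from above:</span><o:p></o:p></p>

```shell
# Print mode, owner, and symlink target of each component of a path,
# from the leaf up, to spot a directory the geo user cannot traverse.
# ls -ld does not follow symlinks, so a dangling gfid link still shows up.
walk_perms() {
  p=$1
  while [ -n "$p" ] && [ "$p" != "/" ] && [ "$p" != "." ]; do
    ls -ld "$p" 2>/dev/null
    p=$(dirname "$p")
  done
}
walk_perms /bricks/brick1/brick/.glusterfs/29/d1/29d1d60d-1ad6-45fc-87e0-93d478f7331e
```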
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">Any help is appreciated.</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="color:black">Regards,</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span lang="EN-GB" style="color:black">&nbsp;</span><o:p></o:p></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><b><span lang="EN-GB" style="color:black">Christian Kotte</span></b><o:p></o:p></p>
</div>
</div>
<p class="MsoNormal">_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><o:p></o:p></p>
</blockquote>
</div>
</div>
</body>
</html>