It's a hard link, so use find's '-samefile' option to see if it's the last one or not.

If you really want to delete it, take a backup first and then delete both the gfid file and any other hard links.
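
Something along these lines will list every name that points at that inode (a rough sketch; adjust the brick path if yours differs):

# List every path on the brick that shares the inode with the gfid file:
find /opt/tier1data2019/brick -xdev -samefile \
    /opt/tier1data2019/brick/.glusterfs/d5/3f/d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8

# The hard-link count tells the same story (1 means no other name is left):
stat -c '%h %n' /opt/tier1data2019/brick/.glusterfs/d5/3f/d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8

If only the .glusterfs path comes back, that gfid entry is the last remaining link to the data, so back it up before removing anything.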

Best Regards,
Strahil Nikolov

On Thu, Feb 8, 2024 at 22:43, Anant Saraswat <anant.saraswat@techblue.co.uk> wrote:

Can anyone please suggest if it's safe to delete '/opt/tier1data2019/brick/.glusterfs/d5/3f/d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8'? This file is only present on the primary master node (master1) and doesn't exist on the master2 and master3 nodes. When I resume the geo-replication, I get the following error.

Also, how can I remove this file from the changelogs so that when I start the geo-replication again, this file won't be checked?

[2024-02-07 22:37:36.911439] D [master(worker /opt/tier1data2019/brick):1344:process] _GMaster: processing change [{changelog=/var/lib/misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick/.history/.processing/CHANGELOG.1705936007}]
[2024-02-07 22:37:36.915193] E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount process exited [{error=ENOTCONN}]
[2024-02-07 22:37:36.915252] E [syncdutils(worker /opt/tier1data2019/brick):363:log_raise_exception] <top>: FULL EXCEPTION TRACE:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 317, in main
    func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 86, in subcmd_worker
    local.service_loop(remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1298, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 604, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1614, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1510, in changelogs_batch_process
    self.process(batch)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1345, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1071, in process_change
    st = lstat(pt)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 589, in lstat
    return errno_wrap(os.lstat, [e], [ENOENT], [ESTALE, EBUSY])
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 571, in errno_wrap
    return call(*arg)
OSError: [Errno 107] Transport endpoint is not connected: '.gfid/d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8'
[2024-02-07 22:37:37.344426] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/opt/tier1data2019/brick}]
[2024-02-07 22:37:37.346601] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]

Thanks,
Anant

------------------------------------------------------------------------
From: Anant Saraswat <anant.saraswat@techblue.co.uk>
Sent: 08 February 2024 2:00 PM
To: Diego Zuccato <diego.zuccato@unibo.it>; gluster-users@gluster.org <gluster-users@gluster.org>; Strahil Nikolov <hunter86_bg@yahoo.com>; Aravinda Vishwanathapura <aravinda@kadalu.tech>
Subject: Re: [Gluster-users] __Geo-replication status is getting Faulty after few seconds

Thanks @Diego Zuccato, I'm just thinking, if we delete the suspected file, won't it create an issue since this ID is present in the `CHANGELOG.1705936007` file?

[root@master1 ~]# grep -i "d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8" /var/lib/misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick/.history/.processing/CHANGELOG.1705936007
E d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8 CREATE 33188 0 0 e8aff729-a310-4d21-a64b-d8cc7cb1a828/app_docmerge12monthsfixedCUSTODIAL_2024_1_22_15_3_24_648.doc
D d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8
E d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8 UNLINK e8aff729-a310-4d21-a64b-d8cc7cb1a828/app_docmerge12monthsfixedCUSTODIAL_2024_1_22_15_3_24_648.doc
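
I suppose a recursive grep over the gsyncd working directory would show every changelog that still references this gfid before we decide anything (rough sketch, using the working directory from the paths above):

# Sketch: list every gsyncd changelog file that still mentions the gfid.
grep -rl "d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8" \
    /var/lib/misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick/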

------------------------------------------------------------------------
From: Gluster-users <gluster-users-bounces@gluster.org> on behalf of Diego Zuccato <diego.zuccato@unibo.it>
Sent: 08 February 2024 1:37 PM
To: gluster-users@gluster.org <gluster-users@gluster.org>
Subject: Re: [Gluster-users] __Geo-replication status is getting Faulty after few seconds

That '1' means there's no corresponding file in the regular file
structure (outside .glusterfs).
IIUC it shouldn't happen, but it does (quite often). *Probably* it's
safe to just delete it, but wait for advice from more competent users.
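
A quick way to see how widespread this is would be to list gfid entries whose only remaining name is the entry itself, i.e. a hard-link count of 1 (sketch only; adjust the brick path):

# Sketch: regular files under the .glusterfs gfid directories with no
# surviving hard link outside .glusterfs (link count 1).
find /opt/tier1data2019/brick/.glusterfs/??/?? -type f -links 1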

Diego

On 08/02/2024 13:42, Anant Saraswat wrote:
> Hi Everyone,
>
> As I was getting the "OSError: [Errno 107] Transport endpoint is not
> connected: '.gfid/d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8'" error in the
> primary master node's gsyncd log, I started looking into this file and
> found it in the brick, under the .glusterfs folder on the master1 node.
>
> Path on master1 -
> /opt/tier1data2019/brick/.glusterfs/d5/3f/d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8
>
> [root@master1 ~]# ls -lrt /opt/tier1data2019/brick/.glusterfs/d5/3f/
>
> -rw-r--r--  2 root    root     15996 Dec 14 10:10 d53feba6-dc8b-4645-a86c-befabd0e5069
> -rw-r--r--  2 root    root    343111 Dec 18 10:55 d53fed32-b47a-48bf-889e-140c69b04479
> -rw-r--r--  2 root    root   5060531 Dec 29 15:29 d53f184d-91e8-4bc1-b6e7-bb5f27ef8b41
> -rw-r--r--  2 root    root   2149782 Jan 12 13:25 d53ffee5-fa66-4493-8bdf-f2093b3f6ce7
> -rw-r--r--  2 root    root   1913460 Jan 18 10:40 d53f799b-0e87-4800-a3cd-fac9e1a30b54
> -rw-r--r--  2 root    root     62940 Jan 22 09:35 d53fb9d4-8c64-4a83-b968-bbbfb9af4224
> -rw-r--r--  1 root    root    174592 Jan 22 15:06 d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8
> -rw-r--r--  2 root    root      5633 Jan 26 08:36 d53f6bf6-9aac-476c-b8c5-0569fc8d5116
> -rw-r--r--  2 root    root    801740 Feb  8 11:40 d53f71f8-e88b-4ece-b66e-228c2b08d6c8
>
> Now I have noticed two things:
>
> First, this file is only present on the primary master node (master1)
> and doesn't exist on the master2 and master3 nodes.
>
> Second, this file has a different hard-link count than the other files
> in the folder. If you check the second column of the above output,
> every file has "2", but this file has "1".
>
> Now, can someone please guide me on why this file has "1" and what I
> should do next? Is it safe to copy this file to the remaining two
> master nodes, or should I delete it from master1?
>
> Many thanks,
> Anant
> ------------------------------------------------------------------------
> From: Gluster-users <gluster-users-bounces@gluster.org> on behalf of
> Anant Saraswat <anant.saraswat@techblue.co.uk>
> Sent: 08 February 2024 12:01 AM
> To: Aravinda <aravinda@kadalu.tech>
> Cc: gluster-users@gluster.org <gluster-users@gluster.org>
> Subject: Re: [Gluster-users] __Geo-replication status is getting Faulty
> after few seconds
>
> Hi @Aravinda,
>
> I have checked the rsync version, and it's the same on the primary and
> secondary nodes. We have rsync version 3.1.3, protocol version 31, on
> all servers. It's very strange: we have not made any changes that we
> are aware of, this geo-replication was working fine for the last 5
> years, and suddenly it has stopped, and we are unable to understand the
> root cause.
>
> I have checked the tcpdump and I can see that the master node sends an
> RST to the secondary node when geo-replication connects, but we do not
> see any RST when we ssh as root from the master to the secondary node
> ourselves. This makes me think that geo-replication is able to connect
> to the secondary node, but after that it hits something it doesn't like
> and resets the connection, and this repeats in a loop.
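
A capture along these lines would show the resets described above (sketch only; the secondary hostname is a placeholder, and geo-replication uses the SSH port unless it was configured otherwise):

# Sketch: watch for TCP resets between the master and the secondary node
# while geo-replication tries to connect.
tcpdump -i any -nn 'host drtier1data and port 22 and tcp[tcpflags] & tcp-rst != 0'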
>
> I have also enabled geo-replication debug logs and I am getting this
> error in the master node gsyncd logs.
>
> [2024-02-07 22:37:36.820978] D [repce(worker
> /opt/tier1data2019/brick):195:push] RepceClient: call
> 2563661:140414778891136:1707345456.8209238 entry_ops([{'op': 'CREATE',
> 'skip_entry': False, 'gfid': '3d57e1e4-7bd2-44f6-a6d1-d628208b3697',
> 'entry':
> '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/app_docmerge8795785720233840105.docx', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'UNLINK', 'skip_entry': False, 'gfid': '3d57e1e4-7bd2-44f6-a6d1-d628208b3697', 'entry': '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/app_docmerge8795785720233840105.docx'},
 {'op': 'CREATE', 'skip_entry': False, 'gfid': '7bd35f91-1408-476d-869a-9936f2d94afc', 'entry': '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/0c3fb22f-0fbe-4445-845b-9d94d84a9888', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'CREATE', 'skip_entry': False, 'gfid':
 '3837018c-2f5e-43d4-ab58-0ed8b7456e73', 'entry': '.gfid/861afb81-386a-4b5b-af37-cef63a55a436/26fcd7e7-2c8c-4dcb-96f2-2c8a0d79f3d4', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'CREATE', 'skip_entry': False, 'gfid': 'db311b10-b1e2-4b84-adea-a6746214aeda', 'entry':
 '.gfid/861afb81-386a-4b5b-af37-cef63a55a436/0526d0da-1f36-4203-8563-7e23aacf6237', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'CREATE', 'skip_entry': False, 'gfid': '9bbb253a-226a-44b1-a968-7cfa76cf9463', 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/app_docmergeLLRenewalLetterDocusign_1_22_15_1_18_153.doc',
 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'UNLINK', 'skip_entry': False, 'gfid': '9bbb253a-226a-44b1-a968-7cfa76cf9463', 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/app_docmergeLLRenewalLetterDocusign_1_22_15_1_18_153.doc'}, {'op': 'CREATE', 'skip_entry':
 False, 'gfid': 'f62d0c65-6ede-48ff-b9bf-c44a33e5e023', 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/85530794-c15f-44d4-8660-87a14c2c9c8c', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'CREATE', 'skip_entry': False, 'gfid': 'fd3d0af6-8ef5-4b76-bb47-0bc508df0ed0',
 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/app_docmergeMOA_1_22_15_1_20_501.doc', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'UNLINK', 'skip_entry': False, 'gfid': 'fd3d0af6-8ef5-4b76-bb47-0bc508df0ed0', 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/app_docmergeMOA_1_22_15_1_20_501.doc'},
 {'op': 'CREATE', 'skip_entry': False, 'gfid': 'e93c5771-9676-40d4-90cd-f0586ec05dd9', 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/cc372667-3b77-468f-bac6-671d4eb069e9', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'CREATE', 'skip_entry': False, 'gfid':
 '02045f44-68ff-4a35-a843-08939afc46a4', 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/app_docmergeTTRenewalLetterASTNoFee-2022_1_22_15_1_19_530.doc', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'UNLINK', 'skip_entry': False, 'gfid': '02045f44-68ff-4a35-a843-08939afc46a4',
 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/app_docmergeTTRenewalLetterASTNoFee-2022_1_22_15_1_19_530.doc'}, {'op': 'CREATE', 'skip_entry': False, 'gfid': '6f5766c9-2dc3-4636-9041-9cf4ac64d26b', 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/556a0e3c-510d-4396-8f32-335aafec1314',
 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'UNLINK', 'skip_entry': False, 'gfid': 'f78561f0-c9f2-4192-a82a-8368e0ad8b2b', 'entry': '.gfid/ec161c2e-bb32-4639-a7b2-9be961221d86/app_1705935977525.tmp'}, {'op': 'CREATE', 'skip_entry': False, 'gfid': 'd1e33edb-523e-41c1-a021-8bd3a5a2c7c0',
 'entry': '.gfid/e861ff10-696a-4b03-9716-39d9e7dd08d7/c655e3e5-9d4c-43d7-9171-949f01612e6d', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'CREATE', 'skip_entry': False, 'gfid': 'b6f44b28-c2bf-4e70-b953-1c559ded7835', 'entry': '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/app_docmerge7370453767656401681.docx',
 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'UNLINK', 'skip_entry': False, 'gfid': 'b6f44b28-c2bf-4e70-b953-1c559ded7835', 'entry': '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/app_docmerge7370453767656401681.docx'}, {'op': 'CREATE', 'skip_entry': False, 'gfid':
 '2d845d9e-7a49-4200-a100-759fe831ba0e', 'entry': '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/84d47d84-5749-4a19-8f73-293078d17c63', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'CREATE', 'skip_entry': False, 'gfid': '44554c17-21aa-427a-b796-7ecec6af2570', 'entry':
 '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/app_docmerge8634804987715893755.docx', 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'CREATE', 'skip_entry': False, 'gfid': '652bf5d7-3b7a-41d8-aa4f-e52296034821', 'entry': '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/91a25682-69ea-4edc-9250-d6c7aac56853',
 'mode': 33188, 'uid': 0, 'gid': 0}, {'op': 'UNLINK', 'skip_entry': False, 'gfid': '44554c17-21aa-427a-b796-7ecec6af2570', 'entry': '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/app_docmerge8634804987715893755.docx'}, {'op': 'CREATE', 'skip_entry': False, 'gfid':
 '04720811-b90e-42b7-a5d1-656afd92e245', 'entry': '.gfid/9a39167c-6c28-470a-b699-11eeaaff8edd/a66cbc42-61dc-4896-bb69-c715f1a820db', 'mode': 33188, 'uid': 0, 'gid': 0}],) ...
>
> [2024-02-07 22:37:36.909606] D [repce(worker
> /opt/tier1data2019/brick):215:__call__] RepceClient: call
> 2563661:140414778891136:1707345456.8209238 entry_ops -> []
> [2024-02-07 22:37:36.911032] D [master(worker
> /opt/tier1data2019/brick):317:a_syncdata] _GMaster: files
> [{files={'.gfid/652bf5d7-3b7a-41d8-aa4f-e52296034821',
> '.gfid/2d845d9e-7a49-4200-a100-759fe831ba0e',
> '.gfid/3837018c-2f5e-43d4-ab58-0ed8b7456e73',
> '.gfid/e93c5771-9676-40d4-90cd-f0586ec05dd9',
> '.gfid/f62d0c65-6ede-48ff-b9bf-c44a33e5e023',
> '.gfid/7bd35f91-1408-476d-869a-9936f2d94afc',
> '.gfid/04720811-b90e-42b7-a5d1-656afd92e245',
> '.gfid/6f5766c9-2dc3-4636-9041-9cf4ac64d26b',
> '.gfid/db311b10-b1e2-4b84-adea-a6746214aeda',
> '.gfid/d1e33edb-523e-41c1-a021-8bd3a5a2c7c0'}}]
> [2024-02-07 22:37:36.911089] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/652bf5d7-3b7a-41d8-aa4f-e52296034821}]
> [2024-02-07 22:37:36.911133] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/2d845d9e-7a49-4200-a100-759fe831ba0e}]
> [2024-02-07 22:37:36.911169] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/3837018c-2f5e-43d4-ab58-0ed8b7456e73}]
> [2024-02-07 22:37:36.911202] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/e93c5771-9676-40d4-90cd-f0586ec05dd9}]
> [2024-02-07 22:37:36.911235] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/f62d0c65-6ede-48ff-b9bf-c44a33e5e023}]
> [2024-02-07 22:37:36.911268] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/7bd35f91-1408-476d-869a-9936f2d94afc}]
> [2024-02-07 22:37:36.911301] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/04720811-b90e-42b7-a5d1-656afd92e245}]
> [2024-02-07 22:37:36.911333] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/6f5766c9-2dc3-4636-9041-9cf4ac64d26b}]
> [2024-02-07 22:37:36.911366] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/db311b10-b1e2-4b84-adea-a6746214aeda}]
> [2024-02-07 22:37:36.911398] D [master(worker
> /opt/tier1data2019/brick):320:a_syncdata] _GMaster: candidate for
> syncing [{file=.gfid/d1e33edb-523e-41c1-a021-8bd3a5a2c7c0}]
> [2024-02-07 22:37:36.911439] D [master(worker
> /opt/tier1data2019/brick):1344:process] _GMaster: processing change
> [{changelog=/var/lib/misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick/.history/.processing/CHANGELOG.1705936007}]
> [2024-02-07 22:37:36.915193] E [syncdutils(worker
> /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount
> process exited [{error=ENOTCONN}]
> [2024-02-07 22:37:36.915252] E [syncdutils(worker
> /opt/tier1data2019/brick):363:log_raise_exception] <top>: FULL EXCEPTION
> TRACE:
> Traceback (most recent call last):
>    File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 317,
> in main
>      func(args)
>    File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 86,
> in subcmd_worker
>      local.service_loop(remote)
>    File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
> 1298, in service_loop
>      g3.crawlwrap(oneshot=True)
>    File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 604,
> in crawlwrap
>      self.crawl()
>    File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1614,
> in crawl
>      self.changelogs_batch_process(changes)
>    File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1510,
> in changelogs_batch_process
>      self.process(batch)
>    File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1345,
> in process
>      self.process_change(change, done, retry)
>    File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1071,
> in process_change
>      st = lstat(pt)
>    File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
> 589, in lstat
>      return errno_wrap(os.lstat, [e], [ENOENT], [ESTALE, EBUSY])
>    File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
> 571, in errno_wrap
>      return call(*arg)
> OSError: [Errno 107] Transport endpoint is not connected:
> '.gfid/d53fad8f-84e9-4b24-9eb0-ccbcbdc4baa8'
> [2024-02-07 22:37:37.344426] I [monitor(monitor):228:monitor] Monitor:
> worker died in startup phase [{brick=/opt/tier1data2019/brick}]
> [2024-02-07 22:37:37.346601] I
> [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
> Status Change [{status=Faulty}]
>
> Thanks,
> Anant
>
> ------------------------------------------------------------------------
> From: Aravinda <aravinda@kadalu.tech>
> Sent: 07 February 2024 2:54 PM
> To: Anant Saraswat <anant.saraswat@techblue.co.uk>
> Cc: Strahil Nikolov <hunter86_bg@yahoo.com>; gluster-users@gluster.org
> <gluster-users@gluster.org>
> Subject: Re: [Gluster-users] __Geo-replication status is getting Faulty
> after few seconds
>
> It will keep track of the last sync time if you change to a non-root
> user. But I don't think the issue is related to root vs non-root user.
>
> Even in non-root-user-based geo-rep, the Primary volume is mounted as
> root; only on the secondary node does it use the Glusterd mountbroker
> to mount the Secondary volume as a non-privileged user.
>
> Check the rsync version on the Primary and Secondary nodes, and align
> the versions if they don't match.
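
A one-liner along these lines would compare the versions across all nodes (sketch only; the hostnames are placeholders for the primary and secondary servers):

# Sketch: print the rsync version installed on each node.
for h in master1 master2 master3 drtier1data; do
    printf '%s: ' "$h"; ssh "$h" rsync --version | head -n 1
done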
>
> --
> Aravinda
> Kadalu Technologies
>
> ---- On Wed, 07 Feb 2024 20:11:47 +0530 Anant Saraswat
> <anant.saraswat@techblue.co.uk> wrote ---
>
> No, it was set up and running using the root user only.
>
> Do you think I should set it up using a dedicated non-root user? Will
> it keep track of the old files, or will it treat this as a new
> geo-replication and copy all the files from scratch?
>
> ------------------------------------------------------------------------
> From: Strahil Nikolov <hunter86_bg@yahoo.com>
> Sent: 07 February 2024 2:36 PM
> To: Anant Saraswat <anant.saraswat@techblue.co.uk>; Aravinda
> <aravinda@kadalu.tech>
> Cc: gluster-users@gluster.org <gluster-users@gluster.org>
> Subject: Re: [Gluster-users] __Geo-replication status is getting Faulty
> after few seconds
>
> Have you tried setting up gluster georep with a dedicated non-root user?
>
> Best Regards,
> Strahil Nikolov
>
>     On Tue, Feb 6, 2024 at 16:38, Anant Saraswat
>     <anant.saraswat@techblue.co.uk> wrote:

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786