<div dir="ltr"><div>Isn't it trying to heal your dovecot-uidlist? Try updating, restarting, and initiating the heal again.</div><div><br></div><div>-v<br></div></div><br><div class="gmail_quote"><div dir="ltr">On Sun, Oct 7, 2018 at 12:54 PM Hoggins! <<a href="mailto:fuckspam@wheres5.com">fuckspam@wheres5.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello list,<br>
<br>
My Gluster cluster has a condition, and I'd like to know how to cure it.<br>
<br>
The setup: two bricks, replicated, with an arbiter.<br>
On brick 1, the /var/log/glusterfs/glustershd.log is quite empty, not<br>
much activity, everything looks fine.<br>
On brick 2, /var/log/glusterfs/glustershd.log shows a lot of these:<br>
[MSGID: 108026] [afr-self-heal-entry.c:887:afr_selfheal_entry_do]<br>
0-mailer-replicate-0: performing entry selfheal on<br>
9df5082b-d066-4659-91a4-5f2ad943ce51<br>
[MSGID: 108026] [afr-self-heal-entry.c:887:afr_selfheal_entry_do]<br>
0-mailer-replicate-0: performing entry selfheal on<br>
ba8c0409-95f5-499d-8594-c6de15d5a585<br>
<br>
These entries are repeated every day, every ten minutes or so.<br>
<br>
Now if we list the contents of the directory represented by file ID<br>
9df5082b-d066-4659-91a4-5f2ad943ce51:<br>
On brick 1:<br>
drwx------. 2 1005 users 102400 13 sept. 17:03 cur<br>
-rw-------. 2 1005 users 22 14 mars 2016 dovecot-keywords<br>
-rw-------. 2 1005 users 0 6 janv. 2015 maildirfolder<br>
drwx------. 2 1005 users 6 30 juin 2015 new<br>
drwx------. 2 1005 users 6 4 oct. 17:46 tmp<br>
<br>
On brick 2:<br>
drwx------. 2 1005 users 102400 25 mai 11:00 cur<br>
-rw-------. 2 1005 users 22 14 mars 2016 dovecot-keywords<br>
-rw-------. 2 1005 users 80559 25 mai 11:00 dovecot-uidlist<br>
-rw-------. 2 1005 users 0 6 janv. 2015 maildirfolder<br>
drwx------. 2 1005 users 6 30 juin 2015 new<br>
drwx------. 2 1005 users 6 4 oct. 17:46 tmp<br>
<br>
(note the "dovecot-uidlist" file present on brick 2 but not on brick 1)<br>
<br>
Also, checking the size of the cur/ directory:<br>
On brick 1:<br>
165872 cur/<br>
<br>
On brick 2:<br>
161516 cur/<br>
<br>
BUT the number of files is the same on the two bricks for the cur/<br>
directory:<br>
$ ls -l cur/ | wc -l<br>
1135<br>
<br>
So now you've got it: it's inconsistent between the two data bricks.<br>
<br>
On the arbiter, all seems good, the directory listing looks like what is<br>
on brick 2.<br>
Same kind of situation happens for file ID<br>
ba8c0409-95f5-499d-8594-c6de15d5a585.<br>
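(For anyone wanting to map those gfids back to real paths: GlusterFS keeps an entry for every gfid under each brick's .glusterfs directory, keyed by the first two pairs of hex digits; for a directory gfid that entry is a symlink whose target reveals the parent and name. A sketch, assuming a hypothetical brick root of /data/glusterfs/mailer/brick1; substitute your actual brick path.)<br>

```shell
#!/bin/bash
# Hypothetical brick root -- adjust to your actual brick path.
BRICK=/data/glusterfs/mailer/brick1
GFID=9df5082b-d066-4659-91a4-5f2ad943ce51

# GlusterFS stores each gfid under .glusterfs/<hex 1-2>/<hex 3-4>/<gfid>.
ENTRY="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
echo "$ENTRY"

# Only inspect it if the brick actually exists on this host.
# For a directory gfid the entry is a symlink; its target looks like
# ../../<parent-hex>/<parent-hex>/<parent-gfid>/<directory-name>.
if [ -L "$ENTRY" ]; then
    readlink "$ENTRY"
fi
```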
<br>
I'm sure that having this situation is not good and needs to be sorted<br>
out, so what can I do?<br>
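(In case it helps others reading along, the usual first diagnostic steps are the heal-inspection commands; a sketch, run from any server in the pool, with the volume name "mailer" taken from the 0-mailer-replicate-0 prefix in the logs above.)<br>

```shell
#!/bin/bash
# Only attempt this where the gluster CLI is actually installed.
if command -v gluster >/dev/null 2>&1; then
    # Entries each brick still considers pending heal.
    gluster volume heal mailer info

    # Explicit check for split-brain entries.
    gluster volume heal mailer info split-brain

    # Per-brick count of pending heals, to see if the number shrinks over time.
    gluster volume heal mailer statistics heal-count
fi
```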
<br>
Thanks for your help!<br>
<br>
Hoggins!<br>
<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote></div>