<div class="moz-cite-prefix">On 15/03/21 7:39 pm, Zenon Panoussis
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:385a6767-052a-b4ed-b07a-d163cf3f0eb1@provocation.net">
<pre class="moz-quote-pre" wrap="">I don't know how to interpret this, but it surely looks as if
Maildir/.Sent/cur needs to be healed on all three bricks. That
can't be possible, logically it doesn't make sense, because if
not even one brick has the data of an object, that object should
not exist at all.
</pre>
</blockquote>
For the same directory, different bricks can contain different files
that are the good copies needing to be synced to the other replicas,
so the same directory being listed in the heal info output of all
bricks is not a problem.
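
For example, you can see the per-brick breakdown with the command
below (gv0 is taken from the brick path you posted; substitute your
volume name if it differs):

  gluster volume heal gv0 info

Each brick's section lists the entries recorded in that brick's own
heal indices, so the same directory can legitimately show up under
every brick.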
<blockquote type="cite"
cite="mid:385a6767-052a-b4ed-b07a-d163cf3f0eb1@provocation.net">
<pre class="moz-quote-pre" wrap="">
</pre>
<blockquote type="cite" style="color: #007cff;">
<pre class="moz-quote-pre" wrap="">Are there any file names inside
/gfs/gv0/.glusterfs/indices/entry-changes/011fcc1b-4d90-4c36-86ec-488aaa4db3b8
in any of the bricks?
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">node01: empty.
node02: 388 filenames, no directories.
node03: 394 filenames, no directories.
Would simply re-copying the entire Maildir/.Sent/cur and its contents
to the volume solve the problem or make it worse?</pre>
</blockquote>
Yes. If the dataset is small, you can try an rm -rf of the directory
from the mount (assuming no other application is accessing it on the
volume), launch heal once so that the heal info count drops to zero,
and then copy the data over again.
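
A rough sketch of the sequence, assuming the volume is gv0 and it is
mounted at /mnt/gv0 (the mount point and backup path below are
placeholders for your setup):

  # remove the directory via a client mount, never directly on a brick
  rm -rf /mnt/gv0/Maildir/.Sent/cur

  # trigger an index heal, then re-check until pending entries reach zero
  gluster volume heal gv0
  gluster volume heal gv0 info

  # copy the data back in through the mount
  cp -a /path/to/backup/cur /mnt/gv0/Maildir/.Sent/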