<div dir="ltr">Which version of glusterfs are you using?<br></div><br><div class="gmail_quote"><div dir="ltr">On Tue, Sep 4, 2018 at 4:26 PM Dave Sherohman <<a href="mailto:dave@sherohman.org">dave@sherohman.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Last Friday, I rebooted one of my gluster nodes and it didn't properly<br>
mount the filesystem holding its brick (I had forgotten to add it to<br>
fstab...), so, when I got back to work on Monday, its root filesystem<br>
was full and the gluster heal info showed around 25000 entries needing<br>
to be healed.<br>
<br>
I got the filesystems straightened out and, within a matter of minutes,<br>
the number of entries waiting to be healed in that subvolume dropped to<br>
59. (Each entry shows up twice, of course: the cluster is replica 2 +<br>
arbiter, so the other full replica and the arbiter both report the same<br>
list of entries.) More than a full day later, it's still at 59.<br>
<br>
Is there anything I can do to kick the self-heal back into action and<br>
get those final 59 entries cleaned up?<br>
<br>
-- <br>
Dave Sherohman<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
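P.S. While we wait on the version, you can try re-triggering the heal from the CLI. A minimal sketch, assuming the gluster CLI is on the PATH of one of the nodes and `VOLNAME` stands in for the affected volume's real name:

```shell
# Placeholder: replace with the name of the affected volume.
VOLNAME="myvol"

if command -v gluster >/dev/null 2>&1; then
    # List the pending entries and check whether any are in split-brain;
    # split-brain files will not self-heal without manual resolution.
    gluster volume heal "$VOLNAME" info
    gluster volume heal "$VOLNAME" info split-brain

    # Re-trigger an index heal of the entries recorded as pending.
    gluster volume heal "$VOLNAME"

    # If entries still linger, force a full crawl of the bricks.
    gluster volume heal "$VOLNAME" full
else
    # Keeps the sketch harmless on machines without gluster installed.
    echo "gluster CLI not found; run these commands on a gluster node"
fi
```

If `info split-brain` lists any of the 59 entries, they will need split-brain resolution rather than another heal pass.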