[Gluster-users] 3.8.2 : Node not healing

Pranith Kumar Karampuri pkarampu at redhat.com
Sat Aug 20 11:28:54 UTC 2016


On Sat, Aug 20, 2016 at 9:45 AM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> On 20/08/2016 1:21 AM, David Gossage wrote:
>
>> Any issues since then? Was contemplating updating from 3.7.14 -> 3.8.2
>> this weekend prior to doing some work changing up the underlying brick raid
>> levels and needing to do full heals one by one. So far it has been fine on my
>> test bed with what limited use I can put on it.
>>
>
> No problems at all, VM's operating normally, performance quite good.
>
>
> I was wondering if my original problem was being over-hasty with a "heal
> full" command - maybe if I had waited a few minutes it would have started
> healing normally. It's my understanding that it would do a hash comparison of
> all shards, which would take a long time and really thrash the CPU.
>
> I have backups running at the moment, once they are finished I'll repeat
> the test and see how it does when left to its own devices.


Lindsay,
       If you haven't already, please run "gluster volume set <volname>
data-self-heal-algorithm full" to prevent diff self-heals (checksum
computations on the files), which use a lot of CPU. One more thing that
could have led to high CPU usage is full directory heals on .shard. Krutika
recently implemented a feature called granular entry self-heal which should
address this issue in a future release. We also have a throttling feature
coming that will play nicer with the rest of the system.
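For reference, a minimal sketch of the commands involved (the volume name
"myvol" is a placeholder; run these on any node in the trusted pool):

```shell
# Switch the data self-heal algorithm from "diff" (rolling checksums,
# CPU-heavy) to "full" (plain copy of the whole file/shard):
gluster volume set myvol data-self-heal-algorithm full

# Confirm the option took effect:
gluster volume get myvol data-self-heal-algorithm

# Then monitor heal progress without forcing a full crawl:
gluster volume heal myvol info
```

The trade-off: "full" copies entire shards instead of computing diffs, so
it uses more network bandwidth but far less CPU, which is usually the
better deal for sharded VM workloads.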


>
>
> --
> Lindsay Mathieson
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith

