<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>After almost a week of doing nothing the brick failed, and we were
able to stop and restart glusterd and then start a manual
heal.</p>
<p>Interestingly, when the heal started the estimated time to completion
was just about 21 days, but as it worked through the roughly 300,000
entries it got faster, to the point where it completed in 2 days.</p>
<p>Now I have 2 gfids that refuse to heal.</p>
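<p>One rough way to dig into entries like these (a sketch only; MYVOL,
the brick path, and the gfid below are placeholders, and the .glusterfs
lookup assumes direct access to a brick) is:</p>
<pre>
# List pending heal entries and check whether the stuck gfids are in split-brain
gluster volume heal MYVOL info
gluster volume heal MYVOL info split-brain

# On one brick, map a stuck gfid back to a real path via its .glusterfs
# hardlink (for regular files): first two hex chars / next two / full gfid
GFID=01234567-89ab-cdef-0123-456789abcdef
ls -l /path/to/brick/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
find /path/to/brick -samefile /path/to/brick/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
</pre>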
<p>We have also been looking at converting these systems to RHEL and
buying support from RH, but it seems that the sales arm is not
interested in calling people back.</p>
<div class="moz-cite-prefix">On 3/20/19 1:39 AM, Amar Tumballi
Suryanarayan wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAHxyDdOzjdea=Wc6De6nfy4SQGztv_FdQwz9JUjSn_uCP1xwyQ@mail.gmail.com">
<div dir="ltr">There are 2 things happen after a reboot.
<div><br>
</div>
<div>1. glusterd (management layer) does a sanity check of its
volumes, and sees if there are anything different while it
went down, and tries to correct its state.</div>
<div> - This is fine as long as number of volumes are less, or
numbers of nodes are less. (less is referred as < 100).</div>
<div><br>
</div>
<div>2. If it is a replicate or disperse volume, then the self-heal
daemon checks whether any self-heals are pending.</div>
<div> - This does an 'index' crawl to find which files actually
changed while one of the bricks/nodes was down.</div>
<div> - If this list is big, it can sometimes take some
time (a sketch of one way to check the pending count follows below).</div>
<div><br>
</div>
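<div>To get a feel for how large that pending list is, one option (a
sketch; MYVOL is a placeholder for the volume name) is to ask the
self-heal daemon for its counts:</div>
<pre>
# Per-brick count of entries waiting to be healed
gluster volume heal MYVOL statistics heal-count

# Full list of entries still needing heal
gluster volume heal MYVOL info
</pre>
<div><br>
</div>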
<div>But 'days/weeks/months' is not an expected/observed behavior.
Are there any hints in the log files? If not, can you run
'strace -f' on the pid which is consuming the most CPU? (An strace
sample of about 1 minute is good enough; a rough example follows.)</div>
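<div>For example (a sketch; the pid and output path are placeholders):</div>
<pre>
# Attach to the busy process for ~60 seconds, following forks,
# and capture the syscall trace to a file
timeout 60 strace -f -p &lt;PID&gt; -o /tmp/gluster.strace

# Or just collect a per-syscall count/time summary for the same interval
timeout 60 strace -c -f -p &lt;PID&gt;
</pre>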
<div><br>
</div>
<div>-Amar</div>
<div><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, Mar 20, 2019 at 2:05
AM Alvin Starr <<a href="mailto:alvin@netvel.net"
moz-do-not-send="true">alvin@netvel.net</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">We
have a simple replicated volume of 17TB, with 1 brick on each
node.<br>
<br>
There are something like 35M files and directories on the
volume.<br>
<br>
One of the servers rebooted and is now "doing something".<br>
<br>
It kind of looks like it's doing some kind of sanity check with the node <br>
that did not reboot, but it's hard to say, and it looks like it may run for <br>
hours/days/months....<br>
<br>
Will Gluster take a long time to resync when there are lots of little
files?<br>
<br>
<br>
-- <br>
Alvin Starr || land: (905)513-7688<br>
Netvel Inc. || Cell: (416)806-0133<br>
<a href="mailto:alvin@netvel.net" target="_blank"
moz-do-not-send="true">alvin@netvel.net</a> ||<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank"
moz-do-not-send="true">Gluster-users@gluster.org</a><br>
<a
href="https://lists.gluster.org/mailman/listinfo/gluster-users"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr" class="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>Amar Tumballi (amarts)<br>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<pre class="moz-signature" cols="72">--
Alvin Starr || land: (905)513-7688
Netvel Inc. || Cell: (416)806-0133
<a class="moz-txt-link-abbreviated" href="mailto:alvin@netvel.net">alvin@netvel.net</a> ||
</pre>
</body>
</html>