<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Your message means that something (usually glusterfsd) is not
running correctly, or at all, on one of the servers.</p>
<p>If you can tell which server it is, you need to stop and restart
glusterd and glusterfsd on it. Note: sometimes just stopping them
doesn't really stop them, so you may need to do a killall -9 on
glusterd, glusterfsd and anything else with "gluster" in the name.</p>
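<p>A minimal sketch of that cleanup, assuming a systemd-based distro
(adjust the service commands to your init system):</p>
<pre>
# on the affected brick server
systemctl stop glusterd                    # stop the management daemon
killall -9 glusterd glusterfsd glusterfs   # force-kill anything left with "gluster" in the name
pgrep -fl gluster                          # verify nothing gluster-related is still running
</pre>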
<p>Then just start glusterd and glusterfsd again. Once they are up,
you should be able to run the heal.</p>
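<p>A rough sketch of bringing it back up and kicking off the heal,
using the volume name gv1 from your output (the "start force" is only
needed if a brick process does not come back on its own):</p>
<pre>
systemctl start glusterd          # glusterd should respawn the brick (glusterfsd) processes
gluster volume start gv1 force    # restart any brick process that is still missing
gluster volume heal gv1           # trigger the self-heal
gluster volume heal gv1 info      # watch the pending-heal list shrink
</pre>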
<p>If you can't tell which server it is and you are able to take
gluster offline for your users for a moment, run that process on all
of your brick servers.<br>
</p>
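<p>If you want to try to narrow it down first, the brick status from
any node usually shows which brick process is offline (check the
"Online" column):</p>
<pre>
gluster volume status gv1
</pre>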
<p>Brian Andrus<br>
</p>
<br>
<div class="moz-cite-prefix">On 7/13/2018 10:55 AM, hsafe wrote:<br>
</div>
<blockquote type="cite"
cite="mid:7bec836d-e5ac-820d-15d1-80adee951ad9@devopt.net">
<p>Hello Gluster community,</p>
<p>After several hundred GB of data writes (small images, 100k
&lt; size &lt; 1M) into a 2x replicated glusterfs setup, I am
facing an issue with the healing process. Earlier, the heal info
command returned the bricks and nodes and reported no failed
heals; but now it gets into the state with the below message:</p>
<p><b><font size="-1"># gluster volume heal gv1 info healed</font></b></p>
<p><b><font size="-1">Gathering list of heal failed entries on
volume gv1 has been unsuccessful on bricks that are down.
Please check if all brick processes are running.</font></b></p>
<p>Issuing the heal info command gives a long list of gfid entries
that takes about an hour to complete. The file data, being images,
does not change and is primarily served from 8x servers mounting
native glusterfs. <br>
</p>
<p>Here is some insight into the status of the gluster. How can I
effectively do a successful heal on the storage? The last times I
tried, it sent the servers sideways and left them unresponsive. <br>
</p>
<p><b><font size="-1"># gluster volume info<br>
<br>
Volume Name: gv1<br>
Type: Replicate<br>
Volume ID: f1c955a1-7a92-4b1b-acb5-8b72b41aaace<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: IMG-01:/images/storage/brick1<br>
Brick2: IMG-02:/images/storage/brick1<br>
Options Reconfigured:<br>
performance.md-cache-timeout: 128<br>
cluster.background-self-heal-count: 32<br>
server.statedump-path: /tmp<br>
performance.readdir-ahead: on<br>
nfs.disable: true<br>
network.inode-lru-limit: 50000<br>
features.bitrot: off<br>
features.scrub: Inactive<br>
performance.cache-max-file-size: 16MB<br>
client.event-threads: 8<br>
cluster.eager-lock: on</font></b><br>
</p>
<p>Appreciate your help. Thanks.<br>
</p>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
</body>
</html>