<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Hello Gluster community,</p>
<p>After several hundred GB of data writes (small image files,
between 100 KB and 1 MB each) to a 2x replicated GlusterFS volume,
I am facing an issue with the healing process. Earlier, the heal
info commands listed the bricks and nodes and reported that there
were no failed heals; but now they return the message below:</p>
<p><b><font size="-1"># gluster volume heal gv1 info healed</font></b></p>
<p><b><font size="-1">Gathering list of heal failed entries on
volume gv1 has been unsuccessful on bricks that are down.
Please check if all brick processes are running.</font></b></p>
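<p>Before retrying the heal, my understanding is that the brick
processes and the self-heal daemon can be checked roughly as
follows (volume name and process names are from my setup; please
correct me if this is not the right check):<br>
</p>
<p><b><font size="-1"># gluster volume status gv1<br>
# ps aux | grep -E 'glusterfsd|glustershd'</font></b><br>
</p>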
<p>Issuing the heal info command produces a long list of gfid
entries and takes about an hour to complete. The files are images
that do not change once written, and they are served primarily from
eight servers that mount the volume with the native GlusterFS
client. <br>
</p>
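<p>I assume the number of pending entries can be checked without
listing every gfid by using the heal statistics, something like:<br>
</p>
<p><b><font size="-1"># gluster volume heal gv1 statistics
heal-count</font></b><br>
</p>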
<p>Below is some insight into the status of the volume. How can I
run a successful heal on this storage? The last few times I tried,
the servers went south and became unresponsive. <br>
</p>
<p><b><font size="-1"># gluster volume info<br>
<br>
Volume Name: gv1<br>
Type: Replicate<br>
Volume ID: f1c955a1-7a92-4b1b-acb5-8b72b41aaace<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: IMG-01:/images/storage/brick1<br>
Brick2: IMG-02:/images/storage/brick1<br>
Options Reconfigured:<br>
performance.md-cache-timeout: 128<br>
cluster.background-self-heal-count: 32<br>
server.statedump-path: /tmp<br>
performance.readdir-ahead: on<br>
nfs.disable: true<br>
network.inode-lru-limit: 50000<br>
features.bitrot: off<br>
features.scrub: Inactive<br>
performance.cache-max-file-size: 16MB<br>
client.event-threads: 8<br>
cluster.eager-lock: on</font></b><br>
</p>
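<p>For reference, my understanding is that the heal itself would be
triggered with one of the commands below (index heal first, a full
heal only if needed); please correct me if there is a safer sequence
for a volume in this state:<br>
</p>
<p><b><font size="-1"># gluster volume heal gv1<br>
# gluster volume heal gv1 full</font></b><br>
</p>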
<p>Appreciate your help. Thanks.<br>
</p>
</body>
</html>