<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <span dir="ltr"><<a href="mailto:tolid@tolid.eu.org" target="_blank">tolid@tolid.eu.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="font-size:10pt;font-family:Verdana,Geneva,sans-serif">
<p>Hi Karthik,</p>
<p><br></p>
<p>Thanks a lot for the explanation.</p>
<p>Does it mean a distributed volume health can be checked only by "gluster volume status " command?</p></div></blockquote><div>Yes. I am not aware of any other command which can give the status of plain distribute volume which is similar to the heal info command for replicate/disperse volumes. <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="font-size:10pt;font-family:Verdana,Geneva,sans-serif">
<p>And one more question: cluster.min-free-disk is 10% by default. What kind of "side effects" can we face if this option will be reduced to, for example, 5%? Could you point to any best practice document(s)?</p></div></blockquote><div>Yes you can decrease it to any value. There won't be any side effect.<br><br></div><div>Regards,<br></div><div>Karthik<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="font-size:10pt;font-family:Verdana,Geneva,sans-serif">
<p><br></p>
<p>Regards,</p>
<p>Anatoliy</p><div><div class="h5">
<p><br></p>
<p><br></p>
<p><br></p>
<p><br></p>
<p>On 2018-03-13 16:46, Karthik Subrahmanya wrote:</p>
<blockquote type="cite" style="padding:0 0.4em;border-left:#1010ff 2px solid;margin:0">
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>Hi Anatoliy,<br><br></div>
The heal command is basically used to heal any mismatching contents between replica copies of the files.</div>
For the command "gluster volume heal <volname>" to succeed, you should have the self-heal-daemon running,<br>which is true only if your volume is of type replicate/disperse.</div>
In your case you have a plain distribute volume where you do not store the replica of any files.</div>
So the volume heal will return you the error.<br><br></div>
Regards,</div>
Karthik</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Mar 13, 2018 at 7:53 PM, Anatoliy Dmytriyev <span><<a href="mailto:tolid@tolid.eu.org" target="_blank">tolid@tolid.eu.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br> <br> <br> Maybe someone can point me to a documentation or explain this? I can't find it myself.<br> Do we have any other useful resources except <a href="http://doc.gluster.org" rel="noopener noreferrer" target="_blank">doc.gluster.org</a>? As I see many gluster options are not described there or there are no explanation what is doing...
<div class="m_-1611571442260747725HOEnZb">
<div class="m_-1611571442260747725h5"><br> <br> <br> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br> <br> We have a very fresh gluster 3.10.10 installation.<br> Our volume is created as distributed volume, 9 bricks 96TB in total<br> (87TB after 10% of gluster disk space reservation)<br> <br> For some reasons I can't "heal" the volume:<br> # gluster volume heal gv0<br> Launching heal operation to perform index self heal on volume gv0 has<br> been unsuccessful on bricks that are down. Please check if all brick<br> processes are running.<br> <br> Which processes should be run on every brick for heal operation?<br> <br> # gluster volume status<br> Status of volume: gv0<br> Gluster process TCP Port RDMA Port Online Pid<br> ------------------------------<wbr>------------------------------<wbr>------------------<br> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y 70850<br> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y 102951<br> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y 57535<br> Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152 Y 56676<br> Brick cn05-ib:/gfs/gv0/brick1/brick 0 49152 Y 56880<br> Brick cn06-ib:/gfs/gv0/brick1/brick 0 49152 Y 56889<br> Brick cn07-ib:/gfs/gv0/brick1/brick 0 49152 Y 56902<br> Brick cn08-ib:/gfs/gv0/brick1/brick 0 49152 Y 94920<br> Brick cn09-ib:/gfs/gv0/brick1/brick 0 49152 Y 56542<br> <br> Task Status of Volume gv0<br> ------------------------------<wbr>------------------------------<wbr>------------------<br> There are no active volume tasks<br> <br> <br> # gluster volume info gv0<br> Volume Name: gv0<br> Type: Distribute<br> Volume ID: 8becaf78-cf2d-4991-93bf-f24466<wbr>88154f<br> Status: Started<br> Snapshot Count: 0<br> Number of Bricks: 9<br> Transport-type: rdma<br> Bricks:<br> Brick1: cn01-ib:/gfs/gv0/brick1/brick<br> Brick2: cn02-ib:/gfs/gv0/brick1/brick<br> Brick3: cn03-ib:/gfs/gv0/brick1/brick<br> Brick4: cn04-ib:/gfs/gv0/brick1/brick<br> Brick5: cn05-ib:/gfs/gv0/brick1/brick<br> Brick6: cn06-ib:/gfs/gv0/brick1/brick<br> Brick7: cn07-ib:/gfs/gv0/brick1/brick<br> Brick8: cn08-ib:/gfs/gv0/brick1/brick<br> Brick9: cn09-ib:/gfs/gv0/brick1/brick<br> Options Reconfigured:<br> client.event-threads: 8<br> performance.parallel-readdir: on<br> performance.readdir-ahead: on<br> cluster.nufa: on<br> nfs.disable: on</blockquote>
<br> -- <br> Best regards,<br> Anatoliy<br> ______________________________<wbr>_________________<br> Gluster-users mailing list<br> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noopener noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a></div>
</div>
</blockquote>
</div>
</div>
</blockquote>
<p><br></p>
</div></div><span class="HOEnZb"><font color="#888888"><div>-- <br>
<div class="m_-1611571442260747725pre" style="margin:0;padding:0;font-family:monospace">Best regards,<br> Anatoliy</div>
</div>
</font></span></div>
</blockquote></div><br></div></div>