<div dir="ltr">Hi Rolf,<br><div class="gmail_extra"><br></div><div class="gmail_extra">answers follow inline...</div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <span dir="ltr"><<a href="mailto:rolf@jotta.no" target="_blank">rolf@jotta.no</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div>Hi,<br><br></div>We ran a test on GlusterFS 3.12.1
with erasure-coded volumes 8+2 with 10 bricks (default config, tested with
100GB, 200GB, 400GB brick sizes, 10Gbit NICs)<br><br>1.<br></div>Tests
show that healing takes about double the time when healing 200GB vs 100GB,
and a bit under double on 400GB vs 200GB brick sizes. Is this expected
behaviour? In light of this, 6.4 TB brick sizes would take ~377 hours to heal.
<br></div><div><br></div><div>100GB brick heal: 18 hours (8+2)</div><div><div>200GB brick heal: 37 hours (8+2) +205%<br></div><div><div>400GB brick heal: 59 hours (8+2) +159%<br></div></div></div><div><br></div><div>Each 100GB brick is filled with 80,000 x 10 MB files (200GB is 2x and 400GB is 4x)<br></div></div></div></blockquote><div><br></div><div>If I understand it correctly, you are storing 80,000 files of 10 MB each when you are using 100GB bricks, but you double this value for 200GB bricks (160,000 files of 10 MB each), and for 400GB bricks you create 320,000 files. Have I understood it correctly?</div><div><br></div><div>If this is true, it's normal that twice the data requires approximately twice the heal time. The healing time depends on the contents of the brick, not on the brick size. The same number of files should take the same healing time, whatever the brick size is.</div><div> </div>
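<div>Just to illustrate that, here is a rough back-of-the-envelope estimate based on your own numbers (it assumes the per-file heal rate stays roughly constant, which is an assumption, not a guarantee):</div><div><br></div><div>18 hours / 80,000 files ≈ 0.81 seconds per file<br>6.4 TB filled with 10 MB files ≈ 640,000 files<br>640,000 files x 0.81 s ≈ 144 hours</div><div><br></div><div>So the extrapolation should start from the number of files you expect on the brick, not from the brick size itself.</div><div><br></div>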
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div></div><div><br>2.<br></div>Is
there any possibility to show the progress of a heal? As of now we run
'gluster volume heal <volname> info', but this exits when a brick is done
healing, and when we run heal info again the command continues showing
gfids until the brick is done again. This gives quite a bad picture of
the status of a heal.<br></div></div></blockquote><div><br></div><div>The output of 'gluster volume heal <volname> info' shows the list of files pending to be healed on each brick. The heal is complete when the list is empty. A faster alternative, if you don't want to see the whole list of files, is to use 'gluster volume heal <volname> statistics heal-count'. This will only show the number of pending files on each brick (see the note further down for a simple way to poll it).</div><div><br></div><div>I don't know of any other way to track the progress of self-heal.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br>3.<br></div>What kind of config tweaks are recommended for this kind of EC volume?<br></div></blockquote><div><br></div><div>I usually use the following values (specific only for ec):</div><div><br></div><div>client.event-threads 4</div><div>server.event-threads 4</div><div>performance.client-io-threads on</div>
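<div><br></div><div>In case it helps, the three options above are regular volume options, so they can be applied per volume with 'gluster volume set', for example (using <volname> as a placeholder for your volume name):</div><div><br></div><div>gluster volume set <volname> client.event-threads 4<br>gluster volume set <volname> server.event-threads 4<br>gluster volume set <volname> performance.client-io-threads on</div>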
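<div><br></div><div>And coming back to point 2: since 'heal-count' only prints a snapshot, a simple way to follow progress is to poll it periodically, for example with 'watch' (assuming it is available on the node):</div><div><br></div><div>watch -n 60 gluster volume heal <volname> statistics heal-count</div><div><br></div><div>The heal is finished when all the counters reach 0.</div>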
<div><br></div><div>Regards,</div><div><br></div><div>Xavi</div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><br clear="all"><div>$ gluster volume info<br></div><div>Volume Name: test-ec-100g<br>Type: Disperse<br>Volume ID: 0254281d-2f6e-4ac4-a773-2b8e0eb8ab27<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 1 x (8 + 2) = 10<br>Transport-type: tcp<br>Bricks:<br>Brick1: dn-304:/mnt/test-ec-100/brick<br>Brick2: dn-305:/mnt/test-ec-100/brick<br>Brick3: dn-306:/mnt/test-ec-100/brick<br>Brick4: dn-307:/mnt/test-ec-100/brick<br>Brick5: dn-308:/mnt/test-ec-100/brick<br>Brick6: dn-309:/mnt/test-ec-100/brick<br>Brick7: dn-310:/mnt/test-ec-100/brick<br>Brick8: dn-311:/mnt/test-ec-2/brick<br>Brick9: dn-312:/mnt/test-ec-100/brick<br>Brick10: dn-313:/mnt/test-ec-100/brick<br>Options Reconfigured:<br>nfs.disable: on<br>transport.address-family: inet<br> <br>Volume Name: test-ec-200<br>Type: Disperse<br>Volume ID: 2ce23e32-7086-49c5-bf0c-7612fd7b3d5d<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 1 x (8 + 2) = 10<br>Transport-type: tcp<br>Bricks:<br>Brick1: dn-304:/mnt/test-ec-200/brick<br>Brick2: dn-305:/mnt/test-ec-200/brick<br>Brick3: dn-306:/mnt/test-ec-200/brick<br>Brick4: dn-307:/mnt/test-ec-200/brick<br>Brick5: dn-308:/mnt/test-ec-200/brick<br>Brick6: dn-309:/mnt/test-ec-200/brick<br>Brick7: dn-310:/mnt/test-ec-200/brick<br>Brick8: dn-311:/mnt/test-ec-200_2/brick<br>Brick9: dn-312:/mnt/test-ec-200/brick<br>Brick10: dn-313:/mnt/test-ec-200/brick<br>Options Reconfigured:<br>nfs.disable: on<br>transport.address-family: inet</div><div><br></div>Volume Name: test-ec-400<br>Type: Disperse<br>Volume ID: fe00713a-7099-404d-ba52-46c6b4b6ecc0<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 1 x (8 + 2) = 10<br>Transport-type: tcp<br>Bricks:<br>Brick1: dn-304:/mnt/test-ec-400/brick<br>Brick2: dn-305:/mnt/test-ec-400/brick<br>Brick3: dn-306:/mnt/test-ec-400/brick<br>Brick4: dn-307:/mnt/test-ec-400/brick<br>Brick5: dn-308:/mnt/test-ec-400/brick<br>Brick6: dn-309:/mnt/test-ec-400/brick<br>Brick7: dn-310:/mnt/test-ec-400/brick<br>Brick8: dn-311:/mnt/test-ec-400_2/brick<br>Brick9: dn-312:/mnt/test-ec-400/brick<br>Brick10: dn-313:/mnt/test-ec-400/brick<br>Options Reconfigured:<br>nfs.disable: on<br>transport.address-family: inet<span class="HOEnZb"><font color="#888888"><br clear="all"><br>-- <br><div class="m_-3354409051368983522gmail_signature"><div dir="ltr"><div><div dir="ltr"><p></p>
Regards<br>
Rolf Arne Larsen<br>
Ops Engineer<br>
<a href="mailto:rolf@startsiden.no" target="_blank"><span>rolf@jottacloud.com</span></a><br>
<a href="http://www.jottacloud.com" target="_blank"></a></div></div></div></div>
</font></span></div>
<br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br></div></div>