<div dir="ltr">In my testing environment I have seen read speed degrade while the same file is being healed: the normal read speed of a file would be around 40MB/s, but reading it while the heal is in progress drops to roughly 10MB/s.<div><br></div><div>My file sizes vary from 4KB to 30GB, and on average 15 files are created in each new directory.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 18, 2017 at 3:29 PM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><span class=""><div>>I was asking about reading data in same disperse set like 8+2 disperse
config if one disk is replaced and when heal is in process and when
client reads data which is available in rest of the 9 disks.<br><br></div></span>My use case is write heavy; we rarely read data, so I do not know whether read speed degrades during heal. But I know write speed does not change during heal.<br><br></div>How big are your files? How many files are there on average in each directory?<br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 18, 2017 at 11:36 AM, Amudhan P <span dir="ltr"><<a href="mailto:amudhan83@gmail.com" target="_blank">amudhan83@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div>I actually ran this command (find /mnt/gluster -d -exec getfattr -h -n trusted.ec.heal {} \; > /dev/null<br>) on a specific folder to trigger heal, but it also did not show any difference in speed.<div><br></div><div>I was asking about reading data in the same disperse set, e.g. an 8+2 disperse config where one disk has been replaced and heal is in progress, while the client reads data that is available on the remaining 9 disks.</div><div><br></div><div>I am sure there was no bottleneck on network/disk IO in my case.</div><div><br></div><div><table style="border-collapse:collapse;width:583pt" border="0" cellspacing="0" cellpadding="0" width="777">
<colgroup><col style="width:583pt" width="777"> </colgroup><tbody><tr style="height:15pt" height="20">
<td style="height:15pt;width:583pt" width="777" height="20">I have tested 3.10.1 heal with disperse.shd-max-threads = 4; the heal completed 27GB of data in 13m15s. So it works well in a test environment, but the production environment differs.<br><br></td>
</tr>
<tr style="height:15pt" height="20">
<td style="height:15pt" height="20"><br></td>
</tr></tbody></table><div><div class="m_-1272700227805689305h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 18, 2017 at 12:47 PM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">You can increase the heal speed by running the command below from a client:<br>
find /mnt/gluster -d -exec getfattr -h -n trusted.ec.heal {} \; > /dev/null<br>
<br>
You can write a script that runs it over different folders in parallel.<br>
<br>
In my case I saw 6TB of data healed within 7-8 days with the above command running.<br>
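A minimal sketch of such a parallel trigger script, assuming the /mnt/gluster mount point from the command above and one scan per top-level directory (both are assumptions, not something the thread specifies):

```shell
#!/bin/sh
# Sketch: trigger heals in parallel by scanning each top-level directory
# of the mounted volume in its own background process.
# MOUNT is an assumed mount path; override it via the environment.
MOUNT=${MOUNT:-/mnt/gluster}

for dir in "$MOUNT"/*/; do
  # Reading the trusted.ec.heal xattr makes the EC translator heal the
  # file; all output and errors are discarded, only the side effect matters.
  find "$dir" -d -exec getfattr -h -n trusted.ec.heal {} \; >/dev/null 2>&1 &
done

wait  # block until every background scan finishes
```

The degree of parallelism here is simply the number of top-level directories; splitting at a deeper level would give finer-grained balancing.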
<span class="m_-1272700227805689305m_7902750467696388854gmail-">>did you face any issue in reading data from rest of the good bricks in the set. like slow read < KB/s.<br>
</span>No, the nodes generally have balanced network/disk IO during heal.<br>
<br>
You should run detailed tests on a non-prod cluster and try to find the<br>
optimum heal configuration for your use case.<br>
Our new servers are on the way; in a couple of months I will also do<br>
detailed tests with 3.10.x and parallel disperse heal and will post the<br>
results here.<br>
<div class="m_-1272700227805689305m_7902750467696388854gmail-HOEnZb"><div class="m_-1272700227805689305m_7902750467696388854gmail-h5"><br>
<br>
On Tue, Apr 18, 2017 at 9:51 AM, Amudhan P <<a href="mailto:amudhan83@gmail.com" target="_blank">amudhan83@gmail.com</a>> wrote:<br>
> Serkan,<br>
><br>
> I initially changed shd-max-thread from 1 to 2 and saw a little difference, but<br>
> changing it to 4 & 8 doesn't make any difference.<br>
> Disk write speed was below 1MB/s, and the data passed over the network to the<br>
> healing node from the other nodes was about 4MB/s combined.<br>
><br>
> Also, I tried ls -l from the mount point on the folders and files that need<br>
> to be healed, but did not see any difference in performance.<br>
><br>
> But after the heal process had run for 3 days, disk write speed increased to<br>
> 9-11MB/s, and the data passed over the network to the healing node from the<br>
> other nodes was about 40MB/s combined.<br>
><br>
> There is still 14GB of data to be healed compared to the other disks in the set.<br>
><br>
> I saw in another thread you also had the issue with heal speed, did you face<br>
> any issue in reading data from rest of the good bricks in the set. like slow<br>
> read < KB/s.<br>
><br>
> On Mon, Apr 17, 2017 at 2:05 PM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>> wrote:<br>
>><br>
>> Normally I see 8-10MB/sec/brick heal speed with gluster 3.7.11.<br>
>> I tested parallel heal for disperse with version 3.9.0 and saw that it<br>
>> increases the heal speed to 20-40MB/sec.<br>
>> I tested with shd-max-threads 2, 4, and 8 and saw that the best performance<br>
>> was achieved with 2 or 4 threads.<br>
>> You can start with 2, then test with 4 and 8, and compare the results.<br>
><br>
><br>
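For reference, the shd-max-threads knob discussed throughout this thread is set per volume from the gluster CLI; a sketch, where myvol is a placeholder volume name:

```shell
# Raise the number of parallel self-heal daemon threads for disperse
# volumes (option available from the 3.9/3.10 series onward).
# "myvol" is a placeholder volume name.
gluster volume set myvol disperse.shd-max-threads 2

# Check the remaining heal backlog while comparing 2, 4, and 8 threads:
gluster volume heal myvol info
```

Changing the value takes effect on the running self-heal daemon, so the thread counts above can be compared on the same heal workload.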
</div></div></blockquote></div><br></div></div></div></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>