<html><body><div style="font-family: times new roman, new york, times, serif; font-size: 12pt; color: #000000"><div><br></div><div><br></div><div>Hi Amudhan,<br></div><div><br></div><div>In your case, was any I/O going on while the file was being healed?<br></div><div>Were you writing to a file that was also being healed by shd, and did you observe that this file was not healing?<br></div><div> Or did you just leave the system to complete the heal after the replace-brick?<br></div><div><br></div><div>Ashish<br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;" data-mce-style="color: #000; font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>From: </b>"Serkan Çoban" <cobanserkan@gmail.com><br><b>To: </b>"Amudhan P" <amudhan83@gmail.com><br><b>Cc: </b>"Gluster Users" <gluster-users@gluster.org><br><b>Sent: </b>Tuesday, April 18, 2017 3:29:38 PM<br><b>Subject: </b>Re: [Gluster-users] How to Speed UP heal process in Glusterfs 3.10.1<br><div><br></div><div dir="ltr"><div><div>>I was asking about reading data in the same disperse set (e.g. an 8+2 disperse config): if one disk is replaced and heal is in progress, what happens when a client reads data that is available on the remaining 9 disks?<br><div><br></div></div>My use case is write-heavy; we rarely read data, so I do not know whether read speed degrades during heal. But I know write speed does not change during heal.<br><div><br></div></div>How big are your files? 
How many files on average are in each directory?<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 18, 2017 at 11:36 AM, Amudhan P <span dir="ltr"><<a href="mailto:amudhan83@gmail.com" target="_blank" data-mce-href="mailto:amudhan83@gmail.com">amudhan83@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex" data-mce-style="margin: 0 0 0 .8ex; border-left: 1px #ccc solid; padding-left: 1ex;"><div dir="ltr"><div><br></div>I actually ran this command (find /mnt/gluster -d -exec getfattr -h -n trusted.ec.heal {} \; > /dev/null<br>) on a specific folder to trigger heal, but it also showed no difference in speed.<div><br></div><div>I was asking about reading data in the same disperse set (e.g. an 8+2 disperse config): if one disk is replaced and heal is in progress, what happens when a client reads data that is available on the remaining 9 disks? </div><div><br></div><div>I am sure there was no bottleneck on network/disk I/O in my case. </div><div><br></div><div><table style="border-collapse:collapse;width:583pt" data-mce-style="border-collapse: collapse; width: 583pt;" class="mceItemTable" border="0" cellspacing="0" cellpadding="0" width="777"><colgroup><col style="width:583pt" data-mce-style="width: 583pt;" width="777"> </colgroup><tbody><tr style="height:15pt" data-mce-style="height: 15pt;"><td style="height:15pt;width:583pt" data-mce-style="height: 15pt; width: 583pt;" width="777" height="20">I have tested 3.10.1 heal with disperse.shd-max-threads = 4; the heal completed 27GB of data in 13m15s. 
It works well in a test environment, but the production environment differs.<br><div><br></div></td></tr><tr style="height:15pt" data-mce-style="height: 15pt;"><td style="height:15pt" data-mce-style="height: 15pt;" height="20"><br></td></tr></tbody></table><div><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 18, 2017 at 12:47 PM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank" data-mce-href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" data-mce-style="margin: 0px 0px 0px 0.8ex; border-left: 1px solid #cccccc; padding-left: 1ex;">You can increase heal speed by running the command below from a client:<br> find /mnt/gluster -d -exec getfattr -h -n trusted.ec.heal {} \; > /dev/null<br> <br> You can write a script that runs it over different folders to make it parallel.<br> <br> In my case, 6TB of data was healed within 7-8 days with the above command running.<br> <span class="m_7902750467696388854gmail-">>Did you face any issue reading data from the rest of the good bricks in the set, like slow reads (< KB/s)?<br> </span>No, nodes generally have balanced network/disk I/O during heal.<br> <br> You should run detailed tests on a non-production cluster to find the<br> optimum heal configuration for your use case.<br> Our new servers are on the way; in a couple of months I will also do<br> detailed tests with 3.10.x and parallel disperse heal, and will post the<br> results here.<br><div class="m_7902750467696388854gmail-HOEnZb"><div class="m_7902750467696388854gmail-h5"><br> <br> On Tue, Apr 18, 2017 at 9:51 AM, Amudhan P <<a href="mailto:amudhan83@gmail.com" target="_blank" data-mce-href="mailto:amudhan83@gmail.com">amudhan83@gmail.com</a>> wrote:<br> > Serkan,<br> ><br> > Initially I changed shd-max-threads from 1 to 2 and saw a little difference;<br> > changing it to 4 & 8 
doesn't make any difference.<br> > Disk write speed was below 1MB/s, and the data passing over the network to the healing<br> > node from the other nodes was about 4MB/s combined.<br> ><br> > Also, I tried ls -l from the mount point on the folders and files that need to<br> > be healed, but saw no difference in performance.<br> ><br> > But after the heal process had run for 3 days, disk write speed increased to<br> > 9-11MB/s, and the data passing over the network to the healing node from the other nodes was<br> > about 40MB/s combined.<br> ><br> > There is still 14GB of data to be healed compared to the other disks in the set.<br> ><br> > I saw in another thread that you also had an issue with heal speed. Did you face<br> > any issue reading data from the rest of the good bricks in the set, like slow<br> > reads (< KB/s)?<br> ><br> > On Mon, Apr 17, 2017 at 2:05 PM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank" data-mce-href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>> wrote:<br> >><br> >> Normally I see 8-10MB/sec/brick heal speed with gluster 3.7.11.<br> >> I tested parallel heal for disperse with version 3.9.0 and saw that it<br> >> increased the heal speed to 20-40MB/sec.<br> >> I tested with shd-max-threads 2, 4, and 8 and saw that the best performance<br> >> was achieved with 2 or 4 threads.<br> >> You could start with 2, then test with 4 and 8 and compare the results.<br> ><br> ><br></div></div></blockquote></div><br></div></div></div></div></div></blockquote></div><br></div><br>_______________________________________________<br>Gluster-users mailing list<br>Gluster-users@gluster.org<br>http://lists.gluster.org/mailman/listinfo/gluster-users</div><div><br></div></div></body></html>