<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 20, 2017 at 1:24 PM, Amudhan P <span dir="ltr"><<a href="mailto:amudhan83@gmail.com" target="_blank">amudhan83@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Pranith,<div><br></div><div><span class=""><div class="gmail_extra">> 1) At the moment heals happen in parallel only for files not directories. i.e. same shd process doesn't heal 2 directories at a time. But it > can do as many file heals as shd-max-threads option. That could be the reason why Amudhan faced better performance after a while, but > it is a bit difficult to confirm without data.</div><div class="gmail_extra"> </div></span><div class="gmail_extra"> yes, your right disk has about 56153 files and each is under their own subdirectories. so equal or higher number folders will be there.</div><div class="gmail_extra"><br></div><div class="gmail_extra">I have doubt when heal process creates a folder in disk does it also check with rest of the bricks on same disperse set to process and update xattr for folders and files when getting healed.</div></div></div></blockquote><div><br></div><div>Yes in general most of the heal process involves contacting other bricks not just for creating directory but for other things as well like setting inode attributes/xattrs, data etc.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><span class=""><div class="gmail_extra"><br></div><div class="gmail_extra">> 2) When a file is undergoing I/O both shd and mount will contend for locks to do I/O from bricks this probably is the reason for the > slowness in I/O. it will last only until the file is healed in parallel with the I/O from users.<br></div><div class="gmail_extra"><br></div></span><div class="gmail_extra"> I suggest there should be a mechanism in above case that should pause heal process and fulfill read request first and later continue with heal process. so user doesn't feel any difference in read speed.</div></div></div></blockquote><div><br></div><div>But the read request can come at any point. If READ request comes after heal process takes locks, then the logic will become very convoluted to give priority to I/O. I think a better way would be to disable I/O from triggering heals for your case. This doesn't really fix the problem but it would reduce the probability of seeing this issue.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><span class=""><div class="gmail_extra"><br></div><div>>3) Serkan, Amudhan, it would be nice to have feedback about what do you feel are the bottlenecks so that we can come up with next set >of performance improvements. One of the newer enhancements Sunil is working on is to be able to heal larger chunks in one go rather >than ~128KB chunks. It will be configurable upto 128MB I think, this will improve throughput. Next set of enhancements would >concentrate on reducing network round trips in doing heal and doing parallel heals of directories.<br></div><div><br></div></span><div> I don't see any other bottlenecks other than what we discussed in this thread. heal should be faster when we have sufficient hardware power to do that. hope the newer enhancements would fulfill.</div><div><br></div><div><br></div><div>Coming to the original thread:</div><div><br></div><div>I think heal process is completed but still, there is a size difference of 14GB between healed disk and other good disks in the same set. </div><div>so I have compared files between healed disk and good disk there are 3 files missing but it is a kb size files and this file was deleted in 3.7 but it's still in bricks.</div></div></div></blockquote><div><br></div><div>Oh you have 3 files missing but no xattrs to indicate this? Could you let us know more about what are the parent directory xattrs on all the bricks where the file is missing?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><br></div><div>Why is this size difference?</div></div></div></blockquote><div><br></div><div>Could you find which files/directories are corresponding to the size difference? Also include .glusterfs in your commands for consideration.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><br></div><div>regards</div><span class="HOEnZb"><font color="#888888"><div>Amudhan P</div></font></span><div><div class="h5"><div><br></div><div><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Apr 19, 2017 at 4:05 PM, Pranith Kumar Karampuri <span dir="ltr"><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div>Some thoughts based on this mail thread:<br></div>1) At the moment heals happen in parallel only for files not directories. i.e. same shd process doesn't heal 2 directories at a time. But it can do as many file heals as shd-max-threads option. That could be the reason why Amudhan faced better performance after a while, but it is a bit difficult to confirm without data.<br></div><br>2) When a file is undergoing I/O both shd and mount will contend for locks to do I/O from bricks this probably is the reason for the slowness in I/O. it will last only until the file is healed in parallel with the I/O from users.<br></div><br>3) Serkan, Amudhan, it would be nice to have feedback about what do you feel are the bottlenecks so that we can come up with next set of performance improvements. One of the newer enhancements Sunil is working on is to be able to heal larger chunks in one go rather than ~128KB chunks. It will be configurable upto 128MB I think, this will improve throughput. Next set of enhancements would concentrate on reducing network round trips in doing heal and doing parallel heals of directories.<br><br></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="m_8878474017233464669m_586641957441248438gmail-h5">On Tue, Apr 18, 2017 at 6:22 PM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div class="m_8878474017233464669m_586641957441248438gmail-h5"><span>>Is this by design ? Is it tuneable ? 10MB/s/brick is too low for us.<br>
>We will use 10GB ethernet, healing 10MB/s/brick would be a bottleneck.<br>
<br>
</span>That is the maximum if you are using EC volumes, I don't know about<br>
other volume configurations.<br>
With 3.9.0 parallel self heal of EC volumes should be faster though.<br>
</div></div><div class="m_8878474017233464669m_586641957441248438gmail-m_279567577251717524HOEnZb"><div class="m_8878474017233464669m_586641957441248438gmail-m_279567577251717524h5"><div><div class="m_8878474017233464669m_586641957441248438gmail-h5"><br>
<br>
<br>
On Tue, Apr 18, 2017 at 1:38 PM, Gandalf Corvotempesta<br>
<<a href="mailto:gandalf.corvotempesta@gmail.com" target="_blank">gandalf.corvotempesta@gmail.c<wbr>om</a>> wrote:<br>
> 2017-04-18 9:36 GMT+02:00 Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>>:<br>
>> Nope, healing speed is 10MB/sec/brick, each brick heals with this<br>
>> speed, so one brick or one server each will heal in one week...<br>
><br>
> Is this by design ? Is it tuneable ? 10MB/s/brick is too low for us.<br>
> We will use 10GB ethernet, healing 10MB/s/brick would be a bottleneck.<br></div></div><span class="m_8878474017233464669m_586641957441248438gmail-">
______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a></span></div></div></blockquote></div><span class="m_8878474017233464669m_586641957441248438gmail-HOEnZb"><font color="#888888"><br><br clear="all"><br>-- <br><div class="m_8878474017233464669m_586641957441248438gmail-m_279567577251717524gmail_signature"><div dir="ltr">Pranith<br></div></div>
</font></span></div>
<br>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a><br></blockquote></div><br></div></div></div></div></div>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</div></div>