<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jul 27, 2018 at 1:32 PM, Hu Bert <span dir="ltr"><<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">2018-07-27 9:22 GMT+02:00 Pranith Kumar Karampuri <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>>:<br>
><br>
><br>
> On Fri, Jul 27, 2018 at 12:36 PM, Hu Bert <<a href="mailto:revirii@googlemail.com">revirii@googlemail.com</a>> wrote:<br>
>><br>
>> 2018-07-27 8:52 GMT+02:00 Pranith Kumar Karampuri <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>>:<br>
>> ><br>
>> ><br>
>> > On Fri, Jul 27, 2018 at 11:53 AM, Hu Bert <<a href="mailto:revirii@googlemail.com">revirii@googlemail.com</a>><br>
>> > wrote:<br>
>> >><br>
>> >> > Do you already have all the 190000 directories created? If not,<br>
>> >> > could you find out which of the paths need it and do a stat directly<br>
>> >> > instead of find?<br>
>> >><br>
>> >> Quite probably not all of them have been created (but counting how<br>
>> >> many would take very long...). Hmm, maybe running stat in a double<br>
>> >> loop (thanks to our directory structure) would help. Something like<br>
>> >> this (may not be 100% correct):<br>
>> >><br>
>> >> for a in {100..999}; do<br>
>> >>   for b in {100..999}; do<br>
>> >>     stat /$a/$b/<br>
>> >>   done<br>
>> >> done<br>
>> >><br>
>> >> That should run stat on all directories. I think I'll give this a try.<br>
>> ><br>
>> ><br>
>> > Just to prevent these being served from a cache, it is probably<br>
>> > better to do this from a fresh mount?<br>
>> ><br>
>> > --<br>
>> > Pranith<br>
>><br>
>> Good idea. I'll install the glusterfs client on a little-used machine,<br>
>> so there should be no caching. Thanks! Have a good weekend when the<br>
>> time comes :-)<br>
><br>
><br>
> If this proves effective, you also need to unmount and mount again,<br>
> something like:<br>
><br>
> mount<br>
> for a in {100..999}; do<br>
>   for b in {100..999}; do<br>
>     stat /$a/$b/<br>
>   done<br>
> done<br>
> umount<br>
<br>
</div></div>I'll see what is possible over the weekend.<br>
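For reference, the double loop above can be exercised locally before running it against the cluster. A minimal sketch, using a temporary directory as a hypothetical stand-in for the glusterfs client mount point (on the real cluster you would mount the volume fresh and use the full {100..999} range):<br>

```shell
#!/bin/bash
# DEMO stands in for the glusterfs client mount point (hypothetical);
# on the real setup, replace it with a freshly mounted volume and use
# the full {100..999}/{100..999} range instead of the tiny demo tree.
DEMO=$(mktemp -d)
mkdir -p "$DEMO"/{100..102}/{100..102}

found=0
for a in {100..102}; do
    for b in {100..102}; do
        # stat forces a lookup on each directory; on gluster that lookup
        # is what can trigger the self-heal check
        if stat "$DEMO/$a/$b/" > /dev/null 2>&1; then
            found=$((found + 1))
        fi
    done
done
echo "stat succeeded on $found directories"
rm -rf "$DEMO"
```

Note that bash brace expansion is written `{100..999}` (no leading `$`), and `seq 100 999` would work the same way in plain sh.<br>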
<br>
Btw: I've seen in the Munin stats that the disk utilization for<br>
bricksdd1 on the healthy gluster servers is between 70% (night) and<br>
almost 99% (daytime). So it looks like the basic problem is the disk,<br>
which simply cannot work any faster? If so, heal<br>
performance won't improve with this setup, I assume. </blockquote><div><br></div><div><div class="gmail_extra">It could be saturating during the day. But if enough self-heals are going on, even at night</div><div class="gmail_extra">it should have been close to 100%.</div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Maybe switching<br>
to RAID10 (conventional hard disks) or SSDs, or even adding 3 more<br>
gluster servers (distributed replicated), could help?<br>
</blockquote></div></div><div><div><div class="gmail_extra"><br></div><div class="gmail_extra">It definitely will give better protection against hardware failure. Failure domain will be lesser.<br></div><div class="gmail_extra">-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</div></div></div></div>