<div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Wed, Aug 22, 2018 at 12:01 PM Hu Bert <<a href="mailto:revirii@googlemail.com">revirii@googlemail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Just an addition: in general there are no log messages in<br>
/var/log/glusterfs/ (if you don't call 'gluster volume ...'), but on<br>
the node with the lowest load I see in cli.log.1:<br>
<br>
[2018-08-22 06:20:43.291055] I [socket.c:2474:socket_event_handler]<br>
0-transport: EPOLLERR - disconnecting now<br>
[2018-08-22 06:20:46.291327] I [socket.c:2474:socket_event_handler]<br>
0-transport: EPOLLERR - disconnecting now<br>
[2018-08-22 06:20:49.291575] I [socket.c:2474:socket_event_handler]<br>
0-transport: EPOLLERR - disconnecting now<br>
<br>
every 3 seconds. Looks like this bug:<br>
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1484885" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1484885</a> - but that shoud<br>
have been fixed in the 3.12.x release, and network is fine.<br></blockquote><div><br></div><div><a class="gmail_plusreply" id="plusReplyChip-0" href="mailto:mchangir@redhat.com" tabindex="-1">+Milind Changire</a> <br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
In cli.log there are only these entries:<br>
<br>
[2018-08-22 06:19:23.428520] I [cli.c:765:main] 0-cli: Started running<br>
gluster with version 3.12.12<br>
[2018-08-22 06:19:23.800895] I [MSGID: 101190]<br>
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started<br>
thread with index 1<br>
[2018-08-22 06:19:23.800978] I [socket.c:2474:socket_event_handler]<br>
0-transport: EPOLLERR - disconnecting now<br>
[2018-08-22 06:19:23.809366] I [input.c:31:cli_batch] 0-: Exiting with: 0<br>
<br>
Just wondered if this could be related somehow.<br>
<br>
2018-08-21 8:17 GMT+02:00 Pranith Kumar Karampuri <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>>:<br>
><br>
><br>
> On Tue, Aug 21, 2018 at 11:40 AM Hu Bert <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>> wrote:<br>
>><br>
>> Good morning :-)<br>
>><br>
>> gluster11:<br>
>> ls -l /gluster/bricksdd1/shared/.glusterfs/indices/xattrop/<br>
>> total 0<br>
>> ---------- 1 root root 0 Aug 14 06:14<br>
>> xattrop-006b65d8-9e81-4886-b380-89168ea079bd<br>
>><br>
>> gluster12:<br>
>> ls -l /gluster/bricksdd1_new/shared/.glusterfs/indices/xattrop/<br>
>> total 0<br>
>> ---------- 1 root root 0 Jul 17 11:24<br>
>> xattrop-c7c6f765-ce17-4361-95fb-2fd7f31c7b82<br>
>><br>
>> gluster13:<br>
>> ls -l /gluster/bricksdd1_new/shared/.glusterfs/indices/xattrop/<br>
>> total 0<br>
>> ---------- 1 root root 0 Aug 16 07:54<br>
>> xattrop-16b696a0-4214-4999-b277-0917c76c983e<br>
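As a side note, a quick way to tally the entries in each brick's xattrop index could be something like this (a hedged sketch; the brick paths are the ones from this thread, and plain file counting is just an illustration, not an official GlusterFS procedure):

```shell
# Sketch: count xattrop index entries per brick path (paths as in this thread).
for d in /gluster/bricksdd1/shared /gluster/bricksdd1_new/shared; do
  p="$d/.glusterfs/indices/xattrop"
  # Only count if the index directory exists on this node.
  [ -d "$p" ] && printf '%s: %s entries\n' "$p" "$(ls -1 "$p" | wc -l)"
done
```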
>><br>
>><br>
>> And here's the output of 'perf ...', which ran for almost a minute; the<br>
>> file grew quickly to a size of 17 GB and system load went up heavily.<br>
>> Had to wait a while until the load dropped :-)<br>
>><br>
>> FYI, load at the moment:<br>
>> load gluster11: ~90<br>
>> load gluster12: ~10<br>
>> load gluster13: ~50<br>
>><br>
>> perf record --call-graph=dwarf -p 7897 -o<br>
>> /tmp/perf.gluster11.bricksdd1.out<br>
>> [ perf record: Woken up 9837 times to write data ]<br>
>> Warning:<br>
>> Processed 2137218 events and lost 33446 chunks!<br>
>><br>
>> Check IO/CPU overload!<br>
>><br>
>> [ perf record: Captured and wrote 16576.374 MB<br>
>> /tmp/perf.gluster11.bricksdd1.out (2047760 samples) ]<br>
>><br>
>> Here's an excerpt.<br>
>><br>
>> + 1.93% 0.00% glusteriotwr0 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.89% 0.00% glusteriotwr28 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.86% 0.00% glusteriotwr15 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.85% 0.00% glusteriotwr63 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.83% 0.01% glusteriotwr0 [kernel.kallsyms] [k]<br>
>> entry_SYSCALL_64_after_swapgs<br>
>> + 1.82% 0.00% glusteriotwr38 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.82% 0.01% glusteriotwr28 [kernel.kallsyms] [k]<br>
>> entry_SYSCALL_64_after_swapgs<br>
>> + 1.82% 0.00% glusteriotwr0 [kernel.kallsyms] [k]<br>
>> do_syscall_64<br>
>> + 1.81% 0.00% glusteriotwr28 [kernel.kallsyms] [k]<br>
>> do_syscall_64<br>
>> + 1.81% 0.00% glusteriotwr15 [kernel.kallsyms] [k]<br>
>> entry_SYSCALL_64_after_swapgs<br>
>> + 1.81% 0.00% glusteriotwr36 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.80% 0.00% glusteriotwr15 [kernel.kallsyms] [k]<br>
>> do_syscall_64<br>
>> + 1.78% 0.01% glusteriotwr63 [kernel.kallsyms] [k]<br>
>> entry_SYSCALL_64_after_swapgs<br>
>> + 1.77% 0.00% glusteriotwr63 [kernel.kallsyms] [k]<br>
>> do_syscall_64<br>
>> + 1.75% 0.01% glusteriotwr38 [kernel.kallsyms] [k]<br>
>> entry_SYSCALL_64_after_swapgs<br>
>> + 1.75% 0.00% glusteriotwr38 [kernel.kallsyms] [k]<br>
>> do_syscall_64<br>
>> + 1.74% 0.00% glusteriotwr17 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.74% 0.00% glusteriotwr44 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.73% 0.00% glusteriotwr6 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.73% 0.00% glusteriotwr37 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.73% 0.01% glusteriotwr36 [kernel.kallsyms] [k]<br>
>> entry_SYSCALL_64_after_swapgs<br>
>> + 1.72% 0.00% glusteriotwr34 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.72% 0.00% glusteriotwr36 [kernel.kallsyms] [k]<br>
>> do_syscall_64<br>
>> + 1.71% 0.00% glusteriotwr45 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.70% 0.00% glusteriotwr7 [unknown] [k]<br>
>> 0xffffffffffffffff<br>
>> + 1.68% 0.00% glusteriotwr15 [kernel.kallsyms] [k]<br>
>> sys_getdents<br>
>> + 1.68% 0.00% glusteriotwr15 [kernel.kallsyms] [k] filldir<br>
>> + 1.68% 0.00% glusteriotwr15 <a href="http://libc-2.24.so" rel="noreferrer" target="_blank">libc-2.24.so</a> [.]<br>
>> 0xffff80c60db8ef2b<br>
>> + 1.68% 0.00% glusteriotwr15 <a href="http://libc-2.24.so" rel="noreferrer" target="_blank">libc-2.24.so</a> [.]<br>
>> readdir64<br>
>> + 1.68% 0.00% glusteriotwr15 index.so [.]<br>
>> 0xffff80c6192a1888<br>
>> + 1.68% 0.00% glusteriotwr15 [kernel.kallsyms] [k]<br>
>> iterate_dir<br>
>> + 1.68% 0.00% glusteriotwr15 [kernel.kallsyms] [k]<br>
>> ext4_htree_fill_tree<br>
>> + 1.68% 0.00% glusteriotwr15 [kernel.kallsyms] [k]<br>
>> ext4_readdir<br>
>><br>
>> Or do you want to download the file /tmp/perf.gluster11.bricksdd1.out<br>
>> and examine it yourself? If so I could send you a link.<br>
><br>
><br>
> Thank you! Yes, a link would be great. I am not that good with the kernel<br>
> side of things, so I will have to show this information to someone else who<br>
> knows these things; expect some delay in my response.<br>
><br>
>><br>
>><br>
>><br>
>> 2018-08-21 7:13 GMT+02:00 Pranith Kumar Karampuri <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>>:<br>
>> ><br>
>> ><br>
>> > On Tue, Aug 21, 2018 at 10:13 AM Pranith Kumar Karampuri<br>
>> > <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>> wrote:<br>
>> >><br>
>> >><br>
>> >><br>
>> >> On Mon, Aug 20, 2018 at 3:20 PM Hu Bert <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>> wrote:<br>
>> >>><br>
>> >>> Regarding hardware the machines are identical. Intel Xeon E5-1650 v3<br>
>> >>> Hexa-Core; 64 GB DDR4 ECC; Dell PERC H330 8 Port SAS/SATA 12 GBit/s<br>
>> >>> RAID Controller; operating system running on a raid1, then 4 disks<br>
>> >>> (JBOD) as bricks.<br>
>> >>><br>
>> >>> OK, I ran perf for a few seconds.<br>
>> >>><br>
>> >>> ------------------------<br>
>> >>> perf record --call-graph=dwarf -p 7897 -o<br>
>> >>> /tmp/perf.gluster11.bricksdd1.out<br>
>> >>> ^C[ perf record: Woken up 378 times to write data ]<br>
>> >>> Warning:<br>
>> >>> Processed 83690 events and lost 96 chunks!<br>
>> >>><br>
>> >>> Check IO/CPU overload!<br>
>> >>><br>
>> >>> [ perf record: Captured and wrote 423.087 MB<br>
>> >>> /tmp/perf.gluster11.bricksdd1.out (51744 samples) ]<br>
>> >>> ------------------------<br>
>> >>><br>
>> >>> I copied a couple of lines:<br>
>> >>><br>
>> >>> + 8.10% 0.00% glusteriotwr22 [unknown] [k]<br>
>> >>> 0xffffffffffffffff<br>
>> >>> + 8.10% 0.00% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> iterate_dir<br>
>> >>> + 8.10% 0.00% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> sys_getdents<br>
>> >>> + 8.10% 0.00% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> filldir<br>
>> >>> + 8.10% 0.00% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> do_syscall_64<br>
>> >>> + 8.10% 0.00% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> entry_SYSCALL_64_after_swapgs<br>
>> >>> + 8.10% 0.00% glusteriotwr22 <a href="http://libc-2.24.so" rel="noreferrer" target="_blank">libc-2.24.so</a> [.]<br>
>> >>> 0xffff80c60db8ef2b<br>
>> >>> + 8.10% 0.00% glusteriotwr22 <a href="http://libc-2.24.so" rel="noreferrer" target="_blank">libc-2.24.so</a> [.]<br>
>> >>> readdir64<br>
>> >>> + 8.10% 0.00% glusteriotwr22 index.so [.]<br>
>> >>> 0xffff80c6192a1888<br>
>> >>> + 8.10% 0.04% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> ext4_htree_fill_tree<br>
>> >>> + 8.10% 0.00% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> ext4_readdir<br>
>> >>> + 7.95% 0.12% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> htree_dirblock_to_tree<br>
>> >>> + 5.78% 0.96% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> __ext4_read_dirblock<br>
>> >>> + 4.80% 0.02% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> ext4_bread<br>
>> >>> + 4.78% 0.04% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> ext4_getblk<br>
>> >>> + 4.72% 0.02% glusteriotwr22 [kernel.kallsyms] [k]<br>
>> >>> __getblk_gfp<br>
>> >>> + 4.57% 0.00% glusteriotwr3 [unknown] [k]<br>
>> >>> 0xffffffffffffffff<br>
>> >>> + 4.55% 0.00% glusteriotwr3 [kernel.kallsyms] [k]<br>
>> >>> do_syscall_64<br>
>> >>><br>
>> >>> Do you need different or additional information?<br>
>> >><br>
>> >><br>
>> >> This looks like there are a lot of readdirs going on, which is<br>
>> >> different from what we observed earlier. How many seconds did you run<br>
>> >> perf record for? Would it be possible for you to do this for some more<br>
>> >> time, maybe a minute? Just want to be sure that the data actually<br>
>> >> represents what we are observing.<br>
>> ><br>
>> ><br>
>> > I found one code path which does readdirs on lookup. Could you give me<br>
>> > the output of 'ls -l <brick-path>/.glusterfs/indices/xattrop' on all<br>
>> > three bricks? It can probably give a correlation to see if it is indeed<br>
>> > the same issue or not.<br>
>> ><br>
>> >><br>
>> >><br>
>> >>><br>
>> >>><br>
>> >>><br>
>> >>> 2018-08-20 11:20 GMT+02:00 Pranith Kumar Karampuri<br>
>> >>> <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>>:<br>
>> >>> > Even the brick which doesn't have high CPU seems to have the same<br>
>> >>> > number of lookups, so that's not it.<br>
>> >>> > Is there any difference at all between the machines with high CPU<br>
>> >>> > vs low CPU?<br>
>> >>> > I think the only other thing I would do is to install the perf tools<br>
>> >>> > and try to figure out the call-graph which is leading to so much CPU<br>
>> >>> > usage.<br>
>> >>> ><br>
>> >>> > This affects the performance of the brick, I think, so you may have<br>
>> >>> > to do it quickly and for a short time.<br>
>> >>> ><br>
>> >>> > perf record --call-graph=dwarf -p <brick-pid> -o </path/to/output><br>
>> >>> > then<br>
>> >>> > perf report -i </path/to/output/given/in/the/previous/command><br>
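If recording quickly by hand is awkward, the capture window can be bounded with coreutils timeout (a hedged sketch; <brick-pid>, the 30-second window, and the output path are placeholders, not values from this thread):

```shell
# Sketch: cap the perf recording window so the output file and the impact
# on the brick process stay bounded (<brick-pid> is a placeholder).
timeout 30 perf record --call-graph=dwarf -p <brick-pid> -o /tmp/perf.brick.out
# timeout sends SIGTERM after 30 s; perf record writes out its data on exit.
perf report -i /tmp/perf.brick.out
```

timeout exits with status 124 when it had to stop the command itself, which is expected here.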
>> >>> ><br>
>> >>> ><br>
>> >>> > On Mon, Aug 20, 2018 at 2:40 PM Hu Bert <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>><br>
>> >>> > wrote:<br>
>> >>> >><br>
>> >>> >> gluster volume heal shared info | grep -i number<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >> Number of entries: 0<br>
>> >>> >><br>
>> >>> >> Looks good to me.<br>
>> >>> >><br>
>> >>> >><br>
>> >>> >> 2018-08-20 10:51 GMT+02:00 Pranith Kumar Karampuri<br>
>> >>> >> <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>>:<br>
>> >>> >> > There are a lot of Lookup operations in the system, but I am not<br>
>> >>> >> > able to find out why. Could you check the output of<br>
>> >>> >> ><br>
>> >>> >> > # gluster volume heal <volname> info | grep -i number<br>
>> >>> >> ><br>
>> >>> >> > It should print all zeros.<br>
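To check at a glance that every line really is zero, the grep can be replaced with a small awk filter (a hedged one-liner; 'shared' is the volume name used elsewhere in this thread):

```shell
# Sketch: print "all zeros" only if no brick reports pending heal entries.
gluster volume heal shared info | awk '/Number of entries:/ && $NF != 0 {bad++}
  END {print (bad ? bad " brick(s) with pending heals" : "all zeros")}'
```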
>> >>> >> ><br>
>> >>> >> > On Fri, Aug 17, 2018 at 1:49 PM Hu Bert <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>><br>
>> >>> >> > wrote:<br>
>> >>> >> >><br>
>> >>> >> >> I don't know what exactly you mean by workload, but the main<br>
>> >>> >> >> function of the volume is storing (incl. writing and reading)<br>
>> >>> >> >> images (from hundreds of bytes up to 30 MB, overall ~7 TB). The<br>
>> >>> >> >> work is done by Apache Tomcat servers writing to / reading from<br>
>> >>> >> >> the volume. Besides images there are some text files and binaries<br>
>> >>> >> >> that are stored on the volume and get updated regularly (every x<br>
>> >>> >> >> hours); we'll try to migrate the latter to local storage asap.<br>
>> >>> >> >><br>
>> >>> >> >> Interestingly it's only one process (and its threads) of the<br>
>> >>> >> >> same<br>
>> >>> >> >> brick on 2 of the gluster servers that consumes the CPU.<br>
>> >>> >> >><br>
>> >>> >> >> gluster11: bricksdd1; not healed; full CPU<br>
>> >>> >> >> gluster12: bricksdd1; got healed; normal CPU<br>
>> >>> >> >> gluster13: bricksdd1; got healed; full CPU<br>
>> >>> >> >><br>
>> >>> >> >> Besides: performance during the heal (e.g. gluster12, bricksdd1)<br>
>> >>> >> >> was way better than it is now. I've attached 2 PNGs showing the<br>
>> >>> >> >> differing CPU usage of last week, before/after the heal.<br>
>> >>> >> >><br>
>> >>> >> >> 2018-08-17 9:30 GMT+02:00 Pranith Kumar Karampuri<br>
>> >>> >> >> <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>>:<br>
>> >>> >> >> > There seem to be too many lookup operations compared to any<br>
>> >>> >> >> > other operation. What is the workload on the volume?<br>
>> >>> >> >> ><br>
>> >>> >> >> > On Fri, Aug 17, 2018 at 12:47 PM Hu Bert<br>
>> >>> >> >> > <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>><br>
>> >>> >> >> > wrote:<br>
>> >>> >> >> >><br>
>> >>> >> >> >> I hope I got it right.<br>
>> >>> >> >> >><br>
>> >>> >> >> >> gluster volume profile shared start<br>
>> >>> >> >> >> wait 10 minutes<br>
>> >>> >> >> >> gluster volume profile shared info<br>
>> >>> >> >> >> gluster volume profile shared stop<br>
>> >>> >> >> >><br>
>> >>> >> >> >> If that's OK, I've attached the output of the info command.<br>
>> >>> >> >> >><br>
>> >>> >> >> >><br>
>> >>> >> >> >> 2018-08-17 8:31 GMT+02:00 Pranith Kumar Karampuri<br>
>> >>> >> >> >> <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>>:<br>
>> >>> >> >> >> > Please also run volume profile for around 10 minutes when<br>
>> >>> >> >> >> > the CPU% is high.<br>
>> >>> >> >> >> ><br>
>> >>> >> >> >> > On Fri, Aug 17, 2018 at 11:56 AM Pranith Kumar Karampuri<br>
>> >>> >> >> >> > <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>> wrote:<br>
>> >>> >> >> >> >><br>
>> >>> >> >> >> >> As per the output, all io-threads are using a lot of CPU.<br>
>> >>> >> >> >> >> It is better to check the volume profile to see what is<br>
>> >>> >> >> >> >> leading to so much work for the io-threads. Please follow<br>
>> >>> >> >> >> >> the documentation at<br>
>> >>> >> >> >> >><br>
>> >>> >> >> >> >> <a href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/" rel="noreferrer" target="_blank">https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/</a><br>
>> >>> >> >> >> >><br>
>> >>> >> >> >> >> section: "Running GlusterFS Volume Profile Command"<br>
>> >>> >> >> >> >><br>
>> >>> >> >> >> >> and attach the output of "gluster volume profile info".<br>
>> >>> >> >> >> >><br>
>> >>> >> >> >> >> On Fri, Aug 17, 2018 at 11:24 AM Hu Bert<br>
>> >>> >> >> >> >> <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>><br>
>> >>> >> >> >> >> wrote:<br>
>> >>> >> >> >> >>><br>
>> >>> >> >> >> >>> Good morning,<br>
>> >>> >> >> >> >>><br>
>> >>> >> >> >> >>> I ran the command during 100% CPU usage and attached the<br>
>> >>> >> >> >> >>> file.<br>
>> >>> >> >> >> >>> Hopefully it helps.<br>
>> >>> >> >> >> >>><br>
>> >>> >> >> >> >>> 2018-08-17 7:33 GMT+02:00 Pranith Kumar Karampuri<br>
>> >>> >> >> >> >>> <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>>:<br>
>> >>> >> >> >> >>> > Could you run the following on one of the nodes where<br>
>> >>> >> >> >> >>> > you are observing high CPU usage and attach the<br>
>> >>> >> >> >> >>> > resulting file to this thread? We can then find which<br>
>> >>> >> >> >> >>> > threads/processes are leading to the high usage. Do this<br>
>> >>> >> >> >> >>> > for, say, 10 minutes when you see the ~100% CPU.<br>
>> >>> >> >> >> >>> ><br>
>> >>> >> >> >> >>> > top -bHd 5 > /tmp/top.${HOSTNAME}.txt<br>
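Once the file is captured, the busiest threads can be pulled out of it with a plain sort (a hedged sketch; it assumes the default procps "top -bH" batch layout, where field 9 is %CPU):

```shell
# Sketch: list the 20 threads with the highest %CPU from the captured file.
# Field 9 is %CPU in the default procps batch layout (an assumption here).
sort -k9 -rn /tmp/top.${HOSTNAME}.txt | head -20
```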
>> >>> >> >> >> >>> ><br>
>> >>> >> >> >> >>> > On Wed, Aug 15, 2018 at 2:37 PM Hu Bert<br>
>> >>> >> >> >> >>> > <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>><br>
>> >>> >> >> >> >>> > wrote:<br>
>> >>> >> >> >> >>> >><br>
>> >>> >> >> >> >>> >> Hello again :-)<br>
>> >>> >> >> >> >>> >><br>
>> >>> >> >> >> >>> >> The self-heal must have finished, as there are no log<br>
>> >>> >> >> >> >>> >> entries in the glustershd.log files anymore. According<br>
>> >>> >> >> >> >>> >> to Munin, disk latency (average I/O wait) has gone down<br>
>> >>> >> >> >> >>> >> to 100 ms, and disk utilization has gone down to ~60%,<br>
>> >>> >> >> >> >>> >> both on all servers and hard disks.<br>
>> >>> >> >> >> >>> >><br>
>> >>> >> >> >> >>> >> But now the system load on 2 servers (which were in<br>
>> >>> >> >> >> >>> >> the good state) fluctuates between 60 and 100; the<br>
>> >>> >> >> >> >>> >> server with the formerly failed disk has a load of<br>
>> >>> >> >> >> >>> >> 20-30. I've uploaded some Munin graphs of the CPU<br>
>> >>> >> >> >> >>> >> usage:<br>
>> >>> >> >> >> >>> >><br>
>> >>> >> >> >> >>> >> <a href="https://abload.de/img/gluster11_cpu31d3a.png" rel="noreferrer" target="_blank">https://abload.de/img/gluster11_cpu31d3a.png</a><br>
>> >>> >> >> >> >>> >> <a href="https://abload.de/img/gluster12_cpu8sem7.png" rel="noreferrer" target="_blank">https://abload.de/img/gluster12_cpu8sem7.png</a><br>
>> >>> >> >> >> >>> >> <a href="https://abload.de/img/gluster13_cpud7eni.png" rel="noreferrer" target="_blank">https://abload.de/img/gluster13_cpud7eni.png</a><br>
>> >>> >> >> >> >>> >><br>
>> >>> >> >> >> >>> >> This can't be normal: 2 of the servers are under heavy<br>
>> >>> >> >> >> >>> >> load and one not so much. Does anyone have an<br>
>> >>> >> >> >> >>> >> explanation for this strange behaviour?<br>
>> >>> >> >> >> >>> >><br>
>> >>> >> >> >> >>> >><br>
>> >>> >> >> >> >>> >> Thx :-)<br>
>> >>> >> >> >> >>> >><br>
>> >>> >> >> >> >>> >> 2018-08-14 9:37 GMT+02:00 Hu Bert<br>
>> >>> >> >> >> >>> >> <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>>:<br>
>> >>> >> >> >> >>> >> > Hi there,<br>
>> >>> >> >> >> >>> >> ><br>
>> >>> >> >> >> >>> >> > well, it seems the heal has finally finished.<br>
>> >>> >> >> >> >>> >> > Couldn't see/find any related log message; is there<br>
>> >>> >> >> >> >>> >> > such a message in a specific log file?<br>
>> >>> >> >> >> >>> >> ><br>
>> >>> >> >> >> >>> >> > But I see the same behaviour as when the last heal<br>
>> >>> >> >> >> >>> >> > finished: all CPU cores are consumed by brick<br>
>> >>> >> >> >> >>> >> > processes; not only by the formerly failed bricksdd1,<br>
>> >>> >> >> >> >>> >> > but by all 4 brick processes (and their threads). Load<br>
>> >>> >> >> >> >>> >> > goes up to > 100 on the 2 servers with the non-failed<br>
>> >>> >> >> >> >>> >> > brick, and glustershd.log gets filled with a lot of<br>
>> >>> >> >> >> >>> >> > entries. Load on the server with the then-failed brick<br>
>> >>> >> >> >> >>> >> > is not that high, but still ~60.<br>
>> >>> >> >> >> >>> >> ><br>
>> >>> >> >> >> >>> >> > Is this behaviour normal? Is there some post-heal<br>
>> >>> >> >> >> >>> >> > activity after a heal has finished?<br>
>> >>> >> >> >> >>> >> ><br>
>> >>> >> >> >> >>> >> > thx in advance :-)<br>
>> >>> >> >> >> >>> ><br>
>> >>> >> >> >> >>> ><br>
>> >>> >> >> >> >>> ><br>
>> >>> >> >> >> >>> > --<br>
>> >>> >> >> >> >>> > Pranith<br>
>> >>> >> >> >> >><br>
>> >>> >> >> >> >><br>
>> >>> >> >> >> >><br>
>> >>> >> >> >> >> --<br>
>> >>> >> >> >> >> Pranith<br>
>> >>> >> >> >> ><br>
>> >>> >> >> >> ><br>
>> >>> >> >> >> ><br>
>> >>> >> >> >> > --<br>
>> >>> >> >> >> > Pranith<br>
>> >>> >> >> ><br>
>> >>> >> >> ><br>
>> >>> >> >> ><br>
>> >>> >> >> > --<br>
>> >>> >> >> > Pranith<br>
>> >>> >> ><br>
>> >>> >> ><br>
>> >>> >> ><br>
>> >>> >> > --<br>
>> >>> >> > Pranith<br>
>> >>> ><br>
>> >>> ><br>
>> >>> ><br>
>> >>> > --<br>
>> >>> > Pranith<br>
>> >><br>
>> >><br>
>> >><br>
>> >> --<br>
>> >> Pranith<br>
>> ><br>
>> ><br>
>> ><br>
>> > --<br>
>> > Pranith<br>
><br>
><br>
><br>
> --<br>
> Pranith<br>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div></div>