[Bugs] [Bug 1349953] thread CPU saturation limiting throughput on write workloads
bugzilla at redhat.com
Wed Jun 29 06:23:15 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1349953
--- Comment #7 from Manoj Pillai <mpillai at redhat.com> ---
(In reply to Pranith Kumar K from comment #6)
> >
> > The single hot thread is gone. Overall CPU utilization has gone up quite a
> > bit (idle is down to ~54%), and that may be a concern for some deployments.
>
> CPU utilization will go up because more threads are doing the encoding now.
> Do you suggest we add a way to throttle that? I have also CCed Xavi on the
> bug to get his input.
>
Right, CPU utilization is expected to go up as you scale to higher throughput.
I'm just saying that client-side CPU utilization can be a concern depending on
how CPU-hungry the applications are -- client-side CPU is primarily meant for
them.
Do we need the ability to cap CPU utilization from within gluster (rather than
relying on something like cgroups)? Take a look at a run with client
event-threads set to 2, instead of 4 as in the runs above:
iozone output:
Children see throughput for 48 initial writers = 2778952.81 kB/sec
[lower throughput compared to 3.7 GB/s with client event-threads=4]
top output:
top - 00:26:56 up 1 day, 23:57, 5 users, load average: 1.38, 0.45, 0.38
Threads: 312 total, 3 running, 309 sleeping, 0 stopped, 0 zombie
%Cpu(s): 15.8 us, 9.8 sy, 0.0 ni, 73.6 id, 0.0 wa, 0.0 hi, 0.8 si, 0.0 st
KiB Mem : 65728904 total, 40787784 free, 985512 used, 23955608 buff/cache
KiB Swap: 32964604 total, 32964604 free, 0 used. 64218968 avail Mem
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6003 root      20   0  954872 240892   3708 R  99.9  0.4   0:47.30 glusterfs
 6005 root      20   0  954872 240892   3708 R  99.9  0.4   0:47.21 glusterfs
 6008 root      20   0  954872 240892   3708 S  48.5  0.4   0:22.06 glusterfs
 6199 root      20   0  954872 240892   3708 S  13.5  0.4   0:06.77 glusterfs
 6200 root      20   0  954872 240892   3708 S  13.5  0.4   0:06.41 glusterfs
 6004 root      20   0  954872 240892   3708 S  13.4  0.4   0:07.30 glusterfs
 6126 root      20   0   53752  19488    816 S   5.8  0.0   0:02.61 iozone
[...]
[lower overall CPU utilization as well, compared to event-threads=4]
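(For reference, the tunables involved here would be set roughly as below;
the volume name "testvol" and the glusterfs client PID are placeholders,
and the cgroup commands are only a sketch of the alternative mentioned
above, not something from these runs.)

  # client event-threads: 2 in this run, 4 in the earlier runs
  gluster volume set testvol client.event-threads 2
  # client-side io-threads, enabled for these runs
  gluster volume set testvol performance.client-io-threads on

  # cgroups alternative: cap the fuse client to ~2 CPUs with a
  # cgroup-v1 CPU quota instead of capping it from within gluster
  mkdir /sys/fs/cgroup/cpu/glusterfs-client
  echo 100000 > /sys/fs/cgroup/cpu/glusterfs-client/cpu.cfs_period_us
  echo 200000 > /sys/fs/cgroup/cpu/glusterfs-client/cpu.cfs_quota_us
  echo $GLUSTERFS_PID > /sys/fs/cgroup/cpu/glusterfs-client/cgroup.procs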
The two top threads by CPU utilization (99.9% each) appear to be the epoll
threads, based on pstack output. The third is the fuse thread.
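(To reproduce the thread identification: a rough sketch, assuming the
glusterfs client PID is known; the hot TIDs from the top output above,
e.g. 6003 and 6005, should match LWP numbers in the pstack output.)

  # per-thread CPU view of the fuse client
  top -b -H -n 1 -p $GLUSTERFS_PID | head -20
  # user-space stacks for all threads; the busy TIDs show up as LWPs
  # sitting in epoll/event-handling frames
  pstack $GLUSTERFS_PID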
In comment #0, IIRC, the epoll threads were not doing most of the encoding.
With client-io-threads on, the epoll threads seem to be doing most of the
encoding and using most of the CPU. Is that true and expected? In any case,
by varying the number of client event-threads I seem to be able to trade
throughput for lower CPU utilization.