[Bugs] [Bug 1764208] cgroup control-cpu-load.sh script not working
bugzilla at redhat.com
Tue Oct 22 13:17:35 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1764208
--- Comment #1 from Mohit Agrawal <moagrawa at redhat.com> ---
The cgroup (CPU/memory) restrictions are not working for gluster processes.
Version:
==================
Master
How reproducible:
===========
always
[root at rhs-gp-srv2 ~]# top -n 1 -b|egrep 'RES|gluster'
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6722 root 20 0 19.9g 3.8g 3428 S 1250 8.1 229868:38 glusterfsd
7421 root 20 0 13.4g 4.7g 4416 S 30.0 9.9 5002:00 glusterfs
6173 root 20 0 596100 23596 2548 S 0.0 0.0 0:20.29 glusterd
7410 root 20 0 1453872 16424 2220 S 0.0 0.0 6:18.91 glusterfsd
[root at rhs-gp-srv2 ~]#
[root at rhs-gp-srv2 ~]# pwd
/root
[root at rhs-gp-srv2 ~]# cd /usr/share/glusterfs/scripts
[root at rhs-gp-srv2 scripts]# ./control-cpu-load.sh
Enter gluster daemon pid for which you want to control CPU.
glusterfsd
Entered daemon_pid is not numeric so Rerun the script.
[root at rhs-gp-srv2 scripts]# ./control-cpu-load.sh
Enter gluster daemon pid for which you want to control CPU.
6722
If you want to continue the script to attach 6722 with new cgroup_gluster_6722
cgroup Press (y/n)?n
no
[root at rhs-gp-srv2 scripts]# ./control-cpu-load.sh
Enter gluster daemon pid for which you want to control CPU.
6722
If you want to continue the script to attach 6722 with new cgroup_gluster_6722
cgroup Press (y/n)?y
yes
Creating child cgroup directory 'cgroup_gluster_6722 cgroup' for glusterd.service.
Enter quota value in range [10,100]:
25
Entered quota value is 25
Setting 25000 to cpu.cfs_quota_us for gluster_cgroup.
Tasks are attached successfully specific to 6722 to cgroup_gluster_6722.
[root at rhs-gp-srv2 scripts]# top -n 1 -b|egrep 'RES|gluster'
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6722 root 20 0 19.9g 3.8g 3392 S 2845 8.1 229895:21 glusterfsd
7421 root 20 0 13.4g 4.9g 4416 S 30.0 10.5 5002:22 glusterfs
6173 root 20 0 596100 23448 2400 S 0.0 0.0 0:20.29 glusterd
7410 root 20 0 1453872 16424 2220 S 0.0 0.0 6:18.91 glusterfsd
[root at rhs-gp-srv2 scripts]# top -n 1 -b|egrep 'RES|gluster'
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6722 root 20 0 19.9g 3.8g 3392 S 2640 8.1 229896:07 glusterfsd
7421 root 20 0 13.4g 5.0g 4416 S 25.0 10.6 5002:23 glusterfs
6173 root 20 0 596100 23448 2400 S 0.0 0.0 0:20.29 glusterd
7410 root 20 0 1453872 16424 2220 S 0.0 0.0 6:18.91 glusterfsd
[root at rhs-gp-srv2 scripts]# top -n 1 -b|egrep 'RES|gluster'
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6722 root 20 0 19.9g 3.8g 3392 S 2479 8.1 229896:38 glusterfsd
7421 root 20 0 13.4g 5.0g 4416 S 31.6 10.6 5002:23 glusterfs
6173 root 20 0 596100 23448 2400 S 0.0 0.0 0:20.29 glusterd
7410 root 20 0 1453872 16424 2220 S 0.0 0.0 6:18.91 glusterfsd
Steps to Reproduce:
=================
1. Have a setup where the CPU consumption of gluster processes is high.
2. Run /usr/share/glusterfs/scripts/control-cpu-load.sh.
3. Enter the PID of the gluster process and the limit under which the CPU must be consumed.
RCA:
As per the current code, the script moves only those threads of a gluster process whose name contains the substring "gluster". However, all gluster threads are named with a "glfs" prefix, so the script is not able to move the threads into the newly created cgroup, and the cgroup restrictions do not take effect:
if ps -T -p ${daemon_pid} | grep gluster > /dev/null; then
  for thid in `ps -T -p ${daemon_pid} | grep gluster | awk -F " " '{print $2}'`;
  do
    echo ${thid} > ${LOC}/${cgroup_name}/tasks ;
  done
  if cat /proc/${daemon_pid}/cgroup | grep -w ${cgroup_name} > /dev/null; then
    echo "Tasks are attached successfully specific to ${daemon_pid} to ${cgroup_name}."
  else
    echo "Tasks are not attached successfully."
  fi
fi
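The mismatch is easy to see on sample `ps -T` output. The sketch below uses simulated thread names (illustrative only, not a live capture): `grep gluster` matches only the main thread, while filtering out just the SPID header line keeps every TID.

```shell
# Simulated `ps -T -p <pid>` output: worker threads carry a "glfs" prefix,
# only the main thread is named "glusterfsd" (sample data for illustration).
ps_output='  PID  SPID TTY          TIME CMD
 6722  6722 ?        01:02:03 glusterfsd
 6722  6730 ?        00:10:00 glfs_timer
 6722  6731 ?        00:20:00 glfs_epoll000
 6722  6732 ?        00:15:00 glfs_iotwr001'

# Old filter: matches only the main thread, so the glfs_* worker threads
# are never written into the cgroup tasks file.
old_tids=$(printf '%s\n' "$ps_output" | grep gluster | awk '{print $2}')

# Fixed filter: drop only the header line and take every TID.
new_tids=$(printf '%s\n' "$ps_output" | grep -v SPID | awk '{print $2}')

echo "old filter TIDs: $old_tids"
echo "new filter TIDs:"
echo "$new_tids"
```

With the old filter only one TID (the main thread) is captured; the fixed filter captures all four, which is why the quota then applies to the whole process.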
To fix this, the for-loop condition in the script needs to be changed as below:
if ps -T -p ${daemon_pid} | grep gluster > /dev/null; then
  for thid in `ps -T -p ${daemon_pid} | grep -v SPID | awk -F " " '{print $2}'`;
  do
    echo ${thid} > ${LOC}/${cgroup_name}/tasks ;
  done
  if cat /proc/${daemon_pid}/cgroup | grep -w ${cgroup_name} > /dev/null; then
    echo "Tasks are attached successfully specific to ${daemon_pid} to ${cgroup_name}."
  else
    echo "Tasks are not attached successfully."
  fi
fi
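Once the fixed loop has run, a quick sanity check is to confirm that every TID reported by `ps -T` appears in the cgroup's tasks file. A minimal sketch of that check, simulated here with temp files (on a real system, `tasks_file` would be the script's `${LOC}/${cgroup_name}/tasks` and the TID list would come from `ps -T -p ${daemon_pid} | grep -v SPID`):

```shell
# Simulate the tasks file the fixed loop would have written, plus the
# TID list `ps -T` would report (same set of threads).
tmpdir=$(mktemp -d)
tasks_file=$tmpdir/tasks
printf '6722\n6730\n6731\n6732\n' > "$tasks_file"   # written by the fixed loop
printf '6722\n6730\n6731\n6732\n' > "$tmpdir/tids"  # TIDs from ps -T (simulated)

# Count threads that are NOT in the cgroup; after the fix this should be 0.
missing=0
while read -r tid; do
    grep -qx "$tid" "$tasks_file" || missing=$((missing + 1))
done < "$tmpdir/tids"

echo "threads missing from cgroup: $missing"
rm -rf "$tmpdir"
```

With the old `grep gluster` filter the same check would report the three glfs_* threads as missing, which matches the unthrottled %CPU seen in the top output above.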