[Bugs] [Bug 1467614] Gluster read/write performance improvements on NVMe backend
bugzilla at redhat.com
Fri Oct 27 08:27:32 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1467614
--- Comment #37 from Manoj Pillai <mpillai at redhat.com> ---
Switched to a 32g ramdisk (the server has 56g) so that I can have longer runs
with a larger data set of 24g, instead of the 12g in comment #35.
Repeated the 4-client, single-brick run (io-thread-count=8, event-threads=4):
read: IOPS=59.0k, BW=234MiB/s (246MB/s)(6144MiB/26220msec)
[IOPS dropped slightly with the longer run.]
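For reference, a sketch of how a run like this can be set up and driven. The
volume name, host name, mount paths and exact fio parameters below are my
assumptions, not taken from this bug; the 4k block size is only inferred from
the IOPS/BW ratio above (59.0k x 4KiB is roughly 234MiB/s), and 6g per client
across 4 clients gives the 24g total data set:
<quote>
# On the server: back the brick with a 32g tmpfs ramdisk
# (paths and volume name are placeholders)
mkdir -p /mnt/rdbrick
mount -t tmpfs -o size=32g tmpfs /mnt/rdbrick
gluster volume create perfvol server1:/mnt/rdbrick/brick force
gluster volume set perfvol performance.io-thread-count 8
gluster volume set perfvol server.event-threads 4
gluster volume set perfvol client.event-threads 4
gluster volume start perfvol

# On each of the 4 clients (volume fuse-mounted at /mnt/perfvol):
fio --name=randread --directory=/mnt/perfvol --rw=randread \
    --bs=4k --size=6g --ioengine=libaio --iodepth=16
</quote>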
Output of "top -bH -d 10" during randread looks like this:
<quote>
  PID USER     PR NI    VIRT   RES  SHR S %CPU %MEM   TIME+ COMMAND
 2500 root     20  0 2199708 32556 4968 R 97.6  0.1 4:35.79 glusterrpcs+
 7040 root     20  0 2199708 32556 4968 S 74.3  0.1 0:19.62 glusteriotw+
 7036 root     20  0 2199708 32556 4968 S 73.5  0.1 0:24.13 glusteriotw+
 7039 root     20  0 2199708 32556 4968 S 73.0  0.1 0:23.73 glusteriotw+
 6854 root     20  0 2199708 32556 4968 S 72.7  0.1 0:35.91 glusteriotw+
 7035 root     20  0 2199708 32556 4968 S 72.7  0.1 0:23.99 glusteriotw+
 7038 root     20  0 2199708 32556 4968 R 72.5  0.1 0:23.75 glusteriotw+
 7034 root     20  0 2199708 32556 4968 S 72.3  0.1 0:23.60 glusteriotw+
 7037 root     20  0 2199708 32556 4968 R 72.3  0.1 0:23.42 glusteriotw+
 2510 root     20  0 2199708 32556 4968 S 34.0  0.1 1:28.11 glusterposi+
</quote>
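Note that top truncates the COMMAND column, so the thread names above are cut
off. If needed, the full per-thread names can be read back with ps or from
/proc (pgrep usage below assumes the brick process is glusterfsd):
<quote>
# Per-thread CPU usage with full thread names
ps -L -o tid,pcpu,comm -p $(pgrep -x glusterfsd)

# Or directly from /proc, one comm entry per thread
cat /proc/$(pgrep -x glusterfsd)/task/*/comm
</quote>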
pstack on the thread showing 97+% CPU utilization:
<quote>
# pstack 2500
Thread 1 (process 2500):
#0  0x00007f0301398945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f03022f733b in rpcsvc_request_handler (arg=0x7f02f003f530) at rpcsvc.c:1881
#2  0x00007f0301394e25 in start_thread () from /lib64/libpthread.so.0
#3  0x00007f0300c6134d in clone () from /lib64/libc.so.6
</quote>
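A single pstack snapshot can be misleading for a thread at 97% CPU: it may
just have been caught between wakeups on the condition variable in
rpcsvc_request_handler. A sampling profile of that thread would show where the
cycles actually go; a sketch, assuming perf is available on the server:
<quote>
# Sample thread 2500 for 10 seconds, then show the hot call chains
perf record -t 2500 -g -- sleep 10
perf report --stdio | head -40
</quote>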