[Bugs] [Bug 1665055] New: kernel-writeback-cache option does not seem to be working
bugzilla at redhat.com
Thu Jan 10 12:07:09 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1665055
Bug ID: 1665055
Summary: kernel-writeback-cache option does not seem to be working
Product: GlusterFS
Version: 5
Hardware: x86_64
OS: Linux
Status: NEW
Component: fuse
Severity: high
Assignee: bugs at gluster.org
Reporter: mpillai at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
https://github.com/gluster/glusterfs/issues/435 adds support for writeback
cache with fuse.
However, it does not seem to be working as I would expect: in write tests, I
do not see any dirty data accumulating in the page cache.
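A lightweight way to watch for dirty-page buildup during the write test,
independent of sar (my own suggestion, not something run for this report), is
to poll /proc/meminfo on the client while fio is running:

# sample the kernel's Dirty/Writeback counters once per second
watch -n 1 "grep -E '^(Dirty|Writeback):' /proc/meminfo"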
Version-Release number of selected component (if applicable):
glusterfs-*5.2-1.el7.x86_64
kernel-3.10.0-957.el7.x86_64 (RHEL 7.6)
How reproducible:
Consistently
Steps to Reproduce:
1. create a single-brick distribute volume with default settings.
2. mount the gluster volume with the kernel-writeback-cache option. Output of
ps confirming that the option was passed to the fuse client (see also the
sanity-check sketch after these steps):
/usr/sbin/glusterfs --kernel-writeback-cache=yes --process-name fuse
--volfile-server=172.16.70.128 --volfile-id=/perfvol /mnt/glustervol
3. run an fio write test without fsync options:
fio --name=initialwrite --ioengine=sync --rw=write \
--direct=0 --create_on_open=1 --bs=128k --directory=/mnt/glustervol/ \
--filename_format=f.\$jobnum.\$filenum --filesize=16g \
--size=16g --numjobs=4
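As a quick sanity check around steps 2 and 3 (a sketch of my own, using the
mount and commands already shown in this report), the option can be confirmed
on the client and memory sampled while the fio job runs:

# confirm the fuse client process really carries the option
ps aux | grep '[k]ernel-writeback-cache'
# in a second terminal, sample memory/dirty-page counters during the fio run
sar -r 5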
Actual results:
No dirty data is seen accumulating in the page cache; kbdirty stays near zero
throughout the run:
<excerpt>
# sar -r 5
Linux 3.10.0-957.el7.x86_64 (c09-h08-r630.rdu.openstack.engineering.redhat.com)  01/10/2019  _x86_64_  (56 CPU)
11:50:23 AM  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
11:50:28 AM  126637320    4972364      3.78          0   2984068    363436     0.26   1884000   1156308       32
11:50:33 AM  126637320    4972364      3.78          0   2984068    363436     0.26   1884000   1156308       32
11:50:38 AM  126637320    4972364      3.78          0   2984068    363436     0.26   1884000   1156308       32
11:50:43 AM  125801556    5808128      4.41          0   3808880    937648     0.67   1896732   1980992        0
11:50:48 AM  120168932   11440752      8.69          0   9428756    937648     0.67   1896912   7599108        4
11:50:53 AM  114769368   16840316     12.80          0  14815316    937648     0.67   1896912  12986512        4
11:50:58 AM  109458768   22150916     16.83          0  20116092    937648     0.67   1897396  18287780        4
11:51:03 AM  104207304   27402380     20.82          0  25364236    937648     0.67   1897424  23535716        0
11:51:08 AM   98995764   32613920     24.78          0  30564848    937648     0.67   1897408  28735148        0
11:51:13 AM   93582944   38026740     28.89          0  35965720    937648     0.67   1897408  34136384        0
11:51:18 AM   88071656   43538028     33.08          0  41463728    937648     0.67   1897408  39634616        0
11:51:23 AM   82411904   49197780     37.38          0  47106212    937648     0.67   1897408  45275676        0
11:51:28 AM   76742608   54867076     41.69          0  52761136    937648     0.67   1897408  50932124        0
11:51:33 AM   71736380   59873304     45.49          0  57754148    937648     0.67   1897408  55924636        0
11:51:38 AM   66740952   64868732     49.29          0  62738164    937648     0.67   1897408  60908384        0
11:51:43 AM   61620148   69989536     53.18          0  67843088    937648     0.67   1897408  66014100        0
11:51:48 AM   59375388   72234296     54.89          0  70091108    363792     0.26   1893552  68261796        0
11:51:53 AM   59375388   72234296     54.89          0  70091108    363792     0.26   1893552  68261796        0
</excerpt>
Expected results:
Dirty data building up in the page cache during the write test, visible as a
growing kbdirty value in the sar output.
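One additional check I would suggest (not performed for this report) is to
confirm that the client's writeback tunables are not flushing dirty pages so
aggressively that buildup would be invisible:

# dirty-page limits and expiry times that bound how much dirty data can accumulate
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs vm.dirty_writeback_centisecs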
Additional info:
For comparison, the same test was run on an XFS file system on the server (the
file system that would serve as the brick for the gluster volume); a sketch of
the equivalent command follows the excerpt below. (The hardware spec of the
server is not the same as that of the client; for example, it has more RAM.)
In this case, buildup of dirty data is clearly visible:
<excerpt>
# sar -r 5
Linux 3.10.0-957.el7.x86_64 (c06-h05-6048r.rdu.openstack.engineering.redhat.com)  01/10/2019  _x86_64_  (56 CPU)
11:46:33 AM  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
11:46:38 AM  261061052    2794896      1.06       1124    244356    326432     0.12    132284    162168       40
11:46:43 AM  261061052    2794896      1.06       1124    244356    326432     0.12    132284    162168       40
11:46:48 AM  261061052    2794896      1.06       1124    244356    326432     0.12    132284    162168       40
11:46:53 AM  261061052    2794896      1.06       1124    244356    326432     0.12    132284    162168       40
11:46:58 AM  261061052    2794896      1.06       1124    244356    326432     0.12    132284    162168       40
11:47:03 AM  261061052    2794896      1.06       1124    244356    326432     0.12    132284    162168       40
11:47:08 AM  245576160   18279788      6.93       1124  15005000    896264     0.33    134232  14922152  4537836
11:47:13 AM  237023884   26832064     10.17       1124  23303832    896264     0.33    134236  23220484  4845900
11:47:18 AM  228223240   35632708     13.50       1124  31822796    896264     0.33    134236  31741232  4901984
11:47:23 AM  219775288   44080660     16.71       1124  40001140    896264     0.33    134236  39917604  4654116
11:47:28 AM  211272552   52583396     19.93       1124  48319980    896264     0.33    134236  48235832  4702104
11:47:33 AM  202607168   61248780     23.21       1124  56654988    896264     0.33    134236  56571356  4592700
11:47:38 AM  193999760   69856188     26.48       1124  65109548    896264     0.33    134236  65025612  4904092
11:47:43 AM  192078956   71776992     27.20       1124  67352876    326676     0.12    133228  67268776  4629040
11:47:48 AM  192078736   71777212     27.20       1124  67353220    326676     0.12    132644  67270272        0
11:47:53 AM  192078736   71777212     27.20       1124  67353220    326676     0.12    132644  67270272        0
11:47:58 AM  192078736   71777212     27.20       1124  67353220    326676     0.12    132644  67270272        0
11:48:03 AM  192078736   71777212     27.20       1124  67353220    326676     0.12    132644  67270272        0
</excerpt>
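For reference, the comparison run is the same fio command with only the target
directory changed; a sketch, assuming the brick file system is mounted at
/bricks/brick1 on the server (that path is hypothetical, not taken from this
report):

fio --name=initialwrite --ioengine=sync --rw=write \
    --direct=0 --create_on_open=1 --bs=128k --directory=/bricks/brick1/ \
    --filename_format=f.\$jobnum.\$filenum --filesize=16g \
    --size=16g --numjobs=4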