[Bugs] [Bug 1664934] glusterfs-fuse client not benefiting from page cache on read after write
bugzilla at redhat.com
Thu Jan 10 05:43:53 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1664934
--- Comment #1 from Manoj Pillai <mpillai at redhat.com> ---
(In reply to Manoj Pillai from comment #0)
[...]
> 1. use fio to create a data set that would fit easily in the page cache. My
> client has 128 GB RAM; I'll create a 64 GB data set:
>
> fio --name=initialwrite --ioengine=sync --rw=write \
> --direct=0 --create_on_open=1 --end_fsync=1 --bs=128k \
> --directory=/mnt/glustervol/ --filename_format=f.\$jobnum.\$filenum \
> --filesize=16g --size=16g --numjobs=4
>
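(Aside for anyone reproducing this: the kbcached column in the sar output below can also be sampled directly from /proc/meminfo. The loop here is my own illustrative sketch, not part of the original test procedure:)

```shell
# Illustrative only: poll the page-cache size every 5 seconds, which
# tracks the same quantity as the kbcached column of `sar -r 5`.
for i in 1 2 3; do
    printf '%s Cached: %s kB\n' "$(date +%H:%M:%S)" \
        "$(awk '/^Cached:/ {print $2}' /proc/meminfo)"
    sleep 5
done
```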
Memory usage on the client while the write test is running:
<excerpt>
# sar -r 5
Linux 3.10.0-957.el7.x86_64 (c09-h08-r630.rdu.openstack.engineering.redhat.com)   01/10/2019   _x86_64_   (56 CPU)

05:35:36 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
05:35:41 AM 126671972   4937712      3.75         0   2974352    256704      0.18   1878020   1147776        36
05:35:46 AM 126671972   4937712      3.75         0   2974352    256704      0.18   1878020   1147776        36
05:35:51 AM 126666904   4942780      3.76         0   2974324    259900      0.19   1879948   1147772        16
05:35:56 AM 126665820   4943864      3.76         0   2974348    261300      0.19   1880304   1147776        24
05:36:01 AM 126663136   4946548      3.76         0   2974348    356356      0.25   1881500   1147772        20
05:36:06 AM 126663028   4946656      3.76         0   2974348    356356      0.25   1881540   1147772        20
05:36:11 AM 126664444   4945240      3.76         0   2974388    356356      0.25   1880648   1147788        32
05:36:16 AM 126174984   5434700      4.13         0   3449508    930284      0.66   1892912   1622536        32
05:36:21 AM 120539884  11069800      8.41         0   9076076    930284      0.66   1893784   7247852        32
05:36:26 AM 114979592  16630092     12.64         0  14620932    930284      0.66   1893796  12793472        32
05:36:31 AM 109392488  22217196     16.88         0  20192112    930284      0.66   1893796  18365764        32
05:36:36 AM 104113900  27495784     20.89         0  25457272    930284      0.66   1895152  23630336        32
05:36:41 AM  98713688  32895996     25.00         0  30842800    930284      0.66   1895156  29015400        32
05:36:46 AM  93355560  38254124     29.07         0  36190264    930688      0.66   1897548  34361664        32
05:36:51 AM  87640900  43968784     33.41         0  41885972    930688      0.66   1897556  40057860        32
05:36:56 AM  81903068  49706616     37.77         0  47626388    930688      0.66   1897004  45798848         0
05:37:01 AM  76209860  55399824     42.09         0  53303272    930688      0.66   1897004  51475716         0
05:37:06 AM  70540340  61069344     46.40         0  58956264    930688      0.66   1897004  57128836         0
05:37:11 AM  64872776  66736908     50.71         0  64609648    930688      0.66   1897000  62782624         0
05:37:16 AM  59376144  72233540     54.88         0  70096880    930688      0.66   1897368  68270084         0
05:37:21 AM  71333376  60276308     45.80         0  58169584    356740      0.25   1891388  56342848         0
05:37:26 AM 126653336   4956348      3.77         0   2974476    356740      0.25   1891392   1148348         0
05:37:31 AM 126654360   4955324      3.77         0   2974388    356740      0.25   1891380   1147784         0
05:37:36 AM 126654376   4955308      3.77         0   2974388    356740      0.25   1891380   1147784         0
05:37:41 AM 126654376   4955308      3.77         0   2974388    356740      0.25   1891380   1147784         0
</excerpt>
So as the write test progresses, kbcached steadily increases, reaching about 70 GB. But it looks like the cached data is dropped once the job completes: between 05:37:21 and 05:37:26, kbcached falls back to roughly its pre-test value of ~3 GB.
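The natural follow-up, and the "read after write" in this bug's title, is to re-read the data set and see whether any of it is served from the client page cache. Below is my own sketch of that read-back phase, modeled on the write job quoted above (the availability guard is mine, added so the snippet is a harmless no-op on machines without fio or the mount):

```shell
# Read-back phase (illustrative sketch): same files and block size as
# the write job in comment #0. If the client page cache had retained
# the written data, this would complete at memory speed; per this bug
# report, the glusterfs-fuse client does not get that benefit.
if command -v fio >/dev/null 2>&1 && [ -d /mnt/glustervol ]; then
    fio --name=readback --ioengine=sync --rw=read \
        --direct=0 --bs=128k \
        --directory=/mnt/glustervol/ --filename_format='f.$jobnum.$filenum' \
        --filesize=16g --size=16g --numjobs=4
else
    echo "read-back skipped: fio or /mnt/glustervol not available here"
fi
```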