<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Mar 20, 2018 at 1:55 AM, TomK <span dir="ltr"><<a href="mailto:tomkcpr@mdevsys.com" target="_blank">tomkcpr@mdevsys.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail-">On 3/19/2018 10:52 AM, Rik Theys wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Hi,<br>
<br>
On 03/19/2018 03:42 PM, TomK wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
On 3/19/2018 5:42 AM, Ondrej Valousek wrote:<br>
Removing NFS or NFS Ganesha from the equation, I'm not very impressed with my<br>
own setup either. For the writes it's doing, that's a lot of CPU usage<br>
in top. It seems bottlenecked on a single execution core somewhere, trying<br>
to facilitate reads/writes to the other bricks.<br>
<br>
Writes to the gluster FS from within one of the participating gluster<br>
bricks:<br>
<br>
[root@nfs01 n]# dd if=/dev/zero of=./some-file.bin<br>
<br>
393505+0 records in<br>
393505+0 records out<br>
201474560 bytes (201 MB) copied, 50.034 s, 4.0 MB/s<br>
</blockquote>
<br>
That's not really a fair comparison, as you don't specify a block size.<br>
What does<br>
<br>
dd if=/dev/zero of=./some-file.bin bs=1M count=1000 oflag=direct<br>
<br>
give?<br>
<br>
<br>
Rik<br>
<br>
</blockquote></span>
Correct. Higher block sizes gave me better numbers earlier. I'm curious about improving the small-file performance though, preferably via gluster tunables, if possible.<br>
<br>
Though it could be said, I guess, that compressing a set of large files and transferring them over that way is one solution. However, I needed the small block size on dd to quickly simulate a lot of small requests in a somewhat ok-ish way.<br></blockquote><div><br></div><div>Aggregating a large number of small writes by write-behind into larger writes has been merged on master:<br></div><div><a href="https://github.com/gluster/glusterfs/issues/364">https://github.com/gluster/glusterfs/issues/364</a></div><div><br></div><div>I would like to know whether it helps for this use case. Note that it's not part of any release yet, so you'll have to build and install from the repo.</div><div><br></div><div>Another suggestion is to run the tests with the option performance.write-behind-trickling-writes turned off:</div><div><br></div><div># gluster volume set <volname> performance.write-behind-trickling-writes off<br></div><div><br></div><div>A word of caution, though: if your files are too small, these suggestions may not have much impact.</div><div><br></div>
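<div>For reference, building from the master branch is roughly the usual autotools flow; this is only a sketch, so adjust the prefix and packaging to your distribution:</div><div><br></div>
<div># git clone https://github.com/gluster/glusterfs.git && cd glusterfs<br># ./autogen.sh && ./configure && make && make install<br></div><div><br></div>
<div>To exercise the small-write path a bit more directly than a small-block-size dd, something like the loop below creates a batch of small files on the gluster mount. The /n mount point, the 64 KB size and the file count are only placeholders, so adjust them to match your workload and compare timings with the option on and off:</div><div><br></div>
<div># time for i in $(seq 1 1000); do dd if=/dev/zero of=/n/smallfile.$i bs=64k count=1 conv=fsync 2>/dev/null; done<br></div><div><br></div>
<div>You can confirm the current value of the option with volume get, and volume profile should show where the latency goes while the test runs:</div><div><br></div>
<div># gluster volume get <volname> performance.write-behind-trickling-writes<br># gluster volume profile <volname> start<br># gluster volume profile <volname> info<br></div><div><br></div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">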
<br>
Here are the numbers from the VM:<br>
<br>
[ Via Gluster ]<br>
[root@nfs01 n]# dd if=/dev/zero of=./some-file.bin bs=1M count=10000 oflag=direct<br>
10000+0 records in<br>
10000+0 records out<br>
10485760000 bytes (10 GB) copied, 96.3228 s, 109 MB/s<br>
[root@nfs01 n]# rm some-file.bin<br>
rm: remove regular file ‘some-file.bin’? y<br>
<br>
[ Via XFS ]<br>
[root@nfs01 n]# cd /bricks/0/gv01/<br>
[root@nfs01 gv01]# dd if=/dev/zero of=./some-file.bin bs=1M count=10000 oflag=direct<br>
10000+0 records in<br>
10000+0 records out<br>
10485760000 bytes (10 GB) copied, 44.79 s, 234 MB/s<br>
[root@nfs01 gv01]#<br>
<br>
<br>
<br>
top - 12:49:48 up 1 day, 9:39, 2 users, load average: 0.66, 1.15, 1.82<br>
Tasks: 165 total, 1 running, 164 sleeping, 0 stopped, 0 zombie<br>
%Cpu0 : 10.3 us, 9.6 sy, 0.0 ni, 28.0 id, 50.4 wa, 0.0 hi, 1.8 si, 0.0 st<br>
%Cpu1 : 13.8 us, 13.8 sy, 0.0 ni, 38.6 id, 30.0 wa, 0.0 hi, 3.8 si, 0.0 st<br>
%Cpu2 : 8.7 us, 6.9 sy, 0.0 ni, 48.7 id, 34.9 wa, 0.0 hi, 0.7 si, 0.0 st<br>
%Cpu3 : 10.6 us, 7.8 sy, 0.0 ni, 57.1 id, 24.1 wa, 0.0 hi, 0.4 si, 0.0 st<br>
KiB Mem : 3881708 total, 3543280 free, 224008 used, 114420 buff/cache<br>
KiB Swap: 4063228 total, 3836612 free, 226616 used. 3457708 avail Mem<span class="gmail-"><br>
<br>
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND<br></span>
14115 root 20 0 2504832 27640 2612 S 43.5 0.7 432:10.35 glusterfsd<br>
1319 root 20 0 1269620 23780 2636 S 38.9 0.6 752:44.78 glusterfs<br>
1334 root 20 0 2694264 56988 1672 S 16.3 1.5 311:20.90 ganesha.nfsd<br>
27458 root 20 0 108984 1404 540 D 3.0 0.0 0:00.24 dd<br>
14127 root 20 0 1164720 4860 1960 S 0.7 0.1 1:47.59 glusterfs<br>
750 root 20 0 389864 5528 3988 S 0.3 0.1 0:08.77 sssd_be<span class="gmail-im gmail-HOEnZb"><br>
<br>
-- <br>
Cheers,<br>
Tom K.<br>
-------------------------------------------------------------------------------------<br>
<br>
Living on earth is expensive, but it includes a free trip around the sun.<br>
<br></span><div class="gmail-HOEnZb"><div class="gmail-h5">
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a><br>
</div></div></blockquote></div><br></div></div>