[Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

Raghavendra Gowdappa rgowdapp at redhat.com
Tue Mar 20 02:55:53 UTC 2018


On Tue, Mar 20, 2018 at 1:55 AM, TomK <tomkcpr at mdevsys.com> wrote:

> On 3/19/2018 10:52 AM, Rik Theys wrote:
>
>> Hi,
>>
>> On 03/19/2018 03:42 PM, TomK wrote:
>>
>>> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
>>> Removing NFS or NFS Ganesha from the equation, I'm not very impressed
>>> with my own setup either.  For the writes it's doing, that's a lot of
>>> CPU usage in top.  It seems bottlenecked on a single execution core
>>> somewhere while servicing reads/writes to the other bricks.
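>>>
>>> A quick way to check for a single hot thread (rough sketch; this
>>> assumes one glusterfsd process on the node) is a per-thread top view:
>>>
>>> # top -H -p $(pidof glusterfsd)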
>>>
>>> Writes to the gluster FS from within one of the gluster participating
>>> bricks:
>>>
>>> [root at nfs01 n]# dd if=/dev/zero of=./some-file.bin
>>>
>>> 393505+0 records in
>>> 393505+0 records out
>>> 201474560 bytes (201 MB) copied, 50.034 s, 4.0 MB/s
>>>
>>
>> That's not really a fair comparison, as you don't specify a block size.
>> What does
>>
>> dd if=/dev/zero of=./some-file.bin bs=1M count=1000 oflag=direct
>>
>> give?
>>
>>
>> Rik
>>
> Correct.  Higher block sizes gave me better numbers earlier.  I'm curious
> about improving small-file performance though, preferably via gluster
> tunables, if possible.
>
> Granted, bundling the small files into one large archive and transferring
> that is one workaround.  However, I needed the small block size on dd to
> quickly simulate a lot of small requests in a somewhat ok-ish way.
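>
> Something closer to a real small-file workload (just a rough loop; the
> path, count and file size are arbitrary) would be writing many small
> files instead of one stream:
>
> for i in $(seq 1 10000); do
>     dd if=/dev/zero of=./smallfile.$i bs=4k count=1 2>/dev/null
> done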
>

Aggregation of a large number of small writes into larger writes by
write-behind has been merged on master:
https://github.com/gluster/glusterfs/issues/364

I would like to know whether it helps for this use case. Note that it's not
part of any release yet, so you'll have to build and install from the repo.
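
If you want to try it, the usual autotools flow should work (a rough
sketch; pick configure prefix/options to taste):

# git clone https://github.com/gluster/glusterfs.git
# cd glusterfs
# ./autogen.sh && ./configure && make && make install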

Another suggestion is to rerun the tests with the option
performance.write-behind-trickling-writes turned off:

# gluster volume set <volname> performance.write-behind-trickling-writes off
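
You can confirm the current value with volume get:

# gluster volume get <volname> performance.write-behind-trickling-writes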

One word of caution, though: if your files are very small, neither
suggestion may have much impact.
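
A rough before/after check (the path and counts are only an example;
conv=fdatasync makes dd include the final flush in its timing) would be
small sequential writes through the mount:

# dd if=/dev/zero of=/path/on/mount/wb-test.bin bs=4k count=25000 conv=fdatasync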


> Here are the numbers from the VM:
>
> [ Via Gluster ]
> [root at nfs01 n]# dd if=/dev/zero of=./some-file.bin bs=1M count=10000
> oflag=direct
> 10000+0 records in
> 10000+0 records out
> 10485760000 bytes (10 GB) copied, 96.3228 s, 109 MB/s
> [root at nfs01 n]# rm some-file.bin
> rm: remove regular file 'some-file.bin'? y
>
> [ Via XFS ]
> [root at nfs01 n]# cd /bricks/0/gv01/
> [root at nfs01 gv01]# dd if=/dev/zero of=./some-file.bin bs=1M count=10000
> oflag=direct
> 10000+0 records in
> 10000+0 records out
> 10485760000 bytes (10 GB) copied, 44.79 s, 234 MB/s
> [root at nfs01 gv01]#
>
>
>
> top - 12:49:48 up 1 day,  9:39,  2 users,  load average: 0.66, 1.15, 1.82
> Tasks: 165 total,   1 running, 164 sleeping,   0 stopped,   0 zombie
> %Cpu0  : 10.3 us,  9.6 sy,  0.0 ni, 28.0 id, 50.4 wa,  0.0 hi,  1.8 si,  0.0 st
> %Cpu1  : 13.8 us, 13.8 sy,  0.0 ni, 38.6 id, 30.0 wa,  0.0 hi,  3.8 si,  0.0 st
> %Cpu2  :  8.7 us,  6.9 sy,  0.0 ni, 48.7 id, 34.9 wa,  0.0 hi,  0.7 si,  0.0 st
> %Cpu3  : 10.6 us,  7.8 sy,  0.0 ni, 57.1 id, 24.1 wa,  0.0 hi,  0.4 si,  0.0 st
> KiB Mem :  3881708 total,  3543280 free,   224008 used,   114420 buff/cache
> KiB Swap:  4063228 total,  3836612 free,   226616 used.  3457708 avail Mem
>
>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
> 14115 root      20   0 2504832  27640   2612 S  43.5  0.7 432:10.35 glusterfsd
>  1319 root      20   0 1269620  23780   2636 S  38.9  0.6 752:44.78 glusterfs
>  1334 root      20   0 2694264  56988   1672 S  16.3  1.5 311:20.90 ganesha.nfsd
> 27458 root      20   0  108984   1404    540 D   3.0  0.0   0:00.24 dd
> 14127 root      20   0 1164720   4860   1960 S   0.7  0.1   1:47.59 glusterfs
>   750 root      20   0  389864   5528   3988 S   0.3  0.1   0:08.77 sssd_be
>
> --
> Cheers,
> Tom K.
> -------------------------------------------------------------------------------------
>
> Living on earth is expensive, but it includes a free trip around the sun.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>

