[Gluster-users] Gluster speed sooo slow
Fernando Frediani (Qube)
fernando.frediani at qubenet.net
Mon Aug 13 09:40:49 UTC 2012
I think Gluster, at its current level of development, is better suited to multimedia and archival files than to small files or to running virtual machines. It still requires a fair amount of development work, which hopefully Red Hat will put in place.
Fernando
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Ivan Dimitrov
Sent: 13 August 2012 08:33
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] Gluster speed sooo slow
There is a big difference between working with small files (around 16 KB) and big files (2 MB). Performance is much better with big files, which is too bad for me ;(
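A rough way to quantify the gap, assuming the volume is mounted at /home/gltvolume as below (paths and sizes are only examples):

# one 512 MB file, written sequentially
dd if=/dev/zero of=/home/gltvolume/bigfile bs=1M count=512 conv=fsync

# the same 512 MB written as 32768 x 16 KB files
time bash -c 'for i in $(seq 1 32768); do dd if=/dev/zero of=/home/gltvolume/small.$i bs=16k count=1 2>/dev/null; done'

The first case is normally limited by disk and network throughput; the second is dominated by per-file create and lookup latency, which is where Gluster hurts.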
On 8/11/12 2:15 AM, Gandalf Corvotempesta wrote:
What do you mean by "small files"? 16 KB? 160 KB? 16 MB?
Do you know any workaround or any other software for this?
I too am trying to create clustered storage for many small files.
2012/8/10 Philip Poten <philip.poten at gmail.com>
Hi Ivan,
that's because Gluster has really bad "many small files" performance
due to its architecture.
On every stat() call (and rsync issues plenty of them), all replicas are
checked for integrity.
regards,
Philip
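(A quick way to put numbers on this, assuming strace is available on the client: count the syscalls rsync issues without actually copying anything, e.g.

strace -c -f rsync -a --dry-run /root/speedtest/random-files/ /home/gltvolume/ 2>&1 | tail -n 20

The lstat/stat totals in the summary end up on the order of the file count, and on a FUSE mount most of them translate into network round trips to the replicas.)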
2012/8/10 Ivan Dimitrov <dobber at amln.net>:
> So I stopped a node to check the BIOS, and after it came back up, the rebalance
> kicked in. I was looking for that kind of speed on a normal write. The
> rebalance is much faster than my rsync/cp.
>
> https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%202.04.09%20PM.png
>
> Best Regards
> Ivan Dimitrov
>
>
> On 8/10/12 1:23 PM, Ivan Dimitrov wrote:
>>
>> Hello
>> What am I doing wrong?!?
>>
>> I have a test setup with 4 identical servers, 2 disks each, in
>> distribute-replicate (replica 2). All servers are connected to a gigabit switch.
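>> (For reference, a volume laid out like this would normally be created with
>> consecutive bricks forming the replica pairs, along the lines of:
>>
>> gluster volume create gltvolume replica 2 transport tcp \
>>     1.1.74.246:/home/sda3 glt2.network.net:/home/sda3 \
>>     1.1.74.246:/home/sdb1 glt2.network.net:/home/sdb1 \
>>     glt3.network.net:/home/sda3 gltclient.network.net:/home/sda3 \
>>     glt3.network.net:/home/sdb1 gltclient.network.net:/home/sdb1
>>
>> which matches the brick order in the "gluster volume info" output further down.)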
>>
>> I am experiencing really slow speeds in everything I do: slow writes, slow
>> reads, not to mention random reads/writes.
>>
>> Here is an example:
>> random-files is a directory with 32,768 files with an average size of 16 KB.
>> [root at gltclient]:~# rsync -a /root/speedtest/random-files/ /home/gltvolume/
>> ^^ This will take more than 3 hours.
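>> (One workaround that sometimes helps a latency-bound copy like this, assuming
>> GNU xargs and a flat source directory: run several copies in parallel so that
>> more requests are in flight at once. A sketch, not benchmarked on this setup:
>>
>> find /root/speedtest/random-files/ -type f -print0 | \
>>     xargs -0 -n 64 -P 8 cp -t /home/gltvolume/
>>
>> Unlike rsync -a, plain cp does not preserve ownership or timestamps; add -p
>> if that matters for the comparison.)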
>>
>> On any of the servers, if I run "iostat", the disks are not loaded at all:
>>
>> https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%201.08.54%20PM.png
>>
>> The result is similar on all servers.
>>
>> Here is an example of a simple "ls" command on the content.
>> [root at gltclient]:~# unalias ls
>> [root at gltclient]:~# /usr/bin/time -f "%e seconds" ls /home/gltvolume/ | wc -l
>> 2.81 seconds
>> 5393
>>
>> Almost 3 seconds to list 5,000 files?! Once they reach 32,000, the ls
>> takes around 35-45 seconds.
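>> (To separate the directory-listing cost from the per-file stat cost, it is
>> worth comparing an unsorted listing with one that stats every entry:
>>
>> /usr/bin/time -f "%e seconds" ls -f /home/gltvolume/ | wc -l      # readdir only
>> /usr/bin/time -f "%e seconds" ls -l /home/gltvolume/ >/dev/null   # readdir + one stat per entry
>>
>> If the second is much slower, most of the time is going into per-file
>> metadata round trips rather than into reading the directory itself.)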
>>
>> This directory is on local disk:
>> [root at gltclient]:~# /usr/bin/time -f "%e seconds" ls /root/speedtest/random-files/ | wc -l
>> 1.45 seconds
>> 32768
>>
>> [root at gltclient]:~# /usr/bin/time -f "%e seconds" cat /home/gltvolume/* >/dev/null
>> 190.50 seconds
>>
>> [root at gltclient]:~# /usr/bin/time -f "%e seconds" du -sh /home/gltvolume/
>> 126M /home/gltvolume/
>> 75.23 seconds
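>> (Back-of-the-envelope, assuming the volume still held the ~5,393 files counted
>> above: 126 MB in 190.5 s is about 0.66 MB/s, i.e. roughly 35 ms per file for
>> cat, and 75.23 s / 5,393 files is roughly 14 ms per file just for du's stat
>> calls -- consistent with per-file round-trip latency, not disk or network
>> throughput, being the bottleneck.)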
>>
>>
>> Here is the volume information.
>>
>> [root at glt1]:~# gluster volume info
>>
>> Volume Name: gltvolume
>> Type: Distributed-Replicate
>> Volume ID: 16edd852-8d23-41da-924d-710b753bb374
>> Status: Started
>> Number of Bricks: 4 x 2 = 8
>> Transport-type: tcp
>> Bricks:
>> Brick1: 1.1.74.246:/home/sda3
>> Brick2: glt2.network.net:/home/sda3
>> Brick3: 1.1.74.246:/home/sdb1
>> Brick4: glt2.network.net:/home/sdb1
>> Brick5: glt3.network.net:/home/sda3
>> Brick6: gltclient.network.net:/home/sda3
>> Brick7: glt3.network.net:/home/sdb1
>> Brick8: gltclient.network.net:/home/sdb1
>> Options Reconfigured:
>> performance.io-thread-count: 32
>> performance.cache-size: 256MB
>> cluster.self-heal-daemon: on
>>
>>
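>> (For completeness, a few translator options that are often suggested for
>> small-file workloads, assuming they are not already at their default values;
>> I have not measured their effect on this volume:
>>
>> gluster volume set gltvolume performance.quick-read on
>> gluster volume set gltvolume performance.stat-prefetch on
>> gluster volume set gltvolume performance.write-behind-window-size 4MB
>> )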
>> [root at glt1]:~# gluster volume status all detail
>> Status of volume: gltvolume
>>
>> ------------------------------------------------------------------------------
>> Brick : Brick 1.1.74.246:/home/sda3
>> Port : 24009
>> Online : Y
>> Pid : 1479
>> File System : ext4
>> Device : /dev/sda3
>> Mount Options : rw,noatime
>> Inode Size : 256
>> Disk Space Free : 179.3GB
>> Total Disk Space : 179.7GB
>> Inode Count : 11968512
>> Free Inodes : 11901550
>>
>> ------------------------------------------------------------------------------
>> Brick : Brick glt2.network.net:/home/sda3
>> Port : 24009
>> Online : Y
>> Pid : 1589
>> File System : ext4
>> Device : /dev/sda3
>> Mount Options : rw,noatime
>> Inode Size : 256
>> Disk Space Free : 179.3GB
>> Total Disk Space : 179.7GB
>> Inode Count : 11968512
>> Free Inodes : 11901550
>>
>> ------------------------------------------------------------------------------
>> Brick : Brick 1.1.74.246:/home/sdb1
>> Port : 24010
>> Online : Y
>> Pid : 1485
>> File System : ext4
>> Device : /dev/sdb1
>> Mount Options : rw,noatime
>> Inode Size : 256
>> Disk Space Free : 228.8GB
>> Total Disk Space : 229.2GB
>> Inode Count : 15269888
>> Free Inodes : 15202933
>>
>> ------------------------------------------------------------------------------
>> Brick : Brick glt2.network.net:/home/sdb1
>> Port : 24010
>> Online : Y
>> Pid : 1595
>> File System : ext4
>> Device : /dev/sdb1
>> Mount Options : rw,noatime
>> Inode Size : 256
>> Disk Space Free : 228.8GB
>> Total Disk Space : 229.2GB
>> Inode Count : 15269888
>> Free Inodes : 15202933
>>
>> ------------------------------------------------------------------------------
>> Brick : Brick glt3.network.net:/home/sda3
>> Port : 24009
>> Online : Y
>> Pid : 28963
>> File System : ext4
>> Device : /dev/sda3
>> Mount Options : rw,noatime
>> Inode Size : 256
>> Disk Space Free : 179.3GB
>> Total Disk Space : 179.7GB
>> Inode Count : 11968512
>> Free Inodes : 11906058
>>
>> ------------------------------------------------------------------------------
>> Brick : Brick gltclient.network.net:/home/sda3
>> Port : 24009
>> Online : Y
>> Pid : 3145
>> File System : ext4
>> Device : /dev/sda3
>> Mount Options : rw,noatime
>> Inode Size : 256
>> Disk Space Free : 179.3GB
>> Total Disk Space : 179.7GB
>> Inode Count : 11968512
>> Free Inodes : 11906058
>>
>> ------------------------------------------------------------------------------
>> Brick : Brick glt3.network.net:/home/sdb1
>> Port : 24010
>> Online : Y
>> Pid : 28969
>> File System : ext4
>> Device : /dev/sdb1
>> Mount Options : rw,noatime
>> Inode Size : 256
>> Disk Space Free : 228.8GB
>> Total Disk Space : 229.2GB
>> Inode Count : 15269888
>> Free Inodes : 15207375
>>
>> ------------------------------------------------------------------------------
>> Brick : Brick gltclient.network.net:/home/sdb1
>> Port : 24010
>> Online : Y
>> Pid : 3151
>> File System : ext4
>> Device : /dev/sdb1
>> Mount Options : rw,noatime
>> Inode Size : 256
>> Disk Space Free : 228.8GB
>> Total Disk Space : 229.2GB
>> Inode Count : 15269888
>> Free Inodes : 15207375
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users