[Gluster-users] GlusterFS 3.7 - slow/poor performances

Geoffrey Letessier geoffrey.letessier at cnrs.fr
Tue Jun 2 12:09:04 UTC 2015


Hi Pranith,

I’m sorry, but I cannot give you a comparison, because it would be distorted by the fact that in my production HPC cluster the network is InfiniBand QDR and my volumes are quite different (each brick is a RAID6 array of 12x2TB disks, with 2 bricks per server and 4 servers in the pool).

Concerning your request, you can find all the expected results in the attachments; I hope they help you get to the bottom of this serious performance issue (maybe I need to play with GlusterFS parameters?).
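
In case it turns out to be just a matter of tuning, the kind of options I imagine playing with first would be something like this (testvol is a placeholder and the values are only rough guesses for a small-file workload over Gigabit Ethernet):

# gluster volume set testvol performance.cache-size 256MB
# gluster volume set testvol performance.io-thread-count 32
# gluster volume set testvol performance.write-behind-window-size 4MB
# gluster volume set testvol client.event-threads 4
# gluster volume set testvol server.event-threads 4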

Thank you very much in advance,
Geoffrey
------------------------------------------------------
Geoffrey Letessier
Responsable informatique & ingénieur système
UPR 9080 - CNRS - Laboratoire de Biochimie Théorique
Institut de Biologie Physico-Chimique
13, rue Pierre et Marie Curie - 75005 Paris
Tel: 01 58 41 50 93 - eMail: geoffrey.letessier at ibpc.fr <mailto:geoffrey.letessier at ibpc.fr>
> On 2 June 2015 at 10:09, Pranith Kumar Karampuri <pkarampu at redhat.com <mailto:pkarampu at redhat.com>> wrote:
> 
> hi Geoffrey,
>              Since you are saying it happens on all types of volumes, let's do the following:
> 1) Create a dist-repl volume
> 2) Set the options, etc., that you need.
> 3) enable gluster volume profile using "gluster volume profile <volname> start"
> 4) run the work load
> 5) give output of "gluster volume profile <volname> info"
> 
> Repeat the steps above on both the new and the old versions you are comparing. That should give us insight into what could be causing the slowness.
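> 
> To be concrete, the whole sequence could look something like this (the volume name, hosts and brick paths are only placeholders):
> 
> # gluster volume create testvol replica 2 server1:/bricks/b1 server2:/bricks/b1 server1:/bricks/b2 server2:/bricks/b2
> # gluster volume start testvol
> # gluster volume profile testvol start
>    (run the untar/du/find/tar/rm workload on a FUSE mount of testvol)
> # gluster volume profile testvol info > profile-3.7.txt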
> 
> Pranith
> On 06/02/2015 03:22 AM, Geoffrey Letessier wrote:
>> Dear all,
>> 
>> I have a crash-test cluster where I’ve tested the new version of GlusterFS (v3.7) before upgrading my production HPC cluster.
>> But all my tests show very poor performance.
>> 
>> For my benchmarks, as you can read below, I run a set of actions (untar, du, find, tar, rm) on the Linux kernel sources, dropping caches each time, on distributed, replicated, distributed-replicated and single (one brick) volumes, as well as on the native FS of one brick.
>> 
>> # time (echo 3 > /proc/sys/vm/drop_caches; tar xJf ~/linux-4.1-rc5.tar.xz; sync; echo 3 > /proc/sys/vm/drop_caches)
>> # time (echo 3 > /proc/sys/vm/drop_caches; du -sh linux-4.1-rc5/; echo 3 > /proc/sys/vm/drop_caches)
>> # time (echo 3 > /proc/sys/vm/drop_caches; find linux-4.1-rc5/|wc -l; echo 3 > /proc/sys/vm/drop_caches)
>> # time (echo 3 > /proc/sys/vm/drop_caches; tar czf linux-4.1-rc5.tgz linux-4.1-rc5/; echo 3 > /proc/sys/vm/drop_caches)
>> # time (echo 3 > /proc/sys/vm/drop_caches; rm -rf linux-4.1-rc5.tgz linux-4.1-rc5/; echo 3 > /proc/sys/vm/drop_caches)
>> 
>> And here are the process times:
>> 
>> ---------------------------------------------------------------
>> |             |  UNTAR  |   DU   |  FIND   |   TAR   |   RM   |
>> ---------------------------------------------------------------
>> | single      |  ~3m45s |   ~43s |    ~47s |  ~3m10s | ~3m15s |
>> ---------------------------------------------------------------
>> | replicated  |  ~5m10s |   ~59s |   ~1m6s |  ~1m19s | ~1m49s |
>> ---------------------------------------------------------------
>> | distributed |  ~4m18s |   ~41s |    ~57s |  ~2m24s | ~1m38s |
>> ---------------------------------------------------------------
>> | dist-repl   |  ~8m18s |  ~1m4s |  ~1m11s |  ~1m24s | ~2m40s |
>> ---------------------------------------------------------------
>> | native FS   |    ~11s |    ~4s |     ~2s |    ~56s |   ~10s |
>> ---------------------------------------------------------------
>> 
>> I get the same results whether I use the default configuration or a custom one.
>> 
>> If I look at the output of the ifstat command, I can see that my I/O writes never exceed 3MB/s...
>> 
>> The native EXT4 FS seems to be faster than the XFS one, but only by roughly 15-20%.
>> 
>> My [test] storage cluster is composed of 2 identical servers (dual Intel Xeon X5355 CPUs, 8GB of RAM, 2x2TB HDD (no RAID) and Gigabit Ethernet).
>> 
>> My volume settings (example create commands are sketched below):
>>  single: 1 server, 1 brick
>>  replicated: 2 servers, 1 brick each
>>  distributed: 2 servers, 2 bricks each
>>  dist-repl: 2 bricks on the same server and replica 2
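>> 
>> For example (srv1/srv2 and the brick directories are placeholders; the dist-repl layout puts both replicas of a pair on the same server, so gluster asks for confirmation/force):
>> 
>> # gluster volume create single srv1:/export/sdb/single
>> # gluster volume create replicated replica 2 srv1:/export/sdb/repl srv2:/export/sdb/repl
>> # gluster volume create distributed srv1:/export/sdb/dist srv1:/export/sdc/dist srv2:/export/sdb/dist srv2:/export/sdc/dist
>> # gluster volume create dist-repl replica 2 srv1:/export/sdb/dr srv1:/export/sdc/dr srv2:/export/sdb/dr srv2:/export/sdc/dr force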
>> 
>> Everything looks OK in the gluster status command output.
>> 
>> Do you have any idea why I am getting such bad results?
>> Thanks in advance.
>> Geoffrey
>> -----------------------------------------------
>> Geoffrey Letessier
>> 
>> Responsable informatique & ingénieur système
>> CNRS - UPR 9080 - Laboratoire de Biochimie Théorique
>> Institut de Biologie Physico-Chimique
>> 13, rue Pierre et Marie Curie - 75005 Paris
>> Tel: 01 58 41 50 93 - eMail: geoffrey.letessier at cnrs.fr <mailto:geoffrey.letessier at cnrs.fr>
>> 
>> 
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org <mailto:Gluster-users at gluster.org>
>> http://www.gluster.org/mailman/listinfo/gluster-users <http://www.gluster.org/mailman/listinfo/gluster-users>

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: client.txt
URL: <http://www.gluster.org/pipermail/gluster-users/attachments/20150602/29138499/attachment.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: server.txt
URL: <http://www.gluster.org/pipermail/gluster-users/attachments/20150602/29138499/attachment-0001.txt>

