[Gluster-users] GlusterFS 3.5.3 - untar: very poor performance

Geoffrey Letessier geoffrey.letessier at cnrs.fr
Sat Jun 20 00:12:03 UTC 2015


Dear all,

I just noticed that I/O performance on the main volume of my HPC cluster has become extremely poor.

Running some file operations on a compressed Linux kernel source archive (roughly 80MB, containing about 52,000 files), the untar alone can take more than half an hour, as you can see below (a sketch of the bench commands follows the timings):
#######################################################
################  UNTAR time consumed  ################
#######################################################


real	32m42.967s
user	0m11.783s
sys	0m15.050s

#######################################################
#################  DU time consumed  ##################
#######################################################

557M	linux-4.1-rc6

real	0m25.060s
user	0m0.068s
sys	0m0.344s

#######################################################
#################  FIND time consumed  ################
#######################################################

52663

real	0m25.687s
user	0m0.084s
sys	0m0.387s

#######################################################
#################  GREP time consumed  ################
#######################################################

7952

real	2m15.890s
user	0m0.887s
sys	0m2.777s

#######################################################
#################  TAR time consumed  #################
#######################################################


real	1m5.551s
user	0m26.536s
sys	0m2.609s

#######################################################
#################  RM time consumed  ##################
#######################################################


real	2m51.485s
user	0m0.167s
sys	0m1.663s
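For reference, the attached mybench.sh essentially wraps each of these operations in the shell's time builtin; a minimal sketch of such a bench is below (the working directory, tarball name and grep pattern are assumptions for illustration, not the exact attached script):

#!/bin/bash
# Sketch of a small-file bench over a GlusterFS mount (assumed layout, not the attached mybench.sh).
WORKDIR=/home/bench            # assumed path on the GlusterFS mount
TARBALL=linux-4.1-rc6.tar.xz   # the ~80MB kernel source archive
cd "$WORKDIR" || exit 1

echo UNTAR; time tar xf "$TARBALL"
echo DU;    time du -sh linux-4.1-rc6
echo FIND;  time find linux-4.1-rc6 | wc -l
echo GREP;  time grep -r MODULE_LICENSE linux-4.1-rc6 | wc -l
echo TAR;   time tar cf linux-4.1-rc6.tar linux-4.1-rc6
echo RM;    time rm -rf linux-4.1-rc6 linux-4.1-rc6.tar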

For information, this volume is a distributed-replicated one composed of 4 servers with 2 bricks each. Each brick is a 12-drive RAID6 vdisk with good native performance (around 1.2GB/s).
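For illustration, an 8-brick distributed-replicated layout of this kind is typically created along these lines (server names, brick paths and the transport option are assumptions; the real layout is in the attached vol_info.txt):

gluster volume create vol_home replica 2 transport tcp,rdma \
    storage1:/export/brick_home/brick1 storage2:/export/brick_home/brick1 \
    storage1:/export/brick_home/brick2 storage2:/export/brick_home/brick2 \
    storage3:/export/brick_home/brick1 storage4:/export/brick_home/brick1 \
    storage3:/export/brick_home/brick2 storage4:/export/brick_home/brick2
# With "replica 2", each consecutive pair of bricks forms one replica set,
# so the volume distributes files over 4 replica pairs (2 copies per file).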

In comparison, when I use dd to generate a 100GB file on the same volume, my write throughput is around 1GB/s on the client side and 500MB/s on the server side because of replication (see the quick calculation after the dd output):
Client side:
[root@node056 ~]# ifstat -i ib0
       ib0        
 KB/s in  KB/s out
 3251.45  1.09e+06
 3139.80  1.05e+06
 3185.29  1.06e+06
 3293.84  1.09e+06
...

Server side:
[root@lucifer ~]# ifstat -i ib0
       ib0        
 KB/s in  KB/s out
561818.1   1746.42
560020.3   1737.92
526337.1   1648.20
513972.7   1613.69
...

DD command:
[root@node056 ~]# dd if=/dev/zero of=/home/root/test.dd bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 202.99 s, 517 MB/s
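The factor of roughly 2 between the client-side and server-side rates is consistent with replica 2: the FUSE client writes every byte to both bricks of the replica pair holding the file, so approximately:

client network out   ~ 1.06 GB/s  (the ~1.09e+06 KB/s reported by ifstat)
effective file rate  ~ 1.06 GB/s / 2 copies ~ 530 MB/s  (dd reports 517 MB/s)
per-server inbound   ~ 517 MB/s   (each server of the pair receives one full copy)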

So this issue does not seem to come from the network (InfiniBand in this case).

You can find attached a set of files:
	- mybench.sh: the bench script
	- benches.txt: the output of my bench
	- profile.txt: gluster volume profile output captured during the bench (see the commands after this list)
	- vol_status.txt: gluster volume status output
	- vol_info.txt: gluster volume info output
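For reference, the profile data was gathered with the standard profiling commands (the volume name vol_home is an assumption):

gluster volume profile vol_home start     # enable per-brick latency/FOP counters
# ... run the bench ...
gluster volume profile vol_home info      # dump the counters (saved to profile.txt)
gluster volume profile vol_home stop      # disable profiling again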

Can someone help me fix this? It is very critical because this volume is on an HPC cluster in production.

Thanks in advance,
Geoffrey
-----------------------------------------------
Geoffrey Letessier

IT manager & systems engineer
CNRS - UPR 9080 - Laboratoire de Biochimie Théorique
Institut de Biologie Physico-Chimique
13, rue Pierre et Marie Curie - 75005 Paris
Tel: 01 58 41 50 93 - eMail: geoffrey.letessier at cnrs.fr
-------------- next part --------------
Attachments scrubbed by the list archive:
	- mybench.sh: <http://www.gluster.org/pipermail/gluster-users/attachments/20150620/0a40d412/attachment.obj>
	- benches.txt: <http://www.gluster.org/pipermail/gluster-users/attachments/20150620/0a40d412/attachment.txt>
	- profile.txt: <http://www.gluster.org/pipermail/gluster-users/attachments/20150620/0a40d412/attachment-0001.txt>
	- vol_info.txt: <http://www.gluster.org/pipermail/gluster-users/attachments/20150620/0a40d412/attachment-0002.txt>
	- vol_status.txt: <http://www.gluster.org/pipermail/gluster-users/attachments/20150620/0a40d412/attachment-0003.txt>