[Gluster-users] Writing is slow when there are 10 million files.

Franco Broi franco.broi at iongeo.com
Tue Apr 15 06:24:54 UTC 2014


I seriously doubt this is the right filesystem for you; we have problems
listing directories with a few hundred files, never mind millions.

On Tue, 2014-04-15 at 10:45 +0900, Terada Michitaka wrote: 
> Dear All, 
> 
> 
> 
> I have a problem with slow writes when there are 10 million files.
> (There are 2,500 top-level directories.)
> 
> 
> I configured a GlusterFS distributed cluster (3 nodes).
> Each node's specs are below.
> 
> 
>  CPU: Xeon E5-2620 (2.00 GHz, 6 cores)
>  HDD: SATA 7200 rpm, 4 TB × 12 (RAID 6)
>  Network: 10 GbE
>  GlusterFS: glusterfs 3.4.2 built on Jan  3 2014 12:38:06
>  
> This cluster (volume) is mounted on CentOS via the FUSE client.
> This volume is the storage backend for our application, and I want to
> store 300 million to 5 billion files in it.
> 
> 
> I performed a write test, writing 10 million 32 KB files to this
> volume, and ran into two problems.
> 
> 
> (1) Writing is very slow, and it slows down further as the number of
> files increases.
>   On a single node (no clustering), the write speed is about
> 40 MB/sec for random writes,
>   but on the cluster the write speed is only 3.6 MB/sec.
> (2) The ls command is very slow: about 20 seconds. Creating a
> directory takes about 10 seconds at best.
> 
> 
> Questions:
>  
>  1) Is it possible to store 5 billion files in GlusterFS?
>   Has anyone succeeded in storing a billion files in GlusterFS?
>   
>  2) Could you point me to a tuning guide or other tuning
> information?
>  
> Thanks.
> 
> 
> -- Michitaka Terada
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
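For reference, the small-file write test described above can be reproduced with a minimal shell sketch along these lines. The TARGET path and COUNT are illustrative defaults (the original test wrote 10 million files to a Gluster FUSE mount); point TARGET at your own mount point.

```shell
#!/bin/sh
# Minimal small-file write benchmark sketch (paths are illustrative).
# Point TARGET at the FUSE mount of the Gluster volume, e.g. /mnt/glustervol.
TARGET="${TARGET:-/tmp/gluster-benchtest}"
COUNT="${COUNT:-1000}"          # the original test wrote 10 million files

mkdir -p "$TARGET"
start=$(date +%s)
i=0
while [ "$i" -lt "$COUNT" ]; do
    # one 32 KB file per iteration, mirroring the 32 KB x 10 million test
    dd if=/dev/zero of="$TARGET/file_$i" bs=32k count=1 2>/dev/null
    i=$((i + 1))
done
end=$(date +%s)
elapsed=$((end - start))
[ "$elapsed" -eq 0 ] && elapsed=1   # avoid division by zero on fast runs
echo "Wrote $COUNT files (32 KB each) in ${elapsed}s (~$((COUNT * 32 / elapsed)) KB/s)"
```

Comparing the reported throughput on a local disk versus the FUSE mount isolates how much of the slowdown comes from the cluster rather than the disks.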
