[Gluster-users] Unify very slow for 2000 query to cluster / s
    Tom Lahti 
    toml at bitstatement.net
       
    Thu Nov  6 22:47:07 UTC 2008
    
    
  
root at somebox:/mnt/cluster/nested/really/deep/here# time ls -l | wc -l
6656
real    0m3.856s
user    0m0.048s
sys     0m0.092s
root at somebox:~# dumpe2fs -h /dev/vg01/cluster
dumpe2fs 1.40.8 (13-Mar-2008)
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index
filetype needs_recovery sparse_super large_file
Filesystem OS type:       Linux
Inode count:              121372672
Block count:              485490688
Reserved block count:     24274534
Free blocks:              260715390
Free inodes:              114407582
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      908
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              128
RAID stripe width:        256
First inode:              11
Inode size:               256
Journal inode:            8
Default directory hash:   tea
Journal backup:           inode blocks
Journal size:             128M
root at somebox:~# mount | egrep "export|gluster"
/dev/mapper/vg01-cluster on /usr/local/export type ext3 (rw,noatime,reservation)
glusterfs on /mnt/cluster type fuse
(rw,nosuid,nodev,allow_other,default_permissions,max_read=1048576)
Tom Lahti wrote:
> I have 20 million+ files on ext3 with dir_index and it's rocket fast to
> locate any file, even when not cached.  "ls -l" in any random directory is
> practically instant.
OK, it's only 12 million files.  Sue me :P
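For anyone wanting to check the same thing on their own ext3 backend, a rough
sketch of verifying and enabling dir_index follows. It runs against a scratch
image file (paths are hypothetical, so no real device is touched); on a live
filesystem you would point tune2fs at the device and run e2fsck -fD only while
it is unmounted:

```shell
# Sketch: verify/enable ext3 dir_index (hashed b-tree directories)
# on a throwaway image file. IMG is a hypothetical path.
IMG=/tmp/dirindex-demo.ext3
truncate -s 64M "$IMG"
mkfs.ext3 -q -F "$IMG"

# Show the feature list; dir_index is on by default with modern e2fsprogs.
tune2fs -l "$IMG" | grep -i 'filesystem features'

# Enable it explicitly (a no-op if already set)...
tune2fs -O dir_index "$IMG"

# ...and rebuild hash indexes for directories that already existed
# (on a real device: unmount first).
e2fsck -fD -y "$IMG"
```

Note that turning the flag on only indexes directories created afterwards; the
e2fsck -D pass is what rebuilds the existing ones.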
By the way, I am re-exporting this with Samba and beating the Windows 2003
servers for performance, both write and read (read in particular) ;)
-- 
-- ============================
   Tom Lahti
   BIT Statement LLC
   (425)251-0833 x 117
   http://www.bitstatement.net/
-- ============================
    
    