[Gluster-users] Is that iozone result normal?
Kirby Zhou
kirbyzhou at gmail.com
Sun Dec 14 17:36:33 UTC 2008
Five server nodes and one client node are connected by gigabit Ethernet.
#] iozone -r 32k -r 512k -s 8G
Throughput in KB/s:
              KB  reclen    write  rewrite     read   reread
         8388608      32    10559     9792    62435    62260
         8388608     512    63012    63409    63409    63138
It seems the 32k write/rewrite performance is very poor, which is unlike a local
file system. On a local file system, a difference this large only shows up in
the random read/write stage.
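For a closer comparison with a local file system, it may help to rerun iozone
with the random tests included and with fsync()/close() time counted, so that
client-side caching does not hide the real write cost. A possible invocation
(standard iozone flags; the sizes are simply carried over from the run above):
#] iozone -e -c -i 0 -i 1 -i 2 -r 32k -r 512k -s 8G
   # -i 0 / -i 1 / -i 2 : sequential write/rewrite, read/reread, random read/write
   # -e and -c          : include fsync() and close() in the timing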
========================
Server conf:
========================
volume brick1-raw
type storage/posix # POSIX FS translator
option directory /exports/disk1 # Export this directory
end-volume
volume brick1
type performance/io-threads
subvolumes brick1-raw
option thread-count 16
option cache-size 256m
end-volume
volume brick2-raw
type storage/posix # POSIX FS translator
option directory /exports/disk2 # Export this directory
end-volume
volume brick2
type performance/io-threads
subvolumes brick2-raw
option thread-count 16
option cache-size 256m
end-volume
volume brick-ns
type storage/posix # POSIX FS translator
option directory /exports/ns # Export this directory
end-volume
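The protocol/server block that actually exports these bricks is not shown
above; a minimal sketch of what it could look like for this layout follows
(the volume name and auth settings are assumptions; 1.3.x releases spell the
auth option auth.ip.*, later releases auth.addr.*):

volume server
type protocol/server
option transport-type tcp/server   # plain "tcp" on newer releases
subvolumes brick1 brick2 brick-ns
option auth.ip.brick1.allow *      # assumed: allow any client
option auth.ip.brick2.allow *
option auth.ip.brick-ns.allow *
end-volume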
========================
Client conf:
========================
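Similarly, the protocol/client volumes that the unify volume below refers to
(remote1-brick1 through remote5-brick2, plus the afr-ns0 namespace) are not
included in the post. Each presumably looks something like this sketch (the
host address is hypothetical; 1.3.x uses tcp/client, newer releases plain tcp):

volume remote1-brick1
type protocol/client
option transport-type tcp/client
option remote-host 192.168.1.1     # hypothetical address of server 1
option remote-subvolume brick1
end-volume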
volume unify0-raw
type cluster/unify
subvolumes remote1-brick1 remote1-brick2 remote2-brick1 remote2-brick2 remote3-brick1 remote3-brick2 remote4-brick1 remote4-brick2 remote5-brick1 remote5-brick2
option namespace afr-ns0
option scheduler alu
option alu.limits.min-free-disk 5%
option alu.order disk-usage:open-files-usage
option alu.disk-usage.entry-threshold 1GB # Kick in if the discrepancy
in disk-usage between volumes is more than 1GB
option alu.disk-usage.exit-threshold 60MB # Don't stop writing to the least-used volume until the discrepancy drops to 964MB (1GB - 60MB)
option alu.open-files-usage.entry-threshold 1024 # Kick in if the discrepancy in open files is 1024
option alu.open-files-usage.exit-threshold 32 # Don't stop until 992 files have been written to the least-used volume
end-volume
volume unify0-io-cache
type performance/io-cache
option cache-size 256MB # default is 32MB
option page-size 1MB # default is 128KB
#option priority *.h:3,*.html:2,*:1 # default is '*:0'
#option force-revalidate-timeout 2 # default is 1
subvolumes unify0-raw
end-volume
volume unify0-writebehind
type performance/write-behind
option aggregate-size 1MB # default is 0bytes
#option window-size 3MB # default is 0bytes
option flush-behind on # default is 'off'
subvolumes unify0-io-cache
end-volume
volume unify0
type features/fixed-id
option fixed-uid 99
option fixed-gid 99
subvolumes unify0-io-cache
end-volume
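One detail worth noting: unify0-writebehind is defined above but never
referenced, because the top-level unify0 volume stacks directly on
unify0-io-cache. As written, write-behind (and its 1MB aggregation, which
matters most for the 32k records) is not in the data path at all. If that is
unintentional, the top volume would have to reference the write-behind volume
instead, roughly:

volume unify0
type features/fixed-id
option fixed-uid 99
option fixed-gid 99
subvolumes unify0-writebehind   # put write-behind into the stack
end-volume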