[Gluster-users] behavior of ALU Scheduler
Deian Chepishev
dchepishev at nexbrod.com
Tue Oct 21 11:57:31 UTC 2008
Hello,
I have one question about the ALU scheduler.
If, for example, I have one unify volume using the ALU scheduler with
the following config:
volume unify
  type cluster/unify
  option namespace afr-ns
  option scheduler alu # use the ALU scheduler
  option alu.limits.min-free-disk 3% # Don't create files on a volume with less than 3% free disk space
  option alu.limits.max-open-files 10000 # Don't create files on a volume with more than 10000 files open
  option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
  option alu.disk-usage.entry-threshold 300GB # Kick in if the discrepancy in disk usage between volumes is more than 300GB
  option alu.disk-usage.exit-threshold 100GB # Don't stop writing to the least-used volume until the discrepancy is down to 200GB
  option alu.open-files-usage.entry-threshold 1024 # Kick in if the discrepancy in open files is 1024
  option alu.open-files-usage.exit-threshold 32 # Don't stop until 992 files have been written to the least-used volume
  option alu.stat-refresh.interval 10sec # Refresh the statistics used for decision-making every 10 seconds
  subvolumes brick1-stor01 brick1-stor02
end-volume
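To make the disk-usage threshold concrete, here is a rough sketch of how one could check whether the 300GB entry threshold is currently exceeded between the two bricks. The hostnames stor01/stor02 and the backend export path /data/export are only assumptions for illustration; the real paths come from the server volfiles.

#!/bin/bash
# Hypothetical check of the disk-usage discrepancy between the two bricks.
# Adjust hostnames and the export path to match your setup.
free1=$(ssh stor01 "df -B1 --output=avail /data/export | tail -1")
free2=$(ssh stor02 "df -B1 --output=avail /data/export | tail -1")

# Absolute difference in free space, in bytes
diff=$(( free1 > free2 ? free1 - free2 : free2 - free1 ))

# 300GB entry threshold from the volfile, expressed in bytes
threshold=$(( 300 * 1024 * 1024 * 1024 ))

if [ "$diff" -gt "$threshold" ]; then
    echo "Discrepancy exceeds the entry threshold; ALU disk-usage scheduling should be active"
else
    echo "Discrepancy is below the entry threshold"
fi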
And I have the following directory structure visible from the client:
/mnt/gfs/test1
    testfile1.dat
    testfile2.dat
/mnt/gfs/test2
In directory test1 there are two files, testfile1.dat and
testfile2.dat, which are physically located as follows:
testfile1.dat on stor01
testfile2.dat on stor02
If the space on the bricks is such that the ALU scheduler is in an
active state (in this particular example the free-space difference is
more than 300GB and stor02 has more free space than stor01),
what will happen if I execute the following command?
mv /mnt/gfs/test1/* /mnt/gfs/test2
Is it going to physically move the files from stor01 to stor02, or
will it leave them on the same server and just do a local filesystem
move rather than copying the files over the network?
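In other words, this is the check I have in mind after the move. The backend export path /data/export is again just an assumption for illustration:

# Run the move on the client
mv /mnt/gfs/test1/* /mnt/gfs/test2

# Then look at the backend export directories directly on each brick
# (export path is hypothetical; use whatever the server volfiles define)
ssh stor01 "ls -l /data/export/test2"
ssh stor02 "ls -l /data/export/test2"
# Whichever brick lists testfile1.dat under test2 is where the data
# physically lives after the move.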
Regards,
Deian