[Gluster-devel] cache doesn't seem to be taken into account

Sebastien COUPPEY sebastien.couppey at zero9.it
Wed Jan 9 16:23:36 UTC 2008


Hello,

I discovered GlusterFS two days ago and have been testing it since.


My setup:
2 nodes, with AFR defined on the client side
1 client
version 1.3.7 everywhere
100 Mbit/s network

I did some transfer tests with cp:

 Reference test:

 scp laptop -> 1 node
  (128 MB) 11s
  (256 MB) 23s
  (1 GB) 1m 33s


 cp laptop -> 2-node AFR (= cp of a local file onto the mount)
  (128 MB) 23s
  (256 MB) 47s
  (1 GB) 3m 10s

 the client is using 100% of its bandwidth, and since AFR is defined on
 the client side, such results seem normal to me.
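 Rough arithmetic (my own estimate, assuming about 12 MB/s usable on the
 100 Mbit/s link):

  128 MB / 12 MB/s          ~ 11s  (a single copy, matching the scp time)
  same data to 2 replicas   ~ 22s  (close to the 23s measured)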

 reference test:
 cp localfs -> localfs
  (128 MB) 7s

 cp 2-node AFR -> laptop (= cp from the mount to the local filesystem)
  (128 MB) 0m11.864s

2nd try:

 (128 MB) 0m11.878s

and both times, the cp read the file from the same node (node1).
  
I thought that after the first read, the second one would be as fast as
the local filesystem. Also, the whole file is always read from the same
node; I don't get 50% (node1) + 50% (node2) -> 100% of the laptop's bandwidth.
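Rough arithmetic on that read (my own estimate): 128 MB / 11.9s is about
10.8 MB/s, which is close to the 100 Mbit/s wire speed, so the second read
still seems to come over the network rather than from io-cache.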

Is this normal?
I attach my config files; maybe a translator is missing or in the wrong
order in the configuration. For example (just a guess on my part), should
io-cache be loaded on top of the other performance translators, as in the
sketch below?
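
A possible reordering (untested, only to illustrate the question; the
option values are the same as in my spec, only the subvolumes chain
changes so that io-cache sits above read-ahead and write-behind):

 volume readahead
   type performance/read-ahead
   option page-size 1MB
   option page-count 2
   subvolumes unify
 end-volume

 volume writeback
   type performance/write-behind
   option aggregate-size 1MB
   option flush-behind off
   subvolumes readahead
 end-volume

 volume iocache
   type performance/io-cache
   option page-size 1MB
   option cache-size 300MB
   option force-revalidate-timeout 5
   subvolumes writeback
 end-volume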

Thanks for your advice.

 

 
-------------- next part --------------
##############################################
###  GlusterFS Client Volume Specification  ##
##############################################

#### CONFIG FILE RULES:
### "#" is comment character.
### - Config file is case sensitive
### - Options within a volume block can be in any order.
### - Spaces or tabs are used as delimiters within a line.
### - Each option should end within a line.
### - Missing or commented fields will assume default values.
### - Blank/commented lines are allowed.
### - Sub-volumes should already be defined above before referring.

### Add client feature and attach to remote subvolume
volume server1
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 192.168.10.48         # IP address of the remote brick
  option remote-subvolume cmt        # name of the remote volume
end-volume

volume server2
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 192.168.10.54
  option remote-subvolume cmt        # name of the remote volume
end-volume

volume server1-cmt-nsamespace
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 192.168.10.48      # IP address of the remote brick
  option remote-subvolume cmt-ns     # name of the remote volume
end-volume

volume server2-cmt-nsamespace
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 192.168.10.54      # IP address of the remote brick
  option remote-subvolume cmt-ns     # name of the remote volume
end-volume

volume afr-cmt
  type cluster/afr
  subvolumes server1 server2
  option replicate *:2
end-volume

volume afr-cmt-ns
  type cluster/afr
  subvolumes server1-cmt-nsamespace server2-cmt-nsamespace
  option replicate *:2
end-volume

volume unify
  type cluster/unify
  subvolumes afr-cmt
  option namespace afr-cmt-ns
  option scheduler rr
### ** ALU Scheduler Option **
#  option scheduler alu
#  option alu.limits.min-free-disk  5% #%
#  option alu.limits.max-open-files 10000
#  option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
#  option alu.disk-usage.entry-threshold 2GB
#  option alu.disk-usage.exit-threshold  128MB
#  option alu.open-files-usage.entry-threshold 1024
#  option alu.open-files-usage.exit-threshold 32
#  option alu.read-usage.entry-threshold 20 #%
#  option alu.read-usage.exit-threshold 4 #%
#  option alu.write-usage.entry-threshold 20 #%
#  option alu.write-usage.exit-threshold 4 #%
#  option alu.disk-speed-usage.entry-threshold 0 # DO NOT SET IT. SPEED IS CONSTANT!!!.
#  option alu.disk-speed-usage.exit-threshold 0 # DO NOT SET IT. SPEED IS CONSTANT!!!.
#  option alu.stat-refresh.interval 10sec
#  option alu.stat-refresh.num-file-create 10
end-volume

### Add IO-Cache feature
volume iocache
  type performance/io-cache
  #option page-size 256KB
  #option page-count 2
  option page-size 1MB      # 128KB is default
  option cache-size 300MB    # 32MB is default
  option force-revalidate-timeout 5 # 1second is default 
  subvolumes unify
end-volume


### Add readahead feature
volume readahead
  type performance/read-ahead
  option page-size 1MB     # unit in bytes
  option page-count 2       # cache per file  = (page-count x page-size)
  subvolumes iocache
end-volume


### Add writeback feature
volume writeback
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind off
  subvolumes readahead   
end-volume
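
### Resulting translator stack on the mount point (top to bottom), as defined above:
###   writeback -> readahead -> iocache -> unify -> afr-cmt -> server1/server2
###   (afr-cmt-ns over server{1,2}-cmt-nsamespace serves as the unify namespace)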
-------------- next part --------------
##############################################
###  GlusterFS Server Volume Specification  ##
##############################################

#### CONFIG FILE RULES:
### "#" is comment character.
### - Config file is case sensitive
### - Options within a volume block can be in any order.
### - Spaces or tabs are used as delimiters within a line.
### - Multiple values to an option are ':' delimited.
### - Each option should end within a line.
### - Missing or commented fields will assume default values.
### - Blank/commented lines are allowed.
### - Sub-volumes should already be defined above before referring.

### Export volume "cmt" with the contents of the "/home/cmt" directory.
volume cmt
  type storage/posix                   # POSIX FS translator
  option directory /home/cmt        # Export this directory
end-volume

volume cmt-ns
  type storage/posix                   # POSIX FS translator
  option directory /home/cmt-ns        # Export this directory
end-volume


### Add network serving capability to above brick.
volume cmt-server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  subvolumes cmt cmt-ns
  option auth.ip.cmt.allow * # Allow access to "cmt" volume
  option auth.ip.cmt-ns.allow * # Allow access to "cmt-ns" volume
end-volume

