[Gluster-users] Write-behind and IO-threads examples giving warnings

Filipe Maia filipe at xray.bmc.uu.se
Wed Jan 14 12:31:02 UTC 2009


Hi,

I've tried to use the doc/examples/write-behind.vol and io-threads.vol
examples in my unify configuration.
Here's the glusterfs-server.vol:


volume disk
  type storage/posix                   # POSIX FS translator
  option directory /export/data        # Export this directory
end-volume

volume ns
  type storage/posix
  option directory /export/ns
end-volume

###  'IO-threads' translator gives a threading behaviour to File I/O calls.
# All other fops keep their default behaviour. Loading this on the server side
# helps to reduce network contention (which is often mistaken for a GlusterFS
# hang). It can also be loaded on the client side, below write-behind, to
# reduce the latency of a slow network (a rough client-side sketch follows
# after the server vol).
volume iot
  type performance/io-threads
  subvolumes disk
  option thread-count 4 # default value is 1
  option cache-size 16MB # default is 64MB (This is per thread, so configure it
                         # according to your RAM size and thread-count.)
end-volume


### 'Write-behind' translator is a performance booster for write operations. Best
# used on the client side, as its main intention is to reduce the network latency
# incurred by each write operation.

volume brick
  type performance/write-behind
  subvolumes iot
  option flush-behind on    # default value is 'off'
  option window-size 2MB
  option aggregate-size 1MB # default value is 0
end-volume

# Volume name is server
volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.ns.allow *
  subvolumes brick ns
end-volume
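
As an aside, the io-threads comment above also mentions loading it on the
client side below write-behind. If I understand the stacking right, that part
of a client vol would look roughly like this (hostname and volume names are
just placeholders, and I've left out the unify/fuse parts of my client
config):

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server.example.com   # placeholder hostname
  option remote-subvolume brick
end-volume

volume iot-client
  type performance/io-threads
  subvolumes remote
  option thread-count 4
end-volume

volume wb-client
  type performance/write-behind
  subvolumes iot-client   # io-threads loaded below write-behind, as the comment suggests
  option window-size 2MB
end-volume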



I'm getting the following warnings with glusterfs-1.4.0rc7:

tintoretto:~# tail /var/log/glusterfsd.log
2009-01-14 12:22:43 W [write-behind.c:1363:init] brick: aggregate-size is not zero, disabling flush-behind
2009-01-14 12:22:43 W [glusterfsd.c:416:_log_if_option_is_invalid] iot: option 'cache-size' is not recognized
tintoretto:~#
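
Reading the warnings literally, I guess I could drop cache-size from iot
(since it is not recognized there) and drop aggregate-size from brick (so
flush-behind stays enabled), roughly like this. This is only my guess at what
the messages want, not something I found in the docs:

volume iot
  type performance/io-threads
  subvolumes disk
  option thread-count 4     # keep the thread count, drop the unrecognized cache-size
end-volume

volume brick
  type performance/write-behind
  subvolumes iot
  option flush-behind on    # only honoured while aggregate-size stays at its default of 0
  option window-size 2MB
end-volume

Both options come straight from the shipped examples, though, so I'm not sure
whether removing them is really the intended fix.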

What am I doing wrong?

Filipe



