[Gluster-users] glusterfs performance

Graeme graeme at sudo.ca
Fri Sep 19 19:42:41 UTC 2008


Anand Avati wrote:
> Please post your spec files and the commands used to benchmark

Alright, just a sec, as I'm running another benchmark with write-behind's
aggregate-size set to 4MB (which seems to be the maximum)... Alright,
that made a bit of an improvement, but it's still slower than 1.3.10
with aggregate-size set to 1MB on the server side, and also still a
*lot* slower than the raw write speeds of the systems involved
(~70MB/sec on server1, ~35MB/sec on server2):

1.3.10: write=20.8MB/sec, read=36.1MB/sec, rewrite=12.5MB/sec
1.4pre5: write=19.3MB/sec, read=61.48MB/sec, rewrite=10.3MB/sec


Server config (on both machines) is:

--
volume unify-ds-brick
  type storage/posix
  option directory /srv/gluster/unify-ds
end-volume

volume unify-ds-lock
  type features/posix-locks
  subvolumes unify-ds-brick
end-volume

volume unify-ds
  type performance/io-threads
  option thread-count 2 # <= # logical CPUs
  option cache-size 64MB
  subvolumes unify-ds-lock
end-volume

volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  subvolumes unify-ds unify-ns
  option auth.addr.unify-ds.allow server1ip,server2ip,127.0.0.1
  option auth.addr.unify-ns.allow server1ip,server2ip,127.0.0.1
end-volume
--
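(For anyone reproducing this: a server spec like the above would typically be loaded by pointing glusterfsd at the spec file. The spec path below is my assumption, not something from my actual setup.)

```shell
# Hypothetical invocation: start the GlusterFS server daemon with the
# spec above. The path /etc/glusterfs/server.vol is an assumed location.
glusterfsd -f /etc/glusterfs/server.vol
```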

Client config (on server1) is:

--
volume mirror1-brick
  type protocol/client
  option transport-type tcp/client      # for TCP/IP transport
  option remote-host 127.0.0.1
  option remote-subvolume unify-ds      # name of the remote volume
end-volume

volume mirror2-brick
  type protocol/client
  option transport-type tcp/client      # for TCP/IP transport
  option remote-host 69.90.194.208
  option remote-subvolume unify-ds      # name of the remote volume
end-volume

### READ AHEAD ###
volume mirror1-read
  type performance/read-ahead
  option page-size 256kB
  option page-count 4
  subvolumes mirror1-brick
end-volume

volume mirror2-read
  type performance/read-ahead
  option page-size 256kB
  option page-count 4
  subvolumes mirror2-brick
end-volume

### WRITE BEHIND ###
volume mirror1
  type performance/write-behind
  option aggregate-size 4MB
  option flush-behind off
  subvolumes mirror1-read
end-volume

volume mirror2
  type performance/write-behind
  option aggregate-size 4MB
  option flush-behind off
  subvolumes mirror2-read
end-volume

### TOP LEVEL VOLUMES ###
volume mirror
  type cluster/afr
  subvolumes mirror1 mirror2
  option read-subvolume mirror1
end-volume
--
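(Again for reproducers: assuming the client spec is saved somewhere like /etc/glusterfs/client.vol, which is a guess on my part, the mount on /srv/mnt would be done with something like:)

```shell
# Hypothetical mount of the client spec above onto /srv/mnt.
# The spec path is an assumption; the mount point is the one I'm using.
glusterfs -f /etc/glusterfs/client.vol /srv/mnt
```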

And the benchmark I'm running on server1 (after mounting on /srv/mnt) is:
bonnie -s 8G -u nobody -d /srv/mnt/bonnie -f

G
