[Gluster-users] Slow write performance

Heine Andersen heine.andersen at gmail.com
Wed May 27 06:32:23 UTC 2009


Hi,

I'm evaluating GlusterFS 2.0.1 on VMware ESX, and I think the write
performance is poor. Writing a 100 MB file takes 56 seconds, while an scp of
the same file to one of the servers takes less than 5 seconds. The OS is
Ubuntu 8.04.1.

"Benchmark"

time cp /tmp/bigfile.100m /gluster/
real    0m56.763s
user    0m0.000s
sys     0m0.292s

time scp /tmp/bigfile.100m host1:/tmp/tester
bigfile.100m
100%  100MB  25.0MB/s   00:04

real    0m4.019s
user    0m0.492s
sys     0m2.796s
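
(As a sketch, to rule out local page-cache effects in the cp timing, a direct
write test along these lines could be used; /gluster is the mount point from
the cp test above, and conv=fdatasync makes dd flush the data before it
reports a time.)

dd if=/dev/zero of=/gluster/ddtest.100m bs=1M count=100 conv=fdatasync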


Config:

#Server:
volume posix
  type storage/posix
  option directory /data
end-volume

volume locks
  type features/locks
  option mandatory-locks on
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
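
(For reference, each brick runs a server volfile like the one above via
glusterfsd; the volfile path below is only an assumed example.)

glusterfsd -f /etc/glusterfs/glusterfsd-server.vol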


#Client:
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host host1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host host2
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host host3
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2 remote3
end-volume

volume readahead
  type performance/read-ahead
  option page-size 1MB
  option page-count 2
  subvolumes replicate
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 1MB
  option window-size 2MB
  option flush-behind off
  subvolumes readahead
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
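
(The client side is mounted from this volfile with the glusterfs binary; the
volfile path below is only an assumed example, and the mount point matches
the benchmark above.)

glusterfs -f /etc/glusterfs/glusterfs-client.vol /gluster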

Regards,
Heine