[Gluster-devel] glusterfs 2.0.0rc1 replicate write performance problem

Titov Alexander titoff.a at gmail.com
Tue Feb 17 22:39:00 UTC 2009


Hello!

I have configured glusterfs with replicate, but I am seeing a significant
performance problem.

With the replicate configuration, writing a 3 GB test file to glusterfs
takes around 5 minutes, i.e. roughly 10 MB/s. It's a disaster. I have two
machines with software RAID and a gigabit crossover link between them. I
also created an NFS-like configuration of glusterfs (client and server on
different machines, no replication), and the same file is written in 51
seconds, roughly 60 MB/s.
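
For clarity, the NFS-like client configuration is essentially just a single
protocol/client volume pointing at one server, with the same performance
translators on top (a sketch, not the exact file I used):

volume server1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.1
  option remote-port 6996
  option username user
  option password *******
  option remote-subvolume storage
end-volume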

My server config:
volume posix
  # on-disk backend
  type storage/posix
  option directory /home/storage
end-volume

volume locks
  # POSIX locking on top of the backend
  type features/locks
  subvolumes posix
end-volume

volume storage
  # eight I/O worker threads
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  # export "storage" over TCP with login authentication
  type protocol/server
  option transport-type tcp
  subvolumes storage
  option auth.login.storage.allow user
  option auth.login.user.password ******
end-volume

My client config:
volume server1
  # connection to the first replica
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.1
  option remote-port 6996
  option username user
  option password *******
  option remote-subvolume storage
end-volume

volume server2
  # connection to the second replica
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.2
  option remote-port 6996
  option username user
  option password *******
  option remote-subvolume storage
end-volume

volume replicate
  # mirror every write to both servers
  type cluster/replicate
  subvolumes server1 server2
end-volume

volume writebehind
  # buffer writes above replicate
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  option flush-behind on
  subvolumes replicate
end-volume

volume cache
  # 512 MB read cache, topmost translator
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
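
One thing I intend to try, in case the translator stacking matters (a
sketch, untested; the wb1/wb2 volume names are placeholders I made up):
loading one write-behind instance per remote volume, below replicate, so
that writes to the two replicas can be buffered independently (this would
replace the replicate and writebehind blocks above):

volume wb1
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  option flush-behind on
  subvolumes server1
end-volume

volume wb2
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  option flush-behind on
  subvolumes server2
end-volume

volume replicate
  type cluster/replicate
  subvolumes wb1 wb2
end-volume

If replicate in 2.0.0rc1 acknowledges a write only after both servers have
responded, that per-write round trip might explain the gap with the
single-server setup; I may be wrong about the internals, so corrections are
welcome.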

The FUSE client is the latest version, 2.7.4gfs11.

OS: 2.6.24-19 x86_64 GNU/Linux

Please help me with this problem.
-- 
Titov Alexander