[Gluster-devel] Speed problems and Gluster commercial support

skimber nabble at simonkimber.co.uk
Sat Jul 26 09:51:10 UTC 2008


Hi Everyone,

We have recently set up GlusterFS on two server and two client machines using
AFR and Unify.

It seems to work correctly, but performance is horribly slow and certainly of
no use in our production environment.

ls -l commands take far too long: directories containing 100 or so files take
at least five minutes to list.
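
To reproduce the timing, something along these lines (the mount point here is
illustrative, not our actual path):

  # time a listing on the glusterfs mount
  time ls -l /mnt/glusterfs/somedir

  # for comparison, the same directory read straight from a server's backend export
  time ls -l /data/export/somedir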

An rsync from a remote server, of mostly small files under 30 KB, runs at about
one file every three seconds, compared with about ten files per second when I
rsync to the client's local disk instead.
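
The comparison is along these lines (host and paths are illustrative):

  # rsync onto the glusterfs mount: roughly 1 file every 3 seconds
  rsync -av remotehost:/srv/files/ /mnt/glusterfs/files/

  # the same transfer onto the client's local disk: roughly 10 files per second
  rsync -av remotehost:/srv/files/ /var/tmp/files/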

We are using glusterfs-1.3.9 and fuse-2.7.3glfs10 on Debian Etch; the client
and server configs are included at the bottom of this message.  Other than
lots of lines like the following (which somebody else has previously mentioned
shouldn't be anything to worry about), I can't see anything relevant in any of
the logs:

2008-07-15 14:19:41 E [posix.c:1984:posix_setdents] brick-ns: Error creating
file /data/export-ns/mydata/myfile.txt with mode (0100644)

Any advice or suggestions would be greatly appreciated! 

On another note, I have sent a couple of requests for information about Z
Research's commercial support services for Gluster but have had no response.
I have also tried phoning them several times but only ever get their
voicemail.  Can anybody here vouch for their support services?  My experience
so far hasn't inspired confidence.

Many thanks 

Simon




Configs:

The server config looks like this: 

volume posix
  type storage/posix               # on-disk backend for the data brick
  option directory /data/export
end-volume

volume plocks
  type features/posix-locks        # POSIX locking on top of the backend
  subvolumes posix
end-volume

volume brick
  type performance/io-threads      # worker threads for the data brick
  option thread-count 4
  subvolumes plocks
end-volume

volume brick-ns
  type storage/posix               # separate backend for the unify namespace
  option directory /data/export-ns
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.brick.allow *     # no IP restrictions on either brick
  option auth.ip.brick-ns.allow *
  subvolumes brick brick-ns
end-volume
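
(For completeness, each server runs this spec with glusterfsd; the spec file
path here is illustrative:)

  glusterfsd -f /etc/glusterfs/glusterfs-server.vol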



And the client config looks like this: 


volume brick1
 type protocol/client
 option transport-type tcp/client     # for TCP/IP transport
 option remote-host data01            # hostname of the first server
 option remote-subvolume brick        # name of the remote volume
end-volume

volume brick2 
 type protocol/client 
 option transport-type tcp/client 
 option remote-host data02 
 option remote-subvolume brick 
end-volume 

volume brick-ns1 
 type protocol/client 
 option transport-type tcp/client 
 option remote-host data01 
 option remote-subvolume brick-ns  # Note the different remote volume name. 
end-volume 

volume brick-ns2 
 type protocol/client 
 option transport-type tcp/client 
 option remote-host data02 
 option remote-subvolume brick-ns  # Note the different remote volume name. 
end-volume 

volume afr1
 type cluster/afr
 subvolumes brick1 brick2         # mirror the data bricks across both servers
end-volume

volume afr-ns
 type cluster/afr
 subvolumes brick-ns1 brick-ns2   # mirror the namespace bricks
end-volume

volume unify
 type cluster/unify
 option namespace afr-ns          # the mirrored namespace volume
 option scheduler rr
 subvolumes afr1                  # single mirrored data volume
end-volume

volume readahead 
  type performance/read-ahead 
  option page-size 128kB        # 256KB is the default option 
  option page-count 4           # 2 is default option 
  option force-atime-update off # default is off 
  subvolumes unify 
end-volume 

volume writebehind 
  type performance/write-behind 
  option aggregate-size 1MB # default is 0bytes 
  option flush-behind on    # default is 'off' 
  subvolumes readahead 
end-volume 

volume io-cache 
  type performance/io-cache 
  option cache-size 64MB             # default is 32MB 
  option page-size 1MB               # 128KB is default option 
  option priority *:0                # default is '*:0' 
  option force-revalidate-timeout 2  # default is 1 
  subvolumes writebehind 
end-volume
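
(Each client then mounts this spec in the usual way; the paths here are
illustrative:)

  glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs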