[Gluster-users] AFR Config sanity check?

Daniel Jordan Bambach dan at lateral.net
Tue Jun 16 09:10:03 UTC 2009


Hiya,

I'm in the process of planning a migration of our servers from
GlusterFS 1.3.9 to 2.0.1.

We run a simple set-up with two machines, each acting as both server
and client, to mirror data between two webservers. They have come
under a lot more load recently, and GlusterFS has been locking up our
Apache processes in state D a lot, hence my desire to test v2.

I have configured both servers with a config much like the one below
('srv2' is replaced with 'srv1' on the other machine), and wondered
whether the group has any further insight on improving performance and
reliability in my rather simple set-up.

Also, is there any reason I can't reuse the existing (reasonably
large) /home/export directories from GlusterFS 1.3.9 for a 2.0
install?
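
Before reusing the old export I was planning to eyeball the extended
attributes GlusterFS 1.3.9 left behind on the files. A quick Python
sketch I put together for that (the default path is just my export
location; the trusted.* keys are only visible to root):

```python
# List extended attributes on files under the old export directory.
# Assumptions: Linux, Python 3.3+, xattr-capable filesystem; run as
# root if you want to see the trusted.* keys GlusterFS uses.
import os
import sys

def walk_xattrs(root):
    """Yield (path, [xattr names]) for every entry under root that has any."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                attrs = os.listxattr(path, follow_symlinks=False)
            except OSError:
                attrs = []  # vanished file, unsupported fs, etc.
            if attrs:
                yield path, attrs

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/home/export"
    for path, attrs in walk_xattrs(root):
        print(path, attrs)
```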

Any tips gratefully received.
Dan.

volume posix
  type storage/posix
  option directory /home/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
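
One thing I wasn't sure about: 'auth.addr.brick.allow *' lets any
host that can reach the port mount the brick. Since only the two
webservers need access, would it be better to restrict it to their
addresses? E.g. (example IPs, obviously):

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 10.0.0.1,10.0.0.2
  subvolumes brick
end-volume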

volume localhost
  type protocol/client
  option transport-type tcp
  option remote-host localhost
  option remote-subvolume brick
end-volume

volume srv2
  type protocol/client
  option transport-type tcp
  option remote-host srv2
  option remote-subvolume brick
end-volume

volume afr
  type cluster/replicate
  subvolumes brick srv2
  option read-subvolume brick
end-volume
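
Incidentally, the 'localhost' client volume above ends up
unreferenced, since replicate points at the local 'brick' directly.
I wasn't sure which is preferred; the alternative I considered was:

volume afr
  type cluster/replicate
  subvolumes localhost srv2
  option read-subvolume localhost
end-volume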

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes afr
end-volume

volume cache
  type performance/io-cache
  option cache-size 128MB
  option priority *.pyc:4,*.html:3,*.php:2,*:1
  subvolumes writebehind
end-volume
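
On the performance side, I also wondered whether explicitly sizing
the io-threads pool would help under the Apache load, i.e. something
like this in place of the 'brick' volume above (the thread-count
option name is my reading of the 2.0 docs, so please correct me if
it's wrong):

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume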
