[Gluster-users] Gluster 2-Node like DRBD - poor performance and not-redundant?
Jeffery Soo
js at realtechtalk.com
Tue Oct 13 22:49:13 UTC 2009
Hi guys,
I've been playing around with GlusterFS because I was hoping to replace
DRBD with it. Maybe I'm getting something wrong, but it doesn't seem as
robust or dependable as DRBD.
I just want to set up 2 GlusterFS servers that mirror each other for high
availability (I originally thought of this as striping, but I believe the
correct term in GlusterFS is AFR, i.e. replication).
So far I've found the performance very slow (around 3-7 MB/s), and if you
shut down one GlusterFS server, you get "Transport endpoint is not
connected" when trying to write data.
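For reference, this is roughly how I measured the write speed. The mount point is an assumption on my part (/mnt/glusterfs in my case); the snippet defaults to /tmp so the command itself runs anywhere:

```shell
# Set MOUNT to wherever the GlusterFS client is mounted
# (/mnt/glusterfs here is just my guess/example); defaults to /tmp.
MOUNT=${MOUNT:-/tmp}

# Write 100 MB of zeros; conv=fsync makes dd flush to disk before
# reporting, so the MB/s figure it prints is honest.
dd if=/dev/zero of="$MOUNT/gluster-write-test" bs=1M count=100 conv=fsync
```

dd prints the elapsed time and the transfer rate on stderr when it finishes.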
Here are my config files (the client volfile first, then the server's):
### Add client feature and attach to remote subvolume of server1
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.21    # IP address of the remote brick
  option remote-subvolume brick      # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server2
volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.22    # IP address of the remote brick
  option remote-subvolume brick      # name of the remote volume
end-volume

### The file index on server1
volume brick1-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.21    # IP address of the remote brick
  option remote-subvolume brick-ns   # name of the remote volume
end-volume

### The file index on server2
volume brick2-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.22    # IP address of the remote brick
  option remote-subvolume brick-ns   # name of the remote volume
end-volume

# The replicated volume with data
volume afr1
  type cluster/afr
  subvolumes brick1 brick2
end-volume

# The replicated volume with indexes
volume afr-ns
  type cluster/afr
  subvolumes brick1-ns brick2-ns
end-volume

# The unification of all afr volumes (used for > 2 servers)
volume unify
  type cluster/unify
  option scheduler rr                # round robin
  option namespace afr-ns
  subvolumes afr1
end-volume
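A side question while I'm at it: since the comment above says unify is only used for more than 2 servers, could I drop the unify and namespace volumes entirely and make the AFR volume the top of the graph? An untested sketch of what I mean (reusing the brick1/brick2 client volumes from above, no -ns volumes at all):

```
# Hypothetical minimal client graph for a plain 2-node mirror
volume afr1
  type cluster/afr
  subvolumes brick1 brick2
end-volume
```

I'm not sure whether that changes the failover behaviour.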
# file: /etc/glusterfs/glusterfs-server.vol
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume posix-ns
  type storage/posix
  option directory /data/export-ns
end-volume

volume locks-ns
  type features/locks
  subvolumes posix-ns
end-volume

volume brick-ns
  type performance/io-threads
  option thread-count 8
  subvolumes locks-ns
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.brick-ns.allow *
  subvolumes brick brick-ns
end-volume
Any comments/help would be appreciated.
Thank you.