[Gluster-devel] Confused on AFR, where does it happen client or server
Brandon Lamb
brandonlamb at gmail.com
Tue Jan 8 04:15:50 UTC 2008
I have been reading through old mails on this list and the wiki, and I am still confused about which machine actually performs the writes in an AFR setup.
Say I have two "servers" (192.168.0.10 and 192.168.0.11), each with this config:
-------------
volume posix
type storage/posix
option directory /mnt/raid/gfs
end-volume

volume brick
type features/posix-locks
subvolumes posix
end-volume

volume server
type protocol/server
subvolumes brick
option transport-type tcp/server # for TCP/IP transport
option client-volume-filename /etc/glusterfs/glusterfs-client.vol
option auth.ip.brick.allow 192.168.0.*
end-volume
------------
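For what it's worth, this is how I start the daemon on each server; I am going from memory, so the exact invocation may be slightly off:
------------
# load the server spec file above and run the daemon
# (the spec file path is just where I happen to keep it)
glusterfsd -f /etc/glusterfs/glusterfs-server.vol
------------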
and one client (192.168.0.20) with this config:
------------
### Add client feature and attach to remote subvolume of server1
volume brick1
type protocol/client
option transport-type tcp/client # for TCP/IP transport
option remote-host 192.168.0.10 # IP address of the remote brick
option remote-subvolume brick # name of the remote volume
end-volume
### Add client feature and attach to remote subvolume of server2
volume brick2
type protocol/client
option transport-type tcp/client # for TCP/IP transport
option remote-host 192.168.0.11 # IP address of the remote brick
option remote-subvolume brick # name of the remote volume
end-volume
### Mirror brick1 and brick2 with AFR
volume afr1
type cluster/afr
subvolumes brick1 brick2
end-volume
### Add IO Threads
volume iothreads
type performance/io-threads
option thread-count 4
option cache-size 32MB
subvolumes afr1
end-volume
### Add IO Cache
volume io-cache
type performance/io-cache
option cache-size 32MB # default is 32MB
option page-size 1MB # default is 128KB
option priority *.php:3,*.htm:2,*:1 # default is '*:0'
option force-revalidate-timeout 2 # default is 1
subvolumes iothreads
end-volume
### Add writebehind feature
volume writebehind
type performance/write-behind
option aggregate-size 1MB
option flush-behind on
subvolumes io-cache
end-volume
### Add readahead feature
volume readahead
type performance/read-ahead
option page-size 256KB
option page-count 16 # cache per file = (page-count x page-size)
subvolumes writebehind
end-volume
------------
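And on the client I mount it with something like this (again from memory, and the mount point is just my example):
------------
# mount the client spec above; /mnt/glusterfs is just where I put it
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
------------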
So when my client machine copies a 10 MB file, does the client send the file to both server1 and server2 (so 20 MB of outbound traffic from the client), or do the servers figure out the replication between themselves (10 MB from the client to one server, then another 10 MB from server to server)?
Is there such a thing as server-side AFR? My question really amounts to this: if I have 10 servers (data nodes) and 2 clients, how can I move the load of replicating files off the clients and onto the servers? Is that even possible?
I would think that in a mail cluster setup, for example, I would not want my mail servers (the clients here) carrying the extra load of handling AFR; I would expect each of them to send a write to a single server and have the servers handle the replication between themselves, something like the sketch below.
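If server-side AFR does exist, this is roughly what I would guess the config on 192.168.0.10 might look like, with 192.168.0.11 mirroring it (completely untested, just my guess at how the translators would stack):
------------
### Local storage on this server (192.168.0.10)
volume brick
type storage/posix
option directory /mnt/raid/gfs
end-volume

### Connect to the peer server's raw brick (192.168.0.11)
volume remote-brick
type protocol/client
option transport-type tcp/client
option remote-host 192.168.0.11
option remote-subvolume brick
end-volume

### Replicate to the local brick and the peer, on the server side
volume afr
type cluster/afr
subvolumes brick remote-brick
end-volume

### Export the raw brick (for the peer) and the afr volume (for clients)
volume server
type protocol/server
option transport-type tcp/server
subvolumes afr brick
option auth.ip.afr.allow 192.168.0.*
option auth.ip.brick.allow 192.168.0.*
end-volume
------------
Then each client would mount the afr volume from just one of the servers, and the 10 MB copy would only leave the client once?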
Or am I completely misunderstanding something?