[Gluster-devel] about afr
nicolas prochazka
prochazka.nicolas at gmail.com
Mon Jan 12 15:49:21 UTC 2009
Hi.
I've set up the following configuration to test GlusterFS:
+ 2 servers (A and B)
- each running the glusterfsd server (glusterfs--mainline--3.0--patch-842)
- and the glusterfs client
(server conf file below)
+ 1 server (C) in client-only mode.
My issue:
If C opens a big file with this client configuration and I then stop server A (or
B), the gluster mount point on server C seems to block; I cannot do 'ls -l', for
example.
Is this normal? Since C opened its file on A or B, does it block when that
server goes down?
I thought that with client-side AFR the client could reopen the file on the other
server; am I wrong?
Should I use the HA translator (see the sketch after the client config below)?
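To illustrate what I expected: my understanding (perhaps wrong) is that each
protocol/client volume can be given a transport timeout, so that calls to a dead
server fail after a bounded delay instead of blocking forever. Something like the
following, where the transport-timeout option and its value of 10 seconds are only
my assumption from older documentation:
volume brick_10.98.98.1
type protocol/client
option transport-type tcp/client
option remote-host 10.98.98.1
option remote-subvolume brick
option transport-timeout 10 # assumption: seconds to wait before failing pending calls to an unreachable server
end-volume
(and the same option on brick_10.98.98.2)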
Regards,
Nicolas Prochazka.
volume brickless
type storage/posix
option directory /mnt/disks/export
end-volume
volume brick
type features/posix-locks
option mandatory on # enables mandatory locking on all files
subvolumes brickless
end-volume
volume server
type protocol/server
subvolumes brick
option transport-type tcp
option auth.addr.brick.allow 10.98.98.*
end-volume
---------------------------
client config
volume brick_10.98.98.1
type protocol/client
option transport-type tcp/client
option remote-host 10.98.98.1
option remote-subvolume brick
end-volume
volume brick_10.98.98.2
type protocol/client
option transport-type tcp/client
option remote-host 10.98.98.2
option remote-subvolume brick
end-volume
volume last
type cluster/replicate
subvolumes brick_10.98.98.1 brick_10.98.98.2
end-volume
volume iothreads
type performance/io-threads
option thread-count 2
option cache-size 32MB
subvolumes last
end-volume
volume io-cache
type performance/io-cache
option cache-size 1024MB # default is 32MB
option page-size 1MB #128KB is default option
option force-revalidate-timeout 2 # default is 1
subvolumes iothreads
end-volume
volume writebehind
type performance/write-behind
option aggregate-size 256KB # default is 0bytes
option window-size 3MB
option flush-behind on # default is 'off'
subvolumes io-cache
end-volume
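For the HA translator question above, this is roughly the layering I had in mind;
it is only a guess, and I do not know whether cluster/ha needs extra options or
whether it should be used instead of, or on top of, cluster/replicate:
volume ha
type cluster/ha
subvolumes brick_10.98.98.1 brick_10.98.98.2 # guess: HA fails over between its subvolumes
end-volume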