[Gluster-devel] AFR between two bricks over 3000 miles
nathan at robotics.net
Sat Mar 1 03:18:36 UTC 2008
/etc/sysconfig is only 748K.
Copying /etc/sysconfig to /share/mirror took real 0m55.319s.
Copying /etc/sysconfig to /share took real 0m0.030s.
I expected it to be much faster. :)
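
Client and server run on each host. For anyone following along, a minimal
matching server-side volfile would look something like the one below (the
nyc box is shown; the brick directory and the wide-open auth line are just
placeholders, not my actual settings):

# Server (nyc host, placeholder paths)
volume nyc
type storage/posix
# example brick directory only
option directory /export/nyc
end-volume

volume server
type protocol/server
option transport-type tcp/server
# wide-open auth for illustration; lock this down in practice
option auth.ip.nyc.allow *
subvolumes nyc
end-volume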
# Client
volume nyc
type protocol/client
option transport-type tcp/client
option remote-host 10.11.0.1
option remote-subvolume nyc
end-volume
volume sjc
type protocol/client
option transport-type tcp/client
option remote-host 10.12.0.1
option remote-subvolume sjc
end-volume
volume sjc_iocache
type performance/io-cache
option page-size 256KB
option page-count 2
subvolumes sjc
end-volume
volume sjc_write-behind
type performance/write-behind
option aggregate-size 1MB
option flush-behind on
subvolumes sjc_iocache
end-volume
volume mirror
type cluster/afr
subvolumes nyc sjc_write-behind
end-volume
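
If I am reading the read-subvolume part of Avati's note (quoted below)
correctly, the afr section should also pin reads to the local copy --
something like this, assuming nyc is the local brick on this box:

volume mirror
type cluster/afr
# keep reads on the local brick so only writes have to cross the WAN
option read-subvolume nyc
subvolumes nyc sjc_write-behind
end-volume

That should keep reads off the 3000 mile round trip and leave only writes
waiting on the remote brick.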
><>
Nathan Stratton
nathan at robotics.net
http://www.robotics.net
On Thu, 28 Feb 2008, Anand Avati wrote:
> On the client. You also might want to put this write-behind + io-cache pair
> in the subvolume path of afr that leads towards the remote site alone. Also
> make sure afr does not have the remote site as the first subvolume, and has
> the option read-subvolume <local-volume> so that reads are not scheduled to
> the remote site.
>
> avati
>
> 2008/2/28, nathan at robotics.net <nathan at robotics.net>:
>>
>>
>> On Thu, 28 Feb 2008, Anand Avati wrote:
>>
>>> using a combination of write-behind with io-threads (with option
>>> flush-behind on) prevents the wait of up to N MB of data (where N is
>>> 'option cache-size NMB' of io-threads)
>>
>>
>> Client and server are on each host, should I do this on the client or
>> server?
>>
>> -Nathan
>>
>
>
>
> --
> If I traveled to the end of the rainbow
> As Dame Fortune did intend,
> Murphy would be there to tell me
> The pot's at the other end.
>