[Gluster-users] Some Unify questions
Deian Chepishev
dchepishev at nexbrod.com
Mon Oct 13 08:43:19 UTC 2008
Hello Raghavendra,
Raghavendra G wrote:
> Hi Deian,
>
> On Fri, Oct 10, 2008 at 8:51 PM, Deian Chepishev
> <dchepishev at nexbrod.com> wrote:
>
> Hi guys,
>
> I have a few questions about UNIFY and volume creation.
>
> You will find my config files at the end of this post. I will post my
> questions before the config.
>
> 1. I want to use the writebehind and readahead translators, because I
> think they speed up transfers. Can you please take a look and let me
> know if the config is written correctly.
> I basically do this:
> create one volume from the exported bricks, let's say "unify"
> create another volume named "writebehind" with subvolumes unify
> then create another volume named "readahead" with subvolumes
> writebehind
> then mount the volume named writebehind.
>
>
> If you are using the --volume-name option to glusterfs to attach to
> writebehind, then you are bypassing readahead and hence will not get
> read-ahead functionality. If you want both read-ahead and
> write-behind functionality, do not specify the --volume-name option
> at all (or give readahead as its argument, if you want to use the
> option).
===> I am even more confused by your answer :).
I want a single volume with both translators, readahead and
writebehind, loaded for it. That is why I thought the definition
scheme above would accomplish this. Apparently I was wrong.
What is the correct way to define the volume so that both translators
are loaded for it?
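To make Raghavendra's suggestion concrete: without --volume-name, the client attaches to the topmost volume in the spec file, so the full readahead -> writebehind -> unify stack is in the I/O path. A sketch (the spec-file path and mount point are assumptions, not from the original setup):

```shell
# Attach to the topmost volume ("readahead"), so both performance
# translators are active (spec-file path is assumed):
glusterfs -f /etc/glusterfs/client.vol /mnt/gluster

# Equivalent, naming the top of the stack explicitly:
glusterfs -f /etc/glusterfs/client.vol --volume-name readahead /mnt/gluster

# This, by contrast, attaches below read-ahead and bypasses it:
# glusterfs -f /etc/glusterfs/client.vol --volume-name writebehind /mnt/gluster
```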
>
>
> I have the following server and client files:
>
>
> volume brick
> type storage/posix
> option directory /storage/gluster-export/data/
> end-volume
>
> volume brick-ns
> type storage/posix
> option directory /storage/gluster-export/ns
> end-volume
>
> ### Add network serving capability to above brick.
>
> volume server
> type protocol/server
> option transport-type tcp/server
> subvolumes brick brick-ns
> option auth.ip.brick.allow 10.1.124.*
> option auth.ip.brick-ns.allow 10.1.124.*
> end-volume
>
> =========================
>
> Client:
>
> volume brick1-stor01
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.1.124.200
> option remote-subvolume brick
> end-volume
>
> volume brick1-stor02
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.1.124.201
> option remote-subvolume brick
> end-volume
>
> volume brick-ns1
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.1.124.200
> option remote-subvolume brick-ns
> end-volume
>
>
> volume brick-ns2
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.1.124.201
> option remote-subvolume brick-ns
> end-volume
>
> volume afr-ns
> type cluster/afr
> subvolumes brick-ns1 brick-ns2
> end-volume
>
> volume unify
> type cluster/unify
> option namespace afr-ns
> option scheduler alu # use the ALU scheduler (only one scheduler may be set)
> option alu.limits.min-free-disk 5% # Don't create files on a
> volume with less than 5% free disk space
> option alu.limits.max-open-files 10000 # Don't create files on a
> volume with more than 10000 open files
> option alu.order
> disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
> option alu.disk-usage.entry-threshold 100GB # Kick in when the
> disk-usage discrepancy between volumes exceeds 100GB
> option alu.disk-usage.exit-threshold 50MB # Don't stop writing to
> the least-used volume until the discrepancy has shrunk by 50MB
> option alu.open-files-usage.entry-threshold 1024 # Kick in when the
> discrepancy in open files reaches 1024
> option alu.open-files-usage.exit-threshold 32 # Don't stop until 992
> files have been written to the least-used volume
> option alu.stat-refresh.interval 10sec # Refresh the statistics
> used for decision-making every 10 seconds
> subvolumes brick1-stor01 brick1-stor02
> end-volume
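As a side note on how the entry/exit thresholds above interact: they form a hysteresis band, so the scheduler does not flap between bricks near the threshold. A rough sketch in Python of that behaviour (my own simplified model and helper names, not GlusterFS code; the fallback to the first subvolume stands in for the normal alu.order decision):

```python
GB = 1024 ** 3
MB = 1024 ** 2

ENTRY = 100 * GB  # alu.disk-usage.entry-threshold
EXIT = 50 * MB    # alu.disk-usage.exit-threshold

def pick_volume(usage, balancing):
    """Pick a subvolume index for the next file (hypothetical helper).

    usage: bytes used per subvolume; balancing: current hysteresis state.
    Returns (chosen index, new balancing state).
    """
    gap = max(usage) - min(usage)
    if not balancing and gap > ENTRY:
        balancing = True        # discrepancy exceeded the entry threshold
    elif balancing and gap < ENTRY - EXIT:
        balancing = False       # discrepancy has shrunk by the exit threshold
    if balancing:
        return usage.index(min(usage)), balancing  # favour the emptiest brick
    return 0, balancing         # otherwise fall back to the normal order

# usage example: a 150GB gap triggers balancing toward the emptier brick
vol, state = pick_volume([200 * GB, 50 * GB], balancing=False)
```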
>
> volume writebehind
> type performance/write-behind
> option aggregate-size 512kb # default is 0bytes
> option flush-behind on # default is 'off'
> subvolumes unify
> end-volume
>
> volume readahead
> type performance/read-ahead
> option page-size 512kB
> option page-count 4
> option force-atime-update off
> subvolumes writebehind
> end-volume
>
Thank you.
Regards,
Deian