[Gluster-users] Some Unify questions

Raghavendra G raghavendra.hg at gmail.com
Mon Oct 13 09:19:07 UTC 2008


By volumes, I meant translators.

Glusterfs builds its translator graph rooted at the bottom-most (i.e. last-defined)
translator in the volume-specification file. This behaviour can be changed with
the --volume-name option: the graph is then rooted at the translator named as
the option's argument. This is helpful for debugging.

As far as your volume specification file is concerned, it is correct, and you
can obtain both the read-ahead and write-behind functionality by simply
starting glusterfs without the --volume-name option, since the graph is then
already rooted at the read-ahead translator.
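To make the difference concrete, here is a sketch of the two invocations (the spec-file path and mount point are assumed, not taken from the original mail):

```shell
# Normal mount: no --volume-name, so the graph is rooted at the bottom-most
# (last-defined) translator in the spec file -- here "readahead" -- and both
# read-ahead and write-behind are active.
glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs

# Debugging mount: root the graph at a specific translator instead. Anything
# stacked on top of it (here read-ahead and write-behind) is bypassed.
glusterfs -f /etc/glusterfs/client.vol --volume-name unify /mnt/glusterfs
```

Mounting with `--volume-name writebehind` would likewise bypass read-ahead, which is why the question below arose.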

regards,
On Mon, Oct 13, 2008 at 12:43 PM, Deian Chepishev <dchepishev at nexbrod.com> wrote:

> Hello Raghavendra,
>
> Raghavendra G wrote:
> > Hi Deian,
> >
> > On Fri, Oct 10, 2008 at 8:51 PM, Deian Chepishev
> > <dchepishev at nexbrod.com> wrote:
> >
> >     Hi guys,
> >
> >     I have a few questions about UNIFY and volume creation.
> >
> >     You will find my config files at the end of this post. I will post my
> >     questions before the config.
> >
> >     1. I want to use the writebehind and readahead translators, because I
> >     think they speed up transfers. Can you please take a look and let me
> >     know if it is written correctly?
> >     I basically do this:
> >     create one volume from the exported bricks, let's say "unify"
> >     create another volume named "writebehind" with subvolumes unify
> >     then create another volume named "readahead" with subvolumes
> >     writebehind
> >     then mount the volume named writebehind.
> >
> >
> > If you are using  --volume-name option to glusterfs to attach to
> > writebehind, then you are bypassing readahead and hence will not get
> > readahead functionality. If you want to have both read-ahead and
> > write-behind functionalities, do not specify --volume-name option (or
> > give readahead as the argument to the option, if at all you want to
> > use it).
>
> ===> I am even more confused by your answer :).
> I want a single volume with both the readahead and writebehind
> translators loaded. I thought this was accomplished with the definition
> scheme above, but it looks like I am wrong.
>
> What is the correct way to define the volume so that both translators
> are loaded for it?
>
> >
> >
> >     I have the following server and client files:
> >
> >
> >     volume brick
> >      type storage/posix
> >      option directory /storage/gluster-export/data/
> >     end-volume
> >
> >     volume brick-ns
> >      type storage/posix
> >      option directory /storage/gluster-export/ns
> >     end-volume
> >
> >     ### Add network serving capability to above brick.
> >
> >     volume server
> >      type protocol/server
> >      option transport-type tcp/server
> >      subvolumes brick brick-ns
> >      option auth.ip.brick.allow 10.1.124.*
> >      option auth.ip.brick-ns.allow 10.1.124.*
> >     end-volume
> >
> >     =========================
> >
> >     Client:
> >
> >     volume brick1-stor01
> >      type protocol/client
> >      option transport-type tcp/client
> >      option remote-host 10.1.124.200
> >      option remote-subvolume brick
> >     end-volume
> >
> >     volume brick1-stor02
> >      type protocol/client
> >      option transport-type tcp/client
> >      option remote-host 10.1.124.201
> >      option remote-subvolume brick
> >     end-volume
> >
> >     volume brick-ns1
> >      type protocol/client
> >      option transport-type tcp/client
> >      option remote-host 10.1.124.200
> >      option remote-subvolume brick-ns
> >     end-volume
> >
> >
> >     volume brick-ns2
> >      type protocol/client
> >      option transport-type tcp/client
> >      option remote-host 10.1.124.201
> >      option remote-subvolume brick-ns  # Note the different remote
> >     volume name.
> >     end-volume
> >
> >     volume afr-ns
> >      type cluster/afr
> >      subvolumes brick-ns1 brick-ns2
> >     end-volume
> >
> >     volume unify
> >      type cluster/unify
> >      option namespace afr-ns
> >      option scheduler alu   # only one scheduler may be set; use ALU
> >      option alu.limits.min-free-disk  5%      # Don't create files on a
> >     volume with less than 5% free diskspace
> >      option alu.limits.max-open-files 10000   # Don't create files on a
> >     volume with more than 10000 files open
> >      option alu.order
> >     disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
> >      option alu.disk-usage.entry-threshold 100GB   # Kick in if the
> >     discrepancy in disk-usage between volumes exceeds 100GB
> >      option alu.disk-usage.exit-threshold  50MB   # Keep writing to the
> >     least-used volume until the discrepancy drops to 50MB
> >      option alu.open-files-usage.entry-threshold 1024   # Kick in if the
> >     discrepancy in open files reaches 1024
> >      option alu.open-files-usage.exit-threshold 32   # Keep writing to the
> >     least-used volume until the discrepancy drops to 32
> >      option alu.stat-refresh.interval 10sec   # Refresh the statistics
> >     used for decision-making every 10 seconds
> >      subvolumes brick1-stor01 brick1-stor02
> >     end-volume
> >
> >     volume writebehind
> >      type performance/write-behind
> >      option aggregate-size 512kb # default is 0bytes
> >      option flush-behind on    # default is 'off'
> >      subvolumes unify
> >     end-volume
> >
> >     volume readahead
> >      type performance/read-ahead
> >      option page-size 512kB
> >      option page-count 4
> >      option force-atime-update off
> >      subvolumes writebehind
> >     end-volume
> >
>
>
> Thank you.
>
> Regards,
> Deian
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>



-- 
Raghavendra G
