[Gluster-users] performance translators
m.c.wilkins at massey.ac.nz
Thu Oct 23 21:34:23 UTC 2008
Hi,
On Thu, Oct 23, 2008 at 08:49:19AM +0530, Basavanagowda Kanur wrote:
> Wilkins,
> I have added relevant performance translator inline. please go through
> translator options document and change translator parameters according to your
> needs.
Thank you very much, Basavanagowda. It certainly wasn't obvious to me
where each translator should go; you have cleared things up a lot for me.
Thanks
Matt
> On Thu, Oct 23, 2008 at 1:31 AM, <m.c.wilkins at massey.ac.nz> wrote:
>
>
> Hi,
>
> I only heard about GlusterFS last week, so I am still a newbie. I
> have a question about using performance translators, in particular
> in a NUFA setup.
>
> A quick summary of my setup. I have two machines (a third is to be
> added): k9 has two bricks (16T and 2T), orac has one brick of 5T. I
> have used AFR for the namespace. My config is below.
>
> Everything seems to be working OK, but I would like to add in some
> performance translators and I'm not exactly sure where. There are
> five: read-ahead, write-behind, io-threads, io-cache, and booster.
> Which go where? On the server or the client? On each individual
> brick, or above the unify or afr? I have read the docs (that is how
> I've managed to get this far), and I can see how to add one or two
> translators, but not whether I should use all of them or where each
> should go. For instance, I see that io-cache should go on the client
> side, but should it be on each brick, or above the unify?
>
> I know this is quite a big ask, but if someone could read through my
> config and show where all the translators should go, that would be
> great.
>
> Thank you muchly!
>
> Matt
>
> This is the config on k9 (the one on orac is very similar, I won't
> bother showing it here):
>
> volume brick0
> type storage/posix
> option directory /export/brick0
> end-volume
>
>
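> # io-threads above each posix brick gives the brick its own pool of
> # worker threads, so a slow disk operation on one brick does not
> # block the whole server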
> volume iot-0
> type performance/io-threads
> subvolumes brick0
> end-volume
>
>
>
> volume brick1
> type storage/posix
> option directory /export/brick1
> end-volume
>
>
> volume iot-1
> type performance/io-threads
> subvolumes brick1
> end-volume
>
>
>
> volume brick-ns
> type storage/posix
> option directory /export/brick-ns
> end-volume
>
>
> volume iot-ns
> type performance/io-threads
> subvolumes brick-ns
> end-volume
>
>
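> # export the io-threads volumes rather than the raw bricks, so that
> # remote clients go through the threaded path as well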
> volume server
> type protocol/server
> subvolumes iot-0 iot-1 iot-ns
> option transport-type tcp/server
> #option auth.ip.brick0.allow 127.0.0.1,130.123.129.121,130.123.128.35,130.123.128.28 # this is what i want, but it doesn't seem to work
> option auth.ip.brick0.allow *
> option auth.ip.brick1.allow *
> option auth.ip.brick-ns.allow *
>
> option auth.ip.iot-0.allow *
> option auth.ip.iot-1.allow *
> option auth.ip.iot-ns.allow *
>
> end-volume
>
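> # client side of k9: the bricks on orac are reached through
> # protocol/client, while the local bricks are used directly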
> volume client-orac-0
> type protocol/client
> option transport-type tcp/client
> option remote-host orac
> option remote-subvolume iot-0
> end-volume
>
> volume client-orac-ns
> type protocol/client
> option transport-type tcp/client
> option remote-host orac
> option remote-subvolume iot-ns
> end-volume
>
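> # the namespace is mirrored with AFR across both machines, so each
> # machine keeps a full local copy of it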
> volume afr-ns
> type cluster/afr
> subvolumes iot-ns client-orac-ns
> end-volume
>
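> # nufa creates new files on the local volumes (iot-0 and iot-1 on
> # k9) and only picks the remote brick once local free space falls
> # below min-free-disk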
> volume unify
> type cluster/unify
> option namespace afr-ns
> option scheduler nufa
> option nufa.local-volume-name iot-0,iot-1
> option nufa.limits.min-free-disk 5%
> subvolumes iot-0 iot-1 client-orac-0
> end-volume
>
>
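> # the read-side translators sit on the client above unify: read-ahead
> # first, then io-cache on top, so cached data stays close to the
> # application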
> volume ra
> type performance/read-ahead
> subvolumes unify
> end-volume
>
> volume ioc
> type performance/io-cache
> subvolumes ra
> end-volume
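>
> Write-behind also belongs on the client side. Something like the
> following should work, stacked on top of io-cache; the aggregate-size
> value below is only a placeholder, so tune it (or leave it out to get
> the default) to suit your workload. Note that the topmost volume is
> the one that gets mounted, so with this in place you would mount wb
> rather than ioc:
>
> volume wb
> type performance/write-behind
> option aggregate-size 128kB # placeholder value; tune or omit for the default
> subvolumes ioc
> end-volume
>
> As far as I know, booster is not a translator you load in the volume
> file at all; it is an LD_PRELOAD library used on the client to bypass
> FUSE for some calls, so it does not slot into this stack.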
>
>
> --
> hard work often pays off after time, but laziness always pays off now