[Gluster-devel] Gluster resource scheduling
Raghavendra G
raghavendra.hg at gmail.com
Fri Jun 27 06:34:34 UTC 2008
Hi,
afr supports an option "read-subvolume". Reads are by default scheduled to
the subvolume named in this option. Hence, in your configuration, each
client can set a different value for this option to spread reads across
different servers.
regards,
On Wed, Jun 25, 2008 at 9:31 PM, <jcanter at clemson.edu> wrote:
> Hi,
>
> I have a cluster with 10 servers set up in afr mode (set from the client
> side), which may be called upon
> by any of 14 clients. When I attempt to read from the machines, they all
> try to pull data from a
> single server. Is there a way to configure the load to be spread out among
> the mirrored servers
> rather than on a single one? Would server side afr work better in such a
> case? My setup is as
> follows:
>
> #Client Setup
> volume server1
> type protocol/client
> option transport-type tcp/client
> option remote-host server1
> option remote-subvolume brick
> end-volume
> ...
> volume server10
> type protocol/client
> option transport-type tcp/client
> option remote-host server10
> option remote-subvolume brick
> end-volume
>
> volume mirror0
> type cluster/afr
> subvolumes server1 server2 server3 server4 server5 server6 server7 server8
> server9 server10
> end-volume
>
>
>
> #server config
> volume brick
> type storage/posix
> option directory /dfs/
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp/server
> option auth.ip.brick.allow *
> subvolumes brick
> end-volume
>
>
> Thank you
>
> Josh
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
--
Raghavendra G
A centipede was happy quite, until a toad in fun,
Said, "Pray, which leg comes after which?",
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run.
-Anonymous