RE : RE : RE : [Gluster-devel] user question : multiple clients concurrent access
GARDAIS Ionel
Ionel.Gardais at tech-advantage.com
Sat Dec 15 22:58:08 UTC 2007
OK.
One last point: I tried the rr scheduler with default settings.
To unleash storage bandwidth, is it possible to make the round-robin or the random scheduler use a volume-based policy instead of a time-based one (that is, "switch to the other volume every megabyte written" instead of "switch to the other volume every 10 seconds")?
If so, what is the lowest value I could use? (If possible, I'd like to test a 1 MB round-robin or random schedule.)
Also, from a network point of view, a simple client/server setup did not exceed 35 MB/s in a bonnie++ test (no options passed to bonnie++, just the directory to use, over a GbE network), whereas a local test (server and client on the same computer) shows nearly 60 MB/s.
What is the theoretical overhead implied by GlusterFS?
Can I expect to get something close to the theoretical values (100 MB/s for the network connection, limited by the 80 MB/s of the FC RAID), or should I expect less?
Using TCP transport and a raw, untuned setup, GlusterFS seems equal to NFS (with the great advantage, for GlusterFS, of storage aggregation).
Note: is it correct to say that the "namespace" is like "metadata"? (I'm lost as to what a namespace can be.)
Ionel
-------- Original message --------
From: anand.avati at gmail.com on behalf of Anand Avati
Date: Sat 15/12/2007 02:09
To: GARDAIS Ionel
Cc: Onyx; gluster-devel at nongnu.org
Subject: Re: RE : RE : [Gluster-devel] user question : multiple clients concurrent access
>
> I think I'm gonna love GlusterFS.
> NUFA option for local volume in the Translator page shows one local volume
>
> option nufa.local-volume-name posix1
>
> If the local server has 2 local volumes, is it okay to define them both
> under nufa.local-volume-name or should it be unique ?
>
As of now it is just one of them (unique). But you can get a kind of workaround by making each node export its own unify (with its own local namespace, and maybe round-robin across the local disks) and unifying these low-level unify volumes into a NUFA-scheduled, top-level, globally namespaced unify.
As a philosophy: while setting up GlusterFS you should really adopt the mindset of a programmer writing code (the spec file) using the language primitives (xlators). Almost any configuration is possible by stitching together the right spec file from the right xlators.
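A sketch of the two-level layout described above, assuming two local disks per node; the volume names, directories, and subvolume names are illustrative, and the protocol/client bricks that import each server's unify are omitted:

```
# On each server: round-robin unify of the two local disks,
# with its own local namespace volume.
volume posix1
  type storage/posix
  option directory /export/disk1
end-volume

volume posix2
  type storage/posix
  option directory /export/disk2
end-volume

volume ns-local
  type storage/posix
  option directory /export/namespace
end-volume

volume unify-local
  type cluster/unify
  option namespace ns-local
  option scheduler rr          # round-robin across the local disks
  subvolumes posix1 posix2
end-volume

# Top level: unify the per-server unify volumes, NUFA-scheduled
# against a global namespace (server1-unify, server2-unify would be
# protocol/client volumes importing each node's unify-local).
volume unify-global
  type cluster/unify
  option namespace ns-global
  option scheduler nufa
  option nufa.local-volume-name unify-local
  subvolumes server1-unify server2-unify
end-volume
```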
> Also, is it better to define one io-threads translator for a defined unified
> volume, or one io-threads translator per volume used by the unified volume?
>
A single io-threads around unify should suffice; there is really no need to have separate io-threads per subvolume of unify. The idea is to have one io-threads per network interface (client side) and one io-threads per disk on the server (multiple io-threads leading to the same network interface or disk are superfluous).
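A minimal sketch of that placement on the client side, assuming a unify volume named unify-global already defined in the same spec file (the volume name and thread count are illustrative):

```
volume iot
  type performance/io-threads
  option thread-count 4        # one io-threads layer per network interface
  subvolumes unify-global
end-volume
```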
thanks,
avati
--
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.