[Gluster-devel] two disks on each node.
Amar S. Tumballi
amar at zresearch.com
Thu Dec 13 17:45:32 UTC 2007
You can do this instead:
====================
server spec (with the proper type options filled in; adjust paths as needed):

volume brick1
  type storage/posix
  option directory /export              # the unused partition on the '/' disk
end-volume

volume brick2
  type storage/posix
  option directory /glusterfs/export    # disk reserved for glusterfs
end-volume

volume ns-local
  type storage/posix
  option directory /glusterfs/ns-local
end-volume

volume unify
  type cluster/unify
  option scheduler rr                   # or alu
  option namespace ns-local
  subvolumes brick1 brick2
end-volume

volume ns
  type storage/posix
  option directory /glusterfs/ns
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes ns unify
  # export 'unify' and 'ns'
  . . . . . .                           # auth options etc. go here
end-volume
=================
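The idea: the two local disks are unified with a plain rr scheduler on the server side, so each node exports a single volume ('unify') and the client-side nufa only ever has to deal with one local volume per node. A minimal sketch of starting the server, assuming the spec above is saved as /etc/glusterfs/glusterfs-server.vol (the path is my assumption, not something fixed):

  # start the server daemon with the spec file above (hypothetical path)
  glusterfsd -f /etc/glusterfs/glusterfs-server.vol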
client spec (one protocol/client volume per node):

volume client-local
  type protocol/client
  option transport-type tcp/client
  option remote-host localhost
  option remote-subvolume unify
end-volume

# define client1 .. clientN the same way, one for each remote node
volume client[1-n]
  ..
  ..
end-volume

volume ns
  type protocol/client
  option transport-type tcp/client
  option remote-host IP                 # the node that exports 'ns'
  option remote-subvolume ns
end-volume

volume unify
  type cluster/unify
  option scheduler nufa
  option nufa.local-volume-name client-local
  option namespace ns
  subvolumes client-local client[1-n]
end-volume

volume writebehind
  type performance/write-behind
  subvolumes unify
end-volume

volume iocache
  type performance/io-cache
  subvolumes writebehind
end-volume
===========
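To mount the unified filesystem on each node, a minimal sketch, assuming the client spec is saved as /etc/glusterfs/glusterfs-client.vol and the mount point /mnt/glusterfs exists (both paths are my assumptions):

  # mount the cluster through the client spec above (hypothetical paths)
  glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs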
Hope this works better and is simpler for you.
-amar
On Dec 13, 2007 10:14 PM, Albert Shih <Albert.Shih at obspm.fr> wrote:
> Hi all.
>
> On my cluster (for computation) I have 2 disks: one for / and the software,
> and another for glusterfs.
>
> But on the first disk I have an unused partition (because the disk size is
> 140 GB, and Linux doesn't eat that many MB ;-) ).
>
> Until today I have used only the second disk for glusterfs. But if I
> calculate the sum of the unused partitions across the whole cluster,
> I get 1.3 TB. That's a lot of space.
>
> Now I want to use this space too, but I don't see how to do that.
>
> Of course these two partitions have different sizes.
>
> Currently I use this kind of configuration:
>
> On node X
>
> volume nodeX
>   type storage/posix
>   option directory /_glusterfs
> end-volume
>
> volume nodeY
>   type protocol/client
>   option transport-type tcp/client    # for TCP/IP transport
>   option remote-host ip_address_of_nodeY
>   option transport-timeout 30
>   option remote-subvolume brick
> end-volume
>
> volume node....
>
>
> end-volume
> etc...
>
> volume unify
>   type cluster/unify
>   subvolumes node1....nodeN
>   option scheduler nufa
>   option nufa.local-volume-name nodeX
>   option nufa.limits.min-free-disk 10
>   option nufa.refresh-interval 1
>   option namespace ns
> end-volume
>
> volume work
>   type performance/write-behind
>   option aggregate-size 1MB
>   option flush-behind on
>   subvolumes unify
> end-volume
>
> As you can see, I want to use the nufa scheduler.
>
> How can I use the nufa scheduler with two local disks? Does that mean
> anything? Maybe the solution is that I add:
>
> volume nodeX-bis
>   type storage/posix
>   option directory my_second_partition
> end-volume
>
> volume nodeY-bis
>   type protocol/client
>   ...
> end-volume
>
> volume unify-bis
>   type cluster/unify
>   subvolumes node1-bis....nodeN-bis
>   option scheduler nufa
>   option nufa.local-volume-name nodeX-bis
>   option nufa.limits.min-free-disk 10
>   option nufa.refresh-interval 1
>   option namespace ns2
> end-volume
>
> volume bigunify
>   type cluster/unify
>   subvolumes unify unify-bis
>   # what kind of scheduler goes here?
>   option namespace ns3
> end-volume
>
> volume work
>   type performance/write-behind
>   option aggregate-size 1MB
>   option flush-behind on
>   subvolumes bigunify
> end-volume
>
> What's your opinion?
>
>
> Regards.
>
> --
> Albert SHIH
> Observatoire de Paris Meudon
> SIO batiment 15
> Heure local/Local time:
> Jeu 13 déc 2007 17:28:26 CET
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
--
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!