[Gluster-devel] hi, all! how to make GlusterFS work like RAID 1+0 ??

Michael Cassaniti m.cassaniti at gmail.com
Thu Jul 30 23:59:37 UTC 2009


The second option certainly looks much cleaner and is more predictable. You
should be able to lose one server from your pool and still stripe properly
across the remaining machines, since each replica pair in that layout spans
two different servers (see the sketch below).
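
For reference, a rough sketch of scheme 2's pairing (using the brick and
server names from the post below) shows why losing a single server is
survivable:

        # afr1 = brick1 (server 1, .11)  +  brick3 (server 2, .12)
        # afr2 = brick2 (server 1, .11)  +  brick5 (server 3, .13)
        # afr3 = brick4 (server 2, .12)  +  brick6 (server 3, .13)
        #
        # e.g. if server 2 (.12) goes down, afr1 still serves from brick1,
        # afr3 from brick6, and afr2 is unaffected, so the stripe over
        # afr1/afr2/afr3 stays available; AFR's self-heal should resync
        # brick3 and brick4 once the server comes back.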

2009/7/30 Yi Ling <lingyi.pro at gmail.com>

> platform information:
>         CentOS 5.3 (2.6.18-128.1.16.el5)
>         GlusterFS (glusterfs-2.0.3)
>         FUSE (fuse-2.7.4)
>
> My simple aim is a clustered system providing RAID 1+0-like functionality.
> There are two schemes below, provided by my friend (a novice too). In my
> opinion, scheme 2 would work well, but I haven't tested either of them, so
> could anyone here tell me which scheme is right/wrong/better, and why? Thanks
> in advance.
>
> ## scheme 1 ###
>         server 1  (192.168.1.11) => brick (only one exported directory per server)
>         server 2  (192.168.1.12) => brick
>         server 3  (192.168.1.13) => brick
>         client (192.168.1.10)
>
> and the volume files are as follows:
>  ######################################################
> # the volume files on all three servers are the same, apart from the bind address
> volume posix
>         type storage/posix
>         option directory /home/brick
> end-volume
>
> volume brick
>         type features/posix-locks
>         subvolumes posix
> end-volume
>
> volume server
>         type protocol/server
>         option transport-type tcp/server
>         option transport.socket.listen-port 6996
>         option transport.socket.bind-address 192.168.1.11
>         subvolumes brick
>         option auth.addr.brick.allow *
>         option auth.addr.posix.allow *
> end-volume
>
>
> #############################
> ## volume file on client
> #############################
> volume brick1
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.11
>         option remote-port 6996
>         option remote-subvolume brick
> end-volume
>
> volume brick2
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.12
>         option remote-port 6996
>         option remote-subvolume brick
> end-volume
>
> volume brick3
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.13
>         option remote-port 6996
>         option remote-subvolume brick
> end-volume
>
> ## configuration of the 3 replicated (AFR) pairs
> # afr1 <-- brick1 + brick2
> volume afr1
>         type cluster/afr
>         subvolumes brick1 brick2
> end-volume
>
> # afr2 <-- brick2 + brick3
> volume afr2
>         type cluster/afr
>         subvolumes brick2 brick3
> end-volume
>
> # afr3 <-- brick3 + brick1
> volume afr3
>         type cluster/afr
>         subvolumes brick3 brick1
> end-volume
>
> ## stripe on afr1,afr2,afr3
> volume stripe
>         type cluster/stripe
>         subvolumes afr1 afr2 afr3
> end-volume
>
> ######### scheme 1 ends ############
>
> #################################################
> ### scheme 2
> #################################################
>
>         server 1  (192.168.1.11) => brick1 + brick2 (there are two exported
> directories on each server)
>         server 2  (192.168.1.12) => brick3 + brick4
>         server 3  (192.168.1.13) => brick5 + brick6
>         client (192.168.1.10)
>
> ### volume files of the servers (similar on all three; only the bind address and brick names differ)
> # on server 1
> volume posix1
>         type storage/posix
>         option directory /home/brick1
> end-volume
>
> volume brick1
>         type features/posix-locks
>         subvolumes posix1
> end-volume
>
> volume posix2
>         type storage/posix
>         option directory /home/brick2
> end-volume
>
> volume brick2
>         type features/posix-locks
>         subvolumes posix2
> end-volume
>
> volume server
>         type protocol/server
>         option transport-type tcp/server
>         option transport.socket.listen-port 6996
>         option transport.socket.bind-address 192.168.1.11
>         subvolumes brick1 brick2
>         option auth.addr.brick1.allow *
>         option auth.addr.posix1.allow *
>         option auth.addr.brick2.allow *
>         option auth.addr.posix2.allow *
> end-volume
>
> ## on server 2
> volume posix1
>         type storage/posix
>         option directory /home/brick3
> end-volume
>
> volume brick3
>         type features/posix-locks
>         subvolumes posix1
> end-volume
>
> volume posix2
>         type storage/posix
>         option directory /home/brick4
> end-volume
>
> volume brick4
>         type features/posix-locks
>         subvolumes posix2
> end-volume
>
> volume server
>         type protocol/server
>         option transport-type tcp/server
>         option transport.socket.listen-port 6996
>         option transport.socket.bind-address 192.168.1.12
>         subvolumes brick3 brick4
>         option auth.addr.brick3.allow *
>         option auth.addr.posix1.allow *
>         option auth.addr.brick4.allow *
>         option auth.addr.posix2.allow *
> end-volume
>
> ##on server 3
> volume posix1
>         type storage/posix
>         option directory /home/brick5
> end-volume
>
> volume brick5
>         type features/posix-locks
>         subvolumes posix1
> end-volume
>
> volume posix2
>         type storage/posix
>         option directory /home/brick6
> end-volume
>
> volume brick6
>         type features/posix-locks
>         subvolumes posix2
> end-volume
>
> volume server
>         type protocol/server
>         option transport-type tcp/server
>         option transport.socket.bind-address 192.168.1.13
>         option transport.socket.listen-port 6996
>         subvolumes brick5 brick6
>         option auth.addr.brick5.allow *
>         option auth.addr.posix1.allow *
>         option auth.addr.brick6.allow *
>         option auth.addr.posix2.allow *
> end-volume
>
> ###################
> ## volume file on client
> # brick1 on server 1
> volume brick1
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.11
>         option remote-port 6996
>         option remote-subvolume brick1
> end-volume
>
> # brick2 on server 1
> volume brick2
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.11
>         option remote-port 6996
>         option remote-subvolume brick2
> end-volume
>
> # brick3 on server 2
> volume brick3
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.12
>         option remote-port 6996
>         option remote-subvolume brick3
> end-volume
>
> # brick4 on server 2
> volume brick4
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.12
>         option remote-port 6996
>         option remote-subvolume brick4
> end-volume
>
> # brick5 on server 3
> volume brick5
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.13
>         option remote-port 6996
>         option remote-subvolume brick5
> end-volume
>
> # brick6 on server 3
> volume brick6
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.13
>         option remote-port 6996
>         option remote-subvolume brick6
> end-volume
>
>
> ###############################################
> ## configuration of the 3 replicated (AFR) pairs
> ###############################################
>
> # a mirror between server 1 (exported dir /home/brick1)
> # and server 2 (exported dir /home/brick3)
> # afr1 <-- brick1 + brick3
> volume afr1
>         type cluster/afr
>         subvolumes brick1 brick3
> end-volume
>
> # afr2 <-- brick2 + brick5
> volume afr2
>         type cluster/afr
>         subvolumes brick2 brick5
> end-volume
>
> # afr3 <-- brick4 + brick6
> volume afr3
>         type cluster/afr
>         subvolumes brick4 brick6
> end-volume
>
> #####################################
> ## stripe on afr1,afr2,afr3
> ####################################
> volume stripe
>         type cluster/stripe
>         subvolumes afr1 afr2 afr3
> end-volume
>
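> ## a minimal usage sketch, assuming the server volfile on each machine is
> ## saved as /etc/glusterfs/server.vol and the client volfile as
> ## /etc/glusterfs/client.vol (these file names and the mount point are just
> ## placeholders for illustration):
> ##
> ##   glusterfsd -f /etc/glusterfs/server.vol              # on each server
> ##   mkdir -p /mnt/stripe                                 # on the client
> ##   glusterfs -f /etc/glusterfs/client.vol /mnt/stripe   # mount the volume
> ##   df -h /mnt/stripe                                    # quick sanity check
>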
>