[Gluster-devel] combining AFR and cluster/unify

Krishna Srinivas krishna at zresearch.com
Wed Mar 14 10:46:11 UTC 2007


Pooya,

Your client spec was wrong. For a 4-node cluster with 2 replicas of
each file, the following spec file will work (you can extend the same
pattern to 20 nodes):

### CLIENT client.vol ####
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.11
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick1-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.12
  option remote-port 6996
  option remote-subvolume brick-afr
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.12
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick2-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.13
  option remote-port 6996
  option remote-subvolume brick-afr
end-volume

volume brick3
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.13
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick3-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.14
  option remote-port 6996
  option remote-subvolume brick-afr
end-volume

volume brick4
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.14
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick4-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.11
  option remote-port 6996
  option remote-subvolume brick-afr
end-volume

volume afr1
  type cluster/afr
  subvolumes brick1 brick1-afr
  option replicate *:2
end-volume

volume afr2
  type cluster/afr
  subvolumes brick2 brick2-afr
  option replicate *:2
end-volume

volume afr3
  type cluster/afr
  subvolumes brick3 brick3-afr
  option replicate *:2
end-volume

volume afr4
  type cluster/afr
  subvolumes brick4 brick4-afr
  option replicate *:2
end-volume

volume unify1
  type cluster/unify
  subvolumes afr1 afr2 afr3 afr4
...
..
end-volume
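The pattern above is mechanical: each node exports a `brick` and a
`brick-afr`, and the replica of brick N lives on node N+1 (wrapping
around to node 1). As a sketch of how this scales to 20 nodes, here is a
small generator script. It is a hypothetical helper, not part of
GlusterFS; the host range (172.16.30.11 onward), port, and the `rr`
scheduler line are assumptions carried over from the configs in this
thread.

```python
# Sketch: emit a client.vol for an N-node chained-AFR + unify cluster,
# following the layout shown above (replica of brick i on node i+1).
def make_client_vol(n_nodes, ip_prefix="172.16.30.", first_octet=11, port=6996):
    out = []
    for i in range(1, n_nodes + 1):
        host = f"{ip_prefix}{first_octet + i - 1}"
        # primary brick exported by node i
        out.append(
            f"volume brick{i}\n"
            f"  type protocol/client\n"
            f"  option transport-type tcp/client\n"
            f"  option remote-host {host}\n"
            f"  option remote-port {port}\n"
            f"  option remote-subvolume brick\n"
            f"end-volume\n")
        # replica of brick i lives on the next node, wrapping to node 1
        afr_host = f"{ip_prefix}{first_octet + (i % n_nodes)}"
        out.append(
            f"volume brick{i}-afr\n"
            f"  type protocol/client\n"
            f"  option transport-type tcp/client\n"
            f"  option remote-host {afr_host}\n"
            f"  option remote-port {port}\n"
            f"  option remote-subvolume brick-afr\n"
            f"end-volume\n")
    # one AFR pair per node, each keeping 2 copies of every file
    for i in range(1, n_nodes + 1):
        out.append(
            f"volume afr{i}\n"
            f"  type cluster/afr\n"
            f"  subvolumes brick{i} brick{i}-afr\n"
            f"  option replicate *:2\n"
            f"end-volume\n")
    # unify all AFR pairs into one namespace
    subs = " ".join(f"afr{i}" for i in range(1, n_nodes + 1))
    out.append(
        f"volume unify1\n"
        f"  type cluster/unify\n"
        f"  subvolumes {subs}\n"
        f"  option scheduler rr\n"
        f"end-volume\n")
    return "\n".join(out)

print(make_client_vol(20))
```

Running it with `make_client_vol(4)` reproduces the 4-node spec above,
including the wrap-around where brick4-afr points back at the first
host.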


On 3/14/07, Pooya Woodcock <pooya at packetcloud.net> wrote:
>
> > http://www.gluster.org/docs/index.php/GlusterFS
> > Let us know if you need any help.
> >
> > Krishna
>
>
> Krishna,
>
> I am trying to figure out a config that will give me 4+ servers in
> cluster/unify and with replicate *:2 on AFR. Here is my config
> without iothreads,writebehind, etc... I modeled this after http://
> www.gluster.org/docs/index.php/
> GlusterFS_User_Guide#AFR_Example_in_Clustered_Mode
>
> What am I doing wrong? Can you fix my example?  I want to scale to 20
> nodes also, what do I need to change to get to 20 nodes?
>
>
> ### NODE 1 server.vol
> volume brick
>          type storage/posix
>          option directory /var/GlusterFS
> end-volume
>
> volume brick-afr
>          type storage/posix
>          option directory /var/GlusterFS-AFR
> end-volume
>
> volume server
>          type protocol/server
>          option transport-type tcp/server
>          option bind-address 172.16.30.11
>          option listen-port 6996
>          subvolumes brick brick-afr
>          option auth.ip.brick.allow 172.16.30.*
>          option auth.ip.brick-afr.allow 172.16.30.*
> end-volume
>
>
> ### NODES 2-4 server.vol
> same as above but with 172.16.30.12 bound
>
>
> ### CLIENT client.vol ####
> volume brick1
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host 172.16.30.11
>    option remote-port 6996
>    option remote-subvolume brick
> end-volume
>
> volume brick1-afr
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host 172.16.30.11
>    option remote-port 6996
>    option remote-subvolume brick-afr
> end-volume
>
> volume brick2
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host 172.16.30.12
>    option remote-port 6996
>    option remote-subvolume brick
> end-volume
>
> volume brick2-afr
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host 172.16.30.12
>    option remote-port 6996
>    option remote-subvolume brick-afr
> end-volume
>
> volume brick3
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host 172.16.30.13
>    option remote-port 6996
>    option remote-subvolume brick
> end-volume
>
> volume brick3-afr
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host 172.16.30.13
>    option remote-port 6996
>    option remote-subvolume brick-afr
> end-volume
>
> volume brick4
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host 172.16.30.14
>    option remote-port 6996
>    option remote-subvolume brick
> end-volume
>
> volume brick4-afr
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host 172.16.30.14
>    option remote-port 6996
>    option remote-subvolume brick-afr
> end-volume
>
>
> volume afr1
>    type cluster/afr
>    subvolumes brick1 brick2-afr brick3 brick4-afr
>    option replicate *:2
> end-volume
>
> volume afr2
>    type cluster/afr
>    subvolumes brick1-afr brick2 brick3-afr brick4
> option replicate *:2
> end-volume
>
>
> volume bricks
>    type cluster/unify
>    subvolumes afr1 afr2
>    option scheduler rr
>    option rr.limits.min-free-disk 10GB
> end-volume
>
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
