[Gluster-devel] combining AFR and cluster/unify
Krishna Srinivas
krishna at zresearch.com
Wed Mar 14 13:30:09 UTC 2007
On 3/14/07, Daniel van Ham Colchete <daniel.colchete at gmail.com> wrote:
> On 3/14/07, Krishna Srinivas <krishna at zresearch.com> wrote:
> >
> > Pooya,
> >
> > Your client spec was wrong. For a 4-node cluster with 2 replicas of
> > each file, the following will be the spec file (you can write it
> > similarly for 20 nodes):
> >
> > ### CLIENT client.vol ####
> > volume brick1
> > type protocol/client
> > option transport-type tcp/client
> > option remote-host 172.16.30.11
> > option remote-port 6996
> > option remote-subvolume brick
> > end-volume
> >
> > volume brick1-afr
> > type protocol/client
> > option transport-type tcp/client
> > option remote-host 172.16.30.12
> > option remote-port 6996
> > option remote-subvolume brick-afr
> > end-volume
> >
> > volume brick2
> > type protocol/client
> > option transport-type tcp/client
> > option remote-host 172.16.30.12
> > option remote-port 6996
> > option remote-subvolume brick
> > end-volume
> >
> > volume brick2-afr
> > type protocol/client
> > option transport-type tcp/client
> > option remote-host 172.16.30.13
> > option remote-port 6996
> > option remote-subvolume brick-afr
> > end-volume
> >
> > volume brick3
> > type protocol/client
> > option transport-type tcp/client
> > option remote-host 172.16.30.13
> > option remote-port 6996
> > option remote-subvolume brick
> > end-volume
> >
> > volume brick3-afr
> > type protocol/client
> > option transport-type tcp/client
> > option remote-host 172.16.30.14
> > option remote-port 6996
> > option remote-subvolume brick-afr
> > end-volume
> >
> > volume brick4
> > type protocol/client
> > option transport-type tcp/client
> > option remote-host 172.16.30.14
> > option remote-port 6996
> > option remote-subvolume brick
> > end-volume
> >
> > volume brick4-afr
> > type protocol/client
> > option transport-type tcp/client
> > option remote-host 172.16.30.11
> > option remote-port 6996
> > option remote-subvolume brick-afr
> > end-volume
> >
> > volume afr1
> > type cluster/afr
> > subvolumes brick1 brick1-afr
> > option replicate *:2
> > end-volume
> >
> > volume afr2
> > type cluster/afr
> > subvolumes brick2 brick2-afr
> > option replicate *:2
> > end-volume
> >
> > volume afr3
> > type cluster/afr
> > subvolumes brick3 brick3-afr
> > option replicate *:2
> > end-volume
> >
> > volume afr4
> > type cluster/afr
> > subvolumes brick4 brick4-afr
> > option replicate *:2
> > end-volume
> >
> > volume unify1
> > type cluster/unify
> > subvolumes afr1 afr2 afr3 afr4
> > ...
> > ..
> > end-volume
> >
>
> I'm no gluster expert, but I think this config will put each file pair on the
> same server, won't it? For example, volume afr4 uses brick4 and brick4-afr,
> which happen to be on the same server, as its subvolumes.
>
> Shouldn't it be something like:
>
> volume afr1
> type cluster/afr
> subvolumes brick1 brick2-afr
> option replicate *:2
> end-volume
>
> volume afr2
> type cluster/afr
> subvolumes brick2 brick1-afr
> option replicate *:2
> end-volume
>
> volume afr3
> type cluster/afr
> subvolumes brick3 brick4-afr
> option replicate *:2
> end-volume
>
> volume afr4
> type cluster/afr
> subvolumes brick4 brick3-afr
> option replicate *:2
> end-volume
>
> So that every file has a copy of itself on two different servers?
>
> Best regards,
> Daniel Colchete
>
No. If you observe the following:
### CLIENT client.vol ####
volume brick1
type protocol/client
option transport-type tcp/client
option remote-host 172.16.30.11
option remote-port 6996
option remote-subvolume brick
end-volume

volume brick1-afr
type protocol/client
option transport-type tcp/client
option remote-host 172.16.30.12
option remote-port 6996
option remote-subvolume brick-afr
end-volume
brick1-afr is actually on the 2nd server. I just deviated from the
naming convention used on our wiki, but the concept is still the same.
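
For reference, each server in this layout exports two subvolumes: "brick"
holds that machine's own data and "brick-afr" holds the replica of the
previous machine's data; these are what the client spec's "option
remote-subvolume" lines point at. A minimal sketch of a matching server
spec follows. The storage directories and the wide-open auth rules are
only assumptions for illustration, they are not taken from the setup
above:

### SERVER server.vol (one per machine, e.g. 172.16.30.12) ####
volume brick
type storage/posix
# assumed path for this machine's own files
option directory /data/export
end-volume

volume brick-afr
type storage/posix
# assumed path for the replica of the previous machine's files
option directory /data/export-afr
end-volume

volume server
type protocol/server
option transport-type tcp/server
# 6996 is the port the client spec connects to
option listen-port 6996
subvolumes brick brick-afr
# open to all IPs for illustration; restrict to the cluster in a real setup
option auth.ip.brick.allow *
option auth.ip.brick-afr.allow *
end-volume

With this in place, 172.16.30.12's "brick" is paired (as afr2 on the
client) with the "brick-afr" on 172.16.30.13, while its own "brick-afr"
holds the copies of 172.16.30.11's files, so every file lands on two
different machines.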
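
The same chained pairing extends to the 20-node case mentioned above:
brickN is server N's "brick" export and brickN-afr is server N+1's
"brick-afr" export, with the last server wrapping around to the first.
As a sketch, the last pair and its AFR volume could look like this (the
.30 address simply continues the numbering used above and is only an
assumption):

volume brick20
type protocol/client
option transport-type tcp/client
# 20th server, address assumed for illustration
option remote-host 172.16.30.30
option remote-port 6996
option remote-subvolume brick
end-volume

volume brick20-afr
type protocol/client
option transport-type tcp/client
# wraps around to the 1st server
option remote-host 172.16.30.11
option remote-port 6996
option remote-subvolume brick-afr
end-volume

volume afr20
type cluster/afr
subvolumes brick20 brick20-afr
option replicate *:2
end-volume

afr1 through afr20 then all go on the subvolumes line of the unify
volume, exactly as in the 4-node spec.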