[Gluster-devel] Server-side AFR

Krishna Srinivas krishna at zresearch.com
Wed Apr 23 19:14:47 UTC 2008


You can check the following wiki:
http://www.gluster.org/docs/index.php/GlusterFS_1.3_High_Availability_Storage_with_GlusterFS

Though it is not entirely correct (unify does not serve any purpose in that
example), you will get an idea of how to use AFR on the server side.
Check how "mailspool-ds-afr" is defined.
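Roughly, the idea (spelled out in my earlier reply below) is that each server
exports two volumes: its plain storage/posix brick for the *other* server's
AFR, and the AFR volume for clients. Since each AFR replicates to the peer's
raw posix brick and never to the peer's AFR, a write is never re-replicated,
so there is no loop. Here is a sketch of one server's volfile, with
illustrative names ("posix", "remote-posix", "afr") and addresses - not the
exact wiki config; on the second server use 192.168.0.1 as the remote-host
instead:

->snip
# local storage brick, exported for the other server's AFR
volume posix
        type storage/posix
        option directory /gluster
end-volume

# the other server's plain posix brick (NOT its AFR volume)
volume remote-posix
        type protocol/client
        option transport-type tcp/client
        option remote-host 192.168.0.2
        option remote-subvolume posix
end-volume

# the volume clients mount; replicates to the raw remote brick
volume afr
        type cluster/afr
        subvolumes posix remote-posix
end-volume

volume server
        type protocol/server
        option transport-type tcp/server
        subvolumes posix afr
        # posix is for the peer server, afr is for clients
        option auth.ip.posix.allow 192.168.*
        option auth.ip.afr.allow 127.0.0.1,192.168.*
end-volume
<-snap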

You can mail back if it is still confusing.
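
As for the client-mounted volume asked about below - the client itself does
not need an AFR or loopback mount at all; a single protocol/client volume
pointing at the "afr" export of either server should do (again just a sketch
with the same illustrative names):

->snip
# mount the "afr" export of either server
volume mnt
        type protocol/client
        option transport-type tcp/client
        option remote-host 192.168.0.1
        option remote-subvolume afr
end-volume
<-snap

Note that with this scheme the client talks to only one server, so if that
server goes down the client loses access until it is pointed at the other one.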

Regards
Krishna

On Wed, Apr 23, 2008 at 8:45 PM,  <gordan at bobich.net> wrote:
>
>
>  On Wed, 23 Apr 2008, Krishna Srinivas wrote:
>
>
> > It is a wrong setup. A few syscalls would hang and a few would go into an
> > infinite loop. Just guessing, but things will go wrong.
> >
> > The correct setup will have each server exporting two volumes:
> > 1) AFR vol to be used by clients (not the other server)
> >
>
>  So, the setup below would be OK for the client-mounted volume? Or would
> both volumes need to be of protocol/client type, with one mounted via
> loopback?
>
>
>
> > 2) storage/posix vol to be used by the AFR vol on the other server.
> >
>
>  So foo1 would need to be in a separate volume definition file, and exported
> on its own?
>
>  How would the changes propagate in this case? I'm guessing that the
> client-mounted AFR volume would have to consist of two protocol/client
> volumes, one local and one remote. But would this not lead to the same
> looping condition?
>
>  Gordan
>
>
>
>
> > On Wed, Apr 23, 2008 at 8:14 PM,  <gordan at bobich.net> wrote:
> >
> > > I'm trying to do server-side AFR, and the sort of thing I'm coming up
> > > with is a bit like the following:
> > >
> > >  server.vol
> > >  ->snip
> > >  volume foo1
> > >        type storage/posix
> > >        option directory /gluster
> > >  end-volume
> > >
> > >  volume foo2
> > >        type protocol/client
> > >        option transport-type tcp/client
> > >        option remote-host 192.168.0.1
> > >        option remote-subvolume foo
> > >  end-volume
> > >
> > >  volume foo
> > >        type cluster/afr
> > >        subvolumes foo1 foo2
> > >  end-volume
> > >
> > >  volume server
> > >        type protocol/server
> > >        option transport-type tcp/server
> > >        subvolumes foo
> > >        option auth.ip.foo.allow 127.0.0.1,192.168.*
> > >  end-volume
> > >  <-snap
> > >
> > >  The only difference between the two servers is the IP address in the
> > > remote AFR block (192.168.0.2 instead of .1).
> > >
> > >  The question I have is - would this cause a circular replication
> > > meltdown? Or are loops somehow detected/prevented/avoided? Effectively,
> > > the client would connect to one server only, and upload the data, which
> > > would get replicated to the other server, which, since it also replicates
> > > back, replicates the file back, which triggers the local server to
> > > replicate, etc, etc, etc.
> > >
> > >  What prevents this sort of thing from occurring, and is there a better
> > > way to achieve this kind of setup?
> > >
> > >  Gordan