[Gluster-devel] GlusterFS 1.3.0-pre2.2: AFR setup

Krishna Srinivas krishna at zresearch.com
Mon Mar 5 06:14:30 UTC 2007


Avati,
You are right, but for his requirement the primary child (first child)
cannot be the same for all the clients, i.e. the first child has to be
the local storage.
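
For example, server2's spec would make the local disk the first child
and reach server1 over the network, something like this (only the two
volumes that change are shown; client3, client4 and the afr volume
stay as in the spec below):

volume client1
    type storage/posix
    option directory /var/www
end-volume

volume client2
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-port 6996
    option remote-subvolume brick
end-volume
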
Krishna

On 3/5/07, Anand Avati <avati at zresearch.com> wrote:
> Gerry/Krishna,
>   In the specific setup that has been discussed, there is no need for
> any kind of locking. The namespace lock that unify uses applies when
> glusterfs takes responsibility for merging the namespaces of multiple
> storage bricks. In this specific situation (for AFR) there is no
> namespace management, hence even creation and read/write will be
> handled by the server VFS. You have to ensure that the primary child
> of AFR (the volume listed first in its subvolumes line) is the same
> for all webservers.
>
> regards,
> avati
>
>
> On Sun, Mar 04, 2007 at 01:39:58PM -0500, Gerry Reno wrote:
> > Krishna Srinivas wrote:
> > >Hi Gerry,
> > >
> > >If there are four machines: server1 server2 server3 server4
> > >
> > >server1 server spec file:
> > >volume brick
> > >    type storage/posix
> > >    option directory /var/www
> > >end-volume
> > >
> > >### Add network serving capabilities to the "brick" volume
> > >volume server
> > >    type protocol/server
> > >    option transport-type tcp/server
> > >    option listen-port 6996
> > >    subvolumes brick
> > >    option auth.ip.brick.allow *
> > >end-volume
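> > >
> > >Presumably each of the four machines runs this same server spec so
> > >that the others can reach its brick; it would be started with
> > >something like (the spec file path is just an example):
> > >
> > >    glusterfsd -f /etc/glusterfs/server.vol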
> > >
> > >server1 client spec file will be like this:
> > >volume client1
> > >    type storage/posix
> > >    option directory /var/www
> > >end-volume
> > >
> > >volume client2
> > >    type protocol/client
> > >    option transport-type tcp/client
> > >    option remote-host server2
> > >    option remote-port 6996
> > >    option remote-subvolume brick
> > >end-volume
> > >
> > >volume client3
> > >    type protocol/client
> > >    option transport-type tcp/client
> > >    option remote-host server3
> > >    option remote-port 6996
> > >    option remote-subvolume brick
> > >end-volume
> > >
> > >volume client4
> > >    type protocol/client
> > >    option transport-type tcp/client
> > >    option remote-host server4
> > >    option remote-port 6996
> > >    option remote-subvolume brick
> > >end-volume
> > >
> > >volume afr
> > >    type cluster/afr
> > >    subvolumes client1 client2 client3 client4
> > >    option replicate *:4
> > >end-volume
> > >
> > >Here all read() operations happen on client1, since it is listed
> > >first among the subvolumes, so it serves your purpose. (The "*:4"
> > >in the replicate option replicates every file to all four
> > >subvolumes.)
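> > >
> > >You would then mount the client spec on server1 with something
> > >like (the mount point and spec path are just examples):
> > >
> > >    glusterfs -f /etc/glusterfs/client.vol /mnt/www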
> > >
> > >Let us know if you need more help.
> > >
> >
> > Hi Krishna,
> >  Ok, if I understand this correctly, are you saying that I only need
> > the server spec file on server1, or do I need one on all servers?
> > And would I need a slightly different client spec file on each client?
> > If so, is there a way to write the client spec file so that the same
> > file could be copied to each node?
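> >
> > If not, I suppose each node could generate its own spec from the
> > common server list with a small script that puts the local brick
> > first, something along these lines (just a sketch, in Python):
> >
> > import socket
> >
> > # Hypothetical generator: the same script is copied to every node
> > # and writes that node's client spec with the local brick first.
> > servers = ["server1", "server2", "server3", "server4"]
> > local = socket.gethostname().split(".")[0]
> >
> > spec = open("/etc/glusterfs/client.vol", "w")
> > spec.write("volume client-local\n"
> >            "    type storage/posix\n"
> >            "    option directory /var/www\n"
> >            "end-volume\n\n")
> > names = ["client-local"]
> > for host in servers:
> >     if host == local:
> >         continue  # local brick already declared above
> >     name = "client-" + host
> >     names.append(name)
> >     spec.write("volume %s\n"
> >                "    type protocol/client\n"
> >                "    option transport-type tcp/client\n"
> >                "    option remote-host %s\n"
> >                "    option remote-port 6996\n"
> >                "    option remote-subvolume brick\n"
> >                "end-volume\n\n" % (name, host))
> > spec.write("volume afr\n"
> >            "    type cluster/afr\n"
> >            "    subvolumes %s\n"
> >            "    option replicate *:4\n"
> >            "end-volume\n" % " ".join(names))
> > spec.close()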
> >
> > Regards,
> > Gerry
> >
>
> --
> Shaw's Principle:
>         Build a system that even a fool can use,
>         and only a fool will want to use it.