[Gluster-devel] clustered afr

Daniel van Ham Colchete daniel.colchete at gmail.com
Mon Mar 12 19:36:59 UTC 2007


Krishna,

in all the messages I'm seeing here on the list, I only see 2 AFR subvolumes
working. Is there any point in having 3 or more subvolumes under the AFR
translator? In Tibor's config, shouldn't the translator create files on all
three bricks? Shouldn't it use the third brick at least while the second one
was offline?

I know we will always have only half the total space available when we use
AFR to make two copies of each file, but I think the advantage of
distributing a file's copies over several servers, like three in one AFR,
is that the load of a failed server is also distributed over the remaining
servers, instead of going entirely to the single server that was designated
to mirror the failed one.
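
To put rough numbers on it (just my back-of-the-envelope reasoning, not
anything measured): say each of the 3 servers normally carries a load of
1.0. With a dedicated mirror pair, when one server dies its whole 1.0 lands
on the single server that mirrors it, which then runs at 2.0. If its copies
are instead spread over the two remaining servers, each of them picks up
only about 0.5 and runs at roughly 1.5.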

The disadvantage of having 3 or more is that it's more complicated to get a
server back online after one fails. I think it's more complicated because
when you have only 2 servers it's easy to know exactly which files should be
copied, and you can use a simple rsync to copy them. But when you have 3 or
more servers, you have to check every server to see which files have only
one copy left. My second question is: how will the AFR FSCK deal with these
situations?
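
For comparison, this is roughly how I picture the clustered-mode setup from
the wiki page you linked, on the client side (purely my own sketch; the
volume names and the pairing are placeholders, not copied from that page):

volume afr1
  type cluster/afr
  subvolumes b1 b2             # each pair keeps 2 copies of its files
end-volume

volume afr2
  type cluster/afr
  subvolumes b2 b3
end-volume

volume afr3
  type cluster/afr
  subvolumes b3 b1
end-volume

volume unify0
  type cluster/unify
  subvolumes afr1 afr2 afr3
  option scheduler rr          # new files go round-robin to one of the pairs
  option namespace ns          # unify needs a separate small namespace volume
end-volume

Here 'ns' would be one more protocol/client volume pointing at a small extra
brick kept only for unify's namespace; I left its definition out. With a
layout like this every file still ends up on exactly two bricks, so after a
failure the resync is back to a simple pairwise copy, though I'd still like
to know how the planned FSCK would handle it.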

Best regards,
Daniel Colchete

On 3/12/07, Krishna Srinivas <krishna at zresearch.com> wrote:
>
> Hi Tibor,
>
> It is behaving the way it is expected to.
>
> Your requirement is: you have 3 nodes, you want 2 copies of every file,
> and if one node goes down all files should still be available.
>
> It can be achieved through a config similar to what is explained here:
>
> http://www.gluster.org/docs/index.php/GlusterFS_User_Guide#AFR_Example_in_Clustered_Mode
>
> Regards
> Krishna
>
> On 3/12/07, Tibor Veres <tibor.veres at gmail.com> wrote:
> > I'm trying to build a 3-node storage cluster which should be able to
> > withstand 1 node going down.
> > First I tried glusterfs 1.3.0-pre2.2, but had some memory leakage
> > which seems to be fixed in the source checked out from the repository.
> >
> > I'm exporting 3 bricks with configs like this:
> > volume brick[1-3]
> >   type storage/posix
> >   option directory /mnt/export/shared/[1-3]
> > end-volume
> > volume server
> >   type protocol/server
> >   option transport-type tcp/server     # For TCP/IP transport
> >   option listen-port 699[6-8]            # Default is 6996
> >   subvolumes brick[1-3]
> >   option auth.ip.brick[1-3].allow * # Allow access to "brick" volume
> > end-volume
> >
> > My client config looks like this:
> > volume b[1-3]
> >   type protocol/client
> >   option transport-type tcp/client     # for TCP/IP transport
> >   option remote-host 127.0.0.1         # IP address of the remote brick
> >   option remote-port 699[6-8]            # default server port is 6996
> >   option remote-subvolume brick[1-3]        # name of the remote volume
> > end-volume
> > volume afr
> >     type cluster/afr
> >     subvolumes b1 b2 b3
> >     option replicate *:2
> >     option scheduler rr
> >     option rr.limits.min-free-disk 512MB
> >     option rr.refresh-interval 10
> > end-volume
> >
> > I didn't activate any performance-enhancing translators.
> >
> > This setup sort of works, except that I saw files created only on
> > brick1 and brick2; brick3 only got the directories and symlinks created
> > on it. After killing the brick2 glusterfsd, the filesystem stayed up,
> > which is promising, but still no files were created on brick3.
> >
> > Is this setup supposed to work? Can I get comparable functionality set
> > up with current glusterfs, preferably in a way that can be extended to
> > 5 nodes, withstanding 2 going down? Is there any plan for some
> > RAID6-like functionality, or would this kill performance altogether?
> >
> >
> > --
> > Tibor Veres
> >   tibor.veres at gmail.com
> >
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



More information about the Gluster-devel mailing list