[Gluster-devel] Why I would rather have server side AFR

Brandon Lamb brandonlamb at gmail.com
Fri May 2 06:05:39 UTC 2008


On Thu, May 1, 2008 at 11:03 PM, Krishna Srinivas <krishna at zresearch.com> wrote:
> On Fri, May 2, 2008 at 7:42 AM, Brandon Lamb <brandonlamb at gmail.com> wrote:
> > Faster interconnect hardware costs lots of $$$. Wouldn't there be fewer
> >  servers in most cases, meaning less hardware to buy?
> >
> >  I just took a look at infiniband hardware, it's expensive. If I wanted
> >  to upgrade my network, I would much rather upgrade my 2-4 server
> >  machines instead of 10 mail servers, 4 web servers AND 2-4
> >  server machines.
> >
> >  Although you still have the problem of server2 going down while a
> >  client is connected to it directly. But I guess couldn't you use LVS or
> >  something to fail over to the other servers that are up?
> >
> >  What other cons of server side AFR am I missing (other than
> >  the whole cluster not working if one server goes down)?
>
>
> The setup where you faced this problem should have worked; I have asked
> you for clues from the logs in the other thread.
>
>
> >
> >  If using server side AFR, and a client does a write, is this faster
> >  when it only has to send the write to one server, or does it still
> >  have to wait for that server to replicate to the other servers and
> >  reply back that the write was successful on all servers? That might be
> >  worded strangely...
>
> Correct, the server will write to the other servers before returning the
> call. You could use write-behind for this on the server side; you could
> also use it on the client side. A clear performance measurement comparing
> both setups will give an idea of which is better.
>
> Krishna
>
> >
> >
> >  _______________________________________________
> >  Gluster-devel mailing list
> >  Gluster-devel at nongnu.org
> >  http://lists.nongnu.org/mailman/listinfo/gluster-devel

Ok I will set this up again but change the order of the subvolumes on
client2 and try again and get a copy of the logs.
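For reference, Krishna's description of server-side AFR with write-behind corresponds roughly to volume specs like the following. This is a minimal sketch in GlusterFS 1.3-era spec syntax, not a tested configuration: the hostnames (server1, server2), export path (/data/export), volume names, and the aggregate-size tuning value are all placeholders.

```
# server1.vol -- hypothetical server-side AFR spec (server2's would mirror it)

volume posix
  type storage/posix
  option directory /data/export        # local backing store (assumed path)
end-volume

volume server2-brick
  type protocol/client
  option transport-type tcp/client
  option remote-host server2           # hypothetical peer hostname
  option remote-subvolume posix
end-volume

volume afr
  type cluster/afr
  subvolumes posix server2-brick       # each write goes to both subvolumes
end-volume                             # before the call returns

volume wb
  type performance/write-behind
  option aggregate-size 1MB            # assumed tuning value
  subvolumes afr
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes wb
  option auth.ip.wb.allow *            # open auth, for the sketch only
end-volume

# client.vol -- the client talks to one server; replication happens server-side

volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host server1           # single point of attachment, hence the
  option remote-subvolume wb           # failover concern raised above
end-volume
```

In the client-side alternative Krishna mentions, the cluster/afr and performance/write-behind volumes would move into client.vol over two protocol/client subvolumes, and each server would export only its storage/posix brick.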
