[Gluster-users] simple AFR setup, one server crashes, entire cluster becomes unusable ?
Daniel Maher
dma+gluster at witbe.net
Mon Dec 8 14:17:41 UTC 2008
Stas Oskin wrote:
> Based on my limited knowledge of GlusterFS, the most reliable and
> recommended way (in wiki) is client-side AFR, where the clients aware of
> the servers status, and replicate the files accordingly.
I've reviewed the AFR-related sections of the documentation on the wiki...
http://www.gluster.org/docs/index.php/GlusterFS_Translators_v1.3#Automatic_File_Replication_Translator_.28AFR.29
http://www.gluster.org/docs/index.php/Understanding_AFR_Translator
Nowhere in those sections is it stated, either directly or by implication,
that client-side AFR is more reliable than server-side AFR. I'm not
saying that the statement is incorrect - only that the documentation
noted above doesn't appear to support it.
How, exactly, is relying on the clients to perform the AFR logic more
reliable than allowing the servers to do so ? In either case, Gluster is
responsible for all of the transactions, and for determining how to deal
with node failure...
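For reference, my understanding of what a client-side AFR setup would look like is roughly the following client volume file (a sketch only - the hostnames and the brick name are placeholders, not my actual config) :

```
# client.vol - client-side AFR sketch (GlusterFS 1.3-style syntax)
# "server1"/"server2" and "brick" are hypothetical names.

volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume brick
end-volume

# The AFR translator sits on the client, above both remote bricks,
# so the client itself writes to each server.
volume afr
  type cluster/afr
  subvolumes remote1 remote2
end-volume
```

If that is accurate, then the replication decisions really do live entirely in each client's translator stack rather than on the servers.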
I am also curious about the network traffic implications of such a
change. In the current setup, the overhead of replication is confined to
two nodes - the servers. Perhaps i misunderstand client-based AFR (which
is entirely possible!), but i suspect that my replication overhead would
grow with each client, since every client would send its writes to both
servers. Currently this isn't a problem, but as the number of clients
increases, so would the overhead - correct ? We also intend to double
the number of servers (at a remote site) - wouldn't that in turn double
the replication traffic sent by each client ? This would get out of hand
fairly quickly...
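To make the concern concrete, here is a back-of-envelope model (my own numbers, purely hypothetical) of where the write traffic lands in each scheme :

```python
# Rough model of replicated write traffic. Assumption: every write of
# `write_bytes` must eventually reach every replica, one way or another.

def client_side_traffic(clients, replicas, write_bytes):
    """Client-side AFR: each client sends its write to every replica,
    so the bytes leaving the client links scale with the replica count."""
    return clients * replicas * write_bytes

def server_side_traffic(clients, replicas, write_bytes):
    """Server-side AFR: each client sends one copy to one server, which
    then forwards (replicas - 1) copies to its peers over server links."""
    client_links = clients * write_bytes
    server_links = clients * (replicas - 1) * write_bytes
    return client_links + server_links

W = 1 << 20  # a 1 MiB write, 10 clients, 2 replicas
print(client_side_traffic(10, 2, W))  # 20 MiB, all on the client links
print(server_side_traffic(10, 2, W))  # 10 MiB client links + 10 MiB server link
```

Notably, the *total* bytes moved come out the same either way ; the difference is which links carry them. With client-side AFR every client's uplink carries the full replica count, and doubling the servers would double that per-client figure - which is exactly the scaling i'm worried about.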
Don't get me wrong, i am more than happy to try a client-based AFR
config if it truly is superior ; however as of right now i don't know
how or why this would be the case.
Thank you all for your continued suggestions and discourse.
--
Daniel Maher <dma+gluster AT witbe DOT net>