[Gluster-users] afr with gluster command line - possible?
lejeczek
peljasz at yahoo.co.uk
Mon Apr 23 15:44:32 UTC 2012
Yes, precisely.
In the past I had AFRs running this way:

  box A loopback client -> box A server <-> box B server <- box B loopback client

or, similarly, with the local loopback client replaced by a legitimate
separate client that only had access to one NIC on one brick.
The simple idea was that the client did not have to know about all the
bricks/servers.
I'd think this is what most of us would like; there are quite a few
situations where it is greatly helpful.
Nowadays this seems impossible, or am I wrong?
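For reference, the server-side volfile on box A looked roughly like the
sketch below - this is from memory, and the directory, host and
subvolume names are made up:

  # local storage for box A's brick
  volume posix
    type storage/posix
    option directory /data/brick
  end-volume

  # client connection to box B's exported brick
  volume remote-b
    type protocol/client
    option transport-type tcp
    option remote-host boxB
    option remote-subvolume brick
  end-volume

  # AFR across the local brick and box B
  volume replicate
    type cluster/replicate
    subvolumes posix remote-b
  end-volume

  # export the replicated volume; the loopback client on box A mounts this
  volume server
    type protocol/server
    option transport-type tcp
    option auth.addr.replicate.allow 127.0.0.1
    subvolumes replicate
  end-volume

With the mirror image of this on box B, each box only ever mounted
127.0.0.1 and the servers replicated between themselves.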
On 23/04/12 16:14, Brian Candler wrote:
> On Mon, Apr 23, 2012 at 03:46:01PM +0100, lejeczek wrote:
>> but is it true server-side replication?
> No, you're right - it's driven from the client side, I believe. This is so
> that the client can connect to either server if the other is down.
>
>> if I'm not mistaken, AFR would take care of it, while the client (fuse)
>> would only need to map/connect to one brick
>> let's say two nodes/peers are clients at the same time; both
>> clients/bricks would only mount themselves on 127.0.0.1 and
>> replication would still work - would it?
> Sorry, I don't understand that question.
>
> Using the native client, the mount is only used to make initial contact to
> retrieve the volume info. After that point, the client talks directly to
> the brick(s) it needs to, as defined in the volume info.
>
> So if you
>
> mount 127.0.0.1:/foo /foo
>
> (because 127.0.0.1 happens to be one of the nodes in the cluster), and
> volume /foo contains server1:/brick1 and server2:/brick2, then the client
> will talk to "server1" and/or "server2" when reading and writing files.
>
> On server1, you could put "127.0.0.1 server1" in the hosts file if you like
> to force communication over that IP, but in practice using server1's public
> IP is fine - it's still a loopback communication.
>
> Indeed, if you have three nodes in your cluster, you can
>
> mount server3:/foo /foo
>
> and once the volume is mounted, data transfer will only take place between
> the client and server1/server2. (This is the native client, remember - NFS
> is different: the traffic will hit server3 and then be forwarded to server1
> and/or server2 as required.)
>
> The only other "server side" replication that I know of is geo-replication.
>
> Regards,
>
> Brian.
>
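Putting Brian's explanation above into commands, roughly as I
understand it (the volume name foo and the hosts server1/server2 are
just the placeholders from his example):

  # create and start a 2-way replicated volume (run on any peer)
  gluster volume create foo replica 2 server1:/brick1 server2:/brick2
  gluster volume start foo

  # optional, on server1: pin its own name to loopback so traffic to
  # the local brick stays on 127.0.0.1
  echo "127.0.0.1 server1" >> /etc/hosts

  # native-client mount; the loopback address is only used to fetch
  # the volfile, after which the client talks to server1/server2 directly
  mount -t glusterfs 127.0.0.1:/foo /foo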