[Gluster-users] Read from fastest node only
hunter86_bg at yahoo.com
Thu Aug 5 04:00:47 UTC 2021
I'm not so sure. Imagine that the local copy needs healing (it's outdated). Gluster will then check whether the other nodes' copies are blaming the local one, and only if the local copy is "GREEN" will it read locally. That check against the other servers is the slowest part, due to the latency between the nodes.
I guess the only way is to use the FUSE client mount options and manually pin the source brick for reads.
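As a rough sketch of what pinning the read source could look like (volume name "myvol" and server names are hypothetical; the option names come from the replicate translator, so verify them against your GlusterFS version with `gluster volume set help`):

```shell
# Pin reads for the whole volume to one replica; the value is the
# client xlator name as reported by the volume graph (here the first brick)
gluster volume set myvol cluster.read-subvolume myvol-client-0

# Or pin per mount on the FUSE client via an xlator option
# (index 0 = first brick of the replica set)
mount -t glusterfs \
  -o xlator-option=*replicate*.read-subvolume-index=0 \
  server1:/myvol /mnt/gluster
```

The per-mount form only affects that one client, which is probably what you want if different clients are closest to different bricks.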
Another option that comes to my mind is pacemaker with an IPaddr2 resource and the option globally-unique=true. If done properly, pacemaker will bring the IP up on all nodes, but using IPTABLES (manipulated automatically by the cluster) only 1 node will be active at a time, with a preference for the fastest node. The FUSE client can then safely be configured to use that VIP, which in case of failure (of the fast node) will be moved to another node of the Gluster TSP. Yet, this will be a very complex design.
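A minimal sketch of that pacemaker idea with pcs (the VIP, volume, and node names are illustrative; globally-unique IPaddr2 clones rely on the iptables CLUSTERIP target, so check that your distribution still supports it):

```shell
# Clone a VIP across the cluster; CLUSTERIP hashing decides which
# node actually answers for a given source IP
pcs resource create gluster-vip ocf:heartbeat:IPaddr2 \
  ip=192.0.2.10 cidr_netmask=24 clusterip_hash=sourceip \
  clone meta globally-unique=true clone-max=3 clone-node-max=3

# Prefer the fastest node for the clone instances
pcs constraint location gluster-vip-clone prefers fastnode=100

# FUSE clients then mount via the VIP instead of a fixed server
mount -t glusterfs 192.0.2.10:/myvol /mnt/gluster
```

On failure of the preferred node, pacemaker moves the surviving clone instances, so the VIP keeps answering without any client-side change.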
Best Regards,Strahil Nikolov
On Wed, Aug 4, 2021 at 22:28, Gionatan Danti<g.danti at assyoma.it> wrote: On 2021-08-03 19:51 Strahil Nikolov wrote:
> The difference between thin and usual arbiter is that the thin arbiter
> takes action only when it's needed (one of the data bricks is down),
> so the thin arbiter's latency won't affect you as long as both data
> bricks are running.
> Keep in mind that thin arbiter is less used. For example, I have never
> deployed a thin arbiter.
Maybe I am horribly wrong, but local-node reads should *not* involve
other nodes in any manner - ie: no checksum or voting is done for reads.
Hashing (DHT) should spread different files to different nodes when
distributing, but for mirroring (AFR) any node should have a valid copy
of the file. So when using choose-local, all reads which can really be
local (ie: the requested file is available locally) should not suffer
from remote-party latency.
Is that correct?
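For reference, choose-local is just a volume option on the replicate translator (a sketch; the volume name "myvol" is hypothetical):

```shell
# Prefer the local brick for reads when the client is also a server
gluster volume set myvol cluster.choose-local on

# Inspect the current value
gluster volume get myvol cluster.choose-local
```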
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8