[Gluster-users] How gluster parallelize reads

Alastair Neil ajneil.tech at gmail.com
Mon Oct 3 18:13:58 UTC 2016


I think this might give you something like the behaviour you are looking
for: it will not balance blocks across different servers, but it will
distribute reads from clients across all the servers.

cluster.read-hash-mode 2

0 means use the first server to respond, I think - at least that's my guess
of what "first up server" means.
1 hashes by GFID, so clients will use the same server for a given file, but
different files may be accessed from different nodes.
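For reference, this is a volume option you set with the gluster CLI. A
quick sketch (assuming a replicated volume named "myvol" - substitute your
own volume name; "volume get" needs a reasonably recent Gluster):

```shell
# Show the current value of the option (assumes a volume named "myvol")
gluster volume get myvol cluster.read-hash-mode

# Spread reads from different clients across the replicas
gluster volume set myvol cluster.read-hash-mode 2
```

The change takes effect on the volume without a remount; clients pick it
up via the updated volume configuration.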

On 3 October 2016 at 05:50, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:

> 2016-10-03 11:33 GMT+02:00 Joe Julian <joe at julianfamily.org>:
> > By default, the client reads from localhost first, if the client is also
> a
> > server, or the first to respond. This can be tuned to balance the load
> > better (see "gluster volume set help") but that's not necessarily more
> > efficient. As always, it depends on the workload.
>
> So it is not true to say that Gluster aggregates bandwidth on reads.
> Each client will always read from 1 node. Having 3 nodes means that
> I can support 3 times as many clients.
>
> Something like Ethernet bonding: each transfer is always subject to the
> single port's speed, but I can support twice the connections by creating
> a bond of 2.
>
> > Reading as you suggested is actually far less efficient. The reads would
> > always be coming from disk and never from any readahead cache.
>
> What I mean is reading the same file in multiple parts from multiple
> servers, not reading the same part of the file from multiple servers.
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>