[Gluster-devel] gfproxy

Vijay Bellur vbellur at redhat.com
Wed Aug 30 03:26:02 UTC 2017


On Wed, Aug 23, 2017 at 12:41 PM, Poornima Gurusiddaiah <pgurusid at redhat.com> wrote:

> Hi,
>
> This mail is regarding the gfproxy feature; please go through it and let
> us know your thoughts.
>
> About the gfproxy feature:
> -----------------------------------
> As per the current architecture of Gluster, the client is the more
> intelligent side and carries all the clustering logic. This approach has
> its own pros and cons. In several use cases (e.g. Samba, Qemu, block
> device export), it is desirable to have all this clustering logic on the
> server side and keep the client as thin as possible. This makes upgrades
> easier and is more scalable, since a thin client consumes far fewer
> resources than a normal client.
>
> Approach:
> The client volfile is split into two volfiles:
> 1. Thin client volfile: master (gfapi/FUSE) followed by protocol/client
> 2. gfproxyd volfile: protocol/server on top, followed by the performance
> xlators, cluster xlators, and protocol/client xlators to the bricks.
> With this model, the thin client connects to gfproxyd and to glusterd (as
> always), while gfproxyd connects to all the bricks. The major problem with
> this is performance when the client and gfproxyd are not co-located.
>
>
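
To make the split concrete, here is a rough hand-written sketch of the two
graphs; the volume names, hosts and option values below are placeholders of
my own, not actual volgen output:

    # Thin client volfile: master (gfapi/FUSE) on top of a single
    # protocol/client that points at gfproxyd instead of a brick
    volume testvol-thin-client
        type protocol/client
        option transport-type tcp
        option remote-host gfproxy-node.example.com
        option remote-subvolume testvol-gfproxy-top
    end-volume

    # gfproxyd volfile: one protocol/client per brick at the bottom,
    # cluster/* and performance/* xlators in the middle (elided here),
    # and protocol/server at the top for the thin clients
    volume testvol-client-0
        type protocol/client
        option transport-type tcp
        option remote-host brick-node-1.example.com
        option remote-subvolume /bricks/testvol/brick0
    end-volume
    # ... remaining protocol/client xlators, cluster and performance
    # xlators, ending in a top xlator named testvol-gfproxy-top ...
    volume testvol-gfproxyd
        type protocol/server
        option transport-type tcp
        subvolumes testvol-gfproxy-top
    end-volume

In other words, the cluster and performance xlators move out of the
application process and into gfproxyd.
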
> What is already done by Facebook:
> ---------------------------------------------
> 1. Volgen code for generating the thin client volfile and the gfproxyd
> daemon volfile.
> 2. AHA translator on the thin client, so that on restarts or network
> disruptions between the thin client and gfproxyd, fops are retried and the
> client does not become inaccessible.
>
>
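
For context, my understanding of the Facebook patches is that AHA sits in
the thin client graph directly above protocol/client, so it can intercept
failures and replay fops when the connection to gfproxyd is lost. Roughly
(the type and option names here are my assumption, please correct me if the
actual xlator differs):

    volume testvol-aha
        type cluster/aha                  # assumed type name
        option server-retries 5           # assumed option: retry attempts per fop
        subvolumes testvol-thin-client
    end-volume
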
> What remains to be done:
> ---------------------------------
> 1. Glusterd managing gfproxyd
>     Currently the gfproxy daemon listens on the fixed port 40000; this
> needs to change if we want to run multiple gfproxyd instances (one per
> volume).
>

One per volume seems reasonable. However, as the number of volumes scales,
the number of gfproxy processes might become overwhelming and necessitate
multiplexing (as has been the case with bricks). We can also consider
exporting one subset of volumes through a node and different subsets
through other nodes. Further thought is needed to evolve the set of
policies for managing gfproxyd daemons on trusted storage pool nodes, and
possibly outside the trusted storage pool as well.
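
As a simple starting point for the one-daemon-per-volume case, volgen could
presumably emit a distinct listen port into each gfproxyd volfile instead of
the fixed 40000, with glusterd allocating and tracking the ports much like
it does for bricks. Something along these lines (the port value is
illustrative):

    volume testvol-gfproxyd
        type protocol/server
        option transport-type tcp
        option transport.socket.listen-port 40001   # allocated per volume
        subvolumes testvol-gfproxy-top
    end-volume
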



> 2. Redo the volgen and daemon management in glusterd2
>     - Ability to run daemons on a subset of cluster nodes
>     - SSL
>     - Validation with other features like snapshot and tiering
> 3. Graph switch for gfproxyd
>


I wonder if we can implement a delay interval in AHA before failing over to
a different server. If we can do that, then we may not have to worry about
graph switch and could instead restart gfproxyd daemons upon configuration
changes that affect graph topology. A delay before failing over would also
help in situations where there is a transient network interruption.
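
If we go down that route, the delay could probably be exposed as just
another tunable on the AHA xlator in the thin client graph, for example
(the option names below are made up purely for illustration):

    volume testvol-aha
        type cluster/aha                     # assumed type name, as above
        option failover-delay-seconds 30     # hypothetical: wait this long
                                             # before trying another gfproxyd
        option retry-interval-seconds 2      # hypothetical: keep re-trying
                                             # the current gfproxyd meanwhile
        subvolumes testvol-thin-client
    end-volume

A graph-topology change on the gfproxyd side would then look, from the thin
client's point of view, like a transient disconnect that resolves within
the delay window.
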



> 4. Failover from one gfproxyd to another
>

What are the problems we need to consider here?


> 5. Reduce resource consumption (memory and threads) on the thin client
> 6. Performance analysis
>
> Issue: https://github.com/gluster/glusterfs/issues/242
>


Might be a good idea to capture this discussion on the issue and continue
there!

Thanks,
Vijay