[Gluster-devel] FOP ratelimit?

Venky Shankar vshankar at redhat.com
Thu Sep 10 06:46:41 UTC 2015


On Thu, Sep 3, 2015 at 11:36 AM, Raghavendra Gowdappa
<rgowdapp at redhat.com> wrote:
>
>
> ----- Original Message -----
>> From: "Emmanuel Dreyfus" <manu at netbsd.org>
>> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Pranith Kumar Karampuri" <pkarampu at redhat.com>
>> Cc: gluster-devel at gluster.org
>> Sent: Wednesday, September 2, 2015 8:12:37 PM
>> Subject: Re: [Gluster-devel] FOP ratelimit?
>>
>> Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>>
>> > It's helpful if you can give some pointers on what parameters (like
>> > latency, throughput, etc.) you want us to consider for QoS.
>>
>> Full-blown QoS would be nice, but a first line of defense against
>> resource hogs seems badly needed.
>>
>> A bare minimum could be to process clients' FOPs in a round-robin
>> fashion. That way, even if one client sends a lot of FOPs, there is
>> always some window for others to slip in.
>>
>> Any opinion?
>
> As of now we depend on epoll/poll events to inform servers about incoming messages. All sockets are put in the same event pool, represented by a single poll-control fd, so the order in which we process messages from various clients really depends on how epoll/poll picks events across multiple sockets. Do poll/epoll have any sort of scheduling, or is it random? Any pointers on this are appreciated.

I haven't come across any kind of scheduling for picking events across
sockets. Routers use synthetic throttling for traffic shaping; the most
commonly used technique is a TBF (token bucket filter), which "induces"
latency for outbound traffic. Lustre had some work[1] done for QoS along
the lines of TBF.
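
To make the idea concrete, here's a rough token-bucket sketch (plain C;
the tbf_t/tbf_take names are invented for illustration, this is not code
from any existing tree). A FOP proceeds only if enough tokens are
available; otherwise the caller queues it, and that queuing is where the
induced latency comes from:

#include <time.h>

typedef struct {
    double tokens;          /* current tokens (e.g. ops or bytes) */
    double rate;            /* refill rate per second             */
    double burst;           /* bucket capacity                    */
    struct timespec last;   /* time of last refill                */
} tbf_t;

static double
elapsed_sec(struct timespec *a, struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

/* Returns 1 if @cost tokens were available (FOP may proceed now),
 * 0 if the caller should queue/delay it. */
static int
tbf_take(tbf_t *tb, double cost)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    /* Refill proportionally to elapsed time, capped at burst size. */
    tb->tokens += tb->rate * elapsed_sec(&tb->last, &now);
    if (tb->tokens > tb->burst)
        tb->tokens = tb->burst;
    tb->last = now;

    if (tb->tokens < cost)
        return 0;
    tb->tokens -= cost;
    return 1;
}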

HTH.

[1]: http://cdn.opensfs.org/wp-content/uploads/2014/10/7-DDN_LiXi_lustre_QoS.pdf
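
PS: for Manu's round-robin suggestion upthread, the shape would be
something like the sketch below (again purely illustrative; the
client/queue types are invented). Taking at most one FOP per client
per pass means a flooding client can't starve the others:

/* Each client keeps its own FOP queue; dequeue() pops one or
 * returns NULL when the queue is empty. */
struct fop;                                   /* opaque request */
struct client {
    struct fop *(*dequeue)(struct client *);
};

static void
drain_round_robin(struct client **clients, int nclients,
                  void (*process)(struct fop *))
{
    int progress = 1;
    while (progress) {
        progress = 0;
        /* One FOP per client per pass, in a fixed rotation. */
        for (int i = 0; i < nclients; i++) {
            struct fop *f = clients[i]->dequeue(clients[i]);
            if (f) {
                process(f);
                progress = 1;
            }
        }
    }
}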

>
>>
>> --
>> Emmanuel Dreyfus
>> http://hcpnet.free.fr/pubz
>> manu at netbsd.org
>>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

