[Gluster-devel] I/O performance

Vijay Bellur vbellur at redhat.com
Fri Feb 1 06:53:48 UTC 2019


On Thu, Jan 31, 2019 at 10:01 AM Xavi Hernandez <xhernandez at redhat.com>
wrote:

> Hi,
>
> I've been doing some tests with the global thread pool [1], and I've
> observed one important thing:
>
> Since this new thread pool has very low contention (apparently), it
> exposes other problems when the number of threads grows. What I've seen is
> that some workloads use all available threads on bricks to do I/O, causing
> avgload to grow rapidly and saturating the machine (or so it seems), which
> really makes everything slower. Reducing the maximum number of threads
> actually improves performance. Other workloads, though, do little I/O
> (probably most of it is locking or smallfile operations). In this case,
> limiting the number of threads to a small value causes a performance
> reduction; to increase performance we need more threads.
>
> So this is making me think that maybe we should implement some sort of I/O
> queue with a maximum I/O depth for each brick (or disk, if bricks share the
> same disk). This way we can limit the number of requests physically
> accessing the underlying FS concurrently, without actually limiting the
> number of threads that can be doing other things on each brick. I think
> this could improve performance.
>

Perhaps we could throttle both aspects - the number of I/O requests per disk
and the number of threads too? That way we will be able to behave well both
when there is bursty I/O to the same disk and when there are multiple
concurrent requests to different disks. Do you have a reason not to limit
the number of threads?
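
To make the per-disk I/O depth idea concrete, here is a minimal sketch of
what such a cap could look like, assuming a simple counting semaphore
around the actual syscall. The names disk_io_limit_t, MAX_IO_DEPTH and
limited_pread are invented for illustration and are not existing GlusterFS
code; the thread count itself would still be bounded separately by the
pool's own limit.

/* Hypothetical sketch: cap concurrent I/O per disk with a counting
 * semaphore while worker threads stay free for non-I/O work.
 * Names (MAX_IO_DEPTH, disk_io_limit_t, limited_pread) are invented. */
#include <semaphore.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_IO_DEPTH 16 /* tunable per disk */

typedef struct {
    sem_t slots; /* counts free I/O slots for this disk */
} disk_io_limit_t;

static int
disk_io_limit_init(disk_io_limit_t *lim)
{
    return sem_init(&lim->slots, 0, MAX_IO_DEPTH);
}

/* Wrap the actual read so that at most MAX_IO_DEPTH requests hit the
 * underlying filesystem at once; additional callers block here instead
 * of piling more I/O onto the disk. */
static ssize_t
limited_pread(disk_io_limit_t *lim, int fd, void *buf,
              size_t count, off_t offset)
{
    ssize_t ret;

    sem_wait(&lim->slots);  /* acquire an I/O slot */
    ret = pread(fd, buf, count, offset);
    sem_post(&lim->slots);  /* release the slot */

    return ret;
}

With something like this, worker threads that arrive while the disk
already has MAX_IO_DEPTH requests in flight wait briefly on the semaphore
instead of driving avgload up, while threads doing locking or other
smallfile work are unaffected.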


> Maybe this approach could also be useful on the client side, but I think
> it's not so critical there.
>

Agree, rate limiting on the server side would be more appropriate.


Thanks,
Vijay

