[Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

Raghavendra Gowdappa rgowdapp at redhat.com
Mon Mar 4 10:31:04 UTC 2019

What does the per-thread CPU usage look like on these clients? With highly
concurrent workloads we've seen the single thread that reads requests from
/dev/fuse (the fuse reader thread) become a bottleneck. I'd like to know
what the CPU usage of that thread looks like (you can check with top -H).
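
A minimal way to look at this, assuming the fuse mount is served by a
client process named glusterfs (adjust the pgrep pattern if yours differs):

  # per-thread CPU usage of the fuse client process, updated live
  top -H -p $(pgrep -x glusterfs | head -n1)

  # or sample per-thread usage over time with pidstat (from sysstat)
  pidstat -t -p $(pgrep -x glusterfs | head -n1) 1 10

If one thread sits near 100% of a core while the others stay mostly idle,
that would point at the single-reader bottleneck described above.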

On Mon, Mar 4, 2019 at 3:39 PM Hu Bert <revirii at googlemail.com> wrote:

> Good morning,
> we use gluster v5.3 (replicate across 3 servers, 2 volumes, RAID10 as
> bricks) with currently 10 clients; 3 of them do heavy I/O operations
> (apache tomcats, reading+writing lots of small images). These 3 clients
> show quite high I/O wait (stats from yesterday), as can be seen here:
> client: https://abload.de/img/client1-cpu-dayulkza.png
> server: https://abload.de/img/server1-cpu-dayayjdq.png
> The iowait values in the two graphs differ a lot. I checked netstat on
> the different clients; the other clients have 8 open connections:
> https://pastebin.com/bSN5fXwc
> 4 for each server and each volume. The 3 clients doing the heavy I/O
> currently have 170, 139 and 153 connections according to netstat. An
> example for one of these clients can be found here:
> https://pastebin.com/2zfWXASZ
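> (A quick way to get such a count per client, assuming the fuse client
> shows up as "glusterfs" in the process column and the command runs as root:
>   netstat -tnp | grep glusterfs | wc -l
> or the same with "ss -tnp".)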
> gluster volume info: https://pastebin.com/13LXPhmd
> gluster volume status: https://pastebin.com/cYFnWjUJ
> I was just wondering whether the iowait comes from the clients and their
> workload: they request a lot of files (up to hundreds per second) and open
> a lot of connections, and the servers can't answer fast enough. Maybe
> something can be tuned here? Especially the server|client.event-threads
> (both set to 4), performance.(high|normal|low|least)-prio-threads (all at
> the default of 16) and performance.io-thread-count (32) options; maybe
> these aren't configured properly for up to 170 client connections.
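> For illustration only (the values are placeholders to show the syntax,
> not recommendations), such options could be raised per volume with:
>   gluster volume set <volname> client.event-threads 8
>   gluster volume set <volname> server.event-threads 8
>   gluster volume set <volname> performance.io-thread-count 32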
> Both servers and clients have a Xeon CPU (6 cores, 12 threads), a 10
> GBit connection, and 128G (servers) or 256G (clients) of RAM.
> Enough power :-)
> Thx for reading && best regards,
> Hubert