[Gluster-users] 120k context switches on GlusterFS nodes
Jamie Lawrence
jlawrence at squaretrade.com
Wed May 17 17:49:26 UTC 2017
> On May 17, 2017, at 10:20 AM, mabi <mabi at protonmail.ch> wrote:
>
> I don't know exactly what kind of context switches they were, but what I do know is that it is the "cs" number under "system" when you run vmstat.
>
> Also, I use the Percona Linux monitoring template for Cacti (https://www.percona.com/doc/percona-monitoring-plugins/LATEST/cacti/linux-templates.html), which monitors context switches too. If that's of any use, interrupts were also quite high during that time, with peaks up to 50k interrupts.
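For what it's worth, the "cs" number vmstat reports comes from the system-wide counter in /proc/stat, so you can sample it directly if you want a quick rate without the tooling. A rough sketch (assumes a Linux /proc filesystem; the one-second interval is arbitrary):

```shell
# Sample the kernel's cumulative context-switch counter twice, one second
# apart, and print the per-second rate. This is the same counter that
# vmstat's "cs" column and the Percona/Cacti graphs are derived from.
cs1=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
cs2=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/sec: $((cs2 - cs1))"
```

Comparing that rate on a loaded node against an idle one gives you a baseline instead of a single scary-looking number.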
You can't read or write data from the disk or send data over the network from userspace without making system calls. System calls mean context switches. So you should expect to see the CS number scale with load - the whole point of Gluster is to read and write and send data over the network.
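You can see that relationship per process: the kernel exposes voluntary context switches (the process blocked in a syscall like read/write/poll) and involuntary ones (preemption) in /proc/[pid]/status. A sketch for attributing the system-wide number to the Gluster daemons (assumes the brick processes are named glusterfsd, as in a typical install):

```shell
# For each Gluster brick daemon, show how many of its context switches
# were voluntary (blocking syscalls -- expected for an I/O server) vs.
# involuntary (preempted by the scheduler).
for pid in $(pgrep glusterfsd); do
    echo "== PID $pid =="
    grep ctxt_switches "/proc/$pid/status"
done
```

A high voluntary count on a busy brick is exactly the syscall-driven behavior described above; a high involuntary count would point more toward CPU contention.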
As far as them being "excessive", I don't know how to think about that without at least a comparison, or better, some evidence that something is doing more work than it "should". (Or best, line numbers where unnecessary work is being performed.)
Is there something other than a surprising number to make you think it isn't behaving well? Did the number jump after an upgrade? Do you have other systems doing roughly the same thing with other software that performs better? Keep in mind that, say, a vanilla NFS or SMB server doesn't have the inter-gluster-node overhead, and how much of that traffic there is depends on how you've configured Gluster.
-j