[Gluster-devel] [Gluster-users] 120k context switches on GlusterFS nodes

Pranith Kumar Karampuri pkarampu at redhat.com
Wed May 17 14:39:54 UTC 2017


+gluster-devel.
I would expect it to be high: a context switch means the CPU moving from
one task to another, and that is exactly what syscalls cause. In the end,
everything the bricks do is syscalls. That said, I am not sure how to
measure what counts as normal.
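As a hedged sketch of how one might measure this (the `glusterfsd` PID
lookup is left to the reader; `$$`, i.e. the current shell, is used below
only to keep the snippet self-contained and runnable anywhere):

```shell
# System-wide context switches per second: the "cs" column of vmstat
# (guarded in case the tool is not installed on the node).
command -v vmstat >/dev/null && vmstat 1 2

# Per-process cumulative counters from /proc; on a Gluster node,
# substitute the PID of a glusterfsd brick process for $$.
grep ctxt /proc/$$/status
# voluntary_ctxt_switches:    process blocked (I/O, locks) and yielded
# nonvoluntary_ctxt_switches: process was preempted by the scheduler
```

Sampling the /proc counters twice a few seconds apart gives a per-process
rate, which can then be compared against a quiet-hours baseline.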

On Tue, May 16, 2017 at 11:13 PM, mabi <mabi at protonmail.ch> wrote:

> Today I even saw up to 400k context switches for around 30 minutes on my
> two replica nodes... Does anyone else see such high context switches on
> their GlusterFS nodes?
>
> I am wondering what is "normal" and if I should be worried...
>
>
>
>
> -------- Original Message --------
> Subject: 120k context switches on GlusterFS nodes
> Local Time: May 11, 2017 9:18 PM
> UTC Time: May 11, 2017 7:18 PM
> From: mabi at protonmail.ch
> To: Gluster Users <gluster-users at gluster.org>
>
> Hi,
>
> Today I noticed that for around 50 minutes my two GlusterFS 3.8.11 nodes
> had a very high number of context switches, around 120k. The average is
> usually more like 1k-2k. So I checked what was happening, and there were
> simply more users accessing (downloading) their files at the same time.
> These are directories with typical cloud files, meaning many files of all
> sizes, ranging from a few kB to several MB.
>
> I have never seen such a high number of context switches, so I wanted to
> ask whether this is normal or to be expected. I do not find any signs of
> errors or warnings in any log files.
>
> My volume is a replicated volume on two nodes with ZFS as the underlying
> filesystem, and the volume is mounted via FUSE on the client (the cloud
> server). On that cloud server the glusterfs process was using quite a lot
> of system CPU, but that server (a VM) only has 2 vCPUs, so maybe I should
> increase the number of vCPUs...
>
> Any ideas or recommendations?
>
>
>
> Regards,
> M.
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
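On the CPU question quoted above, a hedged sketch: to see whether the 2
vCPUs are the bottleneck, look at the user vs. system CPU split for the
FUSE client process (`glusterfs` is the usual client process name; `$$`
below is a stand-in so the snippet runs anywhere):

```shell
# %CPU and cumulative CPU time for one process. On the cloud server,
# replace $$ (this shell) with the glusterfs FUSE client PID, e.g.
# obtained from: pgrep -x glusterfs
ps -o pid,%cpu,time,comm -p $$

# pidstat (from sysstat, if installed) splits %usr from %system per
# second; a high %system share points at syscall-heavy load.
command -v pidstat >/dev/null && pidstat -u -p $$ 1 3
```

If %system dominates while users are downloading, adding vCPUs is a
reasonable next step to try.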



-- 
Pranith

