[Gluster-users] 120k context switches on GlusterFS nodes
Ravishankar N
ravishankar at redhat.com
Wed May 17 00:37:21 UTC 2017
On 05/16/2017 11:13 PM, mabi wrote:
> Today I even saw up to 400k context switches for around 30 minutes on
> my two-node replica... Does anyone else see such high context switch
> rates on their GlusterFS nodes?
>
> I am wondering what is "normal" and if I should be worried...
>
>
>
>
>> -------- Original Message --------
>> Subject: 120k context switches on GlusterFS nodes
>> Local Time: May 11, 2017 9:18 PM
>> UTC Time: May 11, 2017 7:18 PM
>> From: mabi at protonmail.ch
>> To: Gluster Users <gluster-users at gluster.org>
>>
>> Hi,
>>
>> Today I noticed that for around 50 minutes my two GlusterFS 3.8.11
>> nodes had a very high number of context switches, around 120k.
>> Usually the average is more like 1k-2k. So I checked what was
>> happening, and there were simply more users accessing (downloading)
>> their files at the same time. These are directories with typical
>> cloud files, meaning files of all sizes ranging from a few kB to
>> several MB, and of course a lot of them.
>>
>> I have never seen such a high number of context switches before, so
>> I wanted to ask whether this is normal or to be expected. I do not
>> find any signs of errors or warnings in any log files.
>>
Which context switches are you referring to (syscall context switches
on the bricks)? How did you measure this?
-Ravi
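
For reference, one way to sample this on Linux: /proc/stat exposes a
cumulative "ctxt" counter for the whole host, which is the same source
vmstat's "cs" column is derived from. A minimal Python sketch of
sampling the rate (the 5-second interval is only an illustrative
choice, not a value from this thread):

#!/usr/bin/env python3
# Minimal sketch: host-wide context-switch rate on Linux.
# The "ctxt" line in /proc/stat is a cumulative counter since boot,
# so the rate is the delta between two samples over the interval.
import time

def system_ctxt():
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt"):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

INTERVAL = 5  # seconds; illustrative assumption, not from the thread
before = system_ctxt()
time.sleep(INTERVAL)
after = system_ctxt()
print(f"{(after - before) / INTERVAL:.0f} context switches/s")

Sampled this way over a busy window, a sustained six-figure rate
against a 1k-2k baseline is easy to confirm or rule out.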
>> My volume is a two-node replicated volume with ZFS as the backing
>> filesystem, and it is mounted using FUSE on the client (the cloud
>> server). On that cloud server the glusterfs process was using quite
>> a lot of system CPU, but that server (a VM) only has 2 vCPUs, so
>> maybe I should increase the number of vCPUs...
>>
>> Any ideas or recommendations?
>>
>>
>>
>> Regards,
>> M.
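
If the churn is coming from the glusterfs FUSE client, the per-process
counters in /proc/<pid>/status separate voluntary context switches
(the task blocked on I/O or a lock) from nonvoluntary ones (it was
preempted off the CPU). A mostly nonvoluntary pattern would support
the too-few-vCPUs theory; a mostly voluntary one points at I/O waits
instead. A minimal sketch, assuming the gluster processes can be
identified by a comm name starting with "gluster":

#!/usr/bin/env python3
# Minimal sketch: per-process context-switch counters for gluster
# processes (glusterfs client, glusterfsd bricks, glusterd), found by
# scanning /proc. The name-prefix match is an assumption; adjust it
# for your own setup.
import os

def gluster_pids():
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except OSError:
            continue  # process exited while we were scanning
        if name.startswith("gluster"):
            yield pid, name

for pid, name in gluster_pids():
    with open(f"/proc/{pid}/status") as f:
        fields = dict(line.split(":", 1) for line in f if ":" in line)
    vol = fields.get("voluntary_ctxt_switches", "?").strip()
    nonvol = fields.get("nonvoluntary_ctxt_switches", "?").strip()
    print(f"{pid} {name}: voluntary={vol} nonvoluntary={nonvol}")

These counters are cumulative, so taking two snapshots a few seconds
apart shows which process is actually producing the switches.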