<div class="moz-cite-prefix">On 05/17/2017 11:07 PM, Pranith Kumar
Karampuri wrote:<br>
</div>
> + gluster-devel
>
> On Wed, May 17, 2017 at 10:50 PM, mabi <mabi@protonmail.ch> wrote:
>> I don't know exactly what kind of context switches they were, but I do
>> know that it is the "cs" number under "system" when you run vmstat.
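
(For reference: that is the "cs" column in the "system" group of vmstat's
output. A plain invocation like the one below, run on a brick node, prints
it once per second; nothing here is gluster-specific.)

    # sample every second; "cs" = context switches/s, "in" = interrupts/s
    vmstat 1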

Okay, that could be due to the syscalls themselves, or to pre-emptive
multitasking if there aren't enough CPU cores. I think the spike in the
numbers is due to more users accessing the files at the same time, as you
observed, which translates into more syscalls. You can try capturing the
gluster volume profile info the next time it occurs and correlating it
with the cs count. If you don't see any negative performance impact, I
don't think you need to be bothered much by the numbers.
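
A profiling session would look roughly like this ("myvol" is a placeholder
for your volume name; profiling adds some overhead, so stop it once you
have your sample):

    # begin collecting per-brick FOP latency/count statistics
    gluster volume profile myvol start
    # ...wait for (or reproduce) the context-switch spike...
    gluster volume profile myvol info > profile-during-spike.txt
    gluster volume profile myvol stop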

HTH,
Ravi
>> Also, I use the Percona Linux monitoring template for cacti
>> (https://www.percona.com/doc/percona-monitoring-plugins/LATEST/cacti/linux-templates.html),
>> which monitors context switches too. If that's of any use, interrupts
>> were also quite high during that time, with peaks of up to 50k
>> interrupts.
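
(A generic way to watch those two rates live outside cacti: sar, from the
sysstat package, should report both per second; nothing below is specific
to this setup.)

    sar -w 1        # cswch/s: context switches per second
    sar -I SUM 1    # intr/s:  total interrupts per second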
<div class="HOEnZb">
<div class="h5">
<div><br>
</div>
<div
class="m_-9093338394098711715protonmail_signature_block
m_-9093338394098711715protonmail_signature_block-empty">
<div
class="m_-9093338394098711715protonmail_signature_block-proton
m_-9093338394098711715protonmail_signature_block-empty"><br>
</div>
</div>
<div><br>
</div>

>>> -------- Original Message --------
>>> Subject: Re: [Gluster-users] 120k context switches on GlusterFS nodes
>>> Local Time: May 17, 2017 2:37 AM
>>> UTC Time: May 17, 2017 12:37 AM
>>> From: ravishankar@redhat.com
>>> To: mabi <mabi@protonmail.ch>, Gluster Users <gluster-users@gluster.org>
<div class="m_-9093338394098711715moz-cite-prefix">On
05/16/2017 11:13 PM, mabi wrote:<br>
</div>
<blockquote type="cite">
<div>Today I even saw up to 400k context switches
for around 30 minutes on my two nodes replica...
Does anyone else have so high context switches on
their GlusterFS nodes?<br>
</div>
<div><br>
</div>
<div>I am wondering what is "normal" and if I should
be worried...<br>
</div>
<div><br>
</div>
>>>>
>>>>> -------- Original Message --------
>>>>> Subject: 120k context switches on GlusterFS nodes
>>>>> Local Time: May 11, 2017 9:18 PM
>>>>> UTC Time: May 11, 2017 7:18 PM
>>>>> From: mabi@protonmail.ch
>>>>> To: Gluster Users <gluster-users@gluster.org>
>>>>>
>>>>> Hi,
>>>>>
>>>>> Today I noticed that for around 50 minutes my two GlusterFS 3.8.11
>>>>> nodes had a very high number of context switches, around 120k.
>>>>> Usually the average is more like 1k-2k. So I checked what was
>>>>> happening, and there were simply more users accessing (downloading)
>>>>> their files at the same time. These are directories with typical
>>>>> cloud files, which means files of all sizes ranging from a few kB to
>>>>> several MB, and of course a lot of them.
>>>>>
>>>>> Now, I have never seen such a high number of context switches in my
>>>>> entire life, so I wanted to ask whether this is normal or to be
>>>>> expected. I do not find any signs of errors or warnings in any log
>>>>> files.
>>>
>>> Which context switches are you referring to (syscall context switches
>>> on the bricks?), and how did you measure them?
>>> -Ravi
<blockquote type="cite">
<blockquote
class="m_-9093338394098711715protonmail_quote"
type="cite">
>>>>> My volume is a replicated volume on two nodes with ZFS as the
>>>>> filesystem behind it, and the volume is mounted using FUSE on the
>>>>> client (the cloud server). On that cloud server the glusterfs process
>>>>> was using quite a lot of system CPU, but that server (a VM) only has
>>>>> 2 vCPUs, so maybe I should increase the number of vCPUs...
>>>>>
>>>>> Any ideas or recommendations?
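
(If you want to see whether it is the glusterfs client process itself
doing the switching, pidstat from the sysstat package can break the counts
down per process; a rough, generic invocation, not specific to this setup:

    # voluntary (cswch/s) and involuntary (nvcswch/s) context switches,
    # for processes whose command name contains "gluster"
    pidstat -w -C gluster 1
)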

>>>>> Regards,
>>>>> M.
>
> --
> Pranith