<div>If this happens again I will try to run gluster profiling and post back. Fortunately it does not happen often, but I need to be at the machine when it does so that I can start/stop the profiling. By the way, on that server I have just two clients connected via FUSE.<br></div><div class="protonmail_signature_block protonmail_signature_block-empty"><div class="protonmail_signature_block-user protonmail_signature_block-empty"><div><br></div></div><div class="protonmail_signature_block-proton protonmail_signature_block-empty"><br></div></div><div><br></div><blockquote class="protonmail_quote" type="cite"><div>-------- Original Message --------<br></div><div>Subject: Re: [Gluster-users] 120k context switches on GlsuterFS nodes<br></div><div>Local Time: May 22, 2017 7:45 PM<br></div><div>UTC Time: May 22, 2017 5:45 PM<br></div><div>From: joe@julianfamily.org<br></div><div>To: gluster-users@gluster.org<br></div><div><br></div><div> <br></div><div class="moz-cite-prefix">On 05/22/17 10:27, mabi wrote:<br></div><blockquote type="cite"><div>Sorry for posting again, but I was really wondering whether it is somehow possible to tune gluster to make better use of all my cores (see below for details). I suspect that is the reason for the sporadic high context switches I have been experiencing.<br></div><div><br></div><div>Cheers!<br></div><div class="protonmail_signature_block
protonmail_signature_block-empty"><div class="protonmail_signature_block-user
protonmail_signature_block-empty"><div><br></div></div></div></blockquote><div><br></div><div>In theory, more clients and more diverse filesets.<br></div><div> <br></div><div> The only way to know would be for you to analyze the traffic pattern
and/or profile gluster on your server. There's never a magic
"tune software X to operate more efficiently" setting, or else it
would be the default (except for the "turbo" button back in the
early PC clone days).<br></div><div> <br></div><div> <br></div><blockquote type="cite"><div class="protonmail_signature_block
protonmail_signature_block-empty"><div class="protonmail_signature_block-proton
protonmail_signature_block-empty"><br></div></div><div><br></div><blockquote class="protonmail_quote" type="cite"><div>-------- Original Message --------<br></div><div>Subject: Re: [Gluster-users] 120k context switches on
GlsuterFS nodes<br></div><div>Local Time: May 18, 2017 8:43 PM<br></div><div>UTC Time: May 18, 2017 6:43 PM<br></div><div>From: <a rel="noreferrer nofollow noopener" href="mailto:mabi@protonmail.ch" class="moz-txt-link-abbreviated">mabi@protonmail.ch</a><br></div><div>To: Ravishankar N <a rel="noreferrer nofollow noopener" href="mailto:ravishankar@redhat.com" class="moz-txt-link-rfc2396E"><ravishankar@redhat.com></a><br></div><div>Pranith Kumar Karampuri <a rel="noreferrer nofollow noopener" href="mailto:pkarampu@redhat.com" class="moz-txt-link-rfc2396E"><pkarampu@redhat.com></a>,
Gluster Users <a rel="noreferrer nofollow noopener" href="mailto:gluster-users@gluster.org" class="moz-txt-link-rfc2396E"><gluster-users@gluster.org></a>, Gluster Devel <a rel="noreferrer nofollow noopener" href="mailto:gluster-devel@gluster.org" class="moz-txt-link-rfc2396E"><gluster-devel@gluster.org></a><br></div><div><br></div><div>I have a single Intel Xeon CPU E5-2620 v3 @ 2.40GHz in each
node, which has 6 cores and 12 threads. I thought this would be enough for GlusterFS. When I check my CPU graphs, everything is pretty much idle and there are hardly any peaks on the CPU at all. During the very high context-switch period my CPU graphs show the following:<br></div><div><br></div><div>1 thread was 100% busy in CPU user<br></div><div>1 thread was 100% busy in CPU system<br></div><div><br></div><div>leaving the other 10 of the 12 threads unused...<br></div><div><br></div><div>Are there any performance tuning parameters I need to configure in order to make better use of my CPU cores or threads?<br></div><div><br></div><div class="protonmail_signature_block
protonmail_signature_block-empty"><div class="protonmail_signature_block-user
protonmail_signature_block-empty"><div><br></div></div><div class="protonmail_signature_block-proton
protonmail_signature_block-empty"><br></div></div><div><br></div><blockquote class="protonmail_quote" type="cite"><div>-------- Original Message --------<br></div><div>Subject: Re: [Gluster-users] 120k context switches on
GlsuterFS nodes<br></div><div>Local Time: May 18, 2017 7:03 AM<br></div><div>UTC Time: May 18, 2017 5:03 AM<br></div><div>From: <a rel="noreferrer nofollow noopener" href="mailto:ravishankar@redhat.com" class="moz-txt-link-abbreviated">ravishankar@redhat.com</a><br></div><div>To: Pranith Kumar Karampuri <a rel="noreferrer nofollow noopener" href="mailto:pkarampu@redhat.com" class="moz-txt-link-rfc2396E"><pkarampu@redhat.com></a>,
mabi <a rel="noreferrer nofollow noopener" href="mailto:mabi@protonmail.ch" class="moz-txt-link-rfc2396E"><mabi@protonmail.ch></a><br></div><div>Gluster Users <a rel="noreferrer nofollow noopener" href="mailto:gluster-users@gluster.org" class="moz-txt-link-rfc2396E"><gluster-users@gluster.org></a>, Gluster
Devel <a rel="noreferrer nofollow noopener" href="mailto:gluster-devel@gluster.org" class="moz-txt-link-rfc2396E"><gluster-devel@gluster.org></a><br></div><div><br></div><div><br></div><div class="moz-cite-prefix">On 05/17/2017 11:07 PM, Pranith
Kumar Karampuri wrote:<br></div><blockquote type="cite"><div dir="ltr">+ gluster-devel<br></div><div class="gmail_extra"><div><br></div><div class="gmail_quote"><div>On Wed, May 17, 2017 at 10:50 PM, mabi <span dir="ltr"><<a rel="noreferrer nofollow noopener" href="mailto:mabi@protonmail.ch">mabi@protonmail.ch</a>></span> wrote:<br></div><div><br></div><blockquote style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex" class="gmail_quote"><div>I don't know exactly what kind of
context switches they were, but I know it is the "cs" number under "system" when you run vmstat.<br></div></blockquote></div></div></blockquote><div>Okay, that could be due to the syscalls themselves or
pre-emptive multitasking in case there aren't enough CPU cores. I think the spike in the numbers is due to more users accessing the files at the same time, as you observed, translating into more syscalls. You can try capturing the gluster volume profile info the next time it occurs and correlate it with the cs count. If you don't see any negative performance impact, I think you don't need to be bothered much by the numbers.<br></div><div><br></div><div>HTH,<br></div><div>Ravi<br></div><div><br></div><blockquote type="cite"><div class="gmail_extra"><div class="gmail_quote"><blockquote style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex" class="gmail_quote"><div><br></div><div>Also, I use the Percona Linux monitoring template for Cacti (<a rel="noreferrer nofollow noopener" href="https://www.percona.com/doc/percona-monitoring-plugins/LATEST/cacti/linux-templates.html">https://www.percona.com/doc/<wbr>percona-monitoring-plugins/<wbr>LATEST/cacti/linux-templates.<wbr>html</a>), which also monitors context switches. If that's of any use, interrupts were also quite high during that time, with peaks of up to 50k.<br></div><div class="HOEnZb"><div class="h5"><div><br></div><div class="m_-9093338394098711715protonmail_signature_block
m_-9093338394098711715protonmail_signature_block-empty"><div class="m_-9093338394098711715protonmail_signature_block-proton
m_-9093338394098711715protonmail_signature_block-empty"><br></div></div><div><br></div><blockquote type="cite" class="m_-9093338394098711715protonmail_quote"><div>-------- Original Message --------<br></div><div>Subject: Re: [Gluster-users] 120k context
switches on GlsuterFS nodes<br></div><div>Local Time: May 17, 2017 2:37 AM<br></div><div>UTC Time: May 17, 2017 12:37 AM<br></div><div>From: <a rel="noreferrer nofollow noopener" href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a><br></div><div>To: mabi <<a rel="noreferrer nofollow noopener" href="mailto:mabi@protonmail.ch">mabi@protonmail.ch</a>>,
Gluster Users <<a rel="noreferrer nofollow noopener" href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br></div><div><br></div><div><br></div><div class="m_-9093338394098711715moz-cite-prefix">On
05/16/2017 11:13 PM, mabi wrote:<br></div><blockquote type="cite"><div>Today I even saw up to 400k context
switches for around 30 minutes on my two-node replica... Does anyone else have such high context switches on their GlusterFS nodes?<br></div><div><br></div><div>I am wondering what is "normal" and whether I should be worried...<br></div><div><br></div><div class="m_-9093338394098711715protonmail_signature_block
m_-9093338394098711715protonmail_signature_block-empty"><div class="m_-9093338394098711715protonmail_signature_block-user
m_-9093338394098711715protonmail_signature_block-empty"><div><br></div></div><div class="m_-9093338394098711715protonmail_signature_block-proton
m_-9093338394098711715protonmail_signature_block-empty"><br></div></div><div><br></div><blockquote type="cite" class="m_-9093338394098711715protonmail_quote"><div>-------- Original Message --------<br></div><div>Subject: 120k context switches on
GlsuterFS nodes<br></div><div>Local Time: May 11, 2017 9:18 PM<br></div><div>UTC Time: May 11, 2017 7:18 PM<br></div><div>From: <a class="m_-9093338394098711715moz-txt-link-abbreviated" href="mailto:mabi@protonmail.ch" rel="noreferrer nofollow noopener">mabi@protonmail.ch</a><br></div><div>To: Gluster Users <a class="m_-9093338394098711715moz-txt-link-rfc2396E" href="mailto:gluster-users@gluster.org" rel="noreferrer nofollow noopener"><gluster-users@gluster.org></a><br></div><div><br></div><div>Hi,<br></div><div><br></div><div>Today I noticed that for around 50
minutes my two GlusterFS 3.8.11 nodes had a very high number of context switches, around 120k. Usually the average is more like 1k-2k. So I checked what was happening, and there were simply more users accessing (downloading) their files at the same time. These are directories with typical cloud files, meaning files of all sizes ranging from a few kB to several MB, and a lot of them of course.<br></div><div><br></div><div>Now I have never seen such a high number of context switches in my entire life, so I wanted to ask whether this is normal or to be expected. I do not find any signs of errors or warnings in any log files.<br></div><div><br></div></blockquote></blockquote><div class="m_-9093338394098711715protonmail_signature_block
m_-9093338394098711715protonmail_signature_block-empty"><div class="m_-9093338394098711715protonmail_signature_block-user
m_-9093338394098711715protonmail_signature_block-empty"><div><br></div></div></div><div>What context switch are you referring to
(syscall context switches on the bricks)? How did you measure this?<br></div><div>-Ravi<br></div><div><br></div><blockquote type="cite"><blockquote type="cite" class="m_-9093338394098711715protonmail_quote"><div>My volume is a replicated volume on two
nodes with ZFS as the underlying filesystem, and the volume is mounted using FUSE on the client (the cloud server). On that cloud server the glusterfs process was using quite a lot of system CPU, but that server (VM) has only 2 vCPUs, so maybe I should increase the number of vCPUs...<br></div><div><br></div><div>Any ideas or recommendations?<br></div><div><br></div><div class="m_-9093338394098711715protonmail_signature_block
m_-9093338394098711715protonmail_signature_block-empty"><div class="m_-9093338394098711715protonmail_signature_block-user
m_-9093338394098711715protonmail_signature_block-empty"><div><br></div></div><div class="m_-9093338394098711715protonmail_signature_block-proton
m_-9093338394098711715protonmail_signature_block-empty"><br></div></div><div>Regards,<br></div><div>M.<br></div></blockquote><div><br></div><div><br></div><div><br></div><pre>______________________________<wbr>_________________
Gluster-users mailing list
<a class="m_-9093338394098711715moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org" rel="noreferrer nofollow noopener">Gluster-users@gluster.org</a>
<a class="m_-9093338394098711715moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer nofollow noopener">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a>
<br></pre></blockquote><p><br></p></blockquote><div><br></div></div></div></blockquote></div><div>--<br></div><div data-smartmail="gmail_signature" class="gmail_signature"><div dir="ltr">Pranith<br></div></div></div></blockquote><p><br></p></blockquote><div><br></div></blockquote><div><br></div></blockquote></blockquote><div><br></div>
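<div>P.S. For anyone wanting to watch the "cs" number discussed in this thread programmatically: vmstat derives it from the cumulative "ctxt" counter in /proc/stat on Linux. A minimal sketch (the function names here are my own, not part of any tool) that samples that counter to get context switches per second:<br></div><pre>
```python
# Minimal sketch: approximate vmstat's "cs" column by sampling the
# cumulative context-switch counter ("ctxt" line) in /proc/stat twice.
# Linux-only; function names are illustrative.
import time

def read_ctxt():
    """Return the total context switches since boot from /proc/stat."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line found in /proc/stat")

def cs_per_second(interval=1.0):
    """Sample the counter twice and return context switches per second."""
    before = read_ctxt()
    time.sleep(interval)
    after = read_ctxt()
    return (after - before) / interval

if __name__ == "__main__":
    print(cs_per_second(0.5))
```
</pre><div>Running this alongside the Cacti graphs should give the same order of magnitude as the "cs" column under "system" in vmstat.<br></div>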