[Bugs] [Bug 1467614] Performance improvements in socket layer

bugzilla at redhat.com
Mon Aug 28 07:05:51 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1467614



--- Comment #13 from Krutika Dhananjay <kdhananj at redhat.com> ---
(In reply to Manoj Pillai from comment #12)
> (In reply to Krutika Dhananjay from comment #10)
> > That's interesting.
> > 
> > I actually ran pure randrd job again on Sanjay's setup (through vms) with
> > the best configuration known yet and with mempool and mem-accounting
> > disabled in code last week. And I saw that the FUSE thread is consistently
> > at 95% CPU utilization. I'm running perf-record on this build right now to
> > see where the thread is spending all its time.
> 
> What was the gain, if any, in IOPs over previous runs?

There was no gain or drop in IOPs. The only difference I noticed was in the
FUSE thread's CPU utilization: it used to be at 78%; with mem-accounting and
mempool disabled, it shot up to 95%.

> 
> FYI, the build I'm running with is a downstream build. This is from a few
> quick runs over the weekend on an existing setup that has other work in
> progress. The build has https://review.gluster.org/17391, and
> client-io-threads enabled.

OK.

> 
> > 
> > As for the perf record output above, I think using the "--call-graph=dwarf"
> > option should also get us the backtrace, though this would require the
> > glusterfs-debuginfo package to be installed.
> > 
> > -Krutika
> 
> debuginfo package is installed, and I have some data captured with
> call-graph enabled. I found the output above cleaner for just getting the
> functions where it spends most of its time.
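For reference, a minimal sketch of the profiling workflow discussed above. The process-name pattern, sampling duration, and report flags are assumptions for illustration, not taken from the report; it assumes perf and glusterfs-debuginfo (needed for DWARF unwinding and symbolization) are installed on the client node.

```shell
# Find the glusterfs FUSE client process (oldest match, exact name).
# Assumption: a single mounted glusterfs client; adjust pgrep as needed.
FUSE_PID=$(pgrep -o -x glusterfs)

# Sample with DWARF-based call graphs for 30 seconds. Unlike plain
# "perf record", --call-graph=dwarf captures full backtraces, not just
# the leaf functions, at the cost of larger perf.data files.
perf record --call-graph=dwarf -p "$FUSE_PID" -- sleep 30

# Summarize where the sampled threads spend their time.
perf report --stdio
```

DWARF unwinding copies a chunk of user stack per sample, so perf.data grows quickly; shorter sampling windows or a reduced sampling frequency (`-F`) keep it manageable.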

-- 
You are receiving this mail because:
You are on the CC list for the bug.

