[Gluster-users] Poor performance on a server-class system vs. desktop

Yaniv Kaul ykaul at redhat.com
Thu Nov 26 12:46:22 UTC 2020


On Thu, Nov 26, 2020 at 2:31 PM Dmitry Antipov <dmantipov at yandex.ru> wrote:

> On 11/26/20 12:49 PM, Yaniv Kaul wrote:
>
> > I run a slightly different command, which hides the kernel stuff and
> focuses on the user mode functions:
> > sudo perf record --call-graph dwarf -j any --buildid-all --all-user -p
> `pgrep -d\, gluster` -F 2000 -ag
>
> Thanks.
>
> BTW, how much overhead is there in passing data between xlators? Even if
> most of their features are disabled, just passing through all of the
> layers below is unlikely to have near-zero overhead:
>

Very good question. I was always suspicious of that flow, and I do believe
we could do some optimizations, but here's the response I received back
then:

Here's some data from some tests I was running last week: the average
round-trip time spent by fops in the brick stack, from the top translator
(io-stats) down to posix, before they are executed on disk, is less than
20 microseconds. That stack includes both the translators that are enabled
and used in RHHI and the do-nothing xlators you mention. In contrast, the
round-trip time these fops spend between the client and the server
translator is on the order of a few hundred microseconds, sometimes even
1 ms.
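To put the per-layer pass-through cost in perspective: when a feature is
disabled, its writev fop is essentially one function call that winds the
operation to the next child, as the default_writev frames (#10, #11, #14)
in the trace below show. Here is a minimal sketch of such a pass-through
fop, modeled on the generated default_writev() in defaults.c (simplified,
with the header path taken from the installed devel package, so treat it
as an approximation rather than the exact generated code):

#include <glusterfs/xlator.h>   /* call_frame_t, xlator_t, STACK_WIND_TAIL */

/* Pass-through writev: does no work of its own, only forwards the fop
 * (and the same fd/vector/xdata pointers) to the first child xlator. */
int32_t
passthrough_writev(call_frame_t *frame, xlator_t *this, fd_t *fd,
                   struct iovec *vector, int32_t count, off_t off,
                   uint32_t flags, struct iobref *iobref, dict_t *xdata)
{
    STACK_WIND_TAIL(frame, FIRST_CHILD(this), FIRST_CHILD(this)->fops->writev,
                    fd, vector, count, off, flags, iobref, xdata);
    return 0;
}

So each do-nothing layer adds roughly a call frame and an indirect call,
which fits the observation that the whole brick stack stays well under
20 microseconds while the client-server round trip dominates.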


> Thread 14 (Thread 0x7f2c0e7fc640 (LWP 19482) "glfs_rpcrqhnd"):
> #0  data_unref (this=0x7f2bfc032e68) at dict.c:768
> #1  0x00007f2c290b90b9 in dict_deln (keylen=<optimized out>,
> key=0x7f2c163d542e "glusterfs.inodelk-dom-count", this=0x7f2bfc0bb1c8) at
> dict.c:645
> #2  dict_deln (this=0x7f2bfc0bb1c8, key=0x7f2c163d542e
> "glusterfs.inodelk-dom-count", keylen=<optimized out>) at dict.c:614
> #3  0x00007f2c163c87ee in pl_get_xdata_requests (local=0x7f2bfc0ea658,
> xdata=0x7f2bfc0bb1c8) at posix.c:238
> #4  0x00007f2c163b3267 in pl_get_xdata_requests (xdata=0x7f2bfc0bb1c8,
> local=<optimized out>) at posix.c:213
>

For example, https://github.com/gluster/glusterfs/issues/1707 optimizes
pl_get_xdata_requests() a bit.
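Frames #0-#4 above show what that function does per write: it probes xdata
for well-known lock-count keys like "glusterfs.inodelk-dom-count" and
deletes them so they are handled by the locks xlator rather than forwarded
further down. A rough, hypothetical illustration of that consume-a-key
pattern (the helper name is made up; this is neither the actual
pl_get_xdata_requests() code nor the change from the issue above):

#include <glusterfs/dict.h>   /* dict_t, dict_get(), dict_del() */

/* Check whether the client put a request key into xdata and, if so,
 * remove it so it is handled at this layer instead of being passed on. */
static int
consume_xdata_request(dict_t *xdata, char *key)
{
    if (!xdata || !dict_get(xdata, key))
        return 0;                /* the client did not ask for this */

    dict_del(xdata, key);        /* consumed here, not forwarded */
    return 1;
}

Each such key costs a dict lookup plus a delete per fop, which gives a
sense of the small per-call work the linked issue trims.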
Y.

> #5  pl_writev (frame=0x7f2bfc0d5348, this=0x7f2c08014830,
> fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1, offset=108306432,
> flags=0, iobref=0x7f2c080820d0, xdata=0x7f2bfc0bb1c8) at posix.c:2299
> #6  0x00007f2c16395e31 in worm_writev (frame=0x7f2bfc0d5348,
> this=<optimized out>, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> offset=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at worm.c:429
> #7  0x00007f2c1638a55f in ro_writev (frame=frame@entry=0x7f2bfc0d5348,
> this=<optimized out>, fd=fd@entry=0x7f2bfc0bc768, vector=vector@entry=0x7f2bfc105478,
> count=count@entry=1,
> off=off@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at read-only-common.c:374
> #8  0x00007f2c163705ac in leases_writev (frame=frame@entry=0x7f2bfc0bf148,
> this=0x7f2c0801a230, fd=fd@entry=0x7f2bfc0bc768, vector=vector@entry=0x7f2bfc105478,
> count=count@entry=1,
> off=off@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at leases.c:132
> #9  0x00007f2c1634f6a8 in up_writev (frame=0x7f2bfc067508,
> this=0x7f2c0801bf00, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> off=108306432, flags=0, iobref=0x7f2c080820d0, xdata=0x7f2bfc0bb1c8)
> at upcall.c:124
> #10 0x00007f2c2913e6c2 in default_writev (frame=0x7f2bfc067508,
> this=<optimized out>, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> off=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at defaults.c:2550
> #11 0x00007f2c2913e6c2 in default_writev (frame=frame@entry=0x7f2bfc067508,
> this=<optimized out>, fd=fd@entry=0x7f2bfc0bc768, vector=vector@entry=0x7f2bfc105478,
> count=count@entry=1,
> off=off@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at defaults.c:2550
> #12 0x00007f2c16315eb7 in marker_writev (frame=frame@entry=0x7f2bfc119e48,
> this=this@entry=0x7f2c08021440, fd=fd@entry=0x7f2bfc0bc768,
> vector=vector@entry=0x7f2bfc105478, count=count@entry=1,
> offset=offset@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at marker.c:940
> #13 0x00007f2c162fc0ab in barrier_writev (frame=0x7f2bfc119e48,
> this=<optimized out>, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> off=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at barrier.c:248
> #14 0x00007f2c2913e6c2 in default_writev (frame=0x7f2bfc119e48,
> this=<optimized out>, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> off=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at defaults.c:2550
> #15 0x00007f2c162c5cda in quota_writev (frame=frame@entry=0x7f2bfc119e48,
> this=<optimized out>, fd=fd@entry=0x7f2bfc0bc768, vector=vector@entry=0x7f2bfc105478,
> count=count@entry=1,
> off=off@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at quota.c:1947
> #16 0x00007f2c16299c89 in io_stats_writev (frame=frame@entry=0x7f2bfc0e4358,
> this=this@entry=0x7f2c08029df0, fd=0x7f2bfc0bc768, vector=vector@entry=0x7f2bfc105478,
> count=1, offset=108306432, flags=0,
> iobref=0x7f2c080820d0, xdata=0x7f2bfc0bb1c8) at io-stats.c:2893
> #17 0x00007f2c161f01ac in server4_writev_resume (frame=0x7f2bfc0ef5c8,
> bound_xl=0x7f2c08029df0) at server-rpc-fops_v2.c:3017
> #18 0x00007f2c161f901c in resolve_and_resume (fn=<optimized out>,
> frame=<optimized out>) at server-resolve.c:680
> #19 server4_0_writev (req=<optimized out>) at server-rpc-fops_v2.c:3943
> #20 0x00007f2c290696e5 in rpcsvc_request_handler (arg=0x7f2c1614c0b8) at
> rpcsvc.c:2233
> #21 0x00007f2c28ffa3f9 in start_thread (arg=0x7f2c0e7fc640) at
> pthread_create.c:463
> #22 0x00007f2c28f25903 in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>
> Dmitry
>
>