[Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench
Xavi Hernandez
jahernan at redhat.com
Wed Jan 2 07:03:03 UTC 2019
On Mon, Dec 24, 2018 at 11:30 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadhyay at gmail.com> wrote:
> [pulling the conclusions up to enable better in-line]
>
> > Conclusions:
> >
> > We should never have a volume with caching-related xlators disabled. The
> price we pay for it is too high. We need to make them work consistently and
> aggressively to avoid as many requests as we can.
>
> Are there current issues in terms of behavior which are known/observed
> when these are enabled?
>
> > We need to analyze the client/server xlators more deeply to see if we can
> avoid some delays. However, optimizing something that is already at the
> microsecond level can be very hard.
>
> That is true - are there any significant gains which can be accrued by
> putting efforts here or, should this be a lower priority?
>
I would say that for volumes based on spinning disks this is not a high
priority, but if we want to provide good performance for NVMe storage, this
is something that needs to be done. On NVMe, reads and writes can be served
in a few tens of microseconds, so adding 100 us in the network layer could
easily mean a performance reduction of 70% or more.
> > We need to determine what causes the fluctuations on the brick side and
> avoid them.
> > This scenario is very similar to a smallfile/metadata workload, so this
> is probably one important cause of its bad performance.
>
> What kind of instrumentation is required to enable the determination?
>
> On Fri, Dec 21, 2018 at 1:48 PM Xavi Hernandez <xhernandez at redhat.com>
> wrote:
> >
> > Hi,
> >
> > I've done some tracing of the latency that the network layer introduces
> in gluster. I've made the analysis as part of the pgbench performance issue
> (in particular the initialization and scaling phase), so I decided to look
> at READV for this particular workload, but I think the results can be
> extrapolated to other operations that also have small latency (for
> example, data cached by the filesystem).
> >
> > Note that measuring latencies itself introduces some latency: each probe
> point adds a call to clock_gettime(), so the real latencies will be a bit
> lower, but still proportional to these numbers.
> >
>
> [snip]
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>