[Gluster-users] Tiered volume performance degrades badly after a volume stop/start or system restart.

Jeff Byers jbyers.sfly at gmail.com
Tue Jan 30 23:29:49 UTC 2018


I am fighting this issue:

  Bug 1540376 – Tiered volume performance degrades badly after a
volume stop/start or system restart.
  https://bugzilla.redhat.com/show_bug.cgi?id=1540376

Does anyone have any ideas on what might be causing this, and
what a fix or work-around might be?

Thanks!

~ Jeff Byers ~

Tiered volume performance degrades badly after a volume
stop/start or system restart.

The degradation is very significant, making the performance of
an SSD hot tiered volume a fraction of what it was with the
HDD before tiering.

Stopping and starting the tiered volume triggers the problem.
So does restarting the Gluster services.

Nothing in the tier is being promoted or demoted: the volume
starts empty, a file is written, read, and then deleted. The
file(s) only ever exist on the hot tier.
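The reproduction above can be sketched roughly as follows (the
volume name, mount point, and dd parameters are placeholders I
have chosen for illustration, not taken from the report):

```shell
# Hypothetical reproduction sketch; VOL and MNT are placeholders.
VOL=tiervol
MNT=/mnt/$VOL

# Baseline on the freshly tiered volume: write, read, delete.
dd if=/dev/zero of=$MNT/testfile bs=1M count=1024 oflag=direct
dd if=$MNT/testfile of=/dev/null bs=1M iflag=direct
rm -f $MNT/testfile

# Trigger the degradation: stop and start the volume.
gluster --mode=script volume stop $VOL
gluster volume start $VOL

# Repeat the same write/read; throughput drops sharply.
dd if=/dev/zero of=$MNT/testfile bs=1M count=1024 oflag=direct
dd if=$MNT/testfile of=/dev/null bs=1M iflag=direct
rm -f $MNT/testfile
```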

This affects both GlusterFS FUSE mounts and NFSv3 mounts.
The problem has been reproduced in two test lab environments.
The issue was first seen using GlusterFS 3.7.18, and retested
with the same result using GlusterFS 3.12.3.

I'm using the default tiering settings, no adjustments.
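For reference, the tiering options in effect can be checked like
this (a sketch; the volume name is a placeholder, and the report
states all of these remain at their defaults):

```shell
# Show tiering-related options for a volume (VOL is a placeholder).
VOL=tiervol
gluster volume get $VOL cluster.tier-mode
gluster volume get $VOL cluster.watermark-hi
gluster volume get $VOL cluster.watermark-low
gluster volume get $VOL cluster.tier-promote-frequency
gluster volume get $VOL cluster.tier-demote-frequency
```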

Nothing of any significance appears in the GlusterFS logs.

Summary (sequential I/O on a FUSE mount):

                                   Writes         Reads
  HDD only (before tiering)        130.87 MB/s    128.53 MB/s
  SSD hot tier attached            199.99 MB/s    257.28 MB/s
  After volume stop/start           35.81 MB/s     37.33 MB/s

The drop after the stop/start is a very significant reduction
in performance.

Detaching and reattaching the SSD tier restores the good
tiered performance.
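The detach/reattach workaround looks roughly like this with the
GlusterFS 3.12 CLI syntax (volume name and hot-tier brick path
are placeholders, not the reporter's actual values):

```shell
# Workaround sketch: detach the hot tier, then attach it again.
# VOL and HOT_BRICK are placeholders.
VOL=tiervol
HOT_BRICK=server1:/bricks/ssd/$VOL

gluster volume tier $VOL detach start
gluster volume tier $VOL detach status    # wait until it completes
gluster volume tier $VOL detach commit

gluster volume tier $VOL attach $HOT_BRICK
```

After the reattach, tiered throughput returns to the good numbers
above until the next volume stop/start.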

~ Jeff Byers ~

