[Gluster-users] Fwd: Performance in VM guests when hosting VM images on Gluster

Torbjørn Thorsen torbjorn at trollweb.no
Tue Mar 5 12:57:29 UTC 2013


On Fri, Mar 1, 2013 at 7:01 PM, Brian Foster <bfoster at redhat.com> wrote:
> On 03/01/2013 11:48 AM, Torbjørn Thorsen wrote:
>> On Thu, Feb 28, 2013 at 4:54 PM, Brian Foster <bfoster at redhat.com> wrote:
>> All writes are done with sync, so I don't quite understand how cache
>> flushing comes in.
>>
>
> Flushing doesn't seem to be a factor; I was just noting previously that
> the only slowdown I noticed in my brief tests was associated with flushing.
>
> Note again though that loop seems to flush on close(). I suspect a
> reason for this is so 'losetup -d' can return immediately, but that's
> just a guess. IOW, if you hadn't used oflag=sync, the close() issued by
> dd before it actually exits would result in flushing the buffers
> associated with the loop device to the backing store. You are using
> oflag=sync, so that doesn't really matter.

Ah, I see.
I thought you meant close() on the FD backing the loop device,
but now I see what you mean.
Doing a non-sync dd run against a loop device, that did seem to be the case:
I was seeing high throughput, but pressing ^C didn't stop dd,
presumably because it was blocking on close().
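
For reference, here is roughly the test in question (device and
backing-file paths are just examples):

  # attach a backing file on the Gluster mount to a loop device
  losetup /dev/loop0 /mnt/gluster/vm.img

  # synchronous writes: every request hits the backing store immediately
  dd if=/dev/zero of=/dev/loop0 bs=1M count=512 oflag=sync

  # buffered writes: dd reports high throughput, but its final close()
  # blocks until the loop device has flushed its dirty pages to the
  # backing store, which is why ^C appeared to hang
  dd if=/dev/zero of=/dev/loop0 bs=1M count=512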

>> [...]
>> To me it seems that a fresh loop device does mostly 64KB writes,
>> and at some point during a 24-hour window switches to doing 4KB writes?
>>
>
> Yeah, interesting data. One thing I was curious about is whether
> write-behind or some other caching translator was behind this one way or
> another (including the possibility that the higher throughput value is
> actually due to a bug, rather than the other way around). If I
> understand the io-stats translator correctly, however, these request-size
> metrics should match the size of the requests coming into gluster, which
> suggests something else is going on.
>
> Regardless, I think it's best to narrow the problem down and rule out as
> much as possible. Could you try some of the commands in my previous
> email to disable performance translators and see if it affects
> throughput? For example, does disabling any particular translator
> degrade throughput consistently (even on new loop devices)? If so, does
> re-enabling a particular translator enhance throughput on an already
> mapped and "degraded" loop (without unmapping/remapping the loop)?
>

I was running with the defaults; no configuration had been done after
installing Gluster.

If I disable the write-behind translator, I immediately see pretty
much the same speeds as the "degraded loop", i.e. ~3MB/s.
Gluster profiling tells the same story: all writes are now 4KB requests.
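
For completeness, the profiling I'm referring to is the standard
io-stats interface (the volume name is a placeholder):

  # start collecting per-volume statistics
  gluster volume profile myvol start

  # dump them; this includes the block-size histogram the
  # 4KB/64KB figures above come from
  gluster volume profile myvol info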

If write-behind is disabled, the loop device is slow even if it's fresh.
Enabling write-behind, even while dd is writing to the loop device,
seems to increase the speed right away, without needing a new fd to the device.
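
The toggle itself is just the regular volume-set interface (again with
a placeholder volume name):

  gluster volume set myvol performance.write-behind off
  gluster volume set myvol performance.write-behind on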

A degraded loop device without an open fd becomes fast again after
toggling write-behind.
However, it seems that an open fd keeps the loop device slow.
I've only tested that with Xen, as that was the only thing I had with
a long-lived open fd to a loop device.
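
(To see which processes are holding a loop device open, something like
the following should work; loop0 is again just an example:)

  # list processes with the device open
  lsof /dev/loop0

  # the same information via fuser
  fuser -v /dev/loop0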

> Also, what gluster and kernel versions are you on?

# uname -a
Linux xen-storage01 2.6.32-5-xen-amd64 #1 SMP Sun May 6 08:57:29 UTC 2012 x86_64 GNU/Linux

# dpkg -l | grep $(uname -r)
ii  linux-image-2.6.32-5-xen-amd64      2.6.32-46    Linux 2.6.32 for 64-bit PCs, Xen dom0 support

# dpkg -l | grep gluster
ii  glusterfs-client                    3.3.1-1      clustered file-system (client package)
ii  glusterfs-common                    3.3.1-1      GlusterFS common libraries and translator modules


--
Kind regards
Torbjørn Thorsen
Developer / operations engineer

Trollweb Solutions AS
- Professional Magento Partner
www.trollweb.no

Daytime phone: +47 51215300
Evening/weekend phone: for customers with a service agreement

Visiting address: Luramyrveien 40, 4313 Sandnes
Postal address: Maurholen 57, 4316 Sandnes

Please note that our standard terms and conditions always apply


