[Gluster-users] IO-Cache does not cache

Raghavendra G raghavendra at gluster.com
Thu Apr 8 09:33:23 UTC 2010


Hi Michael,

Are you checking the number of calls during the second and subsequent reads on a file? The first time, the file has to be read from the backend; subsequent reads are served from the cache.
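One quick way to measure this on the client is to read the same file twice and compare how many lines each read adds to the trace output. The log and mount paths below are placeholders, not from your setup, so adjust them accordingly:

```shell
#!/bin/sh
# Count how many new lines an operation appended to a log file.
# $1 = log file, $2 = line count taken before the operation.
count_new_lines() {
    after=$(wc -l < "$1")
    echo $((after - $2))
}

# Example usage against a live mount (hypothetical paths):
#   before=$(wc -l < /var/log/glusterfs/client.log)
#   cat /mnt/glusterfs/index.php > /dev/null    # first read: backend
#   count_new_lines /var/log/glusterfs/client.log "$before"
#   before=$(wc -l < /var/log/glusterfs/client.log)
#   cat /mnt/glusterfs/index.php > /dev/null    # second read: cache
#   count_new_lines /var/log/glusterfs/client.log "$before"
```

With io-cache working, the second read should add far fewer trace lines than the first (ideally none from the read path itself).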

regards,
----- Original Message -----
From: "Michael Schmid" <michael.schmid at amazee.com>
To: gluster-users at gluster.org
Sent: Thursday, April 8, 2010 1:21:44 PM
Subject: [Gluster-users] IO-Cache does not cache

Hello everybody!

First, big thanks for this awesome project. Today we use NFS for our web service, but in the future we want to use GlusterFS to make it redundant. I hate getting up in the night to fix problems ; )

I'm currently working with two servers and one client, all on Amazon EC2 running Debian Lenny and GlusterFS 3.0.3.

On the client side there is Apache 2.2 with PHP and eAccelerator. We need every page hit to be as fast as possible, so I'm trying to make the best use of the performance translators.
While playing around with io-cache I noticed that on my system it isn't being used. My configs look like this:


SERVER
======
volume posix1
  type storage/posix
  option directory /data/export
end-volume

volume locks1
    type features/locks
    subvolumes posix1
end-volume

volume server-tcp
    type protocol/server
    option transport-type tcp
    option auth.addr.brick1.allow *
    option transport.socket.listen-port 6996
    option transport.socket.nodelay on
    subvolumes locks1
end-volume
==

CLIENT
======
volume server1
    type protocol/client
    option transport-type tcp
    option remote-host 10.228.23.83
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume server2
    type protocol/client
    option transport-type tcp
    option remote-host 10.228.238.84
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume mirror0
    type cluster/replicate
    subvolumes server1 server2
end-volume

volume trace
   type debug/trace
   subvolumes mirror0
end-volume

volume iocache
    type performance/io-cache
    option cache-size 50MB
    option cache-timeout 30
    option page-size 256KB
    subvolumes trace
end-volume
==

When I clear the debug file and do a page refresh, the debug file contains about:
~4800 lines

Now I remove io-cache and do the same again:
~4800 Lines

So the same number of files is requested over the network. Even if I request the same files again after 10 seconds, there are 4800 new lines. So no caching at all.

When I use performance/quick-read instead, it looks like this:

With Quick-Read:
~2500 Lines

Without Quick-Read:
~4800 Lines

So some files are cached. Does anybody know why io-cache does not work? Or is my debugging configuration wrong?
I could use performance/quick-read for our installation, but there is still the memory bug...
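For completeness, the quick-read stanza I'm testing looks roughly like this (option names taken from the 3.0 docs as I understand them, so treat it as a sketch rather than a verified config):

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 64KB
    subvolumes trace
end-volume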

Thanks for your help.

Michael
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


