[Gluster-users] IO-Cache Questions

Alan Meadows pentheus at gmail.com
Wed Dec 17 05:50:21 UTC 2008


I think part of that message got clipped.

> From looking at the io-cache.c source, it's not clear whether the
> entire file would be cached, just the byte ranges of heavily accessed
> portions of those files, or something else.  Also, since you can't
> specify a storage device such as /dev/sdb1 as a cache target, I have
> to assume it reserves some memory and keeps the cached data there.
>
> This brings me to three questions:
>
> 1) In order to use the SSD device, would I create a really large swap
> file on the SSD and then allocate the size of the SSD to IO-Cache?  (A
> rough sketch of the volume spec I have in mind is at the end of this
> message.)  Because I'm not that familiar with memory caching, I'm
> wondering whether the calls it makes to place data into memory would
> stop at the point the system begins to swap, no matter what number I
> provide.  In this case, because of the SSD's speed, I don't want it to
> stop once the system begins to swap.  Swapping becomes a good thing.
>
> 2) Would IO-Cache attempt to cache these 2GB to 400GB files in their
> entirety, skipping any that are larger than my IO-Cache allocation
> size (let's say 60GB), or would it cache just the byte ranges of the
> most heavily accessed parts of those files?  For instance, if someone
> running a database server within one of the qcow2 files is hitting the
> "foobar" table with SELECTs really hard, would IO-Cache cache the data
> being referenced by those SELECT statements (blocks X, Y, Z in file
> a.qcow2) or try to cache the entire qcow2 file?
>
> 3) IO-Cache only helps with speeding up reads, right?  In my mind, the
> world is very bursty.  It would be incredibly cool if we had the
> ability to write to the SSD drive during bursts, return success to the
> client quickly, and then stream those writes at a steady speed to the
> SATA array (see the hypothetical sketch at the end of this message).
> I realize this is more complicated than it sounds in practice, because
> incoming reads expect coherent data even while that data is still
> being slowly written out to SATA, but I'm just curious whether
> anything like this is already out there.
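>
> To make questions 1 and 2 a little more concrete, below is roughly the
> io-cache stanza I have in mind.  The option names are taken from the
> translator documentation as I read it, so I may have some of them
> wrong, and I am not sure cache-size even accepts a value this large:
>
>   volume cache
>     type performance/io-cache
>     # roughly the size of the SSD, on the (possibly wrong) assumption
>     # that anything beyond physical RAM simply spills into swap on the SSD
>     option cache-size 60GB
>     # if io-cache works in fixed-size pages rather than whole files,
>     # I assume this is what controls the granularity question 2 asks about
>     option page-size 128KB
>     # prefer the qcow2 images over everything else
>     option priority *.qcow2:3,*:1
>     # placeholder name for whatever the underlying subvolume is
>     subvolumes posix-brick
>   end-volume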
>
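> What I am picturing for question 3 is something like the stanza below.
> This is purely hypothetical; as far as I know no such translator
> exists, and the type and option names are made up.  The closest real
> thing I can see is the write-behind translator, but as far as I can
> tell that buffers writes in memory rather than on a separate device:
>
>   # hypothetical translator, invented purely to illustrate the idea
>   volume burst-buffer
>     type performance/ssd-staging
>     # absorb write bursts on the SSD and acknowledge the client quickly
>     option staging-directory /mnt/ssd/staging
>     # then drain the staged writes to the SATA subvolume at a steady rate
>     option drain-rate 40MB/s
>     subvolumes sata-array
>   end-volume
>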
> Thanks!



