[Gluster-devel] Improving real world performance by moving files closer to their target workloads

gordan at bobich.net
Fri May 16 12:28:28 UTC 2008



On Sat, 17 May 2008, Luke McGregor wrote:

> OK, so basically a file lookup works by broadcasting to the network, and
> anyone who has the file replies. The location is then cached locally, so
> the next time the file is needed no broadcast is required, as it is in the
> local cache. Correct?

From how I understand what Avati said, it's the file's _location_ that 
gets cached, not the file itself. That avoids the look-up broadcast, but 
it doesn't avoid the transfer.
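
Something like this, as a rough Python sketch (broadcast() and fetch() 
are made-up stand-ins, not real GlusterFS calls):

    # Only the file's *location* is cached; the contents still travel
    # over the network on every read.
    location_cache = {}  # path -> node that holds the file

    def lookup(path, broadcast):
        if path in location_cache:
            return location_cache[path]   # cache hit: no broadcast needed
        node = broadcast(path)            # ask everyone; the holder replies
        location_cache[path] = node
        return node

    def read_file(path, broadcast, fetch):
        node = lookup(path, broadcast)
        return fetch(node, path)          # the transfer still happens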

> The reason we don't really want to go with caching as a final solution is
> that it won't converge toward an optimal solution over time. Ideally, files
> that are commonly used should no longer be located on nodes that don't use
> them; instead, they should be located locally. Well, that's the theory
> anyway. If this isn't the case, I think it may still be useful to do the
> work to prove the point that it doesn't provide any great benefit.

It would depend on your workload, but I think it WOULD give considerable 
benefit. You'd just need to "cache" the file to the local store, rather 
than caching its location. The file in the local store isn't really 
"cached", it's copied, so the next time another node asks for it, the 
node that copied it would respond to the request.
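
What I mean, sketched in the same made-up style (the announce() step, 
which registers this node as a holder, is an assumption, not an 
existing mechanism):

    local_store = {}     # path -> file contents held on this node
    NODE_ID = "node-A"   # hypothetical identifier for this node

    def read_and_copy(path, lookup, fetch, announce):
        if path in local_store:
            return local_store[path]      # already copied here
        node = lookup(path)
        data = fetch(node, path)
        local_store[path] = data          # a real copy, not a cache entry
        announce(NODE_ID, path)           # future broadcasts answered here
        return data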

This would need a change in how file locking/access is handled: a lock 
would need to be taken on all the nodes that could respond with the 
file. AFR doesn't quite do this; the primary node (the first one on the 
AFR list) is the one that acts as the lock server. If this could be 
overcome, you'd have most of the components you need.
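
Roughly like this (a hypothetical sketch; per-node lock()/unlock() are 
assumptions, and locking in sorted order is just one way to stop two 
writers deadlocking on the same set of holders):

    def lock_all_holders(path, holders, lock, unlock):
        # Unlike AFR, where the first subvolume is the lock server,
        # every node that could respond with the file must be locked.
        acquired = []
        try:
            for node in sorted(holders):  # fixed order avoids deadlock
                lock(node, path)
                acquired.append(node)
        except Exception:
            for node in reversed(acquired):
                unlock(node, path)        # roll back partial locks
            raise
        return acquired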

> I'm a little worried that there might be a sticking point with the current
> lookup scheme if there are multiple copies, however. I'm not quite sure how
> to maintain consistency if you want to guarantee that every write accesses
> the most recent copy only. I can see this as a serious problem. I'm not too
> sure how to get around it, but I will have a think about it.

I think there is an upcoming fix in 1.3.10 that ensures that O_APPEND 
writes are atomic in AFR. You would need something similar, with implicit 
locking across the cluster.
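
Reusing the hypothetical lock_all_holders() from the sketch above, an 
implicitly locked append would look something like:

    def atomic_append(path, data, holders, lock, unlock, append):
        # Take the lock on every holder, append to every copy, release.
        # All copies stay identical, so any holder can serve the file.
        acquired = lock_all_holders(path, holders, lock, unlock)
        try:
            for node in acquired:
                append(node, path, data)
        finally:
            for node in reversed(acquired):
                unlock(node, path)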

> I personally think they would get a better performance benefit by breaking
> the file down into small pieces and spreading them over the network, to get
> better read performance since there are more hosts each doing small amounts
> of disk I/O. I suppose this is similar to your stripe?

That sounds exactly the same as stripe, but if your network I/O is 
faster than your disk I/O, then why bother caching locally in the first 
place? The main assumption all along was that a file that is closer can 
be read faster, hence the local migration.
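
The arithmetic is simple round-robin (a sketch; the 128K stripe unit is 
an assumed figure, not what stripe actually uses):

    CHUNK = 128 * 1024   # hypothetical stripe unit

    def stripe_target(offset, nodes, chunk=CHUNK):
        # Chunk i of the file lives on node i % len(nodes), so a large
        # read fans out across all the hosts' disks in parallel.
        return nodes[(offset // chunk) % len(nodes)]

    # With 4 nodes: bytes [0, 128K) -> nodes[0], [128K, 256K) -> nodes[1], ...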

> However, the academics in the department all seem very sold on the
> migration idea. Personally, I come from a RAID/SAN background.

The file migration idea is more academically sexy. You get a 
self-optimizing file system, which is a very cool idea, and one that is 
a lot more redundant than RAID/SAN. With file migration plus a minimum 
redundancy specification, you could lose entire nodes transparently. A 
SAN cannot have that level of redundancy; it also becomes the 
bottleneck, whereas the GlusterFS solution with migrating redundant 
files would be much, much more scalable.

Gordan
