[Gluster-users] Gluster crashes when cascading AFR
haralds at cs.tu-berlin.de
Tue Dec 16 16:22:33 UTC 2008
2008/12/16 Rainer Schwemmer <rainer.schwemmer at cern.ch>:
> Hello all,
> Thanks for all the suggestions so far.
> There seems to be a bit of confusion about what I'm trying to do, or I
> did not understand the caching part that some of you are suggesting.
What I think was meant by "caching" was to use the io-cache translator
on the clients.
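For reference, a client-side io-cache stanza might look roughly like this. This is only a sketch in 1.3-era volfile syntax; the volume names (`client0`, `iocache`), the hostname `server1`, and the option values are placeholders I made up, and exact option names can differ between GlusterFS releases:

```
# Hypothetical client volfile fragment -- names and values are examples only.
volume client0
  type protocol/client
  option transport-type tcp/client
  option remote-host server1         # placeholder server hostname
  option remote-subvolume brick      # exported volume on that server
end-volume

volume iocache
  type performance/io-cache
  option cache-size 256MB            # per-mount cache; tune to available RAM
  subvolumes client0
end-volume
```

With eight program instances per node sharing one mount, a cache like this should mean the common startup files cross the network once rather than eight times.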
IIRC, your program is started eight times on every node, once per
core. Local caching on the clients would reduce network and server
load, since commonly used files would only have to be fetched over the
network once per node at program startup.
> The plan is to use AFR to write a copy of the repository onto the local
> disks of each of the 2000 cluster nodes. Since Gluster uses the
> underlying ext3 file system and just puts the AFRed files onto the
> disks, I should be able to read the repository data directly via ext3 on
> the cluster nodes once replication is completed.
> This way I can also use the Linux built-in FS cache. I would just use
> the root node of the hierarchy to throw in new files to be replicated to
> all the cluster nodes when necessary.
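If I understand the plan correctly, each node's client volfile would pair a remote protocol/client volume with a local storage/posix volume under AFR, something like the sketch below. The hostname `master-node`, the path `/data/repository`, and the volume names are placeholders, not details from the thread:

```
# Hypothetical per-node volfile sketch -- hostnames, paths and names are placeholders.
volume remote-repo
  type protocol/client
  option transport-type tcp/client
  option remote-host master-node       # placeholder: the root node of the hierarchy
  option remote-subvolume repository
end-volume

volume local-copy
  type storage/posix
  option directory /data/repository    # ext3-backed directory, readable directly once in sync
end-volume

volume replicate
  type cluster/afr
  subvolumes remote-repo local-copy    # AFR mirrors writes to both subvolumes
end-volume
```

Since AFR just stores plain files on the posix subvolume, reads against `/data/repository` after replication would go through ext3 and the normal page cache, as described above.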
I agree that that looks like an optimal solution; it would be nice if
you can get it to work.
If it doesn't, here's another idea, something like the opposite of your
preferred setup:
I wonder what the performance would be like with a simple unify of a
lot of volumes.
Something like the NUFA example:
Maybe using the new DHT scheduler would improve performance for small files.
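As a rough sketch of what such a unify-of-many-volumes setup could look like, again in 1.3-era syntax with made-up names; unify also needs a namespace volume, which I've only referenced here, and the NUFA option names may vary by version:

```
# Hypothetical unify volume with the NUFA scheduler -- all names are placeholders.
volume unify0
  type cluster/unify
  option namespace ns                      # namespace volume, defined elsewhere
  option scheduler nufa                    # prefer the node-local subvolume for new files
  option nufa.local-volume-name local-brick
  subvolumes local-brick remote1 remote2   # one local brick plus remote bricks
end-volume
```

The idea is that NUFA keeps each node's newly created files on its own brick while still presenting one unified namespace across all nodes.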
This benchmark looks promising, but it uses a different interconnect,
so it might not be comparable: