[Gluster-users] Parity for NUFA ?
frank at sixthtoe.net
Tue Apr 7 11:52:10 UTC 2009
>I just discovered GlusterFS and it looks great! I will definitely give
>it a try soon.
>In particular, the NUFA translator seems to meet my needs (use local
>resources as far as possible). I've read most of the documentation about
>NUFA, but I still have some unanswered questions:
>- What happens if a node fills up its entire local storage? Is new
>data transferred to another node, or does it crash?
>- What about data protection? As I understand it, if a node dies in a
>NUFA cluster, its files are gone with it?
>Jshook says that in order to combine NUFA and AFR functionality, you
>just have to use AFR with the local volume name and set the
>read-subvolume option to the local volume. That's true in the case of a
>two-node cluster, but in a 100-node cluster you would still have the
>capacity of only one node, and 100 copies of each file. Am I right?
>What would be great is the ability to create parity bricks:
>something like having 98 data nodes in a NUFA cluster and 2 parity
>nodes that are just there in case a node (or two) goes down. I saw that
>you had graid6 on your roadmap, so do you think that's possible? And if
>so, when (approximately)?
>Anyway, thanks for the work you've done so far. I'll certainly be back
>to annoy you when I start testing it ;-)
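For reference, the NUFA-plus-AFR combination described in the quoted message would look roughly like the following client-side volfile. This is only a sketch in GlusterFS 2.x volfile syntax; the volume names ("local", "remote", "brick") and hosts are illustrative placeholders, not taken from any real setup:

```
# Hypothetical sketch: two protocol/client volumes, one pointing at this
# node's own brick and one at a replica partner, wrapped in
# cluster/replicate with reads preferred from the local copy.
volume local
  type protocol/client
  option transport-type tcp
  option remote-host node1        # this node itself
  option remote-subvolume brick
end-volume

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host node2        # replica partner
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  option read-subvolume local     # serve reads from the local subvolume
  subvolumes local remote
end-volume
```

As the poster notes, scaling this to 100 nodes with full replication would mean 100 copies of every file, which is exactly why parity is attractive.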
I saw RAID-6 support on the roadmap too, and agree it would be great
to get some kind of protection against brick failure. I got to
thinking... instead of doing RAID-6, maybe it would be better to do
something like ZFS raid-z at the brick level: treat each brick like a
vdev and the collection of bricks like a zpool! I'm sure it's far more
complicated than that, but do any of the developers out there think it
would be possible to merge the two (RAID-Z and GlusterFS)?
I guess the hardest part would be figuring out where the parity
computation and checking would get done: client side or brick side?
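RAID-6 proper uses two independent syndromes (typically Reed-Solomon), but the basic single-parity idea behind all of these schemes can be sketched in a few lines of Python. This is a toy illustration of XOR parity across bricks, not GlusterFS code; the function names are made up for the example:

```python
from functools import reduce

def parity_block(data_blocks):
    """XOR equal-sized data blocks together into one parity block
    (RAID-5-style single parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_blocks))

def rebuild(surviving_blocks, parity):
    """Recover a single lost data block: XOR the parity with the
    surviving blocks, since x ^ x = 0 cancels everything but the loss."""
    return parity_block(surviving_blocks + [parity])

# Three "data bricks" plus one "parity brick":
bricks = [b"aaaa", b"bbbb", b"cccc"]
p = parity_block(bricks)

# Lose the middle brick; rebuild it from the other two plus parity:
restored = rebuild([bricks[0], bricks[2]], p)
assert restored == b"bbbb"
```

A single XOR parity brick survives one brick failure; surviving two (as in the 98-data/2-parity layout suggested above) needs a second, independent syndrome, which is where RAID-6-style coding comes in.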
"I have come here to chew bubble gum and kick ass; and I'm all out of
~Rowdy Roddy Piper - 'They Live'