[Gluster-users] Parity for NUFA ?
Raghavendra G
raghavendra.hg at gmail.com
Wed Apr 15 05:00:19 UTC 2009
Hi,
Please find my comments inline.
On Wed, Apr 1, 2009 at 6:44 PM, Julien Cornuwel <julien at cornuwel.net> wrote:
> Hi,
>
> I just discovered GlusterFS and it looks great! I will definitely give
> it a try. Soon.
> In particular, the NUFA translator seems to meet my needs (use local
> resources as much as possible). I've read most of the documentation on
> NUFA, but I still have some unanswered questions:
>
> - What happens if a node fills up its entire local storage? Is new
> data transferred to another node? Or does it crash?
New files will be created on another node. Writes to files that already
live on a full node return -1 with errno set to ENOSPC.
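For reference, a minimal NUFA client volume spec looks roughly like the sketch
below (GlusterFS 2.0-style volfile syntax; the host names and brick names are
only placeholders, not taken from any real setup). The local-volume-name option
is what makes NUFA place new files on the local brick first; once that brick is
full, new files are scheduled on the other subvolumes.

volume brick1
  type protocol/client
  option transport-type tcp
  option remote-host node1.example.com
  option remote-subvolume posix1
end-volume

volume brick2
  type protocol/client
  option transport-type tcp
  option remote-host node2.example.com
  option remote-subvolume posix2
end-volume

# NUFA distributes files across its subvolumes, preferring the local one
volume nufa
  type cluster/nufa
  option local-volume-name brick1
  subvolumes brick1 brick2
end-volume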
> - What about data protection? As I understand it, if a node dies in a
> NUFA cluster, its files are gone with it?
Yes, with a plain NUFA setup there can be data loss. You can protect against
data loss by using the replicate (AFR) xlator underneath NUFA, so that each
child of NUFA is itself a replicated volume.
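As a rough sketch of that layout (assuming brick1a, brick1b, brick2a and
brick2b are protocol/client volumes defined like the ones in the previous
sketch), each NUFA child is a replicate pair:

# each pair mirrors two bricks on different nodes
volume rep1
  type cluster/replicate
  subvolumes brick1a brick1b
end-volume

volume rep2
  type cluster/replicate
  subvolumes brick2a brick2b
end-volume

# NUFA sits on top and prefers the replica set that contains the local brick
volume nufa
  type cluster/nufa
  option local-volume-name rep1
  subvolumes rep1 rep2
end-volume

With per-pair replication the usable capacity grows with the number of pairs,
instead of collapsing to a single node's worth as it would with one replicate
volume spanning all the bricks.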
>
> On http://www.gluster.org/docs/index.php/GlusterFS_Roadmap_Suggestions
> Jshook says that in order to combine NUFA and AFR functionality, you
> just have to use AFR with the local volume name, and set the
> read-subvolume option to the local volume. That's true in the case of a
> two-node cluster, but in a 100-node cluster you would still have the
> capacity of only one node, and 100 copies of each file. Am I right?
>
> What would be great is the ability to create parity bricks:
> something like having 98 nodes in a NUFA cluster and 2 parity nodes that
> are just there in case a node (or two) goes down. I saw that you have
> graid6 on your roadmap, so do you think that's possible? And if so,
> when (approximately)?
>
> Anyway, thanks for the work you've done so far. I'll certainly be back
> to annoy you when I start testing it ;-)
>
> Regards,
>
--
Raghavendra G