[Gluster-devel] GlusterFS Roadmap: Erasure codes.
Dr Rodney G. McDuff
mcduff at its.uq.edu.au
Thu May 1 08:15:42 UTC 2008
Hi Anand,
I see this as an ultra-high-redundancy and DR solution (similar to
Cleversafe, which doesn't seem to be progressing as fast as GlusterFS
is): large storage bricks spread over geographically distant data
centres (say, for instance, a co-op of multiple universities' data
centres) and used for archival of large write-once-read-many scientific
data sets. Such a system would continue to function (for some of the
co-op) even if several data centres failed.
Anand Avati wrote:
> Rodney,
> A couple of weeks back we did an analysis of implementing an R-S
> translator for m+n redundancy. One of our strategies (probably not the
> best) was to have it work like an extended stripe translator, where
> the scope of each checksum domain is a file.
> Some subvolumes contain file stripe chunks, and some subvolumes
> contain polynomial checksums across the chunks.
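
Here is a minimal, self-contained sketch in plain C of that
polynomial-checksum idea (this is not the actual translator code; the
m = 4 data chunks, n = 2 checksum chunks, tiny chunk size, and the gf_*
helper names are all assumptions picked just for the demo):

/* Illustrative sketch of m+n Reed-Solomon-style parity over GF(2^8).
 * Not GlusterFS code; parameters chosen arbitrarily for the demo. */
#include <stdio.h>
#include <stdint.h>

#define M 4            /* data chunks per stripe   (assumed) */
#define N 2            /* checksum chunks per stripe (assumed) */
#define CHUNK 8        /* bytes per chunk, tiny for the demo */

static uint8_t gf_exp[512], gf_log[256];

/* Build log/antilog tables for GF(2^8), primitive polynomial 0x11d. */
static void gf_init(void)
{
    int x = 1;
    for (int i = 0; i < 255; i++) {
        gf_exp[i] = x;
        gf_log[x] = i;
        x <<= 1;
        if (x & 0x100)
            x ^= 0x11d;
    }
    for (int i = 255; i < 512; i++)
        gf_exp[i] = gf_exp[i - 255];
}

static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    if (a == 0 || b == 0)
        return 0;
    return gf_exp[gf_log[a] + gf_log[b]];
}

/* Checksum row j is the polynomial checksum sum_i data[i] * (j+1)^i,
 * evaluated independently at every byte offset in the stripe. */
static void encode(const uint8_t data[M][CHUNK], uint8_t parity[N][CHUNK])
{
    for (int j = 0; j < N; j++) {
        for (int b = 0; b < CHUNK; b++) {
            uint8_t acc = 0, coeff = 1;
            for (int i = 0; i < M; i++) {
                acc ^= gf_mul(coeff, data[i][b]);
                coeff = gf_mul(coeff, j + 1);  /* next power of (j+1) */
            }
            parity[j][b] = acc;
        }
    }
}

int main(void)
{
    uint8_t data[M][CHUNK] = { "chunk0!", "chunk1!", "chunk2!", "chunk3!" };
    uint8_t parity[N][CHUNK];

    gf_init();
    encode(data, parity);

    for (int j = 0; j < N; j++) {
        printf("checksum %d:", j);
        for (int b = 0; b < CHUNK; b++)
            printf(" %02x", parity[j][b]);
        printf("\n");
    }
    return 0;
}

Each checksum byte is the data polynomial evaluated at a distinct point
(1 and 2 here), so checksum 0 degenerates to plain XOR parity, and in
this small demo any 4 of the 6 chunks are enough to rebuild a stripe.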
> Flip sides -
> 1. The computation is pretty high for this implementation.
> 2. Even though the storage is not n-fold extra, write traffic is
> actually n-fold extra (see the worked example below).
>
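
To put point 2 in concrete (assumed) figures: with m = 4 data chunks
and n = 2 checksum chunks per stripe, a full-stripe write pushes 6
chunks for 4 chunks of payload, and a small write touching a single
chunk is worse still, since each of the n checksum chunks has to be
read, recomputed, and rewritten, turning 1 logical write into
1 + n = 3 physical writes plus the reads.
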
> Implementation is surely possible, albeit with the performance impacts
> above. We have not yet seen a big enough need to move it up our
> priority list, but it is surely something we would like to have in the
> future.
--
Dr. Rodney G. McDuff                  | Ex ignorantia ad sapientiam
Manager, Strategic Technologies Group | Ex luce ad tenebras
Information Technology Services       |
The University of Queensland          |
EMAIL: mcduff at its.uq.edu.au        |
TELEPHONE: +61 7 3365 8220            |