[Gluster-users] Archive integrity with hashing
Jeffry Molanus
jeffry.molanus at gmail.com
Mon Feb 23 20:34:56 UTC 2009
Hi all,
I've been using a CAS-based storage solution for archiving, and one of
the features that makes it suitable for archiving is a mechanism to
determine whether the hash of an object (file) still matches the hash
computed on first write. The system uses replication for "HA", and is a
node-based cluster implementation with a database containing the
metadata.
When a file is read from disk, the system recomputes its hash and
checks it against the database. If this check fails, the copy of the
file created during replication is checked instead. If that copy
matches, a new copy is replicated, the damaged file/disk is
deleted/retired, and I have two working copies again. (If both copies
fail: data loss.)
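The verify-on-read flow described above could be sketched roughly as
follows. This is only an illustration, not Gluster code: the helper
names, the use of SHA-256, and the plain file paths standing in for
primary and replica copies are all assumptions.

```python
# Illustrative sketch of verify-on-read with replica fallback.
# Not a real Gluster API; paths and names are hypothetical.
import hashlib
import shutil


def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def read_verified(primary, replica, expected_hash):
    """Return the path of a copy whose hash matches the stored hash,
    repairing the bad copy from the good one when possible."""
    if sha256_of(primary) == expected_hash:
        return primary
    if sha256_of(replica) == expected_hash:
        # Re-replicate from the good copy to restore two working copies.
        shutil.copyfile(replica, primary)
        return replica
    # Both copies fail the check: data loss.
    raise IOError("both copies corrupt")
```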
Another reason the system is usable for archiving is that, by means of
the hash, it can be determined whether the file changed during the
initial commit/write. This is of course not 100% safe, but it does add
to the "integrity" of the archive.
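The write-time check could look something like the sketch below: hash
the incoming data before writing, re-hash what was actually stored, and
only record the hash in the metadata database if the two agree. Again,
the function name and the SHA-256 choice are my assumptions, not part
of any real system's API.

```python
# Illustrative write-time integrity check (hypothetical helper).
import hashlib


def commit_with_hash(data: bytes, path: str) -> str:
    """Write data to path and verify the stored bytes hash to the
    same value as the source; return the hash for the metadata DB."""
    expected = hashlib.sha256(data).hexdigest()
    with open(path, "wb") as f:
        f.write(data)
    with open(path, "rb") as f:
        stored = hashlib.sha256(f.read()).hexdigest()
    if stored != expected:
        # The object changed (or was corrupted) during the commit.
        raise IOError("object changed during initial write")
    return expected
```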
Is there any support for this kind of extra checking in Gluster?
Regards, Jeffry