[Gluster-devel] Selfheal is not working? Once more
Łukasz Osipiuk
lukasz at osipiuk.net
Thu Jul 31 00:07:51 UTC 2008
Thanks for the answers :)
On Wed, Jul 30, 2008 at 8:52 PM, Martin Fick <mogulguy at yahoo.com> wrote:
> --- On Wed, 7/30/08, Łukasz Osipiuk <lukasz at osipiuk.net> wrote:
>
[cut]
>> The more extreme example is: one of the data bricks explodes and
>> you replace it with a new one, configured the same as the one which
>> went off, but with an empty HD. This is the same as the above
>> experiment, but all data is gone, not just one file.
>
> AFR should actually handle this case fine. When you install
> a new brick and it is empty, there will be no metadata for
> any files or directories on it, so it will self-heal (lazily).
> The problem that you described above occurs because you have
> metadata saying that your files (the directory, actually) are
> up to date, but the directory is not, since it was modified
> manually under the hood. AFR cannot detect this (yet); it
> trusts its metadata.
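If by metadata you mean the extended attributes AFR keeps on the backend
files, I can inspect them directly on a brick with something like the
following (a sketch; the backend path is a placeholder and the exact xattr
names may differ in my glusterfs version):

  # run against a brick's backend directory, not through the mountpoint
  getfattr -d -m trusted -e hex /data/export-a/somefile
  # AFR's versioning attributes should show up here, e.g.
  # trusted.glusterfs.version and trusted.glusterfs.createtime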
Well, either I am doing something terribly wrong or it does not handle
this case well.
I have the following configuration:
6 bricks: A, B, C, D, E, F
On the client I use:
IO-CACHE(
  IO-THREADS(
    WRITE-BEHIND(
      READ-AHEAD(
        UNIFY(
          DATA(AFR(A,B), AFR(C,D)), NS(AFR(E,F))
        )
      )
    )
  )
)
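In case it matters, this is roughly how my client spec file looks
(hostnames, paths and volume names are placeholders, and I am paraphrasing
from memory, so option spellings may be slightly off):

  volume brick-a
    type protocol/client
    option transport-type tcp/client
    option remote-host server-a          # placeholder hostname
    option remote-subvolume brick
  end-volume
  # brick-b ... brick-f are defined the same way, pointing at the other servers

  volume afr-data-1
    type cluster/afr
    subvolumes brick-a brick-b
  end-volume

  volume afr-data-2
    type cluster/afr
    subvolumes brick-c brick-d
  end-volume

  volume afr-ns
    type cluster/afr
    subvolumes brick-e brick-f
  end-volume

  volume unify-0
    type cluster/unify
    option namespace afr-ns
    option scheduler rr                  # scheduler choice is not important here
    subvolumes afr-data-1 afr-data-2
  end-volume

  volume readahead
    type performance/read-ahead
    subvolumes unify-0
  end-volume

  volume writebehind
    type performance/write-behind
    subvolumes readahead
  end-volume

  volume iothreads
    type performance/io-threads
    subvolumes writebehind
  end-volume

  volume iocache
    type performance/io-cache
    subvolumes iothreads
  end-volume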
I do:
1. mount glusterfs on the client
2. on the client, create a few files/directories on the mounted glusterfs
3. shut down brick A
4. delete and recreate brick A's local (backend) directory
5. start brick A again
6. on the client, access all files in the mounted glusterfs directory.
After this procedure no files/directories appear in brick A's local
directory. Should they, or am I missing something? A rough shell
transcript of what I did is below.
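For reference, the steps above correspond roughly to these commands
(spec-file names, hostnames and backend paths are placeholders):

  # 1. on the client: mount glusterfs
  glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs

  # 2. on the client: create a few files and directories
  mkdir /mnt/glusterfs/dir1
  cp /etc/hosts /mnt/glusterfs/dir1/file1

  # 3. on brick A's server: shut down the glusterfsd serving brick A
  kill <pid of glusterfsd for brick A>

  # 4. on brick A's server: delete and recreate the backend directory
  rm -rf /data/export-a && mkdir -p /data/export-a

  # 5. on brick A's server: start brick A again
  glusterfsd -f /etc/glusterfs/brick-a.vol

  # 6. on the client: read every file to give AFR a chance to self-heal
  find /mnt/glusterfs -type f -exec head -c1 '{}' \; > /dev/null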
I think the file checksumming you described is overkill for my needs.
I think I will know when one of my HD drives breaks down, and I will
replace it, but I need to work around the data-recreation problem
described above.
--
Łukasz Osipiuk
mailto: lukasz at osipiuk.net