[Gluster-users] Self healing metadata info
Stephan von Krawczynski
skraw at ithnet.com
Fri Jan 25 10:01:42 UTC 2013
Hi Patric,

your paper clearly shows you are infected by the fs-programmer virus :-)
No one else would put tags/gfids/inode numbers of a file into a logfile
instead of the full, true filename, simply because when you look at that
logfile days/months/years later you know exactly nothing about the files
affected by e.g. a self-heal. Can you explain why a fs cannot write the name
of the file it is currently fiddling with into the logfile instead of a
cryptic number?
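Today you have to do that mapping yourself on a brick, roughly like this
(brick path and gfid are made up; for regular files the .glusterfs entry is
a hard link to the real file, so matching by inode works):

# read the gfid the logfile will show for a known file
getfattr -n trusted.gfid -e hex /export/brick1/data/somefile

# go the other way: take a gfid like 0e9b8f2a-... from the log and find the
# real name via the hard link the brick keeps under .glusterfs/<xx>/<yy>/
find /export/brick1 -samefile \
     /export/brick1/.glusterfs/0e/9b/0e9b8f2a-1c2d-4e3f-9a8b-7c6d5e4f3a2b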
For completeness, in the split-brain case I would probably add a
gluster volume heal <repvol> prefer <brick> <filename>
command which prefers the file's copy on <brick> and triggers the self-heal
for that file.
In addition, you could allow
gluster volume heal <repvol> prefer <brick>
(without filename) to generally prefer the copies on <brick> and trigger
self-heal for all affected files. There are cases where admins care less
about which copy survives than about the file being accessible at all.
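Today the per-file version of that has to be done by hand on the brick that
should _not_ win, roughly like this (paths made up and written from memory,
so double-check before copying):

# on the brick whose copy should lose
getfattr -n trusted.gfid -e hex /export/brick2/data/somefile   # note the gfid

# drop the bad copy together with its .glusterfs hard link
rm /export/brick2/data/somefile
rm /export/brick2/.glusterfs/<first-2-hex-of-gfid>/<next-2-hex>/<full-gfid>

# then trigger the heal, e.g. by stat'ing the file through a client mount
stat /mnt/repvol/data/somefile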
Everything around self-heal/split-brain is easy if you are dealing with 5
affected files. But with 5000 files instead, no admin can realistically look
at every single one. So he should be able to choose some general policy like
gluster volume heal <repvol> prefer <tag>
where <tag> can be:
<brickname> (as above)
"length", choose longest file always
"date", choose latest file date always
"delete", simply remove all affected files
<name-one> ...
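The "date" policy, for example, can be scripted today, but only roughly like
this (untested sketch: brick paths are made up, both bricks are assumed to be
reachable on one host, and splitbrain-files.txt is a hand-made list of
affected paths relative to the brick root, collected e.g. from
gluster volume heal <repvol> info split-brain):

B1=/export/brick1
B2=/export/brick2

while read f; do
    t1=$(stat -c %Y "$B1/$f" 2>/dev/null || echo 0)
    t2=$(stat -c %Y "$B2/$f" 2>/dev/null || echo 0)
    if [ "$t1" -ge "$t2" ]; then
        loser="$B2/$f"
    else
        loser="$B1/$f"
    fi
    echo "would remove: $loser"   # swap echo for rm (plus the matching
                                  # .glusterfs link) once you trust it
done < splitbrain-files.txt

That is exactly the kind of boilerplate the filesystem could take off our
hands with a single prefer option.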
Regards,
Stephan
On Fri, 25 Jan 2013 10:11:07 +0100
Patric Uebele <puebele at redhat.com> wrote:
> Hi JPro,
>
> perhaps the attached doc does explain it a bit.
>
> Best regards,
>
> Patric
>
> On Fri, 2013-01-25 at 01:26 -0500, Java Pro wrote:
> > Hi,
> >
> >
> > If a brick is down and comes back up later, how does Glusterfs know
> > which files in this brick need to be 'self-healed'?
> >
> >
> > Since the metadata of whether to 'heal' is stored as an xattr on the
> > replica on the other bricks, does Glusterfs scan these files on the
> > other bricks to see if one is "accusing" its replica and therefore the
> > replica needs to be "healed"?
> >
> >
> > In short, does Glusterfs keep a record of "writes" to a brick while the
> > brick is down and apply these "writes" to the brick when it is back up?
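BTW, you can watch those "accusations" directly on the bricks: AFR keeps
per-replica pending counters in the trusted.afr.* xattrs of every file.
Something like this shows them (brick path made up):

getfattr -d -m trusted.afr -e hex /export/brick1/data/somefile
# non-zero trusted.afr.<volname>-client-N counters mean this copy blames
# replica N and a self-heal is still pending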
> >
> >
> >
> >
> > Thanks,
> > JPro
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
> --
> Patric Uebele
> Solution Architect Storage
>
> Red Hat GmbH
> Technopark II, Haus C
> Werner-von-Siemens-Ring 14
> 85630 Grasbrunn
> Germany
>
> Office: +49 89 205071-162
> Cell: +49 172 669 14 99
> mailto: Patric.Uebele at redhat.com
>
> gpg keyid: 48E64CC1
> gpg fingerprint: C63E 6320 A03B 4410 D208 4EE7 12FC D0E6 48E6 4CC1
>
> ____________________________________________________________________
> Reg. Adresse: Red Hat GmbH, Werner-von-Siemens-Ring 14, 85630 Grasbrunn
> Handelsregister: Amtsgericht Muenchen HRB 153243
> Geschaeftsfuehrer: Mark Hegarty, Charlie Peters, Michael Cunningham,
> Charles Cachera