[Gluster-devel] proposals to afr
alexey.filin at gmail.com
Mon Oct 22 17:13:59 UTC 2007
On 10/22/07, Kevan Benson <kbenson at a-1networks.com> wrote:
> Alexey Filin wrote:
> > Hi,
> > may I propose some ideas to be implemented inside afr to increase its
> > reliability?
> > * First idea: an extra extended attribute named e.g. afr_op_counter to
> > keep info about operations currently being performed on a file, so
> > operations on a file's (meta)data are done this way:
> > 1) afr_master.increase_afr_op_counter <for file in namespace>
> > 2) real operation over file (meta)data
> > 3) afr_master.start_op -> afr_slave.increase_afr_op_counter <for file
> > on slave>
> > 4) loop over all slaves by 2)-3)
> > during close():
> > 1) afr_master.zero_op -> afr_slave.zero_afr_op_counter <for file on a
> > slave>
> > 2) loop over all slaves by 1)
> > 3) afr_master.zero_afr_op_counter <for file in namespace>
> > With this scheme all operations that finished incorrectly are disclosed
> > in a simple and fast way (by a non-zero counter). The scheme does not
> > replace the afr version xattr; it is a complement that allows
> > inconsistent replicas to be found when close() doesn't update the xattr
> > on the slaves due to an afr master crash.
> Hmm, sort of like a trusted_afr_version minor number that gets set
> while in an operation. Essentially equivalent to taking a file with an
> afr version of 3 and making it 3.5 for the duration of the operation,
> and 4 on close. Any files on slaves that show they are in an op but no
> operation is actually in place need to be self-healed. Sounds good to
> me, but then again, I'm not a GlusterFS dev. ;)
Yes, it is; you understood me correctly.
I changed the algo a few times, so "start_op" is better named "finish_op"
(the minor version increment).
There may be a problem when a file is open()ed several times, but it's the
same problem as with the afr version xattr, so it can probably be handled
in the same way.
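
To make the scheme concrete, here is a rough sketch in C over the plain
xattr calls; the key name and the helpers are mine, for illustration only,
not the real GlusterFS internals:

  #include <sys/xattr.h>      /* getxattr/setxattr, Linux */

  #define OP_COUNTER_KEY "trusted.afr_op_counter"  /* illustrative name */

  static int get_op_counter(const char *path)
  {
      int c = 0;
      /* a missing xattr means no operation was ever in flight */
      if (getxattr(path, OP_COUNTER_KEY, &c, sizeof(c)) < 0)
          return 0;
      return c;
  }

  static int set_op_counter(const char *path, int c)
  {
      return setxattr(path, OP_COUNTER_KEY, &c, sizeof(c), 0);
  }

  /* step 1: master marks the file busy in the namespace */
  int afr_begin(const char *ns_path)
  {
      return set_op_counter(ns_path, get_op_counter(ns_path) + 1);
  }

  /* step 3 (the renamed "finish_op"): after the real operation has
   * succeeded on one slave, bump the counter there as well */
  int afr_finish_op(const char *slave_path)
  {
      return set_op_counter(slave_path, get_op_counter(slave_path) + 1);
  }

  /* during close(): zero the counter on every slave, then in the
   * namespace; a non-zero counter left anywhere marks an unfinished op */
  int afr_close_cleanup(const char *ns_path, const char **slaves, int n)
  {
      int i;
      for (i = 0; i < n; i++)
          if (set_op_counter(slaves[i], 0) < 0)
              return -1;
      return set_op_counter(ns_path, 0);
  }

Any replica found later with a non-zero afr_op_counter was caught
mid-operation and is a candidate for self-heal, whatever its afr version
says.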
> > * Second idea: afr journal on master (for data or metadata only like in
> > modern local FS's), to keep all updates in it during operations with afr
> > slaves and recover after afr crash
> I'm not sure a journal's necessary with self heal. It would speed up
> recovery of failed processes in some cases, but slow it down in others.
> There should be another copy of the data by the nature of AFR, so self
> heal can recover the problem on a node by the copy operation it does
> currently. It might be somewhat slower for small operations, but it's
> quite simple and functional.
It is required to know where the master copy is. In a configuration where
several afr masters access the slaves in different orders, that's a
non-trivial task. Is such a configuration legal?
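
The journal I imagine is minimal: append (and fsync) a record per pending
update before touching any slave, so after a master crash it is known
exactly what to replay. A sketch; the record layout is only an
illustration, not an actual format:

  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>

  /* one pending afr update; layout is illustrative only */
  struct afr_journal_rec {
      uint64_t txn_id;       /* monotonically increasing per master */
      uint64_t offset;       /* byte range of the pending write     */
      uint64_t length;
      uint32_t slaves_done;  /* bitmask of slaves already updated   */
      char     path[256];    /* file the update applies to          */
  };

  /* append + fsync the record before sending the update to the slaves */
  int journal_append(FILE *log, const struct afr_journal_rec *rec)
  {
      if (fwrite(rec, sizeof(*rec), 1, log) != 1)
          return -1;
      if (fflush(log) != 0)
          return -1;
      return fsync(fileno(log)); /* durable before any slave is touched */
  }

On restart, any record with an incomplete slaves_done mask tells the
recovering master which byte ranges to re-copy to which slaves, though
with several masters writing in different orders the journal alone still
doesn't settle which copy wins.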
> As it is now, if a node dies during a write, the file's
> trusted_afr_version isn't incremented on that node, and the next read of
> the file when the node is active will overwrite the inconsistent file
> with the good copy from another node. The client experiences a delay
> while glusterfs waits for the failed node to time out before it continues
> its writes, and then continues on. Besides the delay, node failures
> (and the subsequent automatic repair of the FS) are transparent to the
> client with regard to AFR.
I understand that; I'm talking about the case when the afr master node
crashes before close() (it doesn't matter whether the afr master is on a
glfs slave or on a glfs server node). In such a situation the replicas can
be inconsistent while the afr version xattr is the same (old) on all of
them. To discover the inconsistency without a minor version xattr it would
be required e.g. to calculate checksums of all replicas, or to use
workarounds like a modification-time check, which misbehaves in some cases.
Of course the minor version xattr brings some overhead, but if the overhead
is negligible it is worth it.
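
So the self-heal check I have in mind compares both xattrs per replica,
roughly like this (a sketch; the struct and names are made up):

  #include <stdint.h>

  /* per-replica state read from the two xattrs; names are illustrative */
  struct replica_state {
      uint32_t version;     /* the existing afr version xattr       */
      uint32_t op_counter;  /* the proposed minor/op-counter xattr  */
  };

  /* does replica b need healing from the good replica a? */
  int needs_heal(const struct replica_state *a,
                 const struct replica_state *b)
  {
      if (b->op_counter != 0)         /* caught mid-operation: stale
                                       * even if versions are equal  */
          return 1;
      return b->version < a->version; /* ordinary version lag        */
  }

After a master crash before close() the versions can be equal everywhere,
yet the non-zero counter still flags the bad replica without checksumming
anything.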
> -Kevan Benson
> -A-1 Networks