[Gluster-devel] How does read-subvol-entry.t works?

Ravishankar N ravishankar at redhat.com
Mon Mar 2 16:27:33 UTC 2015


On 03/01/2015 10:58 AM, Emmanuel Dreyfus wrote:
> Hi
>
> I am trying to understand why read-subvol-entry.t almost always fails on
> NetBSD. Here is how it goes:
>
> - create a 2-brick replicated volume
> - mkdir -p $M0/abc/def
> - set:
> self-heal-daemon off
> stat-prefetch off
> cluster.background-self-heal-count 0
> cluster.data-self-heal off
> cluster.metadata-self-heal off
> cluster.entry-self-heal off
> - kill brick0
> - touch $M0/abc/def/ghi
> - restart brick0
> - check for ghi in ls $M0/abc/def/
>
> How is it supposed to heal? If I understand correctly, the touch
> $M0/abc/def/ghi causes AFR xattrs to be set on $M0/abc/def/ for the
> operation not done on brick0. Later, the READDIR from ls
> $M0/abc/def/ should cause AFR to notice those xattrs and perform the
> heal.
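
(For reference, the quoted steps correspond roughly to the following
sequence in the test framework. This is only a sketch, not the literal
test file: the full option names, the mount command and the usual
$V0/$B0/$H0/$M0 variables from include.rc/volume.rc are assumptions.)

    #!/bin/bash
    . $(dirname $0)/../include.rc
    . $(dirname $0)/../volume.rc

    TEST glusterd
    TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}0 $H0:$B0/${V0}1
    TEST $CLI volume set $V0 cluster.self-heal-daemon off
    TEST $CLI volume set $V0 performance.stat-prefetch off
    TEST $CLI volume set $V0 cluster.background-self-heal-count 0
    TEST $CLI volume set $V0 cluster.data-self-heal off
    TEST $CLI volume set $V0 cluster.metadata-self-heal off
    TEST $CLI volume set $V0 cluster.entry-self-heal off
    TEST $CLI volume start $V0
    TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0

    TEST mkdir -p $M0/abc/def
    TEST kill_brick $V0 $H0 $B0/${V0}0     # take brick0 down
    TEST touch $M0/abc/def/ghi             # entry exists only on brick1
    TEST $CLI volume start $V0 force       # bring brick0 back
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "ghi" echo `ls $M0/abc/def/`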
I think I added the testcase for the afr-v1 code. Since
cluster.{data,metadata,entry}-self-heal is set to off, lookup/readdir
will *not* trigger heal in afr-v1, i.e. ghi would not be created on
brick0 despite the EXPECT_WITHIN $PROCESS_UP_TIMEOUT "ghi" echo `ls
$M0/abc/def/` check.
Thus what the test checks is that, in spite of not healing, the
client still gets the correct data by reading from brick1.
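
If you want to see why the client reads from brick1, you can inspect
the AFR changelog xattrs on the backend bricks while brick0 is still
missing the entry. The brick paths and the exact xattr value below are
illustrative, but roughly:

    # on the node hosting the bricks (backend paths are an example)
    getfattr -d -m . -e hex /d/backends/${V0}0/abc/def
    getfattr -d -m . -e hex /d/backends/${V0}1/abc/def

    # the directory on brick1 should carry a non-zero entry changelog
    # blaming brick0, something like:
    #   trusted.afr.<volname>-client-0=0x000000000000000000000001
    # AFR uses this pending changelog to pick brick1 as the read
    # subvolume for the directory, so ls returns ghi even before any heal.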

In afr-v2, lookup does not trigger data/metadata/entry self-heal anyway.
Only 'name' self-heals are done, i.e. ghi is created but its data and
metadata are not healed; those are taken care of only by the
self-heal-daemon. In other words, client-side healing in v2 is
restricted to name self-heals.
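
So if you actually want ghi's data/metadata healed in a setup like
this, the self-heal-daemon has to be re-enabled and given a chance to
run. Roughly (volume name is illustrative):

    gluster volume set <volname> cluster.self-heal-daemon on
    gluster volume heal <volname>          # trigger an index heal
    gluster volume heal <volname> info     # entries should drop to zero once done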

HTH,
Ravi


> Is this the way it should behave? If it is, then where is the relevant
> code in xlators/cluster/afr/src?
>
