[Gluster-devel] question on glustershd

Krutika Dhananjay kdhananj at redhat.com
Wed Dec 3 06:39:56 UTC 2014


----- Original Message -----

> From: "Krutika Dhananjay" <kdhananj at redhat.com>
> To: "Emmanuel Dreyfus" <manu at netbsd.org>
> Cc: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Wednesday, December 3, 2014 11:54:03 AM
> Subject: Re: [Gluster-devel] question on glustershd

> ----- Original Message -----

> > From: "Emmanuel Dreyfus" <manu at netbsd.org>
> > To: "Ravishankar N" <ravishankar at redhat.com>, "Gluster Devel"
> > <gluster-devel at gluster.org>
> > Sent: Wednesday, December 3, 2014 10:14:22 AM
> > Subject: Re: [Gluster-devel] question on glustershd

> > Ravishankar N <ravishankar at redhat.com> wrote:

> > > afr_shd_full_healer() is run only when we run 'gluster vol heal <volname>
> > > full', doing a full brick traversal (readdirp) from the root and
> > > attempting heal for each entry.

> > Then we agree that "gluster vol heal $volume full" may fail to heal some
> > files because of inode lock contention, right?

> > If that is expected behavior, then the tests are wrong. For instance in
> > tests/basic/afr/entry-self-heal.t we do "gluster vol heal $volume full"
> > and we check that no unhealed files are left behind.

> > Did I miss something, or do we have to either fix afr_shd_full_healer()
> > or tests/basic/afr/entry-self-heal.t?
> Typical use of "heal full" is in the event of a disk replacement, where one of
> the bricks in the replica set is totally empty.
> And in a volume where both children of AFR (assuming 2-way replication, to
> keep the discussion simple) are on the same node, SHD would launch two healers.
> Each healer does readdirp() only on the brick associated with it (see how
> @subvol is initialised in afr_shd_full_sweep()).
> I guess in such a scenario, the healer associated with the brick that was
> empty would have no entries to read and, as a result, nothing to heal from
> it to the other brick.
> In that case, there is no question of lock contention of the kind you
> explained above, is there?
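
To make that scenario concrete, a hypothetical setup would look something like
the following (volume name and brick paths are made up for illustration, not
taken from any test):

    # 1x2 replica volume with both bricks on the same node
    gluster volume create testvol replica 2 node1:/bricks/b0 node1:/bricks/b1 force
    gluster volume start testvol
    # The glustershd process on node1 now has both bricks as its children, so
    # "gluster vol heal testvol full" launches one full healer per brick, and
    # each healer readdirp()s only its own brick.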

Come to think of it, it does not really matter whether the two bricks are on the same node or not.
In either case, there may not be any lock contention between healers associated with different bricks, irrespective of whether they are part of the same SHD or of SHDs on different nodes.
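
Either way, for what it is worth, the full sequence being discussed is roughly
the following (continuing the hypothetical volume above; this is only a sketch,
not the actual contents of the test):

    # Assume node1:/bricks/b1 was replaced and starts out empty, while
    # node1:/bricks/b0 still holds all the data.

    # Trigger a full sweep: every entry returned by readdirp() on a brick is
    # examined and healed to the other replica if needed.
    gluster volume heal testvol full

    # Check that nothing is left pending, which is essentially what
    # tests/basic/afr/entry-self-heal.t verifies after a full heal.
    gluster volume heal testvol info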
-Krutika 

> -Krutika

> > --
> > Emmanuel Dreyfus
> > http://hcpnet.free.fr/pubz
> > manu at netbsd.org