[Gluster-devel] Sharding - what next?
Krutika Dhananjay
kdhananj at redhat.com
Wed Dec 16 12:59:29 UTC 2015
----- Original Message -----
> From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> To: "Krutika Dhananjay" <kdhananj at redhat.com>
> Cc: "Gluster Devel" <gluster-devel at gluster.org>, "gluster-users"
> <gluster-users at gluster.org>
> Sent: Wednesday, December 16, 2015 6:56:03 AM
> Subject: Re: Sharding - what next?
> Hi, late reply again ...
> On 10/12/2015 5:33 PM, Krutika Dhananjay wrote:
> > There is a 'heal-info summary' command that is under review, written by
> > Mohammed Ashiq @ http://review.gluster.org/#/c/12154/3, which prints the
> > number of files that are yet to be healed.
>
> > It could perhaps be enhanced to print files in split-brain and also files
> > which are possibly being healed. Note that these counts are printed per
> > brick.
>
> > It does not print a single list of counts with aggregated values. Would
> > that be something you would consider useful?
>
> Very much so, that would be perfect.
> I can get close to this just with the following
> gluster volume heal datastore1 info | grep 'Brick\|Number'
> And if one is feeling fancy or just wants to keep an eye on progress
> watch "gluster volume heal datastore1 info | grep 'Brick\|Number'"
> though of course this runs afoul of the heal info delay.
I guess I did not make myself clear. Apologies. I meant to say that printing a single list of counts aggregated
from all bricks can be tricky and is susceptible to the same entry being counted multiple times
if the inode needs a heal on multiple bricks. Eliminating such duplicates would be rather difficult.
Or, we could have a heal-info sub-command that dumps all the file paths/gfids needing heal from all bricks, and
you could pipe the output through 'sort | uniq | wc -l' to eliminate duplicates. Would that be OK? :)
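For instance, here is a rough sketch of that against today's heal-info output (assuming the default format, where entries appear under each brick as absolute paths or <gfid:...> lines):
gluster volume heal datastore1 info | grep -E '^(/|<gfid:)' | sort -u | wc -l
('sort -u' being shorthand for 'sort | uniq'.) A dedicated sub-command would just make that list explicit instead of relying on the output format. For the record, the syntax proposed in the patch under review is along the lines of 'gluster volume heal <VOLNAME> info summary', though that may change before it is merged.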
-Krutika
> > > Also, it would be great if the heal info command could return faster;
> > > sometimes it takes over a minute.
> >
>
> > Yeah, I think part of the problem could be the eager-lock feature, which
> > causes the GlusterFS client process to not relinquish the network lock on
> > the file soon enough, leaving the heal-info utility blocked for a longer
> > duration.
>
> > There is an enhancement Anuradha Talur is working on where heal-info would
> > do away with taking locks altogether. Once that is in place, heal-info
> > should return faster.
>
> Excellent, I look forward to that. Even if removing the locks results in
> the occasional inaccurate count, I don't think that would matter - from my
> POV it's an indicator, not an absolute.
> Thanks,
> --
> Lindsay Mathieson
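P.S. Until Anuradha's lock-free heal-info lands, if the delay really hurts, one experiment is to turn eager-lock off and see whether heal-info returns faster (an assumption on my part that it will help here, and it trades away write performance, so please test on a non-critical volume first):
gluster volume set datastore1 cluster.eager-lock off
It can be restored with 'gluster volume set datastore1 cluster.eager-lock on', which is the default.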