[Gluster-Maintainers] glusterfs-3.8: User understandable release note needed for new CLI command for ESH
Krutika Dhananjay
kdhananj at redhat.com
Mon Dec 12 04:51:42 UTC 2016
On Mon, Dec 12, 2016 at 10:17 AM, Niels de Vos <ndevos at redhat.com> wrote:
> On Mon, Dec 12, 2016 at 09:52:04AM +0530, Krutika Dhananjay wrote:
> > With this fix, the user does not need to worry about when to
> > enable/disable the option - the CLI command itself performs the
> > necessary checks before allowing the "enable" command to proceed.
> > What are those checks?
> > * Whether heal is already needed on the volume
> > * Whether any of the replicas is down
> > In both cases, the command will fail, since AFR will be switching
> > from creating heal indices (markers for files that need heal) under
> > .glusterfs/indices/xattrop to creating them under
> > .glusterfs/indices/entry-changes.
> > The moment this switch happens, self-heal-daemon will no longer crawl
> > an entire directory that needs heal; instead it looks only for the
> > exact names recorded for that directory under
> > .glusterfs/indices/entry-changes. This might cause self-heal to miss
> > healing some entries (because directories that already needed heal
> > before the switch won't have any indices under
> > .glusterfs/indices/entry-changes) and to mistakenly unset the pending
> > heal xattrs even though the individual replicas are not in sync.
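> >
> > For reference, a rough sketch of the intended workflow (the volume
> > name "testvol" is made up here, and exact error messages may differ):
> >
> >     # Try to enable granular entry heal; the CLI runs the checks
> >     # above and refuses if heal is pending or any replica is down.
> >     gluster volume heal testvol granular-entry-heal enable
> >
> >     # Confirm there is nothing left to heal before retrying:
> >     gluster volume heal testvol info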
> >
> > When should they enable the option? - When they want to use the
> > feature ;) - which is useful for faster self-healing in use cases
> > with a large number of files under a single directory.
> > For example, it is useful in VM use cases with smaller shard sizes,
> > given that all shards are created under a single directory ".shard".
> > When a shard is created while a replica was down, then once that
> > replica is back up, self-heal - because it maintains granular indices
> > - will know exactly which shard to recreate on the sink, as opposed
> > to crawling the entire .shard directory to find out the same
> > information.
> >
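> > To make that concrete, a schematic look at the index directories on
> > a brick (the brick path and the exact on-disk layout of entry-changes
> > are illustrative, not precise):
> >
> >     # Full-crawl mode: only the parent directory is marked, so shd
> >     # has to crawl all of .shard to find the missing shards.
> >     ls /bricks/b1/.glusterfs/indices/xattrop/
> >
> >     # Granular mode: the exact names created under .shard while the
> >     # replica was down are recorded for the parent directory.
> >     ls /bricks/b1/.glusterfs/indices/entry-changes/
> >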
> > When should they disable the option? - When they don't like the
> > feature, or if/when a bug is found in it.
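> >
> > Disabling would then just be the reverse of the sketch above (same
> > hypothetical volume name):
> >
> >     gluster volume heal testvol granular-entry-heal disable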
>
> Thanks for the details!
>
> > ... speaking of which, can we wait till
> > http://review.gluster.org/#/c/16075/ is also merged into 3.8 before
> > making the release? Although the bug is in AFR core, the likelihood
> > of hitting it is higher with granular entry heal than without it.
> > And I know of at least 3 users who are already using the feature on
> > their production systems. Otherwise we might have to wait one more
> > month for the fix to be taken in, which is quite late IMO.
>
> I do not see a cloned bug for 3.8.7 yet. Could you clone the mainline
> bug and add "glusterfs-3.8.7" in the Blocks field of the new BZ?
>
Thank you! Here it is - https://bugzilla.redhat.com/show_bug.cgi?id=1403646
-Krutika
> Thanks,
> Niels
>
> >
> > -Krutika
> >
> > On Sun, Dec 11, 2016 at 10:23 PM, Niels de Vos <ndevos at redhat.com> wrote:
> >
> > > Could you please pass me a few lines that are understandable for users
> > > so that they know when/if they should enable/disable the new
> > > granular-entry-heal option?
> > >
> > > The bug does not explain a lot, and the commit message is not very user
> > > friendly:
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1398501#c4
> > >
> > > It helps to know what kind of errors/warnings are produced, and
> > > what the recommended action is.
> > >
> > > I'll hold off on pushing the release notes for 3.8.7 until I have
> > > more details. This obviously blocks the release as well.
> > >
> > > Thanks,
> > > Niels
> > >
>