[Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.

bugzilla at redhat.com bugzilla at redhat.com
Thu Dec 29 11:58:03 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1403120

nchilaka <nchilaka at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|ON_QA                       |VERIFIED



--- Comment #6 from nchilaka <nchilaka at redhat.com> ---
Validation:
I ran the test on 3.8.4-10 and the fix is working.

1. Create a 1x2 replica volume on a 2-node cluster.
2. FUSE-mount the volume and create 2000 files.
3. Bring one brick down and write to those files, leaving 2000 pending data
heals.
4. Bring the brick back up and launch an index heal.
5. The shd log on the source brick prints completed heals for the processed
files.
6. Before the heal completes, run `gluster vol set volname self-heal-daemon
off`.
7. The heal stops as expected.
8. Re-enable the shd: `gluster vol set volname self-heal-daemon on`.
9. Observe the shd log: healing resumes and the log is populated with heal
information.
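The steps above can be sketched as a sequence of gluster CLI commands. This is only an illustrative outline, not the exact commands used in verification: the volume name (testvol), node names (node1, node2), brick paths, and mount point are all placeholder assumptions, and the method of bringing a brick down varies by setup.

```shell
# Sketch of the verification steps; names and paths are placeholders.

# 1. Create and start a 1x2 replica volume on a 2-node cluster.
gluster volume create testvol replica 2 node1:/bricks/b1 node2:/bricks/b2
gluster volume start testvol

# 2. FUSE-mount the volume and create 2000 files.
mount -t glusterfs node1:/testvol /mnt/testvol
for i in $(seq 1 2000); do echo data > "/mnt/testvol/file$i"; done

# 3. Bring one brick down (e.g. kill that brick's glusterfsd process on
#    node2; the exact pid lookup depends on the setup), then write to the
#    files so each accumulates a pending data heal.
for i in $(seq 1 2000); do echo more >> "/mnt/testvol/file$i"; done

# 4. Bring the brick back up and launch an index heal.
gluster volume start testvol force
gluster volume heal testvol

# 6./8. Disable the self-heal daemon mid-heal, then re-enable it.
gluster volume set testvol self-heal-daemon off
gluster volume set testvol self-heal-daemon on

# 9. Confirm healing resumed; check pending entries and the shd log.
gluster volume heal testvol info
```

With the fix, the shd log should show heals progressing again after step 8 instead of the files remaining unhealed.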

Moving to VERIFIED.

While verifying, I hit bz 1409084 - heal enable/disable restarts the
self-heal daemon and we don't see any files getting healed.
