[Bugs] [Bug 1287503] Full heal of volume fails on some nodes "Commit failed on X", and glustershd logs "Couldn't get xlator xl-0"

bugzilla at redhat.com bugzilla at redhat.com
Thu Dec 10 06:42:53 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1287503



--- Comment #6 from Vijay Bellur <vbellur at redhat.com> ---
COMMIT: http://review.gluster.org/12843 committed in master by Atin Mukherjee
(amukherj at redhat.com) 
------
commit d57a5a57b8e87caffce94ed497240b37172f4a27
Author: Ravishankar N <root at ravi2.(none)>
Date:   Wed Dec 2 08:20:46 2015 +0000

    glusterd: add pending_node only if hxlator_count is valid

    Fixes a regression introduced by commit
    0ef62933649392051e73fe01c028e41baddec489 . See BZ for bug
    description.

    Problem:
        To perform GLUSTERD_BRICK_XLATOR_OP, the rpc requires the number of
        xlators (n) the op needs to be performed on, and the xlator names are
        populated in a dictionary with xl-0, xl-1, ... xl-n-1 as keys. When
        "volume heal full" is executed, glustershd on the local node may or
        may not be selected by glusterd to perform the heal for each replica
        group. Glusterd should send the XLATOR_OP rpc to the shd running on
        the same node only when glustershd on that node is selected at least
        once. This bug occurs when glusterd sends the rpc to the local
        glustershd even when it is not selected for any of the replica
        groups.

    Fix:
        Don't send the rpc to the local glustershd when it is not selected
        even once.

    Change-Id: I2c8217a8f00f6ad5d0c6a67fa56e476457803e08
    BUG: 1287503
    Signed-off-by: Ravishankar N <ravishankar at redhat.com>
    Reviewed-on: http://review.gluster.org/12843
    Tested-by: NetBSD Build System <jenkins at build.gluster.org>
    Tested-by: Gluster Build System <jenkins at build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu at redhat.com>
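
For readers unfamiliar with the request format, here is a minimal,
self-contained C sketch of the key-naming scheme the Problem section above
describes: the xlator names for a GLUSTERD_BRICK_XLATOR_OP request are stored
under the keys xl-0, xl-1, ... xl-n-1 together with a count. This is
illustrative only, not glusterd source; a fixed array stands in for gluster's
dict_t, and the struct, helper name and the "testvol" volume name are invented
for the example.

/* Illustrative sketch (not glusterd source): xlator names for a
 * GLUSTERD_BRICK_XLATOR_OP request keyed as "xl-0" ... "xl-<n-1>",
 * plus a count. A fixed array stands in for gluster's dict_t. */
#include <stdio.h>

#define MAX_XLATORS 16

struct xlator_op_req {
    int  count;                  /* number of xlators the op targets */
    char keys[MAX_XLATORS][16];  /* "xl-0", "xl-1", ... */
    char names[MAX_XLATORS][64]; /* xlator names, e.g. "testvol-replicate-0" */
};

/* Record one selected xlator under the next "xl-<i>" key. */
static void add_xlator(struct xlator_op_req *req, const char *name)
{
    if (req->count >= MAX_XLATORS)
        return;
    snprintf(req->keys[req->count], sizeof(req->keys[0]), "xl-%d", req->count);
    snprintf(req->names[req->count], sizeof(req->names[0]), "%s", name);
    req->count++;
}

int main(void)
{
    struct xlator_op_req req = { 0 };

    /* Hypothetical replica groups for which the local shd was selected. */
    add_xlator(&req, "testvol-replicate-0");
    add_xlator(&req, "testvol-replicate-2");

    for (int i = 0; i < req.count; i++)
        printf("%s = %s\n", req.keys[i], req.names[i]);
    printf("count = %d\n", req.count);
    return 0;
}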

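And a similarly hedged sketch of the guard the Fix describes: the local
self-heal daemon should only be handed the XLATOR_OP rpc when hxlator_count is
non-zero, i.e. when it was selected for at least one replica group. The
function names below are hypothetical, not the actual glusterd code paths.

/* Sketch of the guard the fix describes (assumed shape, not the actual
 * glusterd code): only hand the XLATOR_OP rpc to the local glustershd
 * when it was selected for at least one replica group. */
#include <stdio.h>

/* Hypothetical stand-in for queueing the rpc to the local glustershd. */
static void send_xlator_op_to_local_shd(int hxlator_count)
{
    printf("sending XLATOR_OP for %d xlator(s) to local glustershd\n",
           hxlator_count);
}

static void maybe_add_pending_node(int hxlator_count)
{
    /* Before the fix the rpc went out unconditionally, so an unselected
     * local shd received a request naming zero xlators and logged
     * "Couldn't get xlator xl-0". */
    if (hxlator_count <= 0) {
        printf("local glustershd not selected; skipping rpc\n");
        return;
    }
    send_xlator_op_to_local_shd(hxlator_count);
}

int main(void)
{
    maybe_add_pending_node(0); /* skipped */
    maybe_add_pending_node(2); /* rpc would be sent */
    return 0;
}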

