[Bugs] [Bug 1329871] New: tests/basic/afr/heal-info.t fails

bugzilla at redhat.com
Sun Apr 24 01:39:10 UTC 2016


            Bug ID: 1329871
           Summary: tests/basic/afr/heal-info.t fails
           Product: GlusterFS
           Version: mainline
         Component: replicate
          Assignee: bugs at gluster.org
          Reporter: pkarampu at redhat.com
                CC: bugs at gluster.org

Description of problem:
#Test that parallel heal-info command execution doesn't result in spurious
#entries with locking-scheme granular

. $(dirname $0)/../../include.rc
. $(dirname $0)/../../volume.rc


function heal_info_to_file {
        while [ -f $M0/a.txt ]; do
                $CLI volume heal $V0 info | grep -i number | grep -v 0 >> $1
        done
}

function write_and_del_file {
        dd of=$M0/a.txt if=/dev/zero bs=1024k count=100
        rm -f $M0/a.txt
}
TEST glusterd
TEST pidof glusterd
TEST $CLI volume create $V0 replica 2 $H0:$B0/brick{0,1}
TEST $CLI volume set $V0 locking-scheme granular
TEST $CLI volume start $V0
TEST $GFS --volfile-id=$V0 --volfile-server=$H0 $M0;
TEST touch $M0/a.txt
write_and_del_file &
touch $B0/f1 $B0/f2
heal_info_to_file $B0/f1 &
heal_info_to_file $B0/f2 &
EXPECT "^$" cat $B0/f1
EXPECT "^$" cat $B0/f2
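To see why the test expects empty capture files, here is a hedged illustration of the grep pipeline in heal_info_to_file against hypothetical `volume heal info` output (the sample text below is made up for demonstration, not real gluster output):

```shell
# Hypothetical sample of 'gluster volume heal <vol> info' output:
sample_output='Brick host:/brick0
Number of entries: 0
Brick host:/brick1
Number of entries: 2'

# Only "Number of entries" lines with a non-zero count survive the
# pipeline, so a healthy run appends nothing and EXPECT "^$" passes.
printf '%s\n' "$sample_output" | grep -i number | grep -v 0
```

Any line that reaches the capture file therefore represents a brick reporting pending heals, which is exactly the spurious condition the test guards against.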


This test failed twice on NetBSD. Debugging showed that if an unlink is in
progress while the 'dirty' index is being checked for heal, one brick returns
ENOENT for the file while the other still returns success. The mismatch makes
heal-info assume the file needs heal, producing the spurious entry that fails
the test.
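The race can be sketched in miniature as follows. This is an illustration only, not the actual AFR source: the variable names and the equality check stand in for the per-brick replies heal-info compares.

```shell
# Hypothetical sketch of the race: the unlink has completed on one
# brick but not yet on the other when the 'dirty' index is inspected.
brick0_stat=ENOENT   # file already removed on brick0
brick1_stat=OK       # unlink has not yet reached brick1

# Disagreeing replies are interpreted as "file needs heal", even though
# the file is simply mid-deletion, yielding the spurious heal-info entry.
if [ "$brick0_stat" != "$brick1_stat" ]; then
        echo "spurious: file reported as needing heal"
fi
```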
Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:
