[Bugs] [Bug 1698861] New: Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks.

bugzilla at redhat.com
Thu Apr 11 11:45:43 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1698861

            Bug ID: 1698861
           Summary: Renaming a directory when 2 bricks of multiple
                    disperse subvols are down leaves both old and new dirs
                    on the bricks.
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: disperse
          Assignee: bugs at gluster.org
          Reporter: nbalacha at redhat.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



Description of problem:

Running the following .t script leaves both olddir and newdir visible from
the mount point, and listing either directory shows no files.

Steps to Reproduce:

#!/bin/bash                                                                     

. $(dirname $0)/../../include.rc                                                
. $(dirname $0)/../../volume.rc                                                 
. $(dirname $0)/../../common-utils.rc                                           

cleanup                                                                         

TEST glusterd                                                                   
TEST pidof glusterd                                                             

TEST $CLI volume create $V0 disperse 6 disperse-data 4 $H0:$B0/$V0-{1..24} force
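# The 24 bricks form 4 disperse subvolumes of 6 bricks each (4 data +
# 2 redundancy), assigned in the order listed: bricks 1-6, 7-12, 13-18,
# 19-24. Each subvolume therefore tolerates up to 2 bricks going down.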
TEST $CLI volume start $V0                                                      

TEST glusterfs -s $H0 --volfile-id $V0 $M0                                      

ls $M0/                                                                         

mkdir $M0/olddir                                                                
mkdir $M0/olddir/subdir                                                         
touch $M0/olddir/file-{1..10}                                                   

ls -lR $M0

TEST kill_brick $V0 $H0 $B0/$V0-1                                               
TEST kill_brick $V0 $H0 $B0/$V0-2                                               
TEST kill_brick $V0 $H0 $B0/$V0-7                                               
TEST kill_brick $V0 $H0 $B0/$V0-8                                               
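# Bricks 1-2 are in the first disperse subvolume and bricks 7-8 in the
# second, so two of the four subvolumes are now at their redundancy
# limit (2 of 6 bricks down) but still available for I/O.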

TEST mv $M0/olddir $M0/newdir      


# Start all bricks                                                              

TEST $CLI volume start $V0 force                                                
$CLI volume status                                                              

# It takes a while for the client to reconnect to the brick                     
sleep 5                                                                         

ls -l $M0                                                                       
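# Optional backend check (not part of the original reproducer): with
# the bug present, olddir remains on the bricks that were down during
# the rename while newdir exists on the others.
ls -d $B0/$V0-*/olddir $B0/$V0-*/newdir 2>/dev/null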


# Cleanup                                                                       
#cleanup                                                                        


Version-Release number of selected component (if applicable):


How reproducible:
Consistently



Actual results:
[root at rhgs313-6 tests]# ls -lR /mnt/glusterfs/0/
/mnt/glusterfs/0/:
total 8
drwxr-xr-x. 2 root root 4096 Apr 11 17:12 newdir
drwxr-xr-x. 2 root root 4096 Apr 11 17:12 olddir

/mnt/glusterfs/0/newdir:
total 0

/mnt/glusterfs/0/olddir:
total 0
[root at rhgs313-6 tests]# 
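One way to confirm that the leftover olddir is the partially renamed
directory rather than a newly created one (a debugging sketch, not part of
the original report; the brick paths reuse the test's $B0/$V0-N variables)
is to compare the gfid xattr of the two directories on the backend:

# On a brick that was down during the rename:
getfattr -n trusted.gfid -e hex $B0/$V0-1/olddir
# On a brick that stayed up:
getfattr -n trusted.gfid -e hex $B0/$V0-3/newdir
# If the rename was applied only partially, both directories report the
# same trusted.gfid value.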


Expected results:
Only newdir should be visible from the mount point, and it should contain
subdir and file-1 through file-10.

Additional info:
