[Bugs] [Bug 1208564] ls operation hangs after the first brick is killed for distributed-disperse volume
bugzilla at redhat.com
Wed May 20 15:17:17 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1208564
--- Comment #3 from Fang Huang <fanghuang.data at yahoo.com> ---
I tested the latest release-3.6 commit, 24b3db07685fcc39f08e15b1b5e5cd1262b91590,
and the bug still exists. The following is my test script.
------------------------------------------------
# cat tests/bugs/bug-1208564.t
#!/bin/bash
. $(dirname $0)/../include.rc
. $(dirname $0)/../volume.rc
function file_count() {
ls $1 |wc -l
}
cleanup
TEST glusterd
TEST pidof glusterd
#TEST mkdir -p $B0/${V0}{0,1,2,3,4,5,6,7}
TEST $CLI volume create $V0 disperse 4 redundancy 1 $H0:$B0/${v0}{0,1,2,3,4,5,6,7}
EXPECT "$V0" volinfo_field $V0 'Volume Name'
EXPECT 'Created' volinfo_field $V0 'Status'
EXPECT '8' brick_count $V0
TEST $CLI volume start $V0
EXPECT_WITHIN $PROCESS_UP_TIMEOUT 'Started' volinfo_field $V0 'Status'
TEST glusterfs --entry-timeout=0 --attribute-timeout=0 -s $H0 --volfile-id $V0 $M0
TEST touch $M0/file{0..99}
TEST kill_brick $V0 $H0 $B0/0
EXPECT_WITHIN $PROCESS_UP_TIMEOUT '100' file_count $M0
cleanup
--------------------------------------------
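In case it helps, the script can be run through prove from the source tree. This is just a sketch; it assumes a built checkout so that tests/include.rc, tests/volume.rc and the test helpers are available, and the path is only an example:
------------------------------------------------
cd /path/to/glusterfs                 # release-3.6 checkout under test
cp bug-1208564.t tests/bugs/
prove -vf tests/bugs/bug-1208564.t    # -v: verbose TAP output, -f: show failures
------------------------------------------------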
The test gets stuck at the last file_count check.
More details about my test:
----------
OS version:
Red Hat Enterprise Linux Server release 6.5 (Santiago) with libc-2.12
and CentOS release 6.6 (Final) with libc-2.12
# gluster --version
glusterfs 3.6.4beta1 built on May 20 2015 22:16:38
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
# gluster v info all
Volume Name: patchy
Type: Distributed-Disperse
Volume ID: 84374ec8-1aa5-4126-908d-c458d605d905
Status: Started
Number of Bricks: 2 x (3 + 1) = 8
Transport-type: tcp
Bricks:
Brick1: mn1:/d/backends/0
Brick2: mn1:/d/backends/1
Brick3: mn1:/d/backends/2
Brick4: mn1:/d/backends/3
Brick5: mn1:/d/backends/4
Brick6: mn1:/d/backends/5
Brick7: mn1:/d/backends/6
Brick8: mn1:/d/backends/7
# gluster v status
Status of volume: patchy
Gluster process                              Port    Online  Pid
------------------------------------------------------------------------------
Brick mn1:/d/backends/0                      N/A     N       16557
Brick mn1:/d/backends/1                      49153   Y       16568
Brick mn1:/d/backends/2                      49154   Y       16579
Brick mn1:/d/backends/3                      49155   Y       16590
Brick mn1:/d/backends/4                      49156   Y       16601
Brick mn1:/d/backends/5                      49157   Y       16612
Brick mn1:/d/backends/6                      49158   Y       16623
Brick mn1:/d/backends/7                      49159   Y       16634
NFS Server on localhost                      2049    Y       16652
Task Status of Volume patchy
------------------------------------------------------------------------------
There are no active volume tasks
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/35000c5005e5a3337p5
1784110888 85086720 1608389928 6% /
tmpfs 16416508 4 16416504 1% /dev/shm
/dev/mapper/35000c5005e5a3337p1
999320 72752 874140 8% /boot
/dev/mapper/35000c5005e5a3337p2
103081248 61028 97777340 1% /test
mn1:patchy 10704665216 510520320 9650339456 6% /mnt/glusterfs/0
# ls /mnt/glusterfs/0/
The `ls` command hangs here.
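To gather more data while ls is blocked, something like the following can be used. The pgrep patterns are my own guesses based on the mount command above, and by default the client statedump should land under /var/run/gluster:
------------------------------------------------
# Which syscall is ls blocked in?
strace -p $(pgrep -x ls)

# Ask the fuse client to dump its state (call frames, inode/fd tables);
# the dump is written to the statedump directory (/var/run/gluster by default).
kill -USR1 $(pgrep -f 'glusterfs.*volfile-id patchy')
------------------------------------------------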
PS: On the master branch the test passes.
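Since master already passes, a bisect over master should point at the commit that fixes this, which could then be backported to release-3.6. A rough sketch, assuming the hang also reproduces at the point where release-3.6 branched off master:
------------------------------------------------
cd /path/to/glusterfs
BASE=$(git merge-base origin/release-3.6 origin/master)   # where 3.6 branched off

git bisect start
git bisect bad  origin/master   # "bad" here means "behaviour changed, i.e. fixed"
git bisect good "$BASE"         # still hangs
# At each step: build, run the reproducer above, then mark the commit:
#   git bisect bad     # if ls no longer hangs
#   git bisect good    # if it still hangs
------------------------------------------------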