[Bugs] [Bug 1236050] Disperse volume: fuse mount hung after self healing
bugzilla at redhat.com
Wed Aug 5 09:24:47 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1236050
--- Comment #3 from Pranith Kumar K <pkarampu at redhat.com> ---
Hi Backer,
Thanks for the quick reply. Based on your comment, I am assuming no hangs
are observed. Auto-healing after replace-brick/disk replacement is something
we are working on for 3.7.4; until then you need to execute "gluster volume
heal ec2 full", as sketched below.
As for the data corruption bug, I am not able to re-create it; the full
session is below. Let me know if I missed any step:
root at localhost - ~
14:48:24 :) ⚡ glusterd && gluster volume create ec2 disperse 6 redundancy 2
`hostname`:/home/gfs/ec_{0..5} force && gluster volume start ec2 && mount -t
glusterfs `hostname`:/ec2 /mnt/ec2
volume create: ec2: success: please start the volume to access data
volume start: ec2: success
# I disabled the perf xlators so that reads are always served from the bricks
# (a sketch of the script is after its trace below)
root at localhost - ~
14:48:38 :( ⚡ ~/.scripts/disable-perf-xl.sh ec2
+ gluster volume set ec2 performance.quick-read off
volume set: success
+ gluster volume set ec2 performance.io-cache off
volume set: success
+ gluster volume set ec2 performance.write-behind off
volume set: success
+ gluster volume set ec2 performance.stat-prefetch off
volume set: success
+ gluster volume set ec2 performance.read-ahead off
volume set: success
+ gluster volume set ec2 performance.open-behind off
volume set: success
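(For reference, ~/.scripts/disable-perf-xl.sh is just a convenience wrapper;
its contents are not part of this report, but from the "+" trace above it is
presumably equivalent to:)

set -x
for xl in quick-read io-cache write-behind stat-prefetch read-ahead open-behind; do
    gluster volume set "$1" performance.$xl off
done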
root at localhost - ~
14:48:47 :) ⚡ cd /mnt/ec2/
root at localhost - /mnt/ec2
14:48:59 :) ⚡ gluster v status
Status of volume: ec2
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick localhost.localdomain:/home/gfs/ec_0   49152     0          Y       14828
Brick localhost.localdomain:/home/gfs/ec_1   49153     0          Y       14846
Brick localhost.localdomain:/home/gfs/ec_2   49155     0          Y       14864
Brick localhost.localdomain:/home/gfs/ec_3   49156     0          Y       14882
Brick localhost.localdomain:/home/gfs/ec_4   49157     0          Y       14900
Brick localhost.localdomain:/home/gfs/ec_5   49158     0          Y       14918
NFS Server on localhost                      2049      0          Y       14937
Task Status of Volume ec2
------------------------------------------------------------------------------
There are no active volume tasks
root at localhost - /mnt/ec2
14:49:02 :) ⚡ kill -9 14918 14900
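(kill -9 on the two brick PIDs simulates a double disk failure; the PIDs come
from the status output above. A hypothetical one-liner to pick them out of the
status table instead of copying them by hand, assuming the Pid column stays
last:)

gluster volume status ec2 | awk '/ec_4|ec_5/ {print $NF}' | xargs kill -9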
root at localhost - /mnt/ec2
14:49:11 :) ⚡ dd if=/dev/urandom of=1.txt bs=1M count=2
2+0 records in
2+0 records out
2097152 bytes (2.1 MB) copied, 0.153835 s, 13.6 MB/s
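(With "disperse 6 redundancy 2" the volume is 4+2: each file is cut into 4
data fragments plus 2 redundancy fragments, so any 4 live bricks are enough to
complete reads and writes, which is why the dd above succeeds with two bricks
killed. A quick sanity check on the surviving bricks, sizes being approximate
since EC pads fragments to its stripe size:)

ls -l /home/gfs/ec_{0..3}/1.txt
# each fragment should be roughly 2097152 / 4 = 524288 bytes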
root at localhost - /mnt/ec2
14:49:15 :) ⚡ md5sum 1.txt
5ead68d0a60b8134f7daf0e8d1afe19c 1.txt
root at localhost - /mnt/ec2
14:49:23 :) ⚡ gluster v start ec2 force
volume start: ec2: success
root at localhost - /mnt/ec2
14:49:35 :) ⚡ gluster v heal ec2
Launching heal operation to perform index self heal on volume ec2 has been successful
Use heal info commands to check status
root at localhost - /mnt/ec2
14:49:39 :) ⚡ gluster v heal ec2 info
Brick localhost.localdomain:/home/gfs/ec_0/
/1.txt
Number of entries: 1

Brick localhost.localdomain:/home/gfs/ec_1/
/1.txt
Number of entries: 1

Brick localhost.localdomain:/home/gfs/ec_2/
/1.txt
Number of entries: 1

Brick localhost.localdomain:/home/gfs/ec_3/
/1.txt
Number of entries: 1

Brick localhost.localdomain:/home/gfs/ec_4/
Number of entries: 0

Brick localhost.localdomain:/home/gfs/ec_5/
Number of entries: 0
root at localhost - /mnt/ec2
14:49:45 :) ⚡ gluster v heal ec2
Launching heal operation to perform index self heal on volume ec2 has been successful
Use heal info commands to check status
root at localhost - /mnt/ec2
14:49:47 :) ⚡ gluster v heal ec2 info
Brick localhost.localdomain:/home/gfs/ec_0/
Number of entries: 0

Brick localhost.localdomain:/home/gfs/ec_1/
Number of entries: 0

Brick localhost.localdomain:/home/gfs/ec_2/
Number of entries: 0

Brick localhost.localdomain:/home/gfs/ec_3/
Number of entries: 0

Brick localhost.localdomain:/home/gfs/ec_4/
Number of entries: 0

Brick localhost.localdomain:/home/gfs/ec_5/
Number of entries: 0
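(heal info is only a snapshot, which is why it was run twice above. A minimal
polling sketch to wait until the heal index drains, assuming the "Number of
entries:" lines keep this format:)

while gluster volume heal ec2 info | grep -q 'Number of entries: [1-9]'; do
    sleep 5
done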
root at localhost - /mnt/ec2
14:49:51 :) ⚡ gluster v status
Status of volume: ec2
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick localhost.localdomain:/home/gfs/ec_0   49152     0          Y       14828
Brick localhost.localdomain:/home/gfs/ec_1   49153     0          Y       14846
Brick localhost.localdomain:/home/gfs/ec_2   49155     0          Y       14864
Brick localhost.localdomain:/home/gfs/ec_3   49156     0          Y       14882
Brick localhost.localdomain:/home/gfs/ec_4   49157     0          Y       15173
Brick localhost.localdomain:/home/gfs/ec_5   49158     0          Y       15191
NFS Server on localhost                      2049      0          Y       15211
Task Status of Volume ec2
------------------------------------------------------------------------------
There are no active volume tasks
root at localhost - /mnt/ec2
14:49:56 :) ⚡ kill -9 14828 14846
root at localhost - /mnt/ec2
14:50:03 :) ⚡ md5sum 1.txt
5ead68d0a60b8134f7daf0e8d1afe19c 1.txt
root at localhost - /mnt/ec2
14:50:06 :) ⚡ cd
root at localhost - ~
14:50:13 :) ⚡ umount /mnt/ec2
root at localhost - ~
14:50:16 :) ⚡ mount -t glusterfs `hostname`:/ec2 /mnt/ec2
root at localhost - ~
14:50:19 :) ⚡ md5sum /mnt/ec2/1.txt
5ead68d0a60b8134f7daf0e8d1afe19c /mnt/ec2/1.txt
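(The unmount/mount cycle is deliberate: with the perf xlators already off,
remounting drops any remaining client-side state, so this last md5sum has to
be served from the healed bricks. On a test box, flushing the kernel page
cache would be a rough alternative to remounting:)

sync; echo 3 > /proc/sys/vm/drop_caches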