[Bugs] [Bug 1236050] Disperse volume: fuse mount hung after self healing
bugzilla at redhat.com
Thu Aug 6 14:07:10 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1236050
--- Comment #6 from Backer <mdfakkeer at gmail.com> ---
I have created a new volume once again and confirmed the bug.
root at gfs-tst-08:/home/gfsadmin# gluster volume create vaulttest52 disperse-data 3 redundancy 1 10.1.2.238:/media/disk{1..4} force
root at gfs-tst-08:/home/gfsadmin# gluster v start vaulttest52
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1574
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 49175 0 Y 1590
NFS Server on localhost 2049 0 Y 1558
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
root at gfs-tst-08:/home/gfsadmin# gluster v info
Volume Name: vaulttest52
Type: Disperse
Volume ID: 0b0b3f8f-acb9-4e2c-a029-fcb89f85b1e7
Status: Started
Number of Bricks: 1 x (3 + 1) = 4
Transport-type: tcp
Bricks:
Brick1: 10.1.2.238:/media/disk1
Brick2: 10.1.2.238:/media/disk2
Brick3: 10.1.2.238:/media/disk3
Brick4: 10.1.2.238:/media/disk4
Options Reconfigured:
performance.readdir-ahead: on
gfsadmin at gfs-tst-09:/mnt/gluster$ sudo dd if=/dev/urandom of=1.txt bs=1M count=2
2+0 records in
2+0 records out
2097152 bytes (2.1 MB) copied, 0.208704 s, 10.0 MB/s
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 1.txt
1233b5321315c05abb4668cc9a1d9d25 1.txt
root at gfs-tst-08:/home/gfsadmin# ls -l -h /media/disk{1..4}
/media/disk1:
total 960K
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
/media/disk2:
total 960K
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
/media/disk3:
total 960K
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
/media/disk4:
total 960K
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
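As an aside (not from the report itself): the 683K per-brick size above is consistent with how a disperse 3+1 volume stripes data, where each brick stores roughly file_size / data_bricks, rounded up to whole stripes. A rough sketch, assuming EC's 512-byte chunk size per fragment:

```python
import math

def fragment_size(file_size, data_bricks, chunk=512):
    # Each stripe consumes data_bricks * chunk bytes of file data and
    # writes one chunk-sized fragment to every brick (data and redundancy).
    stripes = math.ceil(file_size / (data_bricks * chunk))
    return stripes * chunk

# 2 MiB file on a disperse-data 3 volume:
print(fragment_size(2 * 1024 * 1024, 3))  # 699392 bytes, i.e. the 683K shown by ls -h
```

This is only back-of-the-envelope arithmetic to show the on-brick sizes are expected, not a claim about Gluster's exact internal layout.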
root at gfs-tst-08:/home/gfsadmin# kill -9 1574
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 N/A N/A N N/A
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 49175 0 Y 1590
NFS Server on localhost 2049 0 Y 1558
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
gfsadmin at gfs-tst-09:/mnt/gluster$ sudo dd if=/dev/urandom of=2.txt bs=1M count=2
2+0 records in
2+0 records out
2097152 bytes (2.1 MB) copied, 0.205401 s, 10.2 MB/s
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 2.txt
9c8b37847622efbf2ec75c683166de97 2.txt
root at gfs-tst-08:/home/gfsadmin# ls -l -h /media/disk{1..4}
/media/disk1:
total 960K
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
/media/disk2:
total 1.9M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
/media/disk3:
total 1.9M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
/media/disk4:
total 1.4M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
root at gfs-tst-08:/home/gfsadmin# gluster v start vaulttest52 force
volume start: vaulttest52: success
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1739
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 49175 0 Y 1590
NFS Server on localhost 2049 0 Y 1758
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
root at gfs-tst-08:/home/gfsadmin# gluster v heal vaulttest52
Launching heal operation to perform index self heal on volume vaulttest52 has been successful
Use heal info commands to check status
root at gfs-tst-08:/home/gfsadmin# gluster v heal vaulttest52 info
Brick gfs-tst-08:/media/disk1/
Number of entries: 0
Brick gfs-tst-08:/media/disk2/
Number of entries: 0
Brick gfs-tst-08:/media/disk3/
Number of entries: 0
Brick gfs-tst-08:/media/disk4/
Number of entries: 0
root at gfs-tst-08:/home/gfsadmin# ls -l -h /media/disk{1..4}
/media/disk1:
total 728K
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
/media/disk2:
total 1.4M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
/media/disk3:
total 1.4M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
/media/disk4:
total 1.4M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1739
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 49175 0 Y 1590
NFS Server on localhost 2049 0 Y 1758
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
root at gfs-tst-08:/home/gfsadmin# kill -9 1590
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1739
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 N/A N/A N N/A
NFS Server on localhost 2049 0 Y 1758
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 2.txt
96f6f469f4b743b4a575fdc408b5f007 2.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 2.txt
96f6f469f4b743b4a575fdc408b5f007 2.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 2.txt
96f6f469f4b743b4a575fdc408b5f007 2.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ ls
1.txt 2.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ ls
1.txt 2.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ ls
1.txt 2.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 2.txt
96f6f469f4b743b4a575fdc408b5f007 2.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 2.txt
96f6f469f4b743b4a575fdc408b5f007 2.txt
=====================================
The MD5SUM has been changed
=====================================
root at gfs-tst-08:/home/gfsadmin# gluster v start vaulttest52 force
volume start: vaulttest52: success
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1739
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 49175 0 Y 1852
NFS Server on localhost 2049 0 Y 1871
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
======================================
disabled perf-xlators
=====================================
root at gfs-tst-08:/home/gfsadmin# gluster volume set vaulttest52 performance.quick-read off
gluster volume set vaulttest52 performance.io-cache off
gluster volume set vaulttest52 performance.write-behind off
gluster volume set vaulttest52 performance.stat-prefetch off
gluster volume set vaulttest52 performance.read-ahead off
gluster volume set vaulttest52 performance.open-behind off
volume set: success
root at gfs-tst-08:/home/gfsadmin# gluster volume set vaulttest52 performance.io-cache off
volume set: success
root at gfs-tst-08:/home/gfsadmin# gluster volume set vaulttest52 performance.write-behind off
volume set: success
root at gfs-tst-08:/home/gfsadmin# gluster volume set vaulttest52 performance.stat-prefetch off
volume set: success
root at gfs-tst-08:/home/gfsadmin# gluster volume set vaulttest52 performance.read-ahead off
volume set: success
root at gfs-tst-08:/home/gfsadmin# gluster volume set vaulttest52 performance.open-behind off
volume set: success
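The six performance-xlator settings above could equally be applied in a loop. A hypothetical convenience sketch (it prints the commands rather than executing them, since it assumes no live gluster cluster is available):

```shell
# Print (not execute) the same six performance-xlator "off" settings
# that were applied one by one above. Volume name matches the report.
opts="quick-read io-cache write-behind stat-prefetch read-ahead open-behind"
for opt in $opts; do
  echo "gluster volume set vaulttest52 performance.$opt off"
done
```

Piping the output to `sh` on the server would run the real commands; the option names themselves are taken verbatim from the transcript.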
root at gfs-tst-08:/home/gfsadmin# gluster v info
Volume Name: vaulttest52
Type: Disperse
Volume ID: 0b0b3f8f-acb9-4e2c-a029-fcb89f85b1e7
Status: Started
Number of Bricks: 1 x (3 + 1) = 4
Transport-type: tcp
Bricks:
Brick1: 10.1.2.238:/media/disk1
Brick2: 10.1.2.238:/media/disk2
Brick3: 10.1.2.238:/media/disk3
Brick4: 10.1.2.238:/media/disk4
Options Reconfigured:
performance.open-behind: off
performance.read-ahead: off
performance.stat-prefetch: off
performance.write-behind: off
performance.io-cache: off
performance.quick-read: off
performance.readdir-ahead: on
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1739
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 49175 0 Y 1852
NFS Server on localhost 2049 0 Y 1871
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
root at gfs-tst-08:/home/gfsadmin# kill -9 1852
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1739
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 N/A N/A N N/A
NFS Server on localhost 2049 0 Y 1871
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
gfsadmin at gfs-tst-09:/mnt/gluster$ sudo dd if=/dev/urandom of=3.txt bs=5M count=10
10+0 records in
10+0 records out
52428800 bytes (52 MB) copied, 5.40714 s, 9.7 MB/s
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 3.txt
fa9d9d3e298d01c8cf54855968784b83 3.txt
root at gfs-tst-08:/home/gfsadmin# gluster v start vaulttest52 force
volume start: vaulttest52: success
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1739
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 49175 0 Y 2017
NFS Server on localhost N/A N/A N N/A
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
root at gfs-tst-08:/home/gfsadmin# gluster v heal vaulttest52
Launching heal operation to perform index self heal on volume vaulttest52 has been successful
Use heal info commands to check status
root at gfs-tst-08:/home/gfsadmin# gluster v heal vaulttest52 info
Brick gfs-tst-08:/media/disk1/
Number of entries: 0
Brick gfs-tst-08:/media/disk2/
Number of entries: 0
Brick gfs-tst-08:/media/disk3/
Number of entries: 0
Brick gfs-tst-08:/media/disk4/
Number of entries: 0
root at gfs-tst-08:/home/gfsadmin# ls -l -h /media/disk{1..4}
/media/disk1:
total 33M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
-rw-r--r-- 2 root root 17M Aug 6 19:26 3.txt
/media/disk2:
total 34M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
-rw-r--r-- 2 root root 17M Aug 6 19:26 3.txt
/media/disk3:
total 34M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
-rw-r--r-- 2 root root 17M Aug 6 19:26 3.txt
/media/disk4:
total 1.4M
-rw-r--r-- 2 root root 683K Aug 6 19:14 1.txt
-rw-r--r-- 2 root root 683K Aug 6 19:16 2.txt
-rw-r--r-- 2 root root 17M Aug 6 19:26 3.txt
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1739
Brick 10.1.2.238:/media/disk2 49173 0 Y 1582
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 49175 0 Y 2017
NFS Server on localhost 2049 0 Y 2036
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
root at gfs-tst-08:/home/gfsadmin# kill -9 1582
root at gfs-tst-08:/home/gfsadmin# gluster v status
Status of volume: vaulttest52
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.1.2.238:/media/disk1 49172 0 Y 1739
Brick 10.1.2.238:/media/disk2 N/A N/A N N/A
Brick 10.1.2.238:/media/disk3 49174 0 Y 1595
Brick 10.1.2.238:/media/disk4 49175 0 Y 2017
NFS Server on localhost 2049 0 Y 2036
Task Status of Volume vaulttest52
------------------------------------------------------------------------------
There are no active volume tasks
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 3.txt
fa9d9d3e298d01c8cf54855968784b83 3.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 3.txt
fa9d9d3e298d01c8cf54855968784b83 3.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 3.txt
fa9d9d3e298d01c8cf54855968784b83 3.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 3.txt
fa9d9d3e298d01c8cf54855968784b83 3.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ ls
1.txt 2.txt 3.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ ls
1.txt 2.txt 3.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ ls
1.txt 2.txt 3.txt
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 3.txt
ea50603ce500b29c73dca6a9c733eb7a 3.txt
gfsadmin at gfs-tst-09:/$ sudo umount /mnt/gluster
gfsadmin at gfs-tst-09:/$ sudo mount -t glusterfs 10.1.2.238:/vaulttest52 /mnt/gluster/
gfsadmin at gfs-tst-09:/$ cd /mnt/gluster/
gfsadmin at gfs-tst-09:/mnt/gluster$ md5sum 3.txt
ea50603ce500b29c73dca6a9c733eb7a 3.txt
After running the ls command in the mounted directory, the md5sum hash changed.
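The checksum comparisons above were done by hand with repeated md5sum runs; when re-reproducing this, a small helper can automate the before/after check. This is my own hypothetical snippet, not part of the report:

```python
import hashlib

def md5_of(path, bufsize=1 << 20):
    # Stream the file in 1 MiB buffers so large test files
    # (e.g. the 50 MB 3.txt) need not fit in memory at once.
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()
```

Recording md5_of("/mnt/gluster/3.txt") before killing a brick and comparing after the heal would flag the silent change reported here without manual rechecking.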