[Bugs] [Bug 1170942] New: More than redundancy bricks down, leads to the persistent write return IO error, then the whole file can not be read/write any longer, even all bricks going up
bugzilla at redhat.com
Fri Dec 5 07:32:08 UTC 2014
https://bugzilla.redhat.com/show_bug.cgi?id=1170942
Bug ID: 1170942
Summary: More than redundancy bricks down, leads to the
persistent write return IO error, then the whole file
can not be read/write any longer, even all bricks
going up
Product: GlusterFS
Version: 3.6.1
Component: disperse
Severity: high
Assignee: bugs at gluster.org
Reporter: jiademing.dd at gmail.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Description of problem:
When more bricks than the volume's redundancy count go down, subsequent writes
persistently return IO errors. We can accept this result, but after that, the
whole file can no longer be read or written, even after all bricks come back up.
Version-Release number of selected component (if applicable):
3.6.1
How reproducible:
Steps to Reproduce:
1.I create a distribute-disperse volume test
Volume Name: test
Type: Distributed-Disperse
Volume ID: 17149c08-fba6-4061-892f-f815aecff1c9
Status: Started
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node-1:/sda
Brick2: node-1:/sdb
Brick3: node-1:/sdc
Brick4: node-2:/sda
Brick5: node-2:/sdb
Brick6: node-2:/sdc
2. I run dd if=/dev/zero of=/mountpoint/test.bak bs=1M; I know test.bak is
stored on Brick4, Brick5 and Brick6.
3. During the sustained write, I kill Brick4; the write continues normally.
After that, I kill Brick5, and the mountpoint returns an IO error.
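The steps above can be sketched as a shell session. The volume layout, brick
paths, and the dd command come from the report; the mount point path and the
way the brick processes are killed are assumptions, since the report does not
say how the bricks were brought down:

```shell
# Create a 2 x (2+1) distributed-disperse volume (redundancy = 1 per subvolume),
# matching the volume info in step 1.
gluster volume create test disperse 3 redundancy 1 \
    node-1:/sda node-1:/sdb node-1:/sdc \
    node-2:/sda node-2:/sdb node-2:/sdc
gluster volume start test
mount -t glusterfs node-1:/test /mountpoint

# Sustained write; the file lands on the node-2 subvolume (Brick4-6).
dd if=/dev/zero of=/mountpoint/test.bak bs=1M &

# Kill Brick4 (still within redundancy): the write continues.
# Then kill Brick5 (now beyond redundancy): the write returns EIO.
# Hypothetical: look up the brick PIDs from 'gluster volume status test'
# and kill them, e.g.:
#   kill -9 <pid-of-brick-node-2:/sda>
#   kill -9 <pid-of-brick-node-2:/sdb>
```

With redundancy 1, a (2+1) subvolume tolerates one brick down; losing a second brick correctly makes the file unavailable, and the bug is only that availability is never regained.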
Actual results:
The whole file can no longer be read or written, even after all bricks come
back up.
Expected results:
After the bricks come back up, we should be able to read the data that was
written before the IO error.
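The expected recovery could be checked with something like the following sketch
(restarting dead brick processes with 'start force' and re-reading the file;
the mount point path is an assumption carried over from the reproduction steps):

```shell
# Restart the killed brick processes without disturbing the running ones.
gluster volume start test force

# Expected: the data written before the IO error reads back cleanly.
dd if=/mountpoint/test.bak of=/dev/null bs=1M
```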
Additional info: