[Bugs] [Bug 1447390] New: Brick Multiplexing : - .trashcan not able to heal after replace brick
bugzilla at redhat.com
Tue May 2 15:21:20 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1447390
Bug ID: 1447390
Summary: Brick Multiplexing: .trashcan not able to heal after replace brick
Product: GlusterFS
Version: mainline
Component: core
Severity: high
Assignee: bugs at gluster.org
Reporter: jthottan at redhat.com
CC: amukherj at redhat.com, anoopcs at redhat.com,
bugs at gluster.org, ksandha at redhat.com,
nchilaka at redhat.com, rhinduja at redhat.com,
rhs-bugs at redhat.com, storage-qa-internal at redhat.com
+++ This bug was initially created as a clone of Bug #1443939 +++
Description of problem:
The self-heal daemon is not able to heal .trashcan after a replace-brick operation.
Version-Release number of selected component (if applicable):
mainline
How reproducible:
100%
Steps to Reproduce:
1. Create 100 files on an arbiter volume (3 x (2 + 1)).
2. Replace brick b1 with a new brick (bnew).
3. Start renaming the files.
4. .trashcan remains unhealed; / shows as possibly undergoing heal (see the CLI sketch below).
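For reference, a rough sketch of the CLI sequence behind these steps; the mount point, file names, and the new brick path (b1_new) are assumptions for illustration, not taken from the report:

  # mount the volume and create the initial files
  mount -t glusterfs 10.70.47.60:/testvol /mnt/testvol
  for i in $(seq 1 100); do dd if=/dev/urandom of=/mnt/testvol/file$i bs=1M count=1; done

  # replace brick b1 with a new brick
  gluster volume replace-brick testvol 10.70.47.60:/bricks/brick3/b1 \
      10.70.47.60:/bricks/brick3/b1_new commit force

  # rename the files while self-heal is running
  for i in $(seq 1 100); do mv /mnt/testvol/file$i /mnt/testvol/renamed$i; done

  # check which entries are still pending heal
  gluster volume heal testvol info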
Actual results:
.trashcan and / remain listed as unhealed entries in heal info. (Note: .trashcan is not supported downstream.)
Expected results:
No files should be left unhealed; .trashcan should be healed, and there should be no entries reported by the heal info command.
--- Additional comment from Karan Sandha on 2017-04-20 05:56:48 EDT ---
]# gluster v info
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: bc5a0c88-7ca7-48f6-8092-70c0fe5e8846
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.47.60:/bricks/brick3/b1
Brick2: 10.70.46.218:/bricks/brick0/b1
Brick3: 10.70.47.61:/bricks/brick0/b1 (arbiter)
Brick4: 10.70.46.218:/bricks/brick2/b2
Brick5: 10.70.47.61:/bricks/brick2/b2
Brick6: 10.70.47.60:/bricks/brick2/b2 (arbiter)
Brick7: 10.70.47.60:/bricks/brick1/b3
Brick8: 10.70.46.218:/bricks/brick1/b3
Brick9: 10.70.47.61:/bricks/brick1/b3 (arbiter)
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
transport.address-family: inet
nfs.disable: off
cluster.brick-multiplex: on
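Brick multiplexing, as named in the summary, is a cluster-wide option; assuming the standard CLI, it would have been enabled with:

  gluster volume set all cluster.brick-multiplex on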
--- Additional comment from Karan Sandha on 2017-04-21 02:36:52 EDT ---
Atin,
When I replaced the brick, only .trashcan and / were left to heal; all the other
directories and files were healed. It would confuse the user as to why this
directory isn't getting healed. It is 100% reproducible.
Output of the heal info command:
[root at K1 /]# gluster v heal testvol info
Brick 10.70.47.60:/bricks/brick3/b1
/ - Possibly undergoing heal
Status: Connected
Number of entries: 1
Brick 10.70.46.218:/bricks/brick0/b1
/ - Possibly undergoing heal
/.trashcan
Status: Connected
Number of entries: 2
Brick 10.70.47.61:/bricks/brick0/b1
/ - Possibly undergoing heal
/.trashcan
Status: Connected
Number of entries: 2
Brick 10.70.46.218:/bricks/brick2/b2
Status: Connected
Number of entries: 0
Brick 10.70.47.61:/bricks/brick2/b2
Status: Connected
Number of entries: 0
Brick 10.70.47.60:/bricks/brick2/b2
Status: Connected
Number of entries: 0
Brick 10.70.47.60:/bricks/brick1/b3
Status: Connected
Number of entries: 0
Brick 10.70.46.218:/bricks/brick1/b3
Status: Connected
Number of entries: 0
Brick 10.70.47.61:/bricks/brick1/b3
Status: Connected
Number of entries: 0
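For reference, a generic sketch of the commands one could use to re-trigger and monitor the heal on this volume (not something attempted in the report):

  gluster volume heal testvol full
  gluster volume heal testvol info
  gluster volume heal testvol info split-brain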