[Bugs] [Bug 1439657] New: Arbiter brick becomes a source for data heal
bugzilla at redhat.com
Thu Apr 6 11:34:56 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1439657
Bug ID: 1439657
Summary: Arbiter brick becomes a source for data heal
Product: GlusterFS
Version: 3.8
Component: replicate
Severity: high
Assignee: bugs at gluster.org
Reporter: dchaplyg at redhat.com
CC: bugs at gluster.org
Description of problem: Given three hosts with one brick each, combined into a
replica 3 volume with an arbiter, it can happen that the arbiter brick becomes a
source for data heal, which should never happen.
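For reference, a volume of this shape can be created roughly as follows (hostnames
and brick paths are taken from the output further below; the exact command line used
on these hosts is an assumption on my part):

  gluster volume create data replica 3 arbiter 1 \
      hc-lion:/rhgs/data hc-tiger:/rhgs/data hc-panther:/rhgs/data
  gluster volume start data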
How reproducible: From time to time
Steps to Reproduce:
1. Create a replica 3 volume with an arbiter, keeping the bricks on three different
hosts.
2. Start updating some file frequently.
3. Start rebooting the nodes in a random order (breaking network connectivity is
fine too); several of the reboots should affect two nodes in a random order (a rough
command sketch follows below).
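A rough command sketch for steps 2 and 3 (the mount point and the file name are
placeholders, not taken from the affected setup):

  # On a client, keep rewriting a file on the mounted volume:
  mount -t glusterfs hc-lion:/data /mnt/data
  while true; do dd if=/dev/urandom of=/mnt/data/testfile bs=1M count=1 conv=fsync; sleep 1; done

  # Meanwhile, reboot nodes in a random order, letting heal start in between:
  ssh hc-tiger reboot
  # ... wait for it to come back, then ...
  ssh hc-panther reboot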
Actual results:
Some files will not be healed.
[root at hc-lion ~]# gluster volume heal data full
Launching heal operation to perform full self heal on volume data has been
successful
Use heal info commands to check status
[root at hc-lion ~]# gluster volume heal data info
Brick hc-lion:/rhgs/data
/555425cf-e3e4-4665-ae82-6152896d8190/dom_md/ids
Status: Connected
Number of entries: 1
Brick hc-tiger:/rhgs/data
/555425cf-e3e4-4665-ae82-6152896d8190/dom_md/ids
Status: Connected
Number of entries: 1
Brick hc-panther:/rhgs/data
/555425cf-e3e4-4665-ae82-6152896d8190/dom_md/ids
Status: Connected
Number of entries: 1
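For completeness, the entries above can also be checked for an explicit split-brain
flag with the standard CLI (I did not capture that output at the time):

  gluster volume heal data info split-brain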
[root at hc-lion dom_md]# getfattr -d -m . -e hex ids
# file: ids
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.data-client-1=0x0000000e0000000000000000
trusted.afr.data-client-2=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.version=0x080000000000000058e6028e000829f0
trusted.gfid=0x405ab9b11adb4ced927294ef36272b44
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000100000000000000000000000000000000008000000000000000000
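For anyone reading the xattrs: each trusted.afr.<volume>-client-<N> value is 12
bytes, read as three big-endian 32-bit counters of pending data, metadata and entry
operations blamed on brick N. So trusted.afr.data-client-1=0x0000000e... means this
brick (hc-lion) blames 14 pending data operations on the second brick (presumably
hc-tiger), while trusted.afr.data-client-2 is all zeros, i.e. nothing is blamed on
the third brick (presumably the arbiter). The same file can be checked on the other
two bricks to see which one the self-heal daemon would pick as a source:

  # Run on each of hc-lion, hc-tiger and hc-panther:
  getfattr -d -m . -e hex /rhgs/data/555425cf-e3e4-4665-ae82-6152896d8190/dom_md/ids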
Expected results:
All files should be healed.
Additional info:
I do not have a reliable way to reproduce this bug, but I hope that the logs from my
nodes will be helpful. The bug was observed during the first half of the day on the
6th of April.
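In case it helps with triage, the source/sink decisions should also be visible in
the self-heal daemon log on each node (default location), e.g.:

  grep -i selfheal /var/log/glusterfs/glustershd.log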