[Bugs] [Bug 1524325] New: wrong healing source after upgrade
bugzilla at redhat.com
Mon Dec 11 09:25:50 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1524325
Bug ID: 1524325
Summary: wrong healing source after upgrade
Product: GlusterFS
Version: 3.10
Component: glusterd
Severity: high
Assignee: bugs at gluster.org
Reporter: dm at belkam.com
CC: bugs at gluster.org
Created attachment 1365830
--> https://bugzilla.redhat.com/attachment.cgi?id=1365830&action=edit
logs
Description of problem:
We run a 2-node cluster with a replicated volume; yes, this is not a recommended
setup, but...
The nodes are named father and son.
VMs and gluster run on these nodes.
We moved all VMs to one node (namely father).
We upgraded gluster from 3.10.7 to 3.10.8 on one of the nodes (namely son) and
rebooted it.
After this we see that healing for one of the VM images is running from son to
father:
[root@son ~]# gluster volume heal pool info
Brick father:/wall/pool/brick
/shador.img
/balamak.img
/devaron.img
/talita.img
Status: Connected
Number of entries: 4
Brick son:/wall/pool/brick
/endor.img
Status: Connected
Number of entries: 1
And the image became broken.
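For reference, which copy AFR treats as the heal source can be double-checked by
dumping the changelog xattrs of the affected file on both bricks; this is a generic
sketch, and the trusted.afr.pool-client-* names are only our assumption from the
default naming convention, not copied from our setup:
[root@son ~]# getfattr -d -m . -e hex /wall/pool/brick/endor.img
[root@father ~]# getfattr -d -m . -e hex /wall/pool/brick/endor.img
Non-zero trusted.afr.pool-client-* counters indicate which replica claims pending
changes for the other.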
Bitrot detection was enabled on this volume and, it looks like, it is the only
process which accessed local data on son during boot (please look into the logs).
We disabled bitrot detection for now.
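For completeness, bitrot was turned off with the usual CLI; the invocation below is
reproduced from memory, so treat it as approximate:
[root@father ~]# gluster volume bitrot pool disable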
Version-Release number of selected component (if applicable):
CentOS 7.4, gluster 3.10.7 and 3.10.8.
How reproducible:
We don't know how to reproduce it.
Steps to Reproduce:
1. install a 2-node gluster cluster with a replicated volume
2. put VMs on it
3. upgrade gluster on one node (roughly the commands sketched below)
4. reboot that node
Maybe just the reboot is enough, we don't know.
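For steps 3 and 4 we ran approximately the following on son (packages come from the
CentOS Storage SIG repository; the exact command line is a sketch, not the literal
shell history):
[root@son ~]# yum update 'glusterfs*'
[root@son ~]# reboot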
Actual results:
some (one in our case) VM images are broken because they were healed from old data.
Expected results:
healthy data on the cluster.
Thank you!