[Bugs] [Bug 1347686] IO error seen with rolling or non-disruptive upgrade of a distributed-disperse (EC) volume from 3.1.2 to 3.1.3
bugzilla at redhat.com
Fri Jun 17 12:22:34 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1347686
Ashish Pandey <aspandey at redhat.com> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |aspandey at redhat.com
--- Comment #2 from Ashish Pandey <aspandey at redhat.com> ---
In glusterfs 3.7.5, features/locks did not return the inodelk count in xdata
when ec requested it.
To fix a hang issue, we modified the code so that if xdata contains a request
for the inodelk count, features/locks returns that count through xdata.
So as of glusterfs 3.7.9, ec receives the inodelk count in xdata from
features/locks.
This issue arises during a rolling update from 3.7.5 to 3.7.9.
For a 4+2 volume running 3.7.5, the problem can be seen if we update 2 nodes
and, after heal completes, kill 2 of the older nodes.
After the update and the killing of bricks, the 2 updated nodes return the
inodelk count while the 2 remaining older nodes do not.
During the dictionary comparison in ec_dict_compare, this leads to a mismatch
of answers, and the file operation on the mount point fails with an IO error.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
More information about the Bugs
mailing list