[Gluster-devel] Regression Summary of last week failures

Jeff Darcy jdarcy at redhat.com
Thu Apr 7 11:57:35 UTC 2016


> Found cores:
> Component: [u'glusterfs-20536.core']
> Regression Link:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/19522/console
> Component: [u'glusterfsd-24329.core']
> Regression Link:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/19532/console
> Component: [u'glusterfsd-25652.core']
> Regression Link:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/19537/console
> Component: [u'glusterd-27702.core']
> Regression Link:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/19538/console
> Component: [u'glusterfsd-4606.core']
> Regression Link:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/19543/console
> Component: [u'glusterd-27702.core']
> Regression Link:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/19546/console
> Component: [u'glusterfsd-13444.core']

I looked at a few of these yesterday, because some of them were on jobs
for my own patches.  In most of the ones I looked at, some data
structure or other was garbage, with no obvious connection to the code
paths in the backtrace.  That usually means some other (now completed)
code path had corrupted memory.  Unfortunately, it's hard to narrow
this down further because we only detect cores at the end of a run
instead of after each individual test.  I've just merged
http://review.gluster.org/#/c/13921/, which *might* help if the
corruption is in code related to specific features or operations.  If
not, we'll be stuck doing a "git bisect" or equivalent to see when the
corruption started.
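
For reference, here is roughly what I have in mind on both fronts.
These are only sketches; the core path, test names, and helper script
below are placeholders, not anything that exists in the tree today.

Per-test core detection, so a core can be attributed to the test that
produced it instead of "somewhere in the run":

    # Hypothetical sketch: flag new core files after each test instead
    # of only sweeping for them once at the end of the run.  The core
    # path and test list are placeholders; the real harness may differ.
    seen=$(ls /core* 2>/dev/null)
    for t in $(find tests -name '*.t' | sort); do
        prove -v "$t"
        now=$(ls /core* 2>/dev/null)
        if [ "$now" != "$seen" ]; then
            echo "new core file(s) appeared while running $t" >&2
            seen=$now
        fi
    done

And if we do end up bisecting, it could be automated along these lines
once a single test reproduces the corruption reliably:

    # Hypothetical sketch: let git hunt for the offending commit.
    git bisect start
    git bisect bad HEAD        # a revision where the cores show up
    git bisect good v3.7.6     # a revision believed clean (placeholder)

    # "git bisect run" treats exit 0 as good, 125 as "skip this
    # commit", and any other status from 1 to 127 as bad, so the
    # helper only needs to rebuild, run the suspect test, and pass
    # that test's exit status through.
    git bisect run ./bisect-helper.sh tests/basic/suspect-test.t

The helper script here is imaginary; it would just do a clean build and
run the given .t under the normal regression wrapper.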

