[Gluster-devel] Random voting in Gerrit - Check votes before merging
kshlmster at gmail.com
Thu Jun 9 06:43:37 UTC 2016
A heads up to all maintainers and developers.
As all of you probably already know, reviews in Gerrit are getting
random votes for jobs that ran for other patchsets.
So far, people have noticed these votes only when they were negative,
but they can be positive as well (there's an example in the forwarded
mail below).
Before merging, maintainers need to make sure that any positive vote
on a review is correct and comes from a job that actually ran for that
particular review.
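As a rough illustration of that check, one could compare the change number embedded in a Jenkins comment's job link against the change the comment was posted on. The helper below, and the comment format it assumes, are hypothetical, not part of any existing tooling:

```python
import re

def vote_matches_review(change_number, comment_text):
    """Return True only if every review.gluster.org change link in a
    Jenkins comment refers to the review the comment was posted on.
    The comment format assumed here is a guess for illustration."""
    referenced = re.findall(r'review\.gluster\.org/(\d+)', comment_text)
    return all(int(n) == change_number for n in referenced)

# A bogus vote: the job comment points at review 13873, not 14665.
bogus = ("NetBSD regression +1 : SUCCESS "
         "(triggered for https://review.gluster.org/13873)")
print(vote_matches_review(14665, bogus))  # False
```

A maintainer could run something like this over the comments fetched for a change before trusting its CI votes.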
To make sure that changes that have been given such a bogus vote don't
get merged, any developer who finds such a vote can give a Verified-1
to the review to block it from merging. I've changed the Verified flag
so that a Verified-1 blocks a review from being merged. I'll remove
this change after we figure out what's happening.
I'll be posting updates to the mail-thread I've started on the
infra-list (forwarded below).
---------- Forwarded message ----------
From: Kaushal M <kshlmster at gmail.com>
Date: Thu, Jun 9, 2016 at 11:52 AM
Subject: Investigating random votes in Gerrit
To: gluster-infra <gluster-infra at gluster.org>
In addition to the builder issues we're having, we are also facing
problems with jenkins voting/commenting randomly.
The comments generally link to older jobs for older patchsets, which
ran about 2 months back (at the beginning of April). For example,
https://review.gluster.org/14665 has a NetBSD regression +1 vote from
a job that ran in April for review 13873, and which actually failed.
Another observation I've made is that these fake votes sometimes give
a -1 Verified. Jenkins shouldn't be using this flag anymore.
These two observations make me wonder if another Jenkins instance is
running somewhere, possibly from our old backups. Michael, could this
be the case?
To check where these votes/comments were coming from, I tried checking
the Gerrit sshd logs. This wasn't helpful, because all logins
apparently happen from 127.0.0.1. This is probably due to some
firewall rule that was set up post-migration, because older logs show
proper IPs. I'll need Michael's help to fix this, if possible.
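For instance, a quick tally of source addresses would make the 127.0.0.1 masking obvious. The sample lines below are only a rough guess at the sshd_log layout, not the verified Gerrit format:

```python
import re
from collections import Counter

# Sample lines in roughly an sshd_log shape; the exact field layout
# here is an assumption for illustration only.
log_lines = [
    "[2016-06-09 05:12:01,001 +0000] 8f2a1b00 LOGIN FROM 127.0.0.1 jenkins",
    "[2016-06-09 05:13:44,120 +0000] 8f2a1b01 LOGIN FROM 127.0.0.1 jenkins",
    "[2016-04-02 10:03:17,555 +0000] 1c9d0e42 LOGIN FROM 192.0.2.17 jenkins",
]

# Count logins per source address.
ips = Counter()
for line in log_lines:
    m = re.search(r'LOGIN FROM (\S+)', line)
    if m:
        ips[m.group(1)] += 1

print(ips.most_common())  # [('127.0.0.1', 2), ('192.0.2.17', 1)]
```

If every recent entry collapses to 127.0.0.1 while older entries show real addresses, that points at a NAT or firewall rewrite rather than at the clients themselves.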
I'll continue to investigate, and update this thread with anything I find.