[Gluster-infra] tests failing on multiple slaves, having problems getting through regression

Dan Lambright dlambrig at redhat.com
Wed Jan 13 18:17:51 UTC 2016



> > > >>>
> > > >>> On 01/13/2016 10:21 PM, Vivek Agarwal wrote:
> > > >>>> On 01/13/2016 10:01 PM, Vivek Agarwal wrote:
> > > >>>>> On 01/13/2016 10:00 PM, Mohammed Rafi K C wrote:
> > > >>>>>> Hi All,
> > > >>>>>>
> > > >>>>>> The following patches urgently need to be re-triggered (I don't
> > > >>>>>> have permission to re-trigger), as they are expected to block
> > > >>>>>> 3.1.2 and need to be in the next build.
> > > >>>>>>
> > > >>>>>> 1) http://review.gluster.org/#/c/13224/
> > > >>>>>>    netbsd: https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13368/consoleFull
> > > >>>>>>    linux:  https://build.gluster.org/job/rackspace-regression-2GB-triggered/17515/console
> > > >>>>>>
> > > >>>>>> 2) http://review.gluster.org/#/c/13225/
> > > >>>>>>    linux:  https://build.gluster.org/job/rackspace-regression-2GB-triggered/17543/console
> > > >>>>>>
> > > >>>
> > > >>>
> > > >>> Adding one more
> > > >>>
> > > >>> 3) http://review.gluster.org/#/c/11892/
> > > >>>    linux: https://build.gluster.org/job/rackspace-regression-2GB-triggered/17537/consoleFull
> > > >>>
> > > >>>
> > > >>>>>>
> > > >>>>>> Thanks a lot for your help.
> > > >>>>> I have just retriggered all of them.
> > > >>>> Upon checking, I see that not all of the builds were re-triggered.
> > > >>>> Can someone else help figure out why they were not re-triggered,
> > > >>>> and/or re-trigger them?
> > > >>
> > > >> Retriggered.
> > > >>
> > > >> so we have 13224, 13225, 11892
> > > >
> > > > Have marked slave46 as temporarily offline as it seemed to fail
> > > > launching regression tests.
> > > 
> > > 
> > > Thanks Vijay. Looks like slave21.cloud.gluster.org
> > > <https://build.gluster.org/computer/slave21.cloud.gluster.org> is also
> > > in a bad state. Will rebooting work here?
> > > 
> > 
> > 
> > Yes, that should help.
> > 
> > -Vijay
> > 
> 
> Have marked slave21 and slave25 as bad, pending reboot.

Hello gluster-infra,

The following slaves keep failing tests, so we have taken them offline:

slave21, slave25, and slave26.

They appear to require reboots, but it is unclear how they got into this state.

Can someone help figure out what went wrong with these machines? We have some important fixes that need to be merged.
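For anyone with Jenkins access, the two actions discussed in this thread (marking a slave offline and re-triggering a regression run) can be scripted against the Jenkins REST API. The sketch below is illustrative only: the credential variables, the `GERRIT_CHANGE` parameter name, and the dry-run wrapper are assumptions, not the actual gluster.org job configuration. With `DRY_RUN=1` (the default) it only prints the curl commands it would run.

```shell
#!/bin/sh
# Hypothetical sketch: take a slave offline and re-trigger a regression job
# via the Jenkins REST API. Credentials and the GERRIT_CHANGE parameter name
# are illustrative assumptions; DRY_RUN=1 only prints the commands.
JENKINS_URL="https://build.gluster.org"
JENKINS_USER="${JENKINS_USER:-jenkins-user}"    # illustrative placeholder
API_TOKEN="${API_TOKEN:-secret-token}"          # illustrative placeholder
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Toggle a node offline, attaching an explanatory message.
mark_offline() {
    run curl -X POST -u "$JENKINS_USER:$API_TOKEN" \
        "$JENKINS_URL/computer/$1/toggleOffline?offlineMessage=failing+regression+tests"
}

# Re-trigger a parameterized job for a given Gerrit change number.
retrigger() {
    run curl -X POST -u "$JENKINS_USER:$API_TOKEN" \
        "$JENKINS_URL/job/$1/buildWithParameters?GERRIT_CHANGE=$2"
}

mark_offline slave21.cloud.gluster.org
retrigger rackspace-regression-2GB-triggered 13224
```

In practice these jobs are normally re-triggered from the Jenkins web UI by someone with the right permissions, which is why the thread above is asking for help rather than doing it directly.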

Dan 
