[Gluster-infra] Regression fails due to infra issue

Kaushal M kshlmster at gmail.com
Thu Jun 9 09:12:20 UTC 2016


On Wed, Jun 8, 2016 at 4:50 PM, Niels de Vos <ndevos at redhat.com> wrote:
> On Wed, Jun 08, 2016 at 10:30:37AM +0200, Michael Scherer wrote:
>> On Wednesday, June 8, 2016 at 03:15 +0200, Niels de Vos wrote:
>> > On Tue, Jun 07, 2016 at 10:29:34AM +0200, Michael Scherer wrote:
>> > > On Tuesday, June 7, 2016 at 10:00 +0200, Michael Scherer wrote:
>> > > > On Tuesday, June 7, 2016 at 09:54 +0200, Michael Scherer wrote:
>> > > > > On Monday, June 6, 2016 at 21:18 +0200, Niels de Vos wrote:
>> > > > > > On Mon, Jun 06, 2016 at 09:59:02PM +0530, Nigel Babu wrote:
>> > > > > > > On Mon, Jun 6, 2016 at 12:56 PM, Poornima Gurusiddaiah <pgurusid at redhat.com>
>> > > > > > > wrote:
>> > > > > > >
>> > > > > > > > Hi,
>> > > > > > > >
>> > > > > > > > We have seen multiple issues with regressions lately:
>> > > > > > > >
>> > > > > > > > 1. On certain slaves the regression fails during the build; I see this
>> > > > > > > > on slave26.cloud.gluster.org, slave25.cloud.gluster.org, and maybe
>> > > > > > > > others too.
>> > > > > > > >     E.g.:
>> > > > > > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/21422/console
>> > > > > > > >
>> > > > > > >
>> > > > > > > Are you sure this isn't a code breakage?
>> > > > > >
>> > > > > > No, it really does not look like that.
>> > > > > >
>> > > > > > Here is another one; it seems the testcase got killed for some reason:
>> > > > > >
>> > > > > >   https://build.gluster.org/job/rackspace-regression-2GB-triggered/21459/console
>> > > > > >
>> > > > > > It was running on slave25.cloud.gluster.org too... Is it possible that
>> > > > > > there is some watchdog or other configuration checking for resources and
>> > > > > > killing testcases on occasion? The number of slaves where this happens
>> > > > > > seems limited; were these installed/configured more recently?
>> > > > >
>> > > > > So dmesg reports yum crashing with an invalid opcode:
>> > > > >
>> > > > > yum[2711] trap invalid opcode ip:7f2efac38d60 sp:7ffd77322658 error:0 in
>> > > > > libfreeblpriv3.so[7f2efabe6000+72000]
>> > > > >
>> > > > > and
>> > > > > https://access.redhat.com/solutions/2313911
>> > > > >
>> > > > > That's exactly the problem.
>> > > > > [root@slave25 ~]# /usr/bin/curl https://google.com
>> > > > > Illegal instruction
>> > > > >
>> > > > > I propose to remove the builder from rotation while we investigate.
>> > > >
>> > > > Or we can:
>> > > >
>> > > > export NSS_DISABLE_HW_AES=1
>> > > >
>> > > > to work around it, cf. the bug listed in the article.
>> > > >
>> > > > Not sure the best way to deploy that.
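>> > > >
>> > > > A quick sanity check on an affected builder would be to rerun the
>> > > > curl reproducer from above with the variable set (an untested
>> > > > sketch):
>> > > >
>> > > >   # fails with "Illegal instruction" without the workaround
>> > > >   /usr/bin/curl https://google.com
>> > > >   # should complete once hardware AES is disabled in NSS
>> > > >   NSS_DISABLE_HW_AES=1 /usr/bin/curl https://google.com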
>> > >
>> > > So we are testing the fix on slave25, and if that is what fixes the
>> > > error, I will deploy it to all the Gluster builders and investigate
>> > > the non-builder servers. This only affects RHEL 6/CentOS 6 on
>> > > Rackspace.
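>> > >
>> > > For the rollout, a lineinfile one-liner along these lines might do
>> > > it (a sketch; the inventory group name "builders" is a guess, and
>> > > this assumes the variable ends up in /etc/environment):
>> > >
>> > >   ansible builders -m lineinfile \
>> > >     -a 'dest=/etc/environment line=NSS_DISABLE_HW_AES=1'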
>> >
>> > If this does not work, configuring mock to use http (without the 's')
>> > might be an option too. The exported variable would probably need to be
>> > set inside the mock chroot; that can possibly be done in
>> > /etc/mock/site-defaults.cfg.
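>> >
>> > Something along these lines in site-defaults.cfg might do it (not
>> > tested, and I am not sure which mock versions honour the
>> > 'environment' option):
>> >
>> >   # pass the NSS workaround into the mock chroot
>> >   config_opts['environment']['NSS_DISABLE_HW_AES'] = '1'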
>> >
>> > For the normal test cases, placing the environment variable (and maybe
>> > NSS_DISABLE_HW_GCM=1 too?) in the global bashrc might be sufficient.
>>
>> We used /etc/environment, and so far, no one complained about side
>> effects.
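>>
>> For reference, /etc/environment is read by pam_env and takes plain
>> KEY=value pairs (no 'export'), so presumably the entries are:
>>
>>   NSS_DISABLE_HW_AES=1
>>   NSS_DISABLE_HW_GCM=1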
>>
>> (I mean, this did fix stuff, right? Right???)
>
> I don't know. This was the last job that failed due to the bug:
>
>   https://build.gluster.org/job/glusterfs-devrpms/16978/console
>
> There are more recent ones on slave25 that failed for unclear reasons
> as well; I am not sure if those were caused by the same problem:
>
>   https://build.gluster.org/computer/slave25.cloud.gluster.org/builds
>
> Thanks,
> Niels

The random build failures should now be fixed (or at least should not happen anymore).
Please refer to the mail-thread 'Investigating random votes in Gerrit'
for more information.

~kaushal

>
> _______________________________________________
> Gluster-infra mailing list
> Gluster-infra at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra

