From bugzilla at redhat.com Wed Jan 2 05:44:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 05:44:24 +0000
Subject: [Gluster-infra] [Bug 1660732] create gerrit for github project
glusterfs-containers-tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660732
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-02 05:44:24
--- Comment #4 from Nigel Babu ---
Alright. Valerii is now in the committers group for
glusterfs-containers-tests.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 06:12:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 06:12:45 +0000
Subject: [Gluster-infra] [Bug 1663089] New: Make GD2 container nightly and
push it to Docker Hub
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663089
Bug ID: 1663089
Summary: Make GD2 container nightly and push it to Docker Hub
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: amukherj at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
During the GCS scale-testing effort, we identified a couple of major issues in
GD2 for which PRs were posted and merged last night. Apparently they missed
the window for yesterday's nightly build, so we're blocked until this evening
before we can pick up the GD2 container image.
If we can build the container from the latest GD2 head and push it to Docker
Hub right away, that would be great and would unblock us.
From bugzilla at redhat.com Thu Jan 3 06:15:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 06:15:05 +0000
Subject: [Gluster-infra] [Bug 1663089] Make GD2 container nightly and push
it to Docker Hub
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663089
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |nigelb at redhat.com
--- Comment #1 from Nigel Babu ---
Did it make it to the GD2 nightly RPM build?
From bugzilla at redhat.com Thu Jan 3 07:00:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 07:00:46 +0000
Subject: [Gluster-infra] [Bug 1663089] Make GD2 container nightly and push
it to Docker Hub
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663089
--- Comment #2 from Atin Mukherjee ---
As per https://ci.centos.org/view/Gluster/job/gluster_gd2-nightly-rpms/, the
last build was 6 hours 49 minutes ago, which means the required PRs should be
included in the RPMs.
From bugzilla at redhat.com Thu Jan 3 10:07:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 10:07:56 +0000
Subject: [Gluster-infra] [Bug 1663089] Make GD2 container nightly and push
it to Docker Hub
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663089
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-03 10:07:56
--- Comment #3 from Nigel Babu ---
Alright, Deepshika retriggered the Jenkins job and we're good now.
From ndevos at redhat.com Thu Jan 3 10:57:24 2019
From: ndevos at redhat.com (Niels de Vos)
Date: Thu, 3 Jan 2019 11:57:24 +0100
Subject: [Gluster-infra] gluster-centos container on Docker Hub does
still not get automatically rebuilt
In-Reply-To: <20181017205321.GI18987@ndevos-x270>
References: <20181017205321.GI18987@ndevos-x270>
Message-ID: <20190103105724.GI15249@ndevos-x270>
On Wed, Oct 17, 2018 at 10:53:21PM +0200, Niels de Vos wrote:
> Hi Humble,
>
> It seems that merging changes in the gluster-containers repository still
> does not trigger a rebuild in Docker Hub. Two weeks ago you had a look
> at it and things did get rebuilt at least once. Could you have a look at
> it again?
>
> I'm also happy to check it out, but I'm not in the Gluster team. Maybe
> you or someone else can add me? My username is 'nixpanic'.
It seems that the container images are still not built automatically.
What needs to be done to get some progress here?
Reported at https://github.com/gluster/gluster-containers/pull/115#issuecomment-451070886
it would be good to update the state there as well.
Thanks,
Niels
From hchiramm at redhat.com Thu Jan 3 11:43:40 2019
From: hchiramm at redhat.com (Humble Chirammal)
Date: Thu, 3 Jan 2019 17:13:40 +0530
Subject: [Gluster-infra] gluster-centos container on Docker Hub does
still not get automatically rebuilt
In-Reply-To: <20190103105724.GI15249@ndevos-x270>
References: <20181017205321.GI18987@ndevos-x270>
<20190103105724.GI15249@ndevos-x270>
Message-ID:
On Thu, Jan 3, 2019 at 4:27 PM Niels de Vos wrote:
> On Wed, Oct 17, 2018 at 10:53:21PM +0200, Niels de Vos wrote:
> > Hi Humble,
> >
> > It seems that merging changes in the gluster-containers repository still
> > does not trigger a rebuild in Docker Hub. Two weeks ago you had a look
> > at it and things did get rebuilt at least once. Could you have a look at
> > it again?
> >
> > I'm also happy to check it out, but I'm not in the Gluster team. Maybe
> > you or someone else can add me? My username is 'nixpanic'.
>
> It seems that the container images are still not built automatically.
> What needs to be done to get some progress here?
>
> Reported at
> https://github.com/gluster/gluster-containers/pull/115#issuecomment-451070886
> it would be good to update the state there as well.
>
>
Sure, builds are triggered now. There was a common issue in Docker Hub with
automated builds; not sure whether that caused this.
--
Cheers,
Humble
Red Hat Storage Engineering
Mastering KVM Virtualization: http://amzn.to/2vFTXaW
Website: http://humblec.com
From bugzilla at redhat.com Thu Jan 3 13:32:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 13:32:29 +0000
Subject: [Gluster-infra] [Bug 1661887] Add monitoring of postgrey
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1661887
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-03 13:32:29
--- Comment #1 from M. Scherer ---
So, the notification was added, and I think it is also managed properly now.
From bugzilla at redhat.com Mon Jan 7 03:15:21 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 07 Jan 2019 03:15:21 +0000
Subject: [Gluster-infra] [Bug 1657860] Archives for ci-results mailing list
are getting wiped (with each mail?)
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1657860
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
CC| |nigelb at redhat.com
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-07 03:15:21
--- Comment #2 from Nigel Babu ---
The fix seems to be working. Closing bug.
From bugzilla at redhat.com Mon Jan 7 03:37:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 07 Jan 2019 03:37:47 +0000
Subject: [Gluster-infra] [Bug 1663780] New: On docs.gluster.org,
we should convert spaces in folder or file names to 301 redirects
to hyphens
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663780
Bug ID: 1663780
Summary: On docs.gluster.org, we should convert spaces in
folder or file names to 301 redirects to hyphens
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: nigelb at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
This request depends on https://github.com/gluster/glusterdocs/pull/447. Once
we have the Nginx redirect code ready, we can merge the pull request and push
the change.
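As a sketch of what the Nginx side could look like (a hypothetical fragment; the actual server config for docs.gluster.org is not shown in this bug), a single rule can 301 any path still containing an encoded space to its hyphenated form, one %20 per redirect hop:

```nginx
# Hypothetical sketch: $request_uri is the raw, un-decoded URI, so an
# encoded space appears literally as "%20". The greedy first capture
# means each redirect replaces the last remaining %20 with a hyphen;
# clients follow the chain of 301s until no %20 is left.
if ($request_uri ~ "^(.+)%20(.+)$") {
    return 301 $1-$2;
}
```

A path with several spaces converges after a few hops, e.g. /a%20b%20c -> /a%20b-c -> /a-b-c.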
From bugzilla at redhat.com Mon Jan 7 04:14:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 07 Jan 2019 04:14:51 +0000
Subject: [Gluster-infra] [Bug 1658147] BZ incorrectly updated with "patch
posted" message when a patch is merged
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1658147
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |DUPLICATE
Last Closed| |2019-01-07 04:14:51
--- Comment #2 from Nigel Babu ---
*** This bug has been marked as a duplicate of bug 1658146 ***
From bugzilla at redhat.com Mon Jan 7 04:14:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 07 Jan 2019 04:14:51 +0000
Subject: [Gluster-infra] [Bug 1658146] BZ incorrectly updated with "patch
posted" message when a patch is merged
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1658146
--- Comment #2 from Nigel Babu ---
*** Bug 1658147 has been marked as a duplicate of this bug. ***
From bugzilla at redhat.com Tue Jan 8 06:19:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 08 Jan 2019 06:19:13 +0000
Subject: [Gluster-infra] [Bug 1664226] New: glusterd2 PR is not triggering
Tests
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1664226
Bug ID: 1664226
Summary: glusterd2 PR is not triggering Tests
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: avishwan at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
PR https://github.com/gluster/glusterd2/pull/1464 is not triggering the tests.
Status in https://ci.centos.org/job/gluster_glusterd2/ shows "#4147
(pending - gluster-ci-slave01 is offline)"
From bugzilla at redhat.com Tue Jan 8 06:26:04 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 08 Jan 2019 06:26:04 +0000
Subject: [Gluster-infra] [Bug 1664226] glusterd2 PR is not triggering Tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1664226
Kaushal changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
CC| |kaushal at redhat.com
Assignee|bugs at gluster.org |kaushal at redhat.com
--- Comment #1 from Kaushal ---
I've reported the problem to the centos-ci team. It should be resolved soon
enough.
From kshlmster at gmail.com Tue Jan 8 06:50:09 2019
From: kshlmster at gmail.com (Kaushal M)
Date: Tue, 8 Jan 2019 12:20:09 +0530
Subject: [Gluster-infra] Request for more executor VMs for the Gluster
project
Message-ID:
Hi,
Just a little while back, the gluster-ci-slave01 VM assigned to the
Gluster project went offline.
Fabian diagnosed this to have been caused by the VM being overloaded
with too many jobs.
Can an additional VM be set up for the gluster project to handle the
increased load?
Thanks.
From bugzilla at redhat.com Tue Jan 8 06:52:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 08 Jan 2019 06:52:07 +0000
Subject: [Gluster-infra] [Bug 1664226] glusterd2 PR is not triggering Tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1664226
Kaushal changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |WORKSFORME
Last Closed| |2019-01-08 06:52:07
--- Comment #2 from Kaushal ---
This has been fixed now. The slave VM is back online and jobs are being
processed.
From bstinson at redhat.com Tue Jan 8 18:59:20 2019
From: bstinson at redhat.com (Brian Stinson)
Date: Tue, 8 Jan 2019 12:59:20 -0600
Subject: [Gluster-infra] Request for more executor VMs for the Gluster
project
In-Reply-To:
References:
Message-ID:
I migrated the gluster workspace to a new VM. If you have access to the
workspace, your new hostname is slave07.ci.centos.org
This should help avoid the noisy neighbour problem in the future.
--Brian
On Tue, Jan 8, 2019 at 12:50 AM Kaushal M wrote:
> Hi,
>
> Just a little while back, the gluster-ci-slave01 VM assigned to the
> Gluster project went offline.
> Fabian diagnosed this to have been caused by the VM being overloaded
> with too many jobs.
>
> Can an additional VM be setup for the gluster project to handle the
> increased load?
>
> Thanks.
>
From kshlmster at gmail.com Wed Jan 9 01:32:16 2019
From: kshlmster at gmail.com (Kaushal M)
Date: Wed, 9 Jan 2019 07:02:16 +0530
Subject: [Gluster-infra] Request for more executor VMs for the Gluster
project
In-Reply-To:
References:
Message-ID:
Awesome, thanks!
On Wed, 9 Jan 2019, 00:29 Brian Stinson wrote:
> I migrated the gluster workspace to a new VM. If you have access to the
> workspace, your new hostname is slave07.ci.centos.org
>
> This should help avoid the noisy neighbour problem in the future.
>
> --Brian
>
> On Tue, Jan 8, 2019 at 12:50 AM Kaushal M wrote:
>
>> Hi,
>>
>> Just a little while back, the gluster-ci-slave01 VM assigned to the
>> Gluster project went offline.
>> Fabian diagnosed this to have been caused by the VM being overloaded
>> with too many jobs.
>>
>> Can an additional VM be setup for the gluster project to handle the
>> increased load?
>>
>> Thanks.
>>
>
From bugzilla at redhat.com Fri Jan 11 06:57:04 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 11 Jan 2019 06:57:04 +0000
Subject: [Gluster-infra] [Bug 1665361] New: Alerts for offline nodes
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665361
Bug ID: 1665361
Summary: Alerts for offline nodes
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: nigelb at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
I want a report that tells us which Jenkins nodes are offline and why they're
offline ("offline" in the Jenkins sense). We often have failures on a few
nodes, and it takes us a few weeks to get around to fixing them.
This bug covers deciding on a solution as well as implementing it.
Option 1: A Jenkins job that makes API calls and sends us an email when
machines are offline.
Option 2: A Nagios check that alerts us. This is slightly more explosive :)
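Option 1 could be sketched along these lines (a minimal sketch, assuming the standard Jenkins /computer/api/json endpoint; in a real job the sample payload below would instead be fetched from the Jenkins master with urllib or requests):

```python
import json

def offline_nodes(computer_api):
    """Return (name, reason) for every node Jenkins reports as offline."""
    return [
        (node["displayName"], node.get("offlineCauseReason") or "unknown")
        for node in computer_api["computer"]
        if node.get("offline")
    ]

# Trimmed sample in the shape /computer/api/json returns.
sample = json.loads("""
{"computer": [
  {"displayName": "builder01", "offline": false, "offlineCauseReason": ""},
  {"displayName": "builder02", "offline": true,
   "offlineCauseReason": "Disconnected by admin"}
]}
""")

for name, reason in offline_nodes(sample):
    print(f"{name}: {reason}")   # builder02: Disconnected by admin
```

From there the job can exit non-zero, or mail the list, whenever the returned list is non-empty.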
From nigelb at redhat.com Fri Jan 11 07:07:40 2019
From: nigelb at redhat.com (Nigel Babu)
Date: Fri, 11 Jan 2019 12:37:40 +0530
Subject: [Gluster-infra] Please do not upgrade the cppcheck Jenkins plugin
Message-ID:
Hello folks,
This is a note to myself and everyone else. Please do not upgrade cppcheck
from 1.22. The plugin seems to have changed in a backwards-incompatible
manner. For now we'll stick to 1.22 until we figure out how to make it work
with the latest version.
--
nigelb
From bugzilla at redhat.com Mon Jan 14 10:23:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 14 Jan 2019 10:23:14 +0000
Subject: [Gluster-infra] [Bug 1665361] Alerts for offline nodes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665361
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #1 from M. Scherer ---
I suspect option 2 is not what we want.
That said, Nagios does handle this quite well (notifications, etc.). But we
would still need the basic script that makes the API call anyway; the
difference is between "send an email" and "make an API call to Nagios to
trigger an alert", and I think we could switch between them quite easily if
needed.
From bugzilla at redhat.com Mon Jan 14 11:07:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 14 Jan 2019 11:07:27 +0000
Subject: [Gluster-infra] [Bug 1665889] New: Too small restriction for commit
topic length in review.gluster.org
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665889
Bug ID: 1665889
Summary: Too small restriction for commit topic length in
review.gluster.org
Product: GlusterFS
Version: mainline
Hardware: All
OS: All
Status: NEW
Component: project-infrastructure
Severity: high
Assignee: bugs at gluster.org
Reporter: vponomar at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of the problem:
We, the OCS QE automation team, aim to port our downstream project [1] to the
upstream repo [2]. We are currently unable to do so because the new repo [2]
limits commit subject length to 50 characters, while our downstream project
followed a 72-character limit. So the limit needs to be raised to 72
characters.
[1] http://git.app.eng.bos.redhat.com/git/cns-qe/cns-automation.git/
[2] https://github.com/gluster/glusterfs-containers-tests
Version-Release number of selected component (if applicable):
How reproducible: 100%
Steps to Reproduce:
1. Create commit
2. Push it to the gerrit ->
https://review.gluster.org/#/q/project:glusterfs-containers-tests
Actual results:
Response from server:
remote: (W) efd7f6f: commit subject >50 characters; use shorter first paragraph
Expected results:
Pushing the code succeeds.
Additional info:
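As a local pre-push check (illustrative only: the helper name and the idea of piping in `git log --format=%s <range>` output are assumptions about the workflow, not part of the bug), commit subjects can be validated against the new 72-character limit with plain awk:

```shell
# Validate commit subjects against the 72-character limit; in practice
# the input would come from `git log --format=%s <range>` before pushing.
# Prints each over-long subject and exits non-zero if any is found.
check_subjects() {
  awk 'length($0) > 72 { print "too long (" length($0) "): " $0; bad = 1 }
       END { exit bad }'
}

printf '%s\n' "a short, compliant subject" | check_subjects
```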
From bugzilla at redhat.com Mon Jan 14 11:23:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 14 Jan 2019 11:23:08 +0000
Subject: [Gluster-infra] [Bug 1665889] Too small restriction for commit
topic length in review.gluster.org
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665889
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |MODIFIED
CC| |nigelb at redhat.com
Assignee|bugs at gluster.org |nigelb at redhat.com
--- Comment #1 from Nigel Babu ---
Ack. This needs a gerrit config change and a restart. I'm going to do that now.
From bugzilla at redhat.com Mon Jan 14 11:42:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 14 Jan 2019 11:42:59 +0000
Subject: [Gluster-infra] [Bug 1665889] Too small restriction for commit
topic length in review.gluster.org
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665889
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-14 11:42:59
--- Comment #2 from Nigel Babu ---
This still led to some permission trouble around pushing merge commits, which
did not go away despite granting merge permissions. I did the push myself
instead, and that worked.
From bugzilla at redhat.com Mon Jan 14 14:49:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 14 Jan 2019 14:49:42 +0000
Subject: [Gluster-infra] [Bug 1658146] BZ incorrectly updated with "patch
posted" message when a patch is merged
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1658146
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-14 14:49:42
--- Comment #3 from Nigel Babu ---
This is now fixed.
From bugzilla at redhat.com Thu Jan 17 05:16:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 17 Jan 2019 05:16:10 +0000
Subject: [Gluster-infra] [Bug 1666954] New: gluster_glusto-patch-check job
is failing with a permission denied error when running tests
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1666954
Bug ID: 1666954
Summary: gluster_glusto-patch-check job is failing with
a permission denied error when running tests
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Severity: high
Assignee: bugs at gluster.org
Reporter: vavuthu at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The gluster_glusto-patch-check job
(https://ci.centos.org/job/gluster_glusto-patch-check) is failing with a
permission denied error when running tests:
https://ci.centos.org/job/gluster_glusto-patch-check/1070/console
05:10:00 TASK [Create an ssh keypair]
***************************************************
05:10:01 fatal: [localhost]: FAILED! => {"changed": true, "cmd": "ssh-keygen -b
2048 -t rsa -f $GLUSTO_WORKSPACE/glusto -q -N \"\"", "delta": "0:00:00.178959",
"end": "2019-01-17 05:10:01.386921", "msg": "non-zero return code", "rc": 1,
"start": "2019-01-17 05:10:01.207962", "stderr": "Saving key
\"/home/gluster/workspace/gluster_glusto-patch-check/centosci/glusto\" failed:
Permission denied", "stderr_lines": ["Saving key
\"/home/gluster/workspace/gluster_glusto-patch-check/centosci/glusto\" failed:
Permission denied"], "stdout": "", "stdout_lines": []}
05:10:01 to retry, use: --limit
@/home/gluster/workspace/gluster_glusto-patch-check/centosci/jobs/scripts/glusto/setup-glusto.retry
05:10:01
From bugzilla at redhat.com Thu Jan 17 07:21:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 17 Jan 2019 07:21:14 +0000
Subject: [Gluster-infra] [Bug 1666954] gluster_glusto-patch-check job is
failing with a permission denied error when running tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1666954
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
CC| |nigelb at redhat.com
Assignee|bugs at gluster.org |nigelb at redhat.com
--- Comment #1 from Nigel Babu ---
Ack. This is strange, because the user absolutely has permissions. Re-running
the exact same Ansible script after the job succeeds, so I'm a bit lost as to
what's failing. Will dig deeper.
From bugzilla at redhat.com Fri Jan 18 11:46:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 18 Jan 2019 11:46:37 +0000
Subject: [Gluster-infra] [Bug 1666954] gluster_glusto-patch-check job is
failing with a permission denied error when running tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1666954
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-18 11:46:37
--- Comment #2 from Nigel Babu ---
There seems to be an issue with running ssh-keygen over the Jenkins
connection. I haven't figured out a solution to that. Instead, I've generated
a key manually in .ssh and we'll be using that for all our jobs. After fixing
this bug, I ran into a python-docx installation failure, which is fixed as
well.