From bugzilla at redhat.com Mon Mar 4 09:11:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Mar 2019 09:11:44 +0000
Subject: [Gluster-infra] [Bug 1685051] New: New Project create request
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685051
Bug ID: 1685051
Summary: New Project create request
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: avishwan at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Please create a new project under Github Gluster organization
Name: devblog
Description: Gluster Developer Blog posts
Admins:
@aravindavk Aravinda VK
@amarts Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Mar 4 09:26:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Mar 2019 09:26:26 +0000
Subject: [Gluster-infra] [Bug 1685051] New Project create request
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685051
Deepshikha khandelwal changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
CC| |dkhandel at redhat.com
Resolution|--- |NOTABUG
Last Closed| |2019-03-04 09:26:26
--- Comment #1 from Deepshikha khandelwal ---
Done.
From bugzilla at redhat.com Mon Mar 4 09:36:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Mar 2019 09:36:31 +0000
Subject: [Gluster-infra] [Bug 1685051] New Project create request
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685051
--- Comment #2 from Aravinda VK ---
Thanks
From bugzilla at redhat.com Mon Mar 4 10:09:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Mar 2019 10:09:27 +0000
Subject: [Gluster-infra] [Bug 1685051] New Project create request
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685051
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
Resolution|NOTABUG |CURRENTRELEASE
--- Comment #3 from M. Scherer ---
Wait, what is the plan for that?
And why isn't gluster-infra in the loop sooner, or with more details?
From bugzilla at redhat.com Mon Mar 4 16:20:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Mar 2019 16:20:18 +0000
Subject: [Gluster-infra] [Bug 1685051] New Project create request
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685051
Amye Scavarda changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |MODIFIED
CC| |amye at redhat.com,
| |avishwan at redhat.com
Resolution|CURRENTRELEASE |---
Flags| |needinfo?(avishwan at redhat.c
| |om)
Keywords| |Reopened
--- Comment #4 from Amye Scavarda ---
I have concerns.
What is this for?
From bugzilla at redhat.com Tue Mar 5 03:00:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Mar 2019 03:00:32 +0000
Subject: [Gluster-infra] [Bug 1685051] New Project create request
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685051
Aravinda VK changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(avishwan at redhat.c |
|om) |
--- Comment #5 from Aravinda VK ---
This project is for hosting developer blog posts using GitHub Pages. Developers
are more familiar with Markdown for writing documentation or blog posts, so
contributing this way is easier than writing posts through a web UI. Based on
discussions with other developers, they find setting up a blog website harder
than writing the posts themselves. This project aims to simplify that:
- The official Gluster org blog continues to exist for announcements, release
highlights, and other blog posts
- This will only host developer blog posts (more technical: developer tips,
feature explanations, etc.)
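To make the GitHub Pages idea concrete, here is a minimal sketch (hypothetical;
the theme and file names are assumptions, not a decided layout): one
_config.yml plus Markdown posts carrying a short front matter.

```yaml
# _config.yml -- minimal Jekyll configuration (illustrative sketch)
title: Gluster Developer Blog
theme: minima

# A post is then just a Markdown file, e.g.
# _posts/2019-03-04-example-post.md starting with:
#
#   ---
#   layout: post
#   title: "Example developer post"
#   author: aravindavk
#   ---
#   ...post body in Markdown...
```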
From bugzilla at redhat.com Tue Mar 5 04:31:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Mar 2019 04:31:03 +0000
Subject: [Gluster-infra] [Bug 1685051] New Project create request
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685051
Amye Scavarda changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |ASSIGNED
Flags| |needinfo?(avishwan at redhat.c
| |om)
--- Comment #6 from Amye Scavarda ---
This is exactly what should be on gluster.org's blog!
You write wherever you want, we can set WordPress to take Markdown with no
issues.
We should not be duplicating effort when gluster.org is already a great
platform for creating content.
We should get a list of the people who want to write developer blogs and get
them author accounts to publish directly on Gluster.org and publicize from
there through social media.
From bugzilla at redhat.com Tue Mar 5 14:26:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Mar 2019 14:26:05 +0000
Subject: [Gluster-infra] [Bug 1685576] New: DNS delegation record for
rhhi-dev.gluster.org
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685576
Bug ID: 1685576
Summary: DNS delegation record for rhhi-dev.gluster.org
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: sabose at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Please create DNS delegation record for rhhi-dev.gluster.org
ns-1487.awsdns-57.org.
ns-626.awsdns-14.net.
ns-78.awsdns-09.com.
ns-1636.awsdns-12.co.uk.
Version-Release number of selected component (if applicable):
NA
From bugzilla at redhat.com Tue Mar 5 15:06:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Mar 2019 15:06:42 +0000
Subject: [Gluster-infra] [Bug 1685576] DNS delegation record for
rhhi-dev.gluster.org
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685576
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #1 from M. Scherer ---
For context, this is for a test instance of OpenShift hosted on AWS.
The delegation has been created; please tell me if there is any issue.
From bugzilla at redhat.com Wed Mar 6 05:49:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 05:49:03 +0000
Subject: [Gluster-infra] [Bug 1685576] DNS delegation record for
rhhi-dev.gluster.org
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685576
Rohan CJ changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |rojoseph at redhat.com
Flags| |needinfo?(mscherer at redhat.c
| |om)
--- Comment #2 from Rohan CJ ---
The delegation doesn't seem to be working. I don't know if DNS propagation is a
concern here, but I did also try directly querying ns1.redhat.com.
Here is the link to the kind of delegation we want for openshift:
https://github.com/openshift/installer/blob/master/docs/user/aws/route53.md#step-4b-subdomain---perform-dns-delegation
$ dig rhhi-dev.gluster.org
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-13.P2.fc28 <<>> rhhi-dev.gluster.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18531
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;rhhi-dev.gluster.org. IN A
;; AUTHORITY SECTION:
gluster.org. 278 IN SOA ns1.redhat.com. noc.redhat.com.
2019030501 3600 1800 604800 86400
;; Query time: 81 msec
;; SERVER: 10.68.5.26#53(10.68.5.26)
;; WHEN: Wed Mar 06 11:12:41 IST 2019
;; MSG SIZE rcvd: 103
$ dig @ns-1487.awsdns-57.org. rhhi-dev.gluster.org
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-13.P2.fc28 <<>> @ns-1487.awsdns-57.org.
rhhi-dev.gluster.org
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58544
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;rhhi-dev.gluster.org. IN A
;; AUTHORITY SECTION:
rhhi-dev.gluster.org. 900 IN SOA ns-1487.awsdns-57.org.
awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
;; Query time: 39 msec
;; SERVER: 205.251.197.207#53(205.251.197.207)
;; WHEN: Wed Mar 06 11:13:27 IST 2019
;; MSG SIZE rcvd: 131
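The difference between the two answers above can be checked mechanically: a
live delegation answers with the AWS zone's SOA (awsdns), while a missing one
falls back to the parent's SOA at ns1.redhat.com. A small sketch of such a
check (illustrative only, not part of the actual infra tooling):

```shell
#!/bin/sh
# Sketch: decide from dig output whether the rhhi-dev.gluster.org
# delegation is live. A working delegation returns an SOA owned by the
# awsdns servers; otherwise the parent zone's SOA (ns1.redhat.com)
# shows up instead.
delegation_live() {
    # $1 = output of: dig +norecurse SOA rhhi-dev.gluster.org @<aws ns>
    printf '%s\n' "$1" | grep -q 'awsdns'
}

# Live usage (needs network):
#   out=$(dig +norecurse SOA rhhi-dev.gluster.org @ns-1487.awsdns-57.org.)
#   if delegation_live "$out"; then echo delegated; else echo "not yet"; fi
```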
From bugzilla at redhat.com Wed Mar 6 07:50:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 07:50:10 +0000
Subject: [Gluster-infra] [Bug 1685813] New: Not able to run
centos-regression getting exception error
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685813
Bug ID: 1685813
Summary: Not able to run centos-regression getting exception
error
Product: GlusterFS
Version: 6
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: moagrawa at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Not able to run centos-regression; the build is getting an exception error
Version-Release number of selected component (if applicable):
How reproducible:
https://build.gluster.org/job/centos7-regression/5017/consoleFull
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
From bugzilla at redhat.com Wed Mar 6 09:29:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 09:29:38 +0000
Subject: [Gluster-infra] [Bug 1685576] DNS delegation record for
rhhi-dev.gluster.org
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685576
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(mscherer at redhat.c |
|om) |
--- Comment #3 from M. Scherer ---
Seems there is an issue with the DNS server, as it works, but only on the
internal server in the RH LAN. I am slightly puzzled by that. I will have to
escalate it to IT.
From bugzilla at redhat.com Wed Mar 6 09:45:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 09:45:53 +0000
Subject: [Gluster-infra] [Bug 1685813] Not able to run centos-regression
getting exception error
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685813
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #1 from M. Scherer ---
Yeah, there is a patch that seems to break the builders one by one.
Dkhandel told me this morning that we lost a lot of the AWS builders (8 out of
10), and upon investigation, they had all run regression tests for that change
before going offline:
https://review.gluster.org/#/c/glusterfs/+/22290/
As said on Gerrit, I strongly suspect that the logic change results in the test
spawning an infinite loop, since the builders we recovered didn't show any
trace of errors in the logs, which is the kind of symptom you get with an
infinite loop (while still answering ping, since ICMP is handled in the
kernel).
So I would suggest investigating ./tests/00-geo-rep/00-georep-verify-setup.t,
as I see that as the last test run before we lost contact with the builders.
In fact, since the 2nd iteration of the patch worked, I guess the issue is in
the 3rd iteration.
In any case, I think this is not an infra issue.
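One low-tech guard against that failure mode (a sketch only; this is not what
the regression harness actually does) is to run each test under a wall-clock
cap, so a looping test gets killed instead of wedging the builder:

```shell
#!/bin/sh
# Illustrative wrapper: run a command with a hard wall-clock limit via
# coreutils timeout, sending SIGKILL if it overruns. The 900s value is
# an arbitrary example, not a tuned number.
run_with_cap() {
    secs=$1; shift
    timeout --signal=KILL "$secs" "$@"
}

# e.g. run_with_cap 900 prove -vf ./tests/00-geo-rep/00-georep-verify-setup.t
```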
From bugzilla at redhat.com Wed Mar 6 10:53:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 10:53:31 +0000
Subject: [Gluster-infra] [Bug 1685813] Not able to run centos-regression
getting exception error
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685813
--- Comment #2 from M. Scherer ---
I did reboot the broken builders and they are back. I also looked at the
patch but didn't find anything, so I suspect there is some logic that escapes
me.
From bugzilla at redhat.com Wed Mar 6 10:58:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 10:58:53 +0000
Subject: [Gluster-infra] [Bug 1685813] Not able to run centos-regression
getting exception error
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685813
--- Comment #3 from Mohit Agrawal ---
Thanks, Michael. There is some issue in my patch.
I will upload a new patch.
You can close the bug.
Thanks,
Mohit Agrawal
From dkhandel at redhat.com Wed Mar 6 12:07:53 2019
From: dkhandel at redhat.com (Deepshikha Khandelwal)
Date: Wed, 6 Mar 2019 17:37:53 +0530
Subject: [Gluster-infra] 8/10 AWS jenkins builders disconnected
Message-ID:
Hello,
Today, while debugging the centos7-regression failed builds, I saw that most
of the builders did not pass the instance status check on AWS and were
unreachable.
Misc investigated this and came to know about the patch[1], which seems to
break the builders one after the other. They all ran the regression test for
this specific change before going offline.
We suspect that this change results in an infinite loop of processes, as we
did not see any trace of errors in the system logs.
We rebooted all those builders and they all seem to be running fine now.
Please let us know if you see any such issues again.
[1] https://review.gluster.org/#/c/glusterfs/+/22290/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From sankarshan.mukhopadhyay at gmail.com Wed Mar 6 12:23:20 2019
From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay)
Date: Wed, 6 Mar 2019 17:53:20 +0530
Subject: [Gluster-infra] [Gluster-devel] 8/10 AWS jenkins builders
disconnected
In-Reply-To:
References:
Message-ID:
On Wed, Mar 6, 2019 at 5:38 PM Deepshikha Khandelwal
wrote:
>
> Hello,
>
> Today while debugging the centos7-regression failed builds I saw most of the builders did not pass the instance status check on AWS and were unreachable.
>
> Misc investigated this and came to know about the patch[1] which seems to break the builder one after the other. They all ran the regression test for this specific change before going offline.
> We suspect that this change do result in infinite loop of processes as we did not see any trace of error in the system logs.
>
> We did reboot all those builders and they all seem to be running fine now.
>
The question though is - what to do about the patch, if the patch
itself is the root cause? Is this assigned to anyone to look into?
> Please let us know if you see any such issues again.
>
> [1] https://review.gluster.org/#/c/glusterfs/+/22290/
--
sankarshan mukhopadhyay
From dkhandel at redhat.com Wed Mar 6 12:31:39 2019
From: dkhandel at redhat.com (Deepshikha Khandelwal)
Date: Wed, 6 Mar 2019 18:01:39 +0530
Subject: [Gluster-infra] [Gluster-devel] 8/10 AWS jenkins builders
disconnected
In-Reply-To:
References:
Message-ID:
Yes, Mohit is looking into it. There's some issue in the patch itself.
I forgot to link the bug filed for this:
https://bugzilla.redhat.com/show_bug.cgi?id=1685813
On Wed, Mar 6, 2019 at 5:54 PM Sankarshan Mukhopadhyay <
sankarshan.mukhopadhyay at gmail.com> wrote:
> On Wed, Mar 6, 2019 at 5:38 PM Deepshikha Khandelwal
> wrote:
> >
> > Hello,
> >
> > Today while debugging the centos7-regression failed builds I saw most of
> the builders did not pass the instance status check on AWS and were
> unreachable.
> >
> > Misc investigated this and came to know about the patch[1] which seems
> to break the builder one after the other. They all ran the regression test
> for this specific change before going offline.
> > We suspect that this change do result in infinite loop of processes as
> we did not see any trace of error in the system logs.
> >
> > We did reboot all those builders and they all seem to be running fine
> now.
> >
>
> The question though is - what to do about the patch, if the patch
> itself is the root cause? Is this assigned to anyone to look into?
>
> > Please let us know if you see any such issues again.
> >
> > [1] https://review.gluster.org/#/c/glusterfs/+/22290/
>
>
> --
> sankarshan mukhopadhyay
>
> _______________________________________________
> Gluster-infra mailing list
> Gluster-infra at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-infra
>
From bugzilla at redhat.com Wed Mar 6 15:13:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 15:13:43 +0000
Subject: [Gluster-infra] [Bug 1686034] New: Request access to docker hub
gluster organisation.
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686034
Bug ID: 1686034
Summary: Request access to docker hub gluster organisation.
Product: GlusterFS
Version: experimental
Status: NEW
Component: project-infrastructure
Severity: low
Assignee: bugs at gluster.org
Reporter: sseshasa at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
I request access to the docker hub gluster organisation in order to push and
manage docker images. My docker hub user ID is: sseshasa
I am not sure what to choose for "Product" and "Component" fields. Please
suggest/correct accordingly if they are wrong.
Version-Release number of selected component (if applicable):
NA
How reproducible:
NA
Steps to Reproduce:
1.
2.
3.
Actual results:
NA
Expected results:
NA
Additional info:
NA
From mscherer at redhat.com Wed Mar 6 15:17:30 2019
From: mscherer at redhat.com (Michael Scherer)
Date: Wed, 06 Mar 2019 16:17:30 +0100
Subject: [Gluster-infra] [Gluster-devel] 8/10 AWS jenkins builders
disconnected
In-Reply-To:
References:
Message-ID:
On Wednesday, 6 March 2019 at 17:53 +0530, Sankarshan Mukhopadhyay wrote:
> On Wed, Mar 6, 2019 at 5:38 PM Deepshikha Khandelwal
> wrote:
> >
> > Hello,
> >
> > Today while debugging the centos7-regression failed builds I saw
> > most of the builders did not pass the instance status check on AWS
> > and were unreachable.
> >
> > Misc investigated this and came to know about the patch[1] which
> > seems to break the builder one after the other. They all ran the
> > regression test for this specific change before going offline.
> > We suspect that this change do result in infinite loop of processes
> > as we did not see any trace of error in the system logs.
> >
> > We did reboot all those builders and they all seem to be running
> > fine now.
> >
>
> The question though is - what to do about the patch, if the patch
> itself is the root cause? Is this assigned to anyone to look into?
We also pondered whether we should protect the builders from that kind
of issue. But since:
- we are not sure that the hypothesis is right
- any protection based on "limit the number of processes" would surely
sooner or later block legitimate tests, and require adjustment (and
likely investigation)
we chose not to follow that road for now.
> > Please let us know if you see any such issues again.
> >
> > [1] https://review.gluster.org/#/c/glusterfs/+/22290/
>
>
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: This is a digitally signed message part
URL:
From bugzilla at redhat.com Wed Mar 6 15:18:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 15:18:23 +0000
Subject: [Gluster-infra] [Bug 1686034] Request access to docker hub gluster
organisation.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686034
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #1 from M. Scherer ---
So, what is the exact plan? Shouldn't the Docker image be built and pushed
automatically?
From bugzilla at redhat.com Wed Mar 6 15:34:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 15:34:38 +0000
Subject: [Gluster-infra] [Bug 1686034] Request access to docker hub gluster
organisation.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686034
--- Comment #2 from Sridhar Seshasayee ---
I have built a docker image locally and pushed it to my repository on docker
hub with user ID: sseshasa.
However, I need to push the same image under the gluster organisation on docker
hub (https://hub.docker.com/u/gluster) under gluster/gluster*. I don't know how
to achieve this and imagine that I need some access privilege to push images
there. Please let me know how I can go about this.
From bugzilla at redhat.com Wed Mar 6 15:59:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Mar 2019 15:59:06 +0000
Subject: [Gluster-infra] [Bug 1686034] Request access to docker hub gluster
organisation.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686034
--- Comment #3 from M. Scherer ---
Nope, we do not allow direct push (or shouldn't). If you want a new image
there, you have to explain what it is, why it should be there, etc. And
automate the push, for example using a Jenkins job. See for example this job:
https://build.gluster.org/job/glusterd2-containers/
http://git.gluster.org/cgit/build-jobs.git/tree/build-gluster-org/jobs/glusterd2-containers.yml
That's managed by Gerrit, like the glusterfs source code.
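For reference, a job in that build-jobs repo is plain Jenkins Job Builder
YAML; a hypothetical minimal push job might look roughly like this (the job
name, node label, and schedule are illustrative, not the real job):

```yaml
# Hypothetical JJB sketch; see the glusterd2-containers job linked
# above for the real thing.
- job:
    name: example-containers
    node: fedora
    description: Build and push the example container image to Docker Hub.
    triggers:
      - timed: "H H * * 0"   # weekly; example schedule
    builders:
      - shell: |
          docker build -t gluster/example:latest .
          docker push gluster/example:latest
```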
From sankarshan.mukhopadhyay at gmail.com Wed Mar 6 16:01:47 2019
From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay)
Date: Wed, 6 Mar 2019 21:31:47 +0530
Subject: [Gluster-infra] [Gluster-devel] 8/10 AWS jenkins builders
disconnected
In-Reply-To:
References:
Message-ID:
On Wed, Mar 6, 2019 at 8:47 PM Michael Scherer wrote:
>
> On Wednesday, 6 March 2019 at 17:53 +0530, Sankarshan Mukhopadhyay wrote:
> > On Wed, Mar 6, 2019 at 5:38 PM Deepshikha Khandelwal
> > wrote:
> > >
> > > Hello,
> > >
> > > Today while debugging the centos7-regression failed builds I saw
> > > most of the builders did not pass the instance status check on AWS
> > > and were unreachable.
> > >
> > > Misc investigated this and came to know about the patch[1] which
> > > seems to break the builder one after the other. They all ran the
> > > regression test for this specific change before going offline.
> > > We suspect that this change do result in infinite loop of processes
> > > as we did not see any trace of error in the system logs.
> > >
> > > We did reboot all those builders and they all seem to be running
> > > fine now.
> > >
> >
> > The question though is - what to do about the patch, if the patch
> > itself is the root cause? Is this assigned to anyone to look into?
>
> We also pondered on wether we should protect the builder from that kind
> of issue. But since:
> - we are not sure that the hypothesis is right
> - any protection based on "limit the number of process" would surely
> sooner or later block legitimate tests, and requires adjustement (and
> likely investigation)
>
> we didn't choose to follow that road for now.
>
This is a good topic though. Is there any logical way to fence off the
builders from noisy neighbors?
From mscherer at redhat.com Wed Mar 6 16:52:33 2019
From: mscherer at redhat.com (Michael Scherer)
Date: Wed, 06 Mar 2019 17:52:33 +0100
Subject: [Gluster-infra] [Gluster-devel] 8/10 AWS jenkins builders
disconnected
In-Reply-To:
References:
Message-ID: <3f46ce23a073651b47680d3196017f8e49e53f84.camel@redhat.com>
On Wednesday, 6 March 2019 at 21:31 +0530, Sankarshan Mukhopadhyay wrote:
> On Wed, Mar 6, 2019 at 8:47 PM Michael Scherer
> wrote:
> >
> > On Wednesday, 6 March 2019 at 17:53 +0530, Sankarshan Mukhopadhyay wrote:
> > > On Wed, Mar 6, 2019 at 5:38 PM Deepshikha Khandelwal
> > > wrote:
> > > >
> > > > Hello,
> > > >
> > > > Today while debugging the centos7-regression failed builds I
> > > > saw
> > > > most of the builders did not pass the instance status check on
> > > > AWS
> > > > and were unreachable.
> > > >
> > > > Misc investigated this and came to know about the patch[1]
> > > > which
> > > > seems to break the builder one after the other. They all ran
> > > > the
> > > > regression test for this specific change before going offline.
> > > > We suspect that this change do result in infinite loop of
> > > > processes
> > > > as we did not see any trace of error in the system logs.
> > > >
> > > > We did reboot all those builders and they all seem to be
> > > > running
> > > > fine now.
> > > >
> > >
> > > The question though is - what to do about the patch, if the patch
> > > itself is the root cause? Is this assigned to anyone to look
> > > into?
> >
> > We also pondered on wether we should protect the builder from that
> > kind
> > of issue. But since:
> > - we are not sure that the hypothesis is right
> > - any protection based on "limit the number of process" would
> > surely
> > sooner or later block legitimate tests, and requires adjustement
> > (and
> > likely investigation)
> >
> > we didn't choose to follow that road for now.
> >
>
> This is a good topic though. Is there any logical way to fence off
> the
> builders from noisy neighbors?
I am not sure I follow the question; what I had in mind was more to use a
regular ulimit to avoid the equivalent of a fork bomb (again, if the
hypothesis is the right one).
Since our builders run one job at a time, there are no noisy neighbor
issues; or rather, since that's AWS, we can't control anything regarding
contention of shared resources anyway.
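To make the ulimit idea concrete (a sketch under the assumption that the
runaway really is a fork loop; the limit value is a made-up example, not a
tuned number):

```shell
#!/bin/sh
# Illustrative: cap the number of processes a test session may spawn,
# so a fork loop fails quickly instead of taking the builder down.
# 512 is an example value only.
run_capped() {
    bash -c 'ulimit -u 512 && echo "nproc limit: $(ulimit -u)" && exec "$@"' _ "$@"
}

# e.g. run_capped ./run-tests.sh
```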
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
From bugzilla at redhat.com Thu Mar 7 05:24:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 07 Mar 2019 05:24:49 +0000
Subject: [Gluster-infra] [Bug 1686034] Request access to docker hub gluster
organisation.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686034
--- Comment #4 from Sridhar Seshasayee ---
Okay, thanks for the info and pointers. I will work with one of the developers
and get this done. This issue may be closed.
From bugzilla at redhat.com Thu Mar 7 06:54:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 07 Mar 2019 06:54:30 +0000
Subject: [Gluster-infra] [Bug 1685576] DNS delegation record for
rhhi-dev.gluster.org
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685576
--- Comment #4 from Rohan CJ ---
It's working now!
From dkhandel at redhat.com Thu Mar 7 09:46:07 2019
From: dkhandel at redhat.com (Deepshikha Khandelwal)
Date: Thu, 7 Mar 2019 15:16:07 +0530
Subject: [Gluster-infra] Upgrading build.gluster.org
Message-ID:
Hello,
I've planned an upgrade of build.gluster.org tomorrow morning, to install
the latest security updates for the Jenkins plugins.
I'll stop all running jobs and re-trigger them once the upgrade is done.
The downtime window will be:
UTC: 0330 to 0400
IST: 0900 to 0930
The outage is for 30 minutes. Please bear with us as we keep
build.gluster.org current with the latest plugins and fixes.
Thanks,
Deepshikha
From bugzilla at redhat.com Thu Mar 7 11:03:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 07 Mar 2019 11:03:32 +0000
Subject: [Gluster-infra] [Bug 1686371] New: Cleanup nigel access and
document it
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686371
Bug ID: 1686371
Summary: Cleanup nigel access and document it
Product: GlusterFS
Version: 4.1
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: mscherer at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Nigel Babu left the admin team as well as Red Hat. We should clean up and
remove his access, and document the process.
So far, here is what we have to do:
Access to remove:
- remove from github (group Github-organization-Admins)
- remove ssh keys in ansible
=> done
- remove alias from root on private repo
=> done
- remove alias from group_vars/nagios/admins.yml
=> done
- remove entry from jenkins (on https://build.gluster.org/configureSecurity/)
=> done
- remove from gerrit permission
=> TODO
- remove from gluster repo
=> edit ./MAINTAINERS
- remove from ec2
=> TODO
While on it, there are a few passwords and things to rotate:
- rotate the ansible ssh keys
=> done, but we need to write down the process (ideally, an Ansible playbook)
- change nagios password
=> TODO
- rotate the jenkins ssh keys
=> TODO, write a process
Maybe more needs to be done
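For the ssh key rotation items above, a first written-down sketch of the
process could be as simple as the following (illustrative only; real key
names and the deployment step live in the private ansible repo):

```shell
#!/bin/sh
# Sketch of the first half of a key rotation: generate a fresh keypair.
# Deploying it and retiring the old key would be an ansible playbook run.
set -eu
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -C "ansible-rotated-$(date +%Y%m%d)" \
    -f "$keydir/id_ed25519" >/dev/null
echo "new public key:"
cat "$keydir/id_ed25519.pub"
```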
From bugzilla at redhat.com Thu Mar 7 11:20:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 07 Mar 2019 11:20:17 +0000
Subject: [Gluster-infra] [Bug 1686371] Cleanup nigel access and document it
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686371
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22320
From bugzilla at redhat.com Thu Mar 7 11:20:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 07 Mar 2019 11:20:18 +0000
Subject: [Gluster-infra] [Bug 1686371] Cleanup nigel access and document it
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686371
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22320 (Remove Nigel, as he left the company)
posted (#1) for review on master by Michael Scherer
From bugzilla at redhat.com Thu Mar 7 12:28:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 07 Mar 2019 12:28:56 +0000
Subject: [Gluster-infra] [Bug 1686371] Cleanup nigel access and document it
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686371
--- Comment #2 from M. Scherer ---
Also, removed from jenkins-admins on GitHub.
From atumball at redhat.com Thu Mar 7 13:17:47 2019
From: atumball at redhat.com (Amar Tumballi Suryanarayan)
Date: Thu, 7 Mar 2019 18:47:47 +0530
Subject: [Gluster-infra] Lot of 'centos7-regression' failures
Message-ID:
And it is happening with 'failed to determine' in the job... anything
different in Jenkins?
Also happening with regression-full-run
Would be good to resolve sooner, so we can get in many patches which are
blocking releases.
-Amar
From mscherer at redhat.com Thu Mar 7 14:39:12 2019
From: mscherer at redhat.com (Michael Scherer)
Date: Thu, 07 Mar 2019 15:39:12 +0100
Subject: [Gluster-infra] Lot of 'centos7-regression' failures
In-Reply-To:
References:
Message-ID:
On Thursday 07 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan
wrote:
> And it is happening with 'failed to determine' the job... anything
> different in jenkins ?
No, we didn't touch Jenkins as far as I know, besides removing Nigel
from a group on GitHub this morning.
> Also happening with regression-full-run
>
> Would be good to resolve sooner, so we can get in many patches which
> are blocking releases.
Can you give a bit more information, like which execution exactly ?
For example:
https://build.gluster.org/job/regression-on-demand-full-run/255/ is
what you are speaking of ?
(as I do not see the exact string you pointed, I am not sure that's the
issue)
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
From atumball at redhat.com Thu Mar 7 14:40:04 2019
From: atumball at redhat.com (Amar Tumballi Suryanarayan)
Date: Thu, 7 Mar 2019 20:10:04 +0530
Subject: [Gluster-infra] Lot of 'centos7-regression' failures
In-Reply-To:
References:
Message-ID:
All recent failures (4-5 of them) at
https://build.gluster.org/job/regression-on-demand-full-run/
and centos7-regression ones like
https://build.gluster.org/job/centos7-regression/5048/console
On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer wrote:
> On Thursday 07 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan
> wrote:
> > And it is happening with 'failed to determine' the job... anything
> > different in jenkins ?
>
> No, we didn't touch to jenkins as far as I know, besides removing nigel
> from a group on a github this morning.
>
> > Also happening with regression-full-run
> >
> > Would be good to resolve sooner, so we can get in many patches which
> > are blocking releases.
>
> Can you give a bit more information, like which execution exactly ?
>
> For example:
> https://build.gluster.org/job/regression-on-demand-full-run/255/ is
> what you are speaking of ?
>
> (as I do not see the exact string you pointed, I am not sure that's the
> issue)
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
--
Amar Tumballi (amarts)
From dkhandel at redhat.com Thu Mar 7 14:42:26 2019
From: dkhandel at redhat.com (Deepshikha Khandelwal)
Date: Thu, 7 Mar 2019 20:12:26 +0530
Subject: [Gluster-infra] Lot of 'centos7-regression' failures
In-Reply-To:
References:
Message-ID:
Here is one of the console outputs which Amar is pointing to:
https://build.gluster.org/job/centos7-regression/5051/console
It showed up only on builder207, after the reboot we did yesterday.
On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer wrote:
> On Thursday 07 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan
> wrote:
> > And it is happening with 'failed to determine' the job... anything
> > different in jenkins ?
>
> No, we didn't touch to jenkins as far as I know, besides removing nigel
> from a group on a github this morning.
>
> > Also happening with regression-full-run
> >
> > Would be good to resolve sooner, so we can get in many patches which
> > are blocking releases.
>
> Can you give a bit more information, like which execution exactly ?
>
> For example:
> https://build.gluster.org/job/regression-on-demand-full-run/255/ is
> what you are speaking of ?
>
> (as I do not see the exact string you pointed, I am not sure that's the
> issue)
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
From mscherer at redhat.com Thu Mar 7 14:45:33 2019
From: mscherer at redhat.com (Michael Scherer)
Date: Thu, 07 Mar 2019 15:45:33 +0100
Subject: [Gluster-infra] Lot of 'centos7-regression' failures
In-Reply-To:
References:
Message-ID: <270cd30a9fda0aebd0b1438bc694f23e10f99d71.camel@redhat.com>
On Thursday 07 March 2019 at 20:12 +0530, Deepshikha Khandelwal wrote:
> Here is one of the console output which Amar is pointing to
> https://build.gluster.org/job/centos7-regression/5051/console
>
> It showed up after we did reboot yesterday only on builder207
OK, so let's put the node offline for now; the others should pick up the
work.
> On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer
> wrote:
>
> > On Thursday 07 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan
> > wrote:
> > > And it is happening with 'failed to determine' the job...
> > > anything
> > > different in jenkins ?
> >
> > No, we didn't touch to jenkins as far as I know, besides removing
> > nigel
> > from a group on a github this morning.
> >
> > > Also happening with regression-full-run
> > >
> > > Would be good to resolve sooner, so we can get in many patches
> > > which
> > > are blocking releases.
> >
> > Can you give a bit more information, like which execution exactly ?
> >
> > For example:
> > https://build.gluster.org/job/regression-on-demand-full-run/255/ is
> > what you are speaking of ?
> >
> > (as I do not see the exact string you pointed, I am not sure that's
> > the
> > issue)
> >
> > --
> > Michael Scherer
> > Sysadmin, Community Infrastructure and Platform, OSAS
> >
> >
> >
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
From mscherer at redhat.com Thu Mar 7 14:49:30 2019
From: mscherer at redhat.com (Michael Scherer)
Date: Thu, 07 Mar 2019 15:49:30 +0100
Subject: [Gluster-infra] Lot of 'centos7-regression' failures
In-Reply-To:
References:
Message-ID: <22694c31dc726863e81ed407e26b610a7d5911d4.camel@redhat.com>
On Thursday 07 March 2019 at 20:12 +0530, Deepshikha Khandelwal wrote:
> Here is one of the console output which Amar is pointing to
> https://build.gluster.org/job/centos7-regression/5051/console
>
> It showed up after we did reboot yesterday only on builder207
Seems 202 also has an issue.
> On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer
> wrote:
>
> > On Thursday 07 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan
> > wrote:
> > > And it is happening with 'failed to determine' the job...
> > > anything
> > > different in jenkins ?
> >
> > No, we didn't touch to jenkins as far as I know, besides removing
> > nigel
> > from a group on a github this morning.
> >
> > > Also happening with regression-full-run
> > >
> > > Would be good to resolve sooner, so we can get in many patches
> > > which
> > > are blocking releases.
> >
> > Can you give a bit more information, like which execution exactly ?
> >
> > For example:
> > https://build.gluster.org/job/regression-on-demand-full-run/255/ is
> > what you are speaking of ?
> >
> > (as I do not see the exact string you pointed, I am not sure that's
> > the
> > issue)
> >
> > --
> > Michael Scherer
> > Sysadmin, Community Infrastructure and Platform, OSAS
> >
> >
> >
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
From bugzilla at redhat.com Fri Mar 8 04:34:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 04:34:51 +0000
Subject: [Gluster-infra] [Bug 1686034] Request access to docker hub gluster
organisation.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686034
Sridhar Seshasayee changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |NOTABUG
Last Closed| |2019-03-08 04:34:51
From bugzilla at redhat.com Fri Mar 8 09:18:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 09:18:07 +0000
Subject: [Gluster-infra] [Bug 1686754] New: Requesting merge rights for
Cloudsync
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
Bug ID: 1686754
Summary: Requesting merge rights for Cloudsync
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: atumball at redhat.com
Reporter: spalai at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Requesting merge rights, as I am a maintainer for the Cloudsync xlator.
From bugzilla at redhat.com Fri Mar 8 09:21:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 09:21:36 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |medium
Assignee|atumball at redhat.com |dkhandel at redhat.com
Severity|unspecified |medium
--- Comment #1 from Amar Tumballi ---
https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L275
From bugzilla at redhat.com Fri Mar 8 11:23:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 11:23:16 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #2 from M. Scherer ---
Hi, can you explain a bit more what is missing? As I am not familiar with the
ACL system of Gerrit, I would like to understand the kind of access you want,
and for example who already has it, so I can see where this would be defined,
or something like this.
From bugzilla at redhat.com Fri Mar 8 11:40:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 11:40:42 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
--- Comment #3 from Susant Kumar Palai ---
(In reply to M. Scherer from comment #2)
> Hi, can you explain a bit more what is missing ? As i am not familliar with
Maintainer rights are missing. They give the ability to add +2 on a patch and
merge it as well.
> the ACL system of gerrit, I would like to understand the kind of access you
> want, and for example who have it already so I can see where this would be
You can look at Amar's profile.
> defined, or something like this.
From bugzilla at redhat.com Fri Mar 8 13:19:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 13:19:55 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
--- Comment #4 from M. Scherer ---
So Amar has a bit more access than most people, but I suspect that we want you
either in the github group glusterfs-maintainers or gluster-committers, based
on the project.config file that can be accessed using the meta/config branch,
according to
https://gerrit-review.googlesource.com/Documentation/access-control.html
I will add you to the group once I verify your github id (I see
https://github.com/spalai but since there is no information at all on the
profile, I can't be sure). I would also like to make sure folks with more
access to approve have 2FA turned on, so please take a look at
https://help.github.com/en/articles/about-two-factor-authentication
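For reference, the project.config mentioned above follows Gerrit's documented access-control format. A sketch of what such an ACL stanza looks like (the group names below are illustrative, not Gluster's actual configuration):

```shell
# Sketch: Gerrit keeps per-project ACLs in a project.config file on the
# special refs/meta/config branch. Against a real Gerrit remote you would
# fetch and inspect it, e.g.:
#   git fetch origin refs/meta/config && git show FETCH_HEAD:project.config
set -eu
# An example stanza granting review and submit (merge) rights to a group:
cat > project.config.example <<'EOF'
[access "refs/heads/*"]
    label-Code-Review = -2..+2 group glusterfs-maintainers
    submit = group glusterfs-maintainers
EOF
cat project.config.example
```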
From bugzilla at redhat.com Fri Mar 8 14:31:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 14:31:46 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
--- Comment #5 from Susant Kumar Palai ---
Michael, Is there something pending on me?
Susant
From bugzilla at redhat.com Fri Mar 8 14:36:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 14:36:24 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
--- Comment #6 from M. Scherer ---
Well, your GitHub account: you need to confirm that it is
https://github.com/spalai (which does not show much information such as name or
company, and our internal directory does not list that as your GitHub account,
so before granting privileges I prefer to have a confirmation).
Also, please enable 2FA.
From bugzilla at redhat.com Fri Mar 8 14:48:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 14:48:49 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
--- Comment #7 from Susant Kumar Palai ---
I doubt any maintainers are using two-factor authentication. Plus, I don't see
India listed for SMS-based 2FA.
Updated the bio as you asked.
From bugzilla at redhat.com Fri Mar 8 14:52:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 14:52:23 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
--- Comment #8 from M. Scherer ---
You can use a YubiKey with U2F, or any U2F-compliant device. You can also use
Google Authenticator or FreeOTP.
From bugzilla at redhat.com Fri Mar 8 15:41:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 08 Mar 2019 15:41:52 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
--- Comment #9 from M. Scherer ---
Also, I didn't ask you to change the bio, I just asked you to confirm that it
is your account. Just telling me "yes, that's my account" would have been
sufficient :/
From bugzilla at redhat.com Sat Mar 9 10:58:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 09 Mar 2019 10:58:56 +0000
Subject: [Gluster-infra] [Bug 1686371] Cleanup nigel access and document it
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686371
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22320 (Remove Nigel as requested by him)
merged (#3) on master by Nigel Babu
From nigel at nigelb.me Sun Mar 10 18:01:00 2019
From: nigel at nigelb.me (Nigel Babu)
Date: Sun, 10 Mar 2019 14:01:00 -0400
Subject: [Gluster-infra] Removing myself as maintainer
Message-ID:
Hello folks,
This change has gone through, but I wanted to let folks here know as well. I'm removing myself as maintainer from everything to reflect that I will no longer be the primary point of contact for any of the components I used to own.
However, I will still be around and contributing as I get time and energy.
--
nigelb
From bugzilla at redhat.com Mon Mar 11 12:51:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 11 Mar 2019 12:51:32 +0000
Subject: [Gluster-infra] [Bug 1686754] Requesting merge rights for Cloudsync
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1686754
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-11 12:51:32
--- Comment #10 from M. Scherer ---
So, that was unrelated to GitHub in the end. I made the change (in the Gerrit
UI), but I would still push folks to use 2FA as much as possible.
From dkhandel at redhat.com Tue Mar 12 08:55:52 2019
From: dkhandel at redhat.com (Deepshikha Khandelwal)
Date: Tue, 12 Mar 2019 14:25:52 +0530
Subject: [Gluster-infra] Softserve is up and running
Message-ID:
Hello folks,
Softserve is deployed again today on the AWS stack, to loan CentOS machines for
regression testing. I've tested it a few times today to confirm it works
as expected. In the past, Softserve[1] machines would be a clean CentOS 7
image. Now we have an AMI image with all the dependencies installed and
*almost* set up to run regressions. It just needs a few steps run on it,
and we have a simplified playbook that will run *just* those steps. The
instructions are on the Softserve wiki[2].
Please let us know if you face trouble by filing a bug.[3]
[1]: https://softserve.gluster.org/
[2]: https://github.com/gluster/softserve/wiki/Running-Regressions-on-loaned-Softserve-instances
[3]: https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=project-infrastructure
Thanks,
Deepshikha
From ksubrahm at redhat.com Tue Mar 12 10:44:39 2019
From: ksubrahm at redhat.com (Karthik Subrahmanya)
Date: Tue, 12 Mar 2019 16:14:39 +0530
Subject: [Gluster-infra] Smoke tests are failing
Message-ID:
Hi,
The recent patches are failing on smoke while trying to update the bugzilla
state, with the following error:
xmlrpclib.Fault:
Looks like we have been logged out. Can someone take a look?
From dkhandel at redhat.com Tue Mar 12 11:14:34 2019
From: dkhandel at redhat.com (Deepshikha Khandelwal)
Date: Tue, 12 Mar 2019 16:44:34 +0530
Subject: [Gluster-infra] Smoke tests are failing
In-Reply-To:
References:
Message-ID:
It is now fixed.
On Tue, Mar 12, 2019 at 4:15 PM Karthik Subrahmanya
wrote:
> Hi,
>
> The recent patches are failing on smoke while trying to update the
> bugzilla state, with the following error:
>
> xmlrpclib.Fault:
>
>
> Looks like we have been logged out. Can someone take a look?
>
> _______________________________________________
> Gluster-infra mailing list
> Gluster-infra at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-infra
From ksubrahm at redhat.com Tue Mar 12 11:17:21 2019
From: ksubrahm at redhat.com (Karthik Subrahmanya)
Date: Tue, 12 Mar 2019 16:47:21 +0530
Subject: [Gluster-infra] Smoke tests are failing
In-Reply-To:
References:
Message-ID:
Thanks Deepshikha for the quick turnaround.
On Tue, Mar 12, 2019 at 4:44 PM Deepshikha Khandelwal
wrote:
> It is now fixed.
>
> On Tue, Mar 12, 2019 at 4:15 PM Karthik Subrahmanya
> wrote:
>
>> Hi,
>>
>> The recent patches are failing on smoke while trying to update the
>> bugzilla state, with the following error:
>>
>> xmlrpclib.Fault:
>>
>>
>> Looks like we have been logged out. Can someone take a look?
>>
>> _______________________________________________
>> Gluster-infra mailing list
>> Gluster-infra at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-infra
>
>
From sankarshan.mukhopadhyay at gmail.com Wed Mar 13 02:52:57 2019
From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay)
Date: Wed, 13 Mar 2019 08:22:57 +0530
Subject: [Gluster-infra] What is the current state of the Glusto test
framework in upstream?
Message-ID:
What I am essentially looking to understand is whether there are
regular Glusto runs and whether the tests receive refreshes. However,
if there is no available Glusto service running upstream - that is a
whole new conversation.
From ykaul at redhat.com Wed Mar 13 09:33:21 2019
From: ykaul at redhat.com (Yaniv Kaul)
Date: Wed, 13 Mar 2019 11:33:21 +0200
Subject: [Gluster-infra] [automated-testing] What is the current state
of the Glusto test framework in upstream?
In-Reply-To:
References:
Message-ID:
On Wed, Mar 13, 2019, 3:53 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadhyay at gmail.com> wrote:
> What I am essentially looking to understand is whether there are
> regular Glusto runs and whether the tests receive refreshes. However,
> if there is no available Glusto service running upstream - that is a
> whole new conversation.
>
I'm* still trying to get it running properly on my simple Vagrant+Ansible
setup[1].
Right now I'm installing Gluster + Glusto + creating bricks, a pool, and a
volume in ~3m on my laptop.
Once I do get it fully working, we'll get to make it work faster, clean it
up, and see how we can get code coverage.
Unless there's an alternative to the whole framework that I'm not aware of?
Surely for most of the positive paths, we can (and perhaps should) use the
Gluster Ansible modules.
Y.
[1] https://github.com/mykaul/vg
* with an intern's help.
_______________________________________________
> automated-testing mailing list
> automated-testing at gluster.org
> https://lists.gluster.org/mailman/listinfo/automated-testing
>
From sankarshan.mukhopadhyay at gmail.com Wed Mar 13 10:07:44 2019
From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay)
Date: Wed, 13 Mar 2019 15:37:44 +0530
Subject: [Gluster-infra] [automated-testing] What is the current state
of the Glusto test framework in upstream?
In-Reply-To:
References:
Message-ID:
On Wed, Mar 13, 2019 at 3:03 PM Yaniv Kaul wrote:
> On Wed, Mar 13, 2019, 3:53 AM Sankarshan Mukhopadhyay wrote:
>>
>> What I am essentially looking to understand is whether there are
>> regular Glusto runs and whether the tests receive refreshes. However,
>> if there is no available Glusto service running upstream - that is a
>> whole new conversation.
>
>
> I'm* still trying to get it running properly on my simple Vagrant+Ansible setup[1].
> Right now I'm installing Gluster + Glusto + creating bricks, pool and a volume in ~3m on my latop.
>
This is good. I think my original question was to the maintainer(s) of
Glusto along with the individuals involved in the automated testing
part of Gluster to understand the challenges in deploying this for the
project.
> Once I do get it fully working, we'll get to make it work faster, clean it up and and see how can we get code coverage.
>
> Unless there's an alternative to the whole framework that I'm not aware of?
I haven't read anything to this effect on any list.
> Surely for most of the positive paths, we can (and perhaps should) use the the Gluster Ansible modules.
> Y.
>
> [1] https://github.com/mykaul/vg
> * with an intern's help.
From mscherer at redhat.com Wed Mar 13 15:32:07 2019
From: mscherer at redhat.com (Michael Scherer)
Date: Wed, 13 Mar 2019 16:32:07 +0100
Subject: [Gluster-infra] Smoke tests are failing
In-Reply-To:
References:
Message-ID:
On Tuesday 12 March 2019 at 16:44 +0530, Deepshikha Khandelwal wrote:
> It is now fixed.
This was likely caused by the Bugzilla upgrade, which logged the bot out.
To prevent that from happening, we now have a cron script that logs in
(every hour), and I am also moving the Bugzilla script node inside the
LAN, which means I am doing a few tests (the old one is untouched).
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
From bugzilla at redhat.com Thu Mar 14 07:34:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 14 Mar 2019 07:34:23 +0000
Subject: [Gluster-infra] [Bug 1685051] New Project create request
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685051
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |high
CC| |atumball at redhat.com
Severity|unspecified |medium
--- Comment #7 from Amar Tumballi ---
Amye,
> This is exactly what should be on gluster.org's blog!
I guess there were a lot of questions from many Gluster developers in the
office about where to write blogs, and hence the request for this.
> You write wherever you want, we can set WordPress to take Markdown with no issues.
> We should not be duplicating effort when gluster.org is a great platform to be able to create content on already.
> We should get a list of the people who want to write developer blogs and get them author accounts to publish directly on Gluster.org and publicize from there through social media.
What I like about the GitHub static-pages approach is that developers are used
to writing local Markdown (or the HackMD way of writing) and to the process of
doing a git push. This also allows some of us to proofread posts and merge
them. And considering there are already tools/themes available for this, it
shouldn't be hard to set up.
Whether this is going to be a long-term solution, I don't know, but my thought
was that having this option increases the likelihood of people posting.
From jholloway at redhat.com Wed Mar 13 14:14:12 2019
From: jholloway at redhat.com (Jonathan Holloway)
Date: Wed, 13 Mar 2019 09:14:12 -0500
Subject: [Gluster-infra] [automated-testing] What is the current state
of the Glusto test framework in upstream?
In-Reply-To:
References:
Message-ID:
On Wed, Mar 13, 2019 at 5:08 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadhyay at gmail.com> wrote:
> On Wed, Mar 13, 2019 at 3:03 PM Yaniv Kaul wrote:
> > On Wed, Mar 13, 2019, 3:53 AM Sankarshan Mukhopadhyay <
> sankarshan.mukhopadhyay at gmail.com> wrote:
> >>
> >> What I am essentially looking to understand is whether there are
> >> regular Glusto runs and whether the tests receive refreshes. However,
> >> if there is no available Glusto service running upstream - that is a
> >> whole new conversation.
> >
> >
> > I'm* still trying to get it running properly on my simple
> Vagrant+Ansible setup[1].
> > Right now I'm installing Gluster + Glusto + creating bricks, pool and a
> volume in ~3m on my latop.
> >
>
> This is good. I think my original question was to the maintainer(s) of
> Glusto along with the individuals involved in the automated testing
> part of Gluster to understand the challenges in deploying this for the
> project.
>
> > Once I do get it fully working, we'll get to make it work faster, clean
> it up and and see how can we get code coverage.
> >
> > Unless there's an alternative to the whole framework that I'm not aware
> of?
>
> I haven't read anything to this effect on any list.
>
>
This is cool. I haven't had a chance to give it a run on my laptop, but it
looked good.
Are you running into issues with Glusto, glusterlibs, and/or Glusto-tests?
I was using the glusto-tests container to run tests locally and for BVT in
the lab.
I was running against lab VMs, so looking forward to giving the vagrant
piece a go.
By upstream service are we talking about the Jenkins in the CentOS
environment, etc?
@Vijay Bhaskar Reddy Avuthu @Akarsha Rai
any insight?
Cheers,
Jonathan
> Surely for most of the positive paths, we can (and perhaps should) use
> the the Gluster Ansible modules.
> > Y.
> >
> > [1] https://github.com/mykaul/vg
> > * with an intern's help.
> _______________________________________________
> automated-testing mailing list
> automated-testing at gluster.org
> https://lists.gluster.org/mailman/listinfo/automated-testing
>
From bugzilla at redhat.com Mon Mar 18 12:10:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 18 Mar 2019 12:10:39 +0000
Subject: [Gluster-infra] [Bug 1689905] New: gd2 smoke job aborts on timeout
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1689905
Bug ID: 1689905
Summary: gd2 smoke job aborts on timeout
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Severity: high
Assignee: bugs at gluster.org
Reporter: ykaul at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
From https://build.gluster.org/job/gd2-smoke/4762/console :
Installing vendored packages
13:25:04 Build timed out (after 30 minutes). Marking the build as aborted.
From bugzilla at redhat.com Wed Mar 20 06:40:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 20 Mar 2019 06:40:52 +0000
Subject: [Gluster-infra] [Bug 1689905] gd2 smoke job aborts on timeout
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1689905
Deepshikha khandelwal changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
CC| |dkhandel at redhat.com
Resolution|--- |NOTABUG
Last Closed| |2019-03-20 06:40:52
--- Comment #1 from Deepshikha khandelwal ---
It is now fixed.
From bugzilla at redhat.com Wed Mar 20 15:15:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 20 Mar 2019 15:15:25 +0000
Subject: [Gluster-infra] [Bug 1685576] DNS delegation record for
rhhi-dev.gluster.org
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1685576
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-20 15:15:25
From bugzilla at redhat.com Thu Mar 21 13:24:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 21 Mar 2019 13:24:20 +0000
Subject: [Gluster-infra] [Bug 1691357] New: core archive link from
regression jobs throw not found error
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691357
Bug ID: 1691357
Summary: core archive link from regression jobs throw not found
error
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: amukherj at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
If I try to download the core files from
https://build.gluster.org/job/centos7-regression/5193/console , it points me
to https://logs.aws.gluster.org/centos7-regression-5193.tgz , but that link
doesn't exist.
From bugzilla at redhat.com Thu Mar 21 13:24:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 21 Mar 2019 13:24:33 +0000
Subject: [Gluster-infra] [Bug 1691357] core archive link from regression
jobs throw not found error
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691357
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Severity|unspecified |urgent
From bugzilla at redhat.com Thu Mar 21 13:59:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 21 Mar 2019 13:59:38 +0000
Subject: [Gluster-infra] [Bug 1691357] core archive link from regression
jobs throw not found error
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691357
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #1 from M. Scherer ---
I see a tar.gz on https://build.gluster.org/job/centos7-regression/5193/, and
there is a 450 MB archive, so where does it point you to logs.aws?
From bugzilla at redhat.com Fri Mar 22 05:37:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 22 Mar 2019 05:37:05 +0000
Subject: [Gluster-infra] [Bug 1691617] New: clang-scan tests are failing
nightly.
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691617
Bug ID: 1691617
Summary: clang-scan tests are failing nightly.
Product: GlusterFS
Version: 4.1
Status: NEW
Component: project-infrastructure
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: atumball at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
https://build.gluster.org/job/clang-scan/641/console seems to have been
failing for the last 20 days.
Version-Release number of selected component (if applicable):
master
From bugzilla at redhat.com Fri Mar 22 06:30:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 22 Mar 2019 06:30:19 +0000
Subject: [Gluster-infra] [Bug 1663780] On docs.gluster.org,
we should convert spaces in folder or file names to 301 redirects
to hyphens
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663780
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |atumball at redhat.com
--- Comment #1 from Amar Tumballi ---
Team, can we consider picking this up? This change is blocking the merge of
the above patch.
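The requested mapping is mechanical; a rough sketch of the rewrite logic
(illustrative shell only, not the actual docs.gluster.org server
configuration, and `redirect_target` is a made-up helper name):

```shell
# Sketch of the requested mapping: spaces in a docs URL path become
# hyphens, served as a 301. Illustrative only; the real rule would live
# in the web server configuration for docs.gluster.org.
redirect_target() {
    # Decode %20 sequences back to spaces first.
    decoded=$(printf '%s' "$1" | sed 's/%20/ /g')
    case "$decoded" in
        # Any space left in the path: emit a 301 to the hyphenated path.
        *" "*) printf '301 %s\n' "$(printf '%s' "$decoded" | tr ' ' '-')" ;;
        # No spaces: serve the page as-is.
        *)     printf 'no-redirect %s\n' "$decoded" ;;
    esac
}

redirect_target "/Administrator%20Guide/overview.html"
redirect_target "/Quick-Start-Guide/"
```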
From bugzilla at redhat.com Fri Mar 22 09:19:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 22 Mar 2019 09:19:32 +0000
Subject: [Gluster-infra] [Bug 1691617] clang-scan tests are failing nightly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691617
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #1 from M. Scherer ---
Yep, the builder depends on F27, which is now EOL and so was removed from the
mock config. One small step is to update it to F29 (so we have a year before
it fails like this again), but this will bring new tests and maybe new
failures to fix.
From bugzilla at redhat.com Fri Mar 22 14:15:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 22 Mar 2019 14:15:06 +0000
Subject: [Gluster-infra] [Bug 1691789] New: rpc-statd service stops on AWS
builders
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691789
Bug ID: 1691789
Summary: rpc-statd service stops on AWS builders
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: dkhandel at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
On AWS builders, the rpc-statd service stops abruptly, causing jobs to fail.
There is a workaround for this, but it needs more investigation.
One such example: https://build.gluster.org/job/centos7-regression/5208/
From bugzilla at redhat.com Mon Mar 25 12:32:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 12:32:27 +0000
Subject: [Gluster-infra] [Bug 1692349] New: gluster-csi-containers job is
failing
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692349
Bug ID: 1692349
Summary: gluster-csi-containers job is failing
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: dkhandel at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The gluster-csi-containers nightly Jenkins job has been failing for a long
time because of no space left on the device. This job is meant to build
gluster-csi containers and push them to Docker Hub.
https://build.gluster.org/job/gluster-csi-containers/200/console
Do we still need this job, or can we delete it?
From bugzilla at redhat.com Mon Mar 25 16:30:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:19 +0000
Subject: [Gluster-infra] [Bug 1564149] Agree upon a coding standard,
and automate check for this in smoke
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1564149
Shyamsundar changed:
What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version|glusterfs-5.0 |glusterfs-6.0
--- Comment #45 from Shyamsundar ---
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-6.0, please open a new bug report.
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for
several distributions should become available in the near future. Keep an eye
on the Gluster Users mailinglist [2] and the update infrastructure for your
distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/
From bugzilla at redhat.com Mon Mar 25 16:31:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:10 +0000
Subject: [Gluster-infra] [Bug 1634102] MAINTAINERS: Add sunny kumar as a
peer for snapshot component
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1634102
Shyamsundar changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:10
--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-6.0, please open a new bug report.
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for
several distributions should become available in the near future. Keep an eye
on the Gluster Users mailinglist [2] and the update infrastructure for your
distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/
From bugzilla at redhat.com Tue Mar 26 15:51:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 26 Mar 2019 15:51:46 +0000
Subject: [Gluster-infra] [Bug 1692879] New: Wrong Youtube link in website
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692879
Bug ID: 1692879
Summary: Wrong Youtube link in website
Product: GlusterFS
Version: mainline
Status: NEW
Component: website
Assignee: bugs at gluster.org
Reporter: avishwan at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The Gluster website (https://www.gluster.org/) links the YouTube channel as
https://www.youtube.com/channel/UC8OSwywy18VtzRXm036j5qA but all our podcasts
are published under https://www.youtube.com/user/GlusterCommunity
Please change the link on the website to
https://www.youtube.com/user/GlusterCommunity
From bugzilla at redhat.com Wed Mar 27 13:26:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 27 Mar 2019 13:26:11 +0000
Subject: [Gluster-infra] [Bug 1693295] New: rpc.statd not started on
builder204.aws.gluster.org
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693295
Bug ID: 1693295
Summary: rpc.statd not started on builder204.aws.gluster.org
Product: GlusterFS
Version: 4.1
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: nbalacha at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
https://build.gluster.org/job/centos7-regression/5244/ fails with:
11:59:01 mount.nfs: rpc.statd is not running but is required for remote
locking.
11:59:01 mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
11:59:01 mount.nfs: an incorrect mount option was specified
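For reference, the two remedies the error message itself names would look
roughly like this on a builder (a sketch only; the unit name, export path and
mount point are assumed, not taken from the actual machines):

```shell
# Sketch, not verified on the builders: either restart statd so remote
# NFS locking works again...
systemctl start rpc-statd
# ...or keep locks local for the test mount, as the message suggests:
mount -t nfs -o nolock server:/export /mnt/nfs
```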
From bugzilla at redhat.com Wed Mar 27 16:38:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 27 Mar 2019 16:38:38 +0000
Subject: [Gluster-infra] [Bug 1693295] rpc.statd not started on
builder204.aws.gluster.org
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693295
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #1 from M. Scherer ---
So, it fails because the network service didn't return correctly, but I can't
find why this happens. I may just reboot it after the test finishes.
From bugzilla at redhat.com Wed Mar 27 17:19:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 27 Mar 2019 17:19:30 +0000
Subject: [Gluster-infra] [Bug 1692879] Wrong Youtube link in website
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692879
Amye Scavarda changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
CC| |amye at redhat.com
Resolution|--- |UPSTREAM
Last Closed| |2019-03-27 17:19:30
--- Comment #2 from Amye Scavarda ---
Either way, resolved!
From bugzilla at redhat.com Wed Mar 27 17:35:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 27 Mar 2019 17:35:16 +0000
Subject: [Gluster-infra] [Bug 1693385] New: request to change the version of
fedora in fedora-smoke-job
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693385
Bug ID: 1693385
Summary: request to change the version of fedora in
fedora-smoke-job
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: atumball at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
There are at least 2 jobs which use 'fedora' while running smoke.
https://build.gluster.org/job/devrpm-fedora/ &&
https://build.gluster.org/job/fedora-smoke/
I guess we are running Fedora 28 in both of these; it would be good to update
them to a higher version, say F29 (and soon F30).
Version-Release number of selected component (if applicable):
master
Additional info:
It would be good to remove '--enable-debug' on some of these jobs (there are
2 smoke and 4 RPM build jobs). We should remove --enable-debug on at least 1
of them, so our release RPMs, which have no DEBUG defined, can be
warning-free.
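To illustrate, the difference between the two build flavours being asked for
is just the configure invocation (a sketch of the job's build step; the
actual job scripts are not shown here):

```shell
# Current debug flavour: DEBUG is defined, so some warnings never surface.
./autogen.sh && ./configure --enable-debug && make
# Proposed flavour for at least one job: build the way release RPMs are
# built, so warnings that appear only without DEBUG fail smoke early.
./autogen.sh && ./configure && make
```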
From bugzilla at redhat.com Wed Mar 27 17:48:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 27 Mar 2019 17:48:25 +0000
Subject: [Gluster-infra] [Bug 1693385] request to change the version of
fedora in fedora-smoke-job
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693385
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #1 from M. Scherer ---
So, that requires upgrading the builders (or reinstalling them); I think it
would be better to wait for F30 so we do it only once.
From guy.boisvert at ingtegration.com Wed Mar 27 19:19:36 2019
From: guy.boisvert at ingtegration.com (Guy Boisvert)
Date: Wed, 27 Mar 2019 15:19:36 -0400
Subject: [Gluster-infra] Gluster HA
Message-ID: <7fee8396-f524-68ef-3eab-e7c5461c9bbd@ingtegration.com>
Hi,
New to this mailing list. I'm seeking people's advice for GlusterFS
HA in the context of KVM virtual machine (VM) storage. We have 3 KVM
servers that use 3 GlusterFS nodes. The volumes are 3-way replicated.
My question is: what is your network architecture / setup
for GlusterFS HA? I read many articles on the internet. Many people are
talking about bonding to a switch, but I don't consider this a good
solution. I'd like to have the Gluster and KVM servers linked to at least 2
switches, for switch / wire and network card redundancy.
I saw people using 2 dumb switches with bonding mode 6 on their
servers with MII monitoring. It seems about right, but it could
happen that MII is up while frames / packets won't flow. In that case,
I can't imagine how the servers would handle it.
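For reference, a mode 6 setup of the kind described above is usually declared
like this on EL-family hosts (an illustrative config fragment; the device
name and addresses are assumed):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-alb miimon=100"   # mode 6, with MII link monitoring
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
ONBOOT=yes
```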
Another setup is dual dumb switches and running Quagga on the
servers (OSPF / ECMP). This seems to be the best setup; what do you
think? Do you have experience with one of those setups? What are your
thoughts on this? And lastly, how can I search the list?
Thanks!
Guy
--
Guy Boisvert, ing.
IngTegration inc.
http://www.ingtegration.com
https://www.linkedin.com/pub/guy-boisvert/7/48/899/fr
CONFIDENTIALITY NOTICE : Proprietary/Confidential Information
belonging to IngTegration Inc. and its affiliates may be
contained in this message. If you are not a recipient
indicated or intended in this message (or responsible for
delivery of this message to such person), or you think for
any reason that this message may have been addressed to you
in error, you may not use or copy or deliver this message to
anyone else. In such case, you should destroy this message
and are asked to notify the sender by reply email.
From sankarshan.mukhopadhyay at gmail.com Thu Mar 28 02:27:45 2019
From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay)
Date: Thu, 28 Mar 2019 07:57:45 +0530
Subject: [Gluster-infra] On the topic of building packages and adding new
members
Message-ID:
I am posting this first to the infra list because we will need to have
input on how we are placed to gain benefit from maximum possible
automation and implementation of any gaps.
For the longest period of time Kaleb has been running the scripts and
doing the maintainer work (including keeping SPEC files and such in
sync) towards making packages available for Gluster. He ensures that
we get to have packages in the Fedora and CentOS (SIG) repositories, the
Ubuntu, Debian and Suse package distribution channels, as well as on
download.gluster.org.
Recently, Kaleb has urged us to find ways to lessen his work and add
new members to this task. To this end, he intends to make available
through the gluster.org assets, the scripts and know-how/documentation
to make such builds.
What I'd want to kick off is an assessment of how much of this is
scripted/automated, and how much requires a human to watch for and take
corrective action on failures. Also, how can we use available project
assets, e.g. machine instances, if required, to generate the packages as
before? I am not so sure whether we have enough knowledge to
build Debian packages (packages for Ubuntu and Suse are built using
the project's own build systems).
We'd really like to have another individual shadowing Kaleb for the
upcoming 6.1 release cycle.
From mscherer at redhat.com Thu Mar 28 10:05:31 2019
From: mscherer at redhat.com (Michael Scherer)
Date: Thu, 28 Mar 2019 11:05:31 +0100
Subject: [Gluster-infra] Gluster HA
In-Reply-To: <7fee8396-f524-68ef-3eab-e7c5461c9bbd@ingtegration.com>
References: <7fee8396-f524-68ef-3eab-e7c5461c9bbd@ingtegration.com>
Message-ID: <5ec0e78b5b2c3c592e5e152171ca3791f674132d.camel@redhat.com>
On Wednesday, 27 March 2019 at 15:19 -0400, Guy Boisvert wrote:
> Hi,
Hi Guy,
> New to this mailing list. I'm seeking people advice for GlusterFS
> HA in the context of KVM Virtual Machines (VM) storage. We have 3 x
> KVM
> servers that use a 3 x GlusterFS nodes. The Volumes are 3 way
> replicate.
>
> My question is: You guys, what is your network architecture /
> setup
> for GlusterFS HA? I read many articles on the internet. Many people
> are
> talking about bonding to a switch but i don't consider this as a
> good
> solution. I'd like to have Gluster and KVM servers linked to at
> least 2
> switches to have switch / wire and network car redundancy.
>
> I saw people using 2 x dumb switches with bonding mode 6 on
> their
> servers with mii monitoring. It seems to be about good but it could
> append that mii is up but frames / packets won't flow. So it this
> case,
> i can't imagine how the servers would handle this.
>
> Another setup is dual dumb switches and running Quagga on the
> servers (OSPF / ECMP). This seems to be the best setup, what do you
> think? Do you have experience with one of those setups? What are
> your
> thoughts on this? Ah and lastly, how can i search in the list?
I think this list is not what you are looking for; it is for discussing
the gluster.org infrastructure of the project itself. You might
have better luck asking technical questions on the gluster-users list:
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
From bugzilla at redhat.com Thu Mar 28 15:09:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 28 Mar 2019 15:09:47 +0000
Subject: [Gluster-infra] [Bug 1693385] request to change the version of
fedora in fedora-smoke-job
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693385
--- Comment #2 from Niels de Vos ---
(In reply to Amar Tumballi from comment #0)
> Description of problem:
...
> Would be good to remove '--enable-debug' in these builds on some jobs (there
> are 2 smoke, and 4 rpm build jobs). We should remove --enable-debug in at
> least 1 of these, so our release RPMs which has no DEBUG defined, can be
> warning free.
I do not think these jobs are used for the RPMs that get marked as 'released'
and land on download.gluster.org.
From bugzilla at redhat.com Thu Mar 28 15:26:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 28 Mar 2019 15:26:24 +0000
Subject: [Gluster-infra] [Bug 1693385] request to change the version of
fedora in fedora-smoke-job
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693385
--- Comment #3 from Amar Tumballi ---
Agreed. I was asking for a job without DEBUG mainly because, at times, there
may be warnings that appear only when DEBUG is not defined during compilation
(ref: https://review.gluster.org/22347 && https://review.gluster.org/22389 ).
As I had --enable-debug while testing locally, I never saw the warning, and
none of the smoke tests captured the error. If we had a job without
--enable-debug, we would have seen the warning while compiling, which would
have failed smoke.
From bugzilla at redhat.com Sat Mar 30 07:34:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 30 Mar 2019 07:34:37 +0000
Subject: [Gluster-infra] [Bug 1694291] New: Smoke test build artifacts do
not contain gluster logs
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694291
Bug ID: 1694291
Summary: Smoke test build artifacts do not contain gluster logs
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Severity: medium
Assignee: bugs at gluster.org
Reporter: ykaul at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
See for example https://build.gluster.org/job/smoke/48042/
The build artifacts do not contain the Gluster logs (the /var/log/glusterfs/*
contents)
From ykaul at redhat.com Sun Mar 31 12:59:50 2019
From: ykaul at redhat.com (Yaniv Kaul)
Date: Sun, 31 Mar 2019 15:59:50 +0300
Subject: [Gluster-infra] [automated-testing] What is the current state
of the Glusto test framework in upstream?
In-Reply-To:
References:
Message-ID:
On Wed, Mar 13, 2019 at 4:14 PM Jonathan Holloway
wrote:
>
>
> On Wed, Mar 13, 2019 at 5:08 AM Sankarshan Mukhopadhyay <
> sankarshan.mukhopadhyay at gmail.com> wrote:
>
>> On Wed, Mar 13, 2019 at 3:03 PM Yaniv Kaul wrote:
>> > On Wed, Mar 13, 2019, 3:53 AM Sankarshan Mukhopadhyay <
>> sankarshan.mukhopadhyay at gmail.com> wrote:
>> >>
>> >> What I am essentially looking to understand is whether there are
>> >> regular Glusto runs and whether the tests receive refreshes. However,
>> >> if there is no available Glusto service running upstream - that is a
>> >> whole new conversation.
>> >
>> >
>> > I'm* still trying to get it running properly on my simple
>> Vagrant+Ansible setup[1].
>> > Right now I'm installing Gluster + Glusto + creating bricks, pool and a
>> volume in ~3m on my latop.
>> >
>>
>> This is good. I think my original question was to the maintainer(s) of
>> Glusto along with the individuals involved in the automated testing
>> part of Gluster to understand the challenges in deploying this for the
>> project.
>>
>> > Once I do get it fully working, we'll get to make it work faster, clean
>> it up and and see how can we get code coverage.
>> >
>> > Unless there's an alternative to the whole framework that I'm not aware
>> of?
>>
>> I haven't read anything to this effect on any list.
>>
>>
> This is cool. I haven't had a chance to give it a run on my laptop, but it
> looked good.
> Are you running into issues with Glusto, glusterlibs, and/or Glusto-tests?
>
All of the above.
- The client at times consumes 100% CPU; not sure why.
- There are missing deps, which I'm reverse engineering from the Gluster CI
(which itself has some strange deps: why do we need python-docx?)
- I'm failing the cvt test, in test_shrinking_volume_when_io_in_progress,
with the error:
AssertionError: IO failed on some of the clients
I had hoped it would give me a bit more of a hint:
- which clients? (I happen to have one, so that's easy)
- What IO workload?
- What error?
- I hope there's a mode that does NOT perform cleanup/teardown, so it's
easier to look at the issue at hand.
- From glustomain.log, I can see:
2019-03-31 12:56:00,627 INFO (validate_io_procs) Validating IO on
192.168.250.10:/mnt/testvol_distributed-replicated_cifs
2019-03-31 12:56:00,627 INFO (_log_results) RETCODE (
root at 192.168.250.10): 1
2019-03-31 12:56:00,628 INFO (_log_results) STDOUT (
root at 192.168.250.10)...
Starting File/Dir Ops: 12:55:27:PM:Mar_31_2019
Unable to create dir '/mnt/testvol_distributed-replicated_cifs/user6' :
Invalid argument
Unable to create dir '/mnt/testvol_distributed-replicated_cifs/user6/dir0'
: Invalid argument
Unable to create dir
'/mnt/testvol_distributed-replicated_cifs/user6/dir0/dir0' : Invalid
argument
Unable to create dir
'/mnt/testvol_distributed-replicated_cifs/user6/dir0/dir1' : Invalid
argument
Unable to create dir '/mnt/testvol_distributed-replicated_cifs/user6/dir1'
: Invalid argument
Unable to create dir
'/mnt/testvol_distributed-replicated_cifs/user6/dir1/dir0' : Invalid
argument
Right now I'm assuming something's wrong with my setup. Unclear what, yet.
> I was using the glusto-tests container to run tests locally and for BVT in
> the lab.
> I was running against lab VMs, so looking forward to giving the vagrant
> piece a go.
>
> By upstream service are we talking about the Jenkins in the CentOS
> environment, etc?
>
Yes.
Y.
@Vijay Bhaskar Reddy Avuthu @Akarsha Rai
> any insight?
>
> Cheers,
> Jonathan
>
> > Surely for most of the positive paths, we can (and perhaps should) use
>> the the Gluster Ansible modules.
>> > Y.
>> >
>> > [1] https://github.com/mykaul/vg
>> > * with an intern's help.
>> _______________________________________________
>> automated-testing mailing list
>> automated-testing at gluster.org
>> https://lists.gluster.org/mailman/listinfo/automated-testing
>>
> _______________________________________________
> automated-testing mailing list
> automated-testing at gluster.org
> https://lists.gluster.org/mailman/listinfo/automated-testing
>