From mscherer at redhat.com Fri Aug 2 22:03:33 2019
From: mscherer at redhat.com (Michael Scherer)
Date: Sat, 03 Aug 2019 00:03:33 +0200
Subject: [Gluster-infra] Jenkins test server is down
Message-ID: <1f4bceea34e9ff162a2123e86f027028546fa901.camel@redhat.com>

Hi,

TLDR: An unused test server was running a cryptominer. Nothing was lost; we stopped the server and will wipe and reinstall it.

TSWTR: On Monday, I found out that due to neglect (aka, we hadn't upgraded the plugins), the staging instance of Jenkins had been compromised, likely during a wide-scale attack (https://isc.sans.edu/diary/rss/24916).

Upon seeing a weird process running under the jenkins account, I immediately suspended the server and contacted our security team. After doing a bit of forensics with volatility, guestfs and radare2, I concluded that nothing was accessed but CPU time, that the server was running a Monero miner, and that it had been compromised for more than 2 months (our logs on that server do not go back far enough in time).

I also found that no one was using the server, since it had been down for a whole month before being restarted after a Jenkins upgrade. The server was just there to test packages, plugins and configuration without touching prod; it is basically a sandbox, and after Nigel left, it was left to rot. While we do automated upgrades of all packages, the Jenkins plugins were not upgraded, so they were old. One in particular was Script Security, which was lagging far behind (version 1.29, roughly 2 years old), and that's what we use to mitigate CVE-2018-1000861.
There have been several "sandbox bypass" problems since the end of 2018, and to this day we still see attempts on the production server (which are blocked, because it is kept up to date):

https://jenkins.io/security/advisory/2019-01-08/
https://jenkins.io/security/advisory/2019-02-19/
https://jenkins.io/security/advisory/2019-01-28/
https://jenkins.io/security/advisory/2019-03-25/
https://jenkins.io/security/advisory/2018-10-29/

For people who want more information on this type of attack, it is explained here:

https://blog.orange.tw/2019/01/hacking-jenkins-part-1-play-with-dynamic-routing.html
https://blog.orange.tw/2019/02/abusing-meta-programming-for-unauthenticated-rce.html
https://0xdf.gitlab.io/2019/02/27/playing-with-jenkins-rce-vulnerability.html
https://github.com/adamyordan/cve-2019-1003000-jenkins-rce-poc

Since we saw no SELinux violations in the logs (nor anything utterly suspicious), the process wasn't really trying to hide itself, the malware was connected to a Monero pool with a signature matching a Monero miner, and everything was running under the jenkins account, we have no reason to think anything else happened. While the malware does try to hide itself if it manages to get root access with sudo or the like, it didn't get that far, and we didn't find any suspect process in memory.
And since the server was only minimally configured (no ssh keys, old node names from the Rackspace days, no Jenkins secrets, no users but a few local ones), I think nothing but mining happened (and I suspect the miner wasn't even efficient, since the only thing I see in the graph is Jenkins having a problem for 1 month, and I also see 2 segfaults in the log from a process related to the malware):

https://munin.gluster.org/munin/rht.gluster.org/jenkins-stage.rht.gluster.org/processes.html

I suspect the attack happened around the 8th of May:

https://munin.gluster.org/munin/rht.gluster.org/jenkins-stage.rht.gluster.org/load.html

While attacked on a regular basis, production wasn't impacted, because Deepshika took care of keeping the plugins up to date, and we have automation for the rest (and proper monitoring).

So, we are going to erase the server and reinstall it from scratch. This will also be an opportunity to automate the deployment and configuration further, based on Groovy scripts run at Jenkins startup (which is why I had connected to the staging instance: to test how that works, once I found out we could do that).

https://brokenco.de/2017/07/24/groovy-automation-for-jenkins.html

We also identified a way to automate the plugin upgrades, using Ansible:

https://docs.ansible.com/ansible/latest/modules/jenkins_plugin_module.html

And I will also place it on the internal LAN, where access to the external network is strictly controlled (firewall, proxy, DNS logging, etc).

Also, I will be out until the 19th, for vacation and for Flock. In case of emergency, do as usual: don't panic.

--
Michael Scherer
Sysadmin, Community Infrastructure

-------------- next part --------------
A non-text attachment was scrubbed...
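[Editor's note: the plugin-upgrade automation mentioned above could be sketched with Ansible's jenkins_plugin module roughly as below. This is a sketch only — the plugin list, controller URL and password variable are assumptions, not the actual gluster.org configuration.]

```yaml
# Sketch: keep a few security-sensitive Jenkins plugins at their latest
# version. Plugin names, URL and credentials are placeholders.
- hosts: jenkins
  tasks:
    - name: Upgrade Jenkins plugins to the latest version
      jenkins_plugin:
        name: "{{ item }}"
        state: latest
        url: http://localhost:8080
        url_username: admin
        url_password: "{{ jenkins_admin_password }}"
      loop:
        - script-security
        - matrix-project
      notify: restart jenkins

  handlers:
    - name: restart jenkins
      service:
        name: jenkins
        state: restarted
```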
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: This is a digitally signed message part
URL:

From bugzilla at redhat.com Tue Aug 6 10:38:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 10:38:09 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

hari gowtham changed:

           What    |Removed |Added
----------------------------------------------------------------------------
              Flags|        |needinfo?(mscherer at redhat.com)

--- Comment #9 from hari gowtham ---
Hi Misc,

Can you please create the machines as mentioned above, so we can set them up?

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 8 07:13:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 08 Aug 2019 07:13:45 +0000
Subject: [Gluster-infra] [Bug 1738778] New: Unable to setup softserve VM
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1738778

            Bug ID: 1738778
           Summary: Unable to setup softserve VM
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: project-infrastructure
          Assignee: bugs at gluster.org
          Reporter: ravishankar at redhat.com
                CC: bugs at gluster.org, gluster-infra at gluster.org
  Target Milestone: ---
    Classification: Community

Description of problem:
After creating a VM from https://softserve.gluster.org/dashboard, when I try to use https://github.com/gluster/softserve/wiki/Running-Regressions-on-loaned-Softserve-instances, it doesn't connect to the VM. This is not just me; I believe even Sac tried it out on his setup and saw the same issue today.
When I run `ansible-playbook -v -i inventory regressions-final.yml --become -u centos`, I get:

TASK [Gathering Facts] *********************************************************
fatal: [builder555.cloud.gluster.org]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host", "unreachable": true}

PLAY RECAP *********************************************************************
builder555.cloud.gluster.org : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0

I believe this is an infra issue.

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 8 10:43:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 08 Aug 2019 10:43:44 +0000
Subject: [Gluster-infra] [Bug 1738778] Unable to setup softserve VM
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1738778

Deepshikha khandelwal changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |dkhandel at redhat.com

--- Comment #1 from Deepshikha khandelwal ---
It's working fine for me.

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 8 11:08:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 08 Aug 2019 11:08:14 +0000
Subject: [Gluster-infra] [Bug 1738778] Unable to setup softserve VM
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1738778

--- Comment #2 from Ravishankar N ---
I'm trying this on Fedora 30.
Here is the verbose output if it helps. I can ssh into the VM as centos user just fine. --------------------------------------------------------------------------------------------------------------------------------- fatal: [builder500.cloud.gluster.org]: UNREACHABLE! => { "changed": false, "msg": "Failed to connect to the host via ssh: OpenSSH_8.0p1, OpenSSL 1.1.1c FIPS 28 May 2019\r\ndebug1: Reading configuration data /home/ravi/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 51: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 'final all' host 18.219.69.93 originally 18.219.69.93\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 3: not matched 'final'\r\ndebug2: match not found\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1 (parse only)\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug3: gss kex names ok: [gss-gex-sha1-,gss-group14-sha1-,gss-group1-sha1-]\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256 at libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1]\r\ndebug1: configuration requests final Match pass\r\ndebug2: resolve_canonicalize: hostname 18.219.69.93 is address\r\ndebug1: re-parsing configuration\r\ndebug1: Reading configuration data /home/ravi/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 51: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 
'final all' host 18.219.69.93 originally 18.219.69.93\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 3: matched 'final'\r\ndebug2: match found\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug3: gss kex names ok: [gss-gex-sha1-,gss-group14-sha1-,gss-group1-sha1-]\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256 at libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1]\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/ravi/.ansible/cp/d35c8610b6\" does not exist\r\ndebug1: Executing proxy command: exec ssh -q -W 18.219.69.93:22 root at logs.aws.gluster.org\r\ndebug3: timeout: 10000 ms remain after connect\r\ndebug1: identity file /home/ravi/.ssh/id_rsa type 0\r\ndebug1: identity file /home/ravi/.ssh/id_rsa-cert type -1\r\ndebug1: identity file /home/ravi/.ssh/id_dsa type -1\r\ndebug1: identity file /home/ravi/.ssh/id_dsa-cert type -1\r\ndebug1: identity file /home/ravi/.ssh/id_ecdsa type -1\r\ndebug1: identity file /home/ravi/.ssh/id_ecdsa-cert type -1\r\ndebug1: identity file /home/ravi/.ssh/id_ed25519 type -1\r\ndebug1: identity file /home/ravi/.ssh/id_ed25519-cert type -1\r\ndebug1: identity file /home/ravi/.ssh/id_xmss type -1\r\ndebug1: identity file /home/ravi/.ssh/id_xmss-cert type -1\r\ndebug1: Local version string SSH-2.0-OpenSSH_8.0\r\nkex_exchange_identification: Connection closed by remote host", "unreachable": true } PLAY RECAP 
**********************************************************************
builder500.cloud.gluster.org : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
---------------------------------------------------------------------------------------------------------------------------------

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Aug 19 07:52:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 19 Aug 2019 07:52:58 +0000
Subject: [Gluster-infra] [Bug 1738778] Unable to setup softserve VM
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1738778

M. Scherer changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |mscherer at redhat.com

--- Comment #3 from M. Scherer ---
That is not an infra issue; the inventory is wrong:
https://github.com/gluster/softserve/blob/master/playbooks/inventory#L6

Regular non-infra folks do not have access to that server to use it as a bastion.

--
You are receiving this mail because:
You are on the CC list for the bug.

From ndevos at redhat.com Mon Aug 19 10:37:49 2019
From: ndevos at redhat.com (Niels de Vos)
Date: Mon, 19 Aug 2019 12:37:49 +0200
Subject: [Gluster-infra] New GitHub repository: samba-integration
Message-ID: <20190819103749.GB14298@ndevos-x270.lan.nixpanic.net>

Hi,

The developers working on Samba want to add some tests for integrating Samba with Gluster. To get this started, I have created a repository 'samba-integration' under the Gluster project. Members of the team 'Samba Integration' will have maintainer permissions there.

Please let me know if there is anything else I need to take care of concerning the infrastructure component of the repo/team creation.
Thanks,
Niels

From bugzilla at redhat.com Tue Aug 20 05:30:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 20 Aug 2019 05:30:44 +0000
Subject: [Gluster-infra] [Bug 1738778] Unable to setup softserve VM
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1738778

--- Comment #4 from Ravishankar N ---
Deleting that line worked.

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Tue Aug 20 12:27:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 20 Aug 2019 12:27:26 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

M. Scherer changed:

           What    |Removed |Added
----------------------------------------------------------------------------
              Flags|needinfo?(mscherer at redhat.com) |

--- Comment #10 from M. Scherer ---
I am not sure I understand what you mean by "set them up". I expect the setup to be done with ansible, using our playbooks, and not by giving people direct access (because experience has shown that when people have a way to bypass automation, they do bypass it sooner or later, causing us trouble down the line).

So far, the only patch I found is https://review.gluster.org/#/c/build-jobs/+/23172/ which is not exactly something that should be merged, since that job replicates the work of jenkins. I would rather expect a job that just runs generic-package.sh on the builder, and that's it.

--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 22 09:24:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 09:24:54 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

--- Comment #11 from hari gowtham ---
By setup I meant doing the following prerequisites. These two steps are the ones necessary as of now:

- `deb.packages.dot-gnupg.tgz`: has the ~/.gnupg dir with the keyring needed to build & sign packages
- packages required: build-essential pbuilder devscripts reprepro debhelper dpkg-sig

And for the first time we need to do this:

# First time, create the /var/cache/pbuilder/base.tgz
# on debian:
sudo pbuilder create --distribution wheezy --mirror ftp://ftp.us.debian.org/debian/ --debootstrapopts "--keyring=/usr/share/keyrings/debian-archive-keyring.gpg"
# on raspbian:
sudo pbuilder create --distribution wheezy --mirror http://archive.raspbian.org/raspbian/ --debootstrapopts "--keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg"

NOTE: If any change is made in future here (https://github.com/semiosis/glusterfs-debian/tree/wheezy-glusterfs-3.5/debian) then we might have to change this too.

The reason to go for the above two-level implementation was that I wasn't aware of how to make the job run on a particular machine based on the arguments it gets. For example, stretch has to be run on rhs-vm-16.storage-dev.lab.eng.bos.redhat.com (which will be one of the jenkins debian slaves), and we have to run the script on multiple machines based on the number of distributions we want to build.

--
You are receiving this mail because:
You are on the CC list for the bug.
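[Editor's note: instead of one machine per distribution, pbuilder can keep one base tarball per target release on a single builder via `--basetgz`. The sketch below only *prints* the `pbuilder create` commands rather than running them (they need root and network); the mirror, keyring path and release list are assumptions, not the actual gluster.org build matrix.]

```shell
# Sketch: emit one `pbuilder create` command per target Debian release,
# each with its own base tarball, so several releases can be built from
# a single machine. Nothing is executed; the commands are printed for
# review. Mirror/keyring/releases are placeholders.
gen_pbuilder_cmds() {
    mirror=http://deb.debian.org/debian/
    keyring=/usr/share/keyrings/debian-archive-keyring.gpg
    for dist in "$@"; do
        printf 'sudo pbuilder create --distribution %s --basetgz /var/cache/pbuilder/%s-base.tgz --mirror %s --debootstrapopts "--keyring=%s"\n' \
            "$dist" "$dist" "$mirror" "$keyring"
    done
}

gen_pbuilder_cmds stretch buster sid
```

A later `pbuilder build --basetgz /var/cache/pbuilder/<dist>-base.tgz` would then pick the matching chroot, which is roughly the approach the PbuilderTricks wiki page referenced later in this bug describes.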
From bugzilla at redhat.com Thu Aug 22 10:17:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 10:17:30 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

--- Comment #12 from M. Scherer ---
Ok, so I will install the packages on the builder we have, and then have it added to jenkins (and while on it, also set up a 2nd one, just in case).

As for having different jobs run on specific machines, that's indeed pretty annoying in jenkins. I do not have enough experience with jjb, but JobTemplate is likely something that would help with that:
https://docs.openstack.org/infra/jenkins-job-builder/definition.html#id2

But afaik, gluster does not depend on the kernel, so building it with pbuilder in a chroot should be sufficient no matter which Debian, as long as it is an up-to-date one, no?

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 10:30:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 10:30:41 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

hari gowtham changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |kkeithle at redhat.com
              Flags|        |needinfo?(kkeithle at redhat.com)

--- Comment #13 from hari gowtham ---
(In reply to M. Scherer from comment #12)
> Ok, so I will install the packages on the builder we have, and then have it
> added to jenkins.
> (and while on it, also have 2nd one, just in case)

Forgot to mention that this script file is also necessary:
https://github.com/Sheetalpamecha/packaging-scripts/blob/master/generic_package.sh
Will send a patch to have it in the repo.
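[Editor's note: the job-template idea from comment #12 could look roughly like the sketch below with jenkins-job-builder, instantiating one job per Debian release and pinning each to a builder via a node label. The job name, node labels and script path are assumptions, not the real build-jobs configuration.]

```yaml
# Sketch of a JJB job-template: one job per Debian release, each pinned
# to a labelled builder. All names and paths are placeholders.
- job-template:
    name: 'debian-package-{distribution}'
    node: 'debian-builder-{distribution}'
    builders:
      - shell: |
          bash ./generic-package.sh {distribution}

- project:
    name: debian-packages
    distribution:
      - stretch
      - buster
      - bullseye
    jobs:
      - 'debian-package-{distribution}'
```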
> > As for running different job running on specific machine, that's indeed
> > pretty annoying on jenkins. I do not have enough experience with jjb, but
> > JobTemplate is likely something that would help for that:
> > https://docs.openstack.org/infra/jenkins-job-builder/definition.html#id2

Will look into it. I'm new to writing jobs for jenkins.

> > But afaik, gluster is not dependent on the kernel, so building that with
> > pbuilder in a chroot should be sufficient no matter what Debian, as long as
> > it is a up to date one, no ?

Yes, gluster does not depend on the kernel, but I'm not familiar with using chroots for different debian versions. Kaleb would be the better person to answer this.
@kaleb can you please answer this?

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 10:42:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 10:42:48 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

--- Comment #14 from M. Scherer ---
Pbuilder does set up chroots, afaik, so it's kinda like mock, if you are more familiar with the Fedora/CentOS tooling. Maybe there are limitations and they do not work exactly the same, but I would expect a clean chroot to be created each time to build the package. I haven't done Debian packaging in a long time.

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 11:46:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 11:46:33 +0000
Subject: [Gluster-infra] [Bug 1711950] Account in download.gluster.org to upload the build packages
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1711950

--- Comment #7 from Shwetha K Acharya ---
@misc (In reply to M.
Scherer from comment #4)
> Sure give me a deadline, and I will create the account. I mean, I do not
> even need a precise one.
>
> Would you agree on "We do in 3 months", in which case I create the account
> right now (with expiration as set).
>
> (I need a public ssh key and a username)

We have already taken up the task of automating building and packaging; details can be found at https://bugzilla.redhat.com/show_bug.cgi?id=1727727. Please create the account. Below are the required details:

Public ssh key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCifwFXjkLXFwnlTBMXFgTEHAA1Vavzti41B4Yp1RJYtCuJ91s+P5YHc2j4a/wpVquPJboNuv9wtqknmd5SJYBXB11dinNfHfvE+gCN9Osdn64/om9i3pIpQQeY6uvF4MF9yfyx8huEWFZeaOiljvmTZ3//4kzsJHK2yKCmJFhy5Zcg9+WMM2bjfACjlFDIuOG2kqaRM8tGggOQG9iQ/VElWOTxJkHUJaP50PWdwEHHoiCKmipe5xEcSR/6qubaF6VpMfBLmrjmJMqkjVozryVweHBLn3oQfOkJmlErwJox7hLFuk5V4fvVine5xrWKygw/kA2Mpr7Q1zXg5moZHbCP root at localhost.localdomain

User name: sacharya

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 12:01:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 12:01:02 +0000
Subject: [Gluster-infra] [Bug 1711950] Account in download.gluster.org to upload the build packages
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1711950

--- Comment #8 from M. Scherer ---
As said in comment #2 and comment #4, what is the deadline for the account closure? If I do not get an answer, I will just decide on "3 months after creation" and then deploy.

--
You are receiving this mail because:
You are on the CC list for the bug.
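[Editor's note: a time-limited account as discussed above maps onto the standard `useradd -e` expiry flag. The sketch below only computes the expiry date and prints the command rather than running it (creating users needs root); the username is taken from the thread, and the shell choice is an assumption — the real provisioning goes through ansible.]

```shell
# Sketch: compute an account expiry date 3 months out (GNU date) and
# print, not run, the matching useradd invocation.
expiry=$(date -d '+3 months' +%Y-%m-%d)
echo "sudo useradd -m -e $expiry -s /bin/bash sacharya"
```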
From bugzilla at redhat.com Thu Aug 22 12:05:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 12:05:54 +0000
Subject: [Gluster-infra] [Bug 1711950] Account in download.gluster.org to upload the build packages
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1711950

Shwetha K Acharya changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |hgowtham at redhat.com
              Flags|        |needinfo?(hgowtham at redhat.com)

--- Comment #9 from Shwetha K Acharya ---
Hari, can you please address the above query?

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 13:01:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 13:01:23 +0000
Subject: [Gluster-infra] [Bug 1711950] Account in download.gluster.org to upload the build packages
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1711950

hari gowtham changed:

           What    |Removed |Added
----------------------------------------------------------------------------
              Flags|needinfo?(hgowtham at redhat.com) |

--- Comment #10 from hari gowtham ---
We are trying to finish it within this sprint (each sprint is 3 weeks long), so assume that we should be done with the automation in a month.

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 13:03:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 13:03:06 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

--- Comment #15 from M. Scherer ---
I pushed the installation, and I would like to defer the gnupg integration for now, as it likely requires a bit more discussion (like how we distribute the keys, whether we rotate them, etc).
And for the pbuilder cache, I need to know the exact matrix of distributions we want to build, and how. That part seems not too hard:
https://wiki.debian.org/PbuilderTricks#How_to_build_for_different_distributions

And if we aim to build on unstable, we may also need to do some work to keep the chroot updated (same for stable, in fact).

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 13:05:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 13:05:37 +0000
Subject: [Gluster-infra] [Bug 1711950] Account in download.gluster.org to upload the build packages
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1711950

--- Comment #11 from M. Scherer ---
Ok, so 3 months is enough (because I also do not want to push an unrealistic deadline or more pressure, plus shit happens). I will add the account as soon as the previous ansible run finishes. And if that's not enough, we can of course keep it open longer, just to be clear. But after the jenkins issue last month, and the old compromise before that, we can't leave stuff open too long if it is not going to clean itself up.

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 13:14:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 13:14:29 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

Kaleb KEITHLEY changed:

           What    |Removed |Added
----------------------------------------------------------------------------
              Flags|needinfo?(kkeithle at redhat.com) |

--- Comment #16 from Kaleb KEITHLEY ---
Yes, pbuilder is a chroot tool, similar to mock. Each time you build you get a clean chroot. We are currently building for stretch/9, buster/10, and bullseye/unstable/11.
AFAIK the buildroot should be updated periodically for all of them; bullseye/unstable should probably be updated more frequently than the others.

I don't know anything about pbuilder apart from what I mentioned above, and specifically I don't know how to use pbuilder to build for different distributions on a single machine. I've been using separate stretch, buster, and bullseye installs on dedicated boxes to build the packages for each release of Debian.

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 13:25:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 13:25:49 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

--- Comment #17 from Kaleb KEITHLEY ---
(In reply to M. Scherer from comment #15)
> I did push the installation and I would like to defer the gnupg integration
> for now, as it likely requires a bit more discussion (like, how do we
> distribute the keys, etc, do we rotate it).
>
> And for the pbuilder cache, I would need to know the exact matrix of
> distribution we want to build and how. That part seems not too hard:
> https://wiki.debian.org/PbuilderTricks#How_to_build_for_different_distributions
>
> And if we aim to build on unstable, we also may need to do some work to keep
> the chroot updated (same for stable in fact).

The keys we've been using were generated on an internal machine and distributed to the build machines, which are all internal as well. We were using a new, different key for every major version through 4.1, but some people complained about that, so for 5.x, 6.x, and now 7.x we have been using the same key. As 4.1 is about to reach EOL, that essentially means we are now using a single key for all the packages we build. AFAIK people expect the packages to be signed.
And best practice suggests to me that they _must_ be signed. Given that 7.0rc0 is now out and packages will be signed with the current key, that suggests to me that we must keep using that key for the life of 7.x. We can certainly create a new key for 8.x when that rolls around.

And yes, we need a secure way to get the private key onto the jenkins build machines somehow.

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 13:57:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 13:57:49 +0000
Subject: [Gluster-infra] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1727727

--- Comment #18 from hari gowtham ---
(In reply to hari gowtham from comment #13)
> (In reply to M. Scherer from comment #12)
> > Ok, so I will install the packages on the builder we have, and then have it
> > added to jenkins.
> > (and while on it, also have 2nd one, just in case)
>
> Forgot to mention that this script file is also necessary:
> https://github.com/Sheetalpamecha/packaging-scripts/blob/master/generic_package.sh
> Will send a patch to have it in the repo.

The above mentioned file has been sent as a patch at:
https://review.gluster.org/#/c/build-jobs/+/23289

> > As for running different job running on specific machine, that's indeed
> > pretty annoying on jenkins. I do not have enough experience with jjb, but
> > JobTemplate is likely something that would help for that:
> > https://docs.openstack.org/infra/jenkins-job-builder/definition.html#id2
>
> Will look into it. I'm new to writing jobs for jenkins.
>
> > But afaik, gluster is not dependent on the kernel, so building that with
> > pbuilder in a chroot should be sufficient no matter what Debian, as long as
> > it is a up to date one, no ?
>
> Yes, gluster is not dependent on kernel, but I'm unaware of using chroot
> for different debian version.
> For this Kaleb would be the better person to answer.
> @kaleb can you please answer this?

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 14:26:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 14:26:44 +0000
Subject: [Gluster-infra] [Bug 1711950] Account in download.gluster.org to upload the build packages
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1711950

--- Comment #12 from M. Scherer ---
I created the user; tell me if it doesn't work. The server is download.rht.gluster.org (not download.gluster.org, which is a proxy).

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Aug 22 16:34:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 22 Aug 2019 16:34:22 +0000
Subject: [Gluster-infra] [Bug 1744671] New: Smoke is failing for the changeset
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1744671

            Bug ID: 1744671
           Summary: Smoke is failing for the changeset
           Product: GlusterFS
           Version: 6
            Status: NEW
         Component: project-infrastructure
          Assignee: bugs at gluster.org
          Reporter: sheggodu at redhat.com
                CC: bugs at gluster.org, gluster-infra at gluster.org
  Target Milestone: ---
    Classification: Community

Description of problem:
The smoke job is failing for https://review.gluster.org/#/c/glusterfs/+/23284/ . Recheck is also not working properly. Please fix the issue.

--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com  Fri Aug 23 03:58:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 23 Aug 2019 03:58:27 +0000
Subject: [Gluster-infra] [Bug 1711945] create account on download.gluster.org
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1711945

spamecha at redhat.com changed:

           What    |Removed |Added
----------------------------------------------------------------------------
         Status    |NEW     |ASSIGNED
          Flags    |        |needinfo?(mscherer at redhat.com)

--- Comment #2 from spamecha at redhat.com ---
Hi Michael,

Please create the account for me as well. Below are the required details:

Public ssh key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHKKTwASBKVg4nN3p1vUj87906qFi8KQb/gTmt7ITPDg1GAvVhMJhbC4pT58/k9YjDf2Ez07VZ7fTYs9hqWHF4ZsJ2rbO2MPaHl4Fnfb8MP+Wq33juiznKRZU9+TRTFt83rDoRjDFwzhfGt6zdBPam6Etu0mR55OvWg8XM35wbdW0OP/pjIdQdjVoDp+YdpaX43lCr3M80NsbjAxk7xcPTrpqAK90qpVw1C5mqwHNeqJIGK/enADhaDaMhBPoNpWK1cy5xMnJcBbYXjrUZ4yqmhzJ48yUQiHYzlZZkx4JirbdZzE7FfRZt88crec9KTp1a/GLznP3L0dFA59SWAMKV root at shep-mac
User name: spamecha

-- You are receiving this mail because: You are on the CC list for the bug.

From amukherj at redhat.com  Sat Aug 24 06:04:45 2019
From: amukherj at redhat.com (Atin Mukherjee)
Date: Sat, 24 Aug 2019 11:34:45 +0530
Subject: [Gluster-infra] rpm rawhide is failing since last few days!
Message-ID:

One of them is https://build.gluster.org/job/rpm-rawhide/3084/

--
- Atin (atinm)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From atumball at redhat.com  Sat Aug 24 06:35:35 2019
From: atumball at redhat.com (Amar Tumballi Suryanarayan)
Date: Sat, 24 Aug 2019 12:05:35 +0530
Subject: [Gluster-infra] rpm rawhide is failing since last few days!
In-Reply-To:
References:
Message-ID:

The last successful run was https://build.gluster.org/job/rpm-rawhide/3057/ (https://review.gluster.org/#/c/glusterfs/+/23015/) and the first failure is https://review.gluster.org/#/c/glusterfs/+/23211/...
I suspect the issue may be related to the build failure seen after the rpc dependency patch was merged, which got resolved by https://review.gluster.org/#/c/glusterfs/+/23263/. I am inclined to merge the Makefile fixes and see if that fixes the issue. If not, we can revert the rpc-dependency handling patch altogether.

-Amar

On Sat, Aug 24, 2019 at 11:35 AM Atin Mukherjee wrote:
> One of them is
> https://build.gluster.org/job/rpm-rawhide/3084/
> --
> - Atin (atinm)
> _______________________________________________
> Gluster-infra mailing list
> Gluster-infra at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-infra

--
Amar Tumballi (amarts)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amukherj at redhat.com  Tue Aug 27 02:33:28 2019
From: amukherj at redhat.com (Atin Mukherjee)
Date: Tue, 27 Aug 2019 08:03:28 +0530
Subject: [Gluster-infra] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4710
In-Reply-To: <752238962.92.1566842514995.JavaMail.jenkins@jenkins-el7.rht.gluster.org>
References: <1957465285.88.1566762495598.JavaMail.jenkins@jenkins-el7.rht.gluster.org> <752238962.92.1566842514995.JavaMail.jenkins@jenkins-el7.rht.gluster.org>
Message-ID:

For the last few days I have been trying to understand the nightly failures we were seeing even after addressing the port-already-in-use issue.
So here's the analysis:

From the console output of https://build.gluster.org/job/regression-test-burn-in/4710/consoleFull

19:51:56 Started by upstream project "nightly-master" build number 843
19:51:56 originally caused by:
19:51:56  Started by timer
19:51:56 Running as SYSTEM
19:51:57 Building remotely on builder209.aws.gluster.org (centos7) in workspace /home/jenkins/root/workspace/regression-test-burn-in
19:51:58 No credentials specified
19:51:58  > git rev-parse --is-inside-work-tree # timeout=10
19:51:58 Fetching changes from the remote Git repository
19:51:58  > git config remote.origin.url git://review.gluster.org/glusterfs.git # timeout=10
19:51:58 Fetching upstream changes from git://review.gluster.org/glusterfs.git
19:51:58  > git --version # timeout=10
19:51:58  > git fetch --tags --progress git://review.gluster.org/glusterfs.git refs/heads/master # timeout=10
19:52:01  > git rev-parse origin/master^{commit} # timeout=10
19:52:01 Checking out Revision a31fad885c30cbc1bea652349c7d52bac1414c08 (origin/master)
19:52:01  > git config core.sparsecheckout # timeout=10
19:52:01  > git checkout -f a31fad885c30cbc1bea652349c7d52bac1414c08 # timeout=10
19:52:02 Commit message: "tests: heal-info add --xml option for more coverage"
19:52:02  > git rev-list --no-walk a31fad885c30cbc1bea652349c7d52bac1414c08 # timeout=10
19:52:02 [regression-test-burn-in] $ /bin/bash /tmp/jenkins7274529097702336737.sh
19:52:02 Start time Mon Aug 26 14:22:02 UTC 2019

The commit it picked up as part of the git checkout is quite old, hence we continue to see similar failures in the latest nightly runs even though the issue has already been addressed by commit c370c70:

commit c370c70f77079339e2cfb7f284f3a2fb13fd2f97
Author: Mohit Agrawal
Date:   Tue Aug 13 18:45:43 2019 +0530

    rpc: glusterd start is failed and throwing an error Address already in use

    Problem: Some of the .t runs failed because bind threw EADDRINUSE.

    Solution: After killing all gluster processes, the .t tries to start
    glusterd, but if the kernel has not yet cleaned up resources (the
    socket), glusterd startup fails due to a bind() system call failure.
    To avoid the issue, retry the bind call up to 10 times so that the
    system call eventually succeeds.

    Change-Id: Ia5fd6b788f7b211c1508c1b7304fc08a32266629
    Fixes: bz#1743020
    Signed-off-by: Mohit Agrawal

So the (puzzling) question is: why are we picking up an old commit? In my local setup, when I run the following command, I do see the latest commit id being picked up:

atin at dhcp35-96:~/codebase/upstream/glusterfs_master/glusterfs$ git rev-parse origin/master^{commit} # timeout=10
7926992e65d0a07fdc784a6e45740306d9b4a9f2
atin at dhcp35-96:~/codebase/upstream/glusterfs_master/glusterfs$ git show 7926992e65d0a07fdc784a6e45740306d9b4a9f2
commit 7926992e65d0a07fdc784a6e45740306d9b4a9f2 (origin/master, origin/HEAD, master)
Author: Sanju Rakonde
Date:   Mon Aug 26 12:38:40 2019 +0530

    glusterd: Unused value coverity fix

    CID: 1288765
    updates: bz#789278
    Change-Id: Ie6b01f81339769f44d82fd7c32ad0ed1a697c69c
    Signed-off-by: Sanju Rakonde

---------- Forwarded message ---------
From:
Date: Mon, Aug 26, 2019 at 11:32 PM
Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4710
To:

See <https://build.gluster.org/job/regression-test-burn-in/4710/display/redirect>

------------------------------------------
[...truncated 4.18 MB...]
./tests/features/lock-migration/lkmigration-set-option.t - 7 second ./tests/bugs/upcall/bug-1458127.t - 7 second ./tests/bugs/transport/bug-873367.t - 7 second ./tests/bugs/snapshot/bug-1260848.t - 7 second ./tests/bugs/shard/shard-inode-refcount-test.t - 7 second ./tests/bugs/replicate/bug-986905.t - 7 second ./tests/bugs/replicate/bug-921231.t - 7 second ./tests/bugs/replicate/bug-1132102.t - 7 second ./tests/bugs/replicate/bug-1037501.t - 7 second ./tests/bugs/posix/bug-1175711.t - 7 second ./tests/bugs/posix/bug-1122028.t - 7 second ./tests/bugs/glusterfs/bug-861015-log.t - 7 second ./tests/bugs/fuse/bug-983477.t - 7 second ./tests/bugs/ec/bug-1227869.t - 7 second ./tests/bugs/distribute/bug-1086228.t - 7 second ./tests/bugs/cli/bug-1087487.t - 7 second ./tests/bitrot/br-stub.t - 7 second ./tests/basic/ctime/ctime-noatime.t - 7 second ./tests/basic/afr/ta-write-on-bad-brick.t - 7 second ./tests/basic/afr/ta.t - 7 second ./tests/basic/afr/ta-shd.t - 7 second ./tests/basic/afr/root-squash-self-heal.t - 7 second ./tests/basic/afr/granular-esh/add-brick.t - 7 second ./tests/bugs/upcall/bug-1369430.t - 6 second ./tests/bugs/snapshot/bug-1064768.t - 6 second ./tests/bugs/shard/bug-1258334.t - 6 second ./tests/bugs/replicate/bug-1250170-fsync.t - 6 second ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t - 6 second ./tests/bugs/quota/bug-1243798.t - 6 second ./tests/bugs/quota/bug-1104692.t - 6 second ./tests/bugs/protocol/bug-1321578.t - 6 second ./tests/bugs/nfs/bug-915280.t - 6 second ./tests/bugs/io-cache/bug-858242.t - 6 second ./tests/bugs/glusterfs-server/bug-877992.t - 6 second ./tests/bugs/glusterfs/bug-902610.t - 6 second ./tests/bugs/distribute/bug-884597.t - 6 second ./tests/bugs/core/bug-1699025-brick-mux-detach-brick-fd-issue.t - 6 second ./tests/bugs/core/bug-1168803-snapd-option-validation-fix.t - 6 second ./tests/bugs/bug-1702299.t - 6 second ./tests/bugs/bug-1371806_2.t - 6 second ./tests/bugs/bug-1258069.t - 6 
second ./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t - 6 second ./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t - 6 second ./tests/basic/glusterd/thin-arbiter-volume-probe.t - 6 second ./tests/basic/gfapi/libgfapi-fini-hang.t - 6 second ./tests/basic/fencing/fencing-crash-conistency.t - 6 second ./tests/basic/ec/statedump.t - 6 second ./tests/basic/distribute/file-create.t - 6 second ./tests/basic/afr/tarissue.t - 6 second ./tests/basic/afr/gfid-heal.t - 6 second ./tests/basic/afr/afr-read-hash-mode.t - 6 second ./tests/basic/afr/add-brick-self-heal.t - 6 second ./tests/gfid2path/gfid2path_fuse.t - 5 second ./tests/bugs/shard/bug-1259651.t - 5 second ./tests/bugs/replicate/bug-767585-gfid.t - 5 second ./tests/bugs/replicate/bug-1686568-send-truncate-on-arbiter-from-shd.t - 5 second ./tests/bugs/replicate/bug-1626994-info-split-brain.t - 5 second ./tests/bugs/replicate/bug-1365455.t - 5 second ./tests/bugs/replicate/bug-1101647.t - 5 second ./tests/bugs/nfs/bug-877885.t - 5 second ./tests/bugs/nfs/bug-847622.t - 5 second ./tests/bugs/nfs/bug-1116503.t - 5 second ./tests/bugs/md-cache/setxattr-prepoststat.t - 5 second ./tests/bugs/md-cache/bug-1211863_unlink.t - 5 second ./tests/bugs/md-cache/afr-stale-read.t - 5 second ./tests/bugs/io-stats/bug-1598548.t - 5 second ./tests/bugs/glusterfs/bug-895235.t - 5 second ./tests/bugs/glusterfs/bug-856455.t - 5 second ./tests/bugs/glusterfs/bug-848251.t - 5 second ./tests/bugs/glusterd/quorum-value-check.t - 5 second ./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t - 5 second ./tests/bugs/ec/bug-1179050.t - 5 second ./tests/bugs/distribute/bug-912564.t - 5 second ./tests/bugs/distribute/bug-1368012.t - 5 second ./tests/bugs/core/bug-986429.t - 5 second ./tests/bugs/core/bug-908146.t - 5 second ./tests/bugs/bug-1371806_1.t - 5 second ./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t - 5 second ./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t - 5 second 
./tests/basic/playground/template-xlator-sanity.t - 5 second ./tests/basic/hardlink-limit.t - 5 second ./tests/basic/glusterd/arbiter-volume-probe.t - 5 second ./tests/basic/ec/nfs.t - 5 second ./tests/basic/ec/ec-read-policy.t - 5 second ./tests/basic/ec/ec-anonymous-fd.t - 5 second ./tests/basic/afr/arbiter-remove-brick.t - 5 second ./tests/gfid2path/gfid2path_nfs.t - 4 second ./tests/gfid2path/get-gfid-to-path.t - 4 second ./tests/gfid2path/block-mount-access.t - 4 second ./tests/bugs/upcall/bug-upcall-stat.t - 4 second ./tests/bugs/trace/bug-797171.t - 4 second ./tests/bugs/snapshot/bug-1178079.t - 4 second ./tests/bugs/shard/bug-1342298.t - 4 second ./tests/bugs/shard/bug-1272986.t - 4 second ./tests/bugs/rpc/bug-954057.t - 4 second ./tests/bugs/replicate/bug-886998.t - 4 second ./tests/bugs/replicate/bug-1480525.t - 4 second ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t - 4 second ./tests/bugs/replicate/bug-1325792.t - 4 second ./tests/bugs/readdir-ahead/bug-1670253-consistent-metadata.t - 4 second ./tests/bugs/posix/bug-gfid-path.t - 4 second ./tests/bugs/posix/bug-765380.t - 4 second ./tests/bugs/posix/bug-1619720.t - 4 second ./tests/bugs/nfs/zero-atime.t - 4 second ./tests/bugs/nfs/subdir-trailing-slash.t - 4 second ./tests/bugs/nfs/socket-as-fifo.t - 4 second ./tests/bugs/nfs/showmount-many-clients.t - 4 second ./tests/bugs/nfs/bug-1161092-nfs-acls.t - 4 second ./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t - 4 second ./tests/bugs/glusterfs-server/bug-873549.t - 4 second ./tests/bugs/glusterfs-server/bug-864222.t - 4 second ./tests/bugs/glusterfs/bug-893378.t - 4 second ./tests/bugs/glusterd/bug-948729/bug-948729-force.t - 4 second ./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t - 4 second ./tests/bugs/glusterd/bug-1091935-brick-order-check-from-cli-to-glusterd.t - 4 second ./tests/bugs/geo-replication/bug-1296496.t - 4 second ./tests/bugs/ec/bug-1161621.t - 4 second ./tests/bugs/distribute/bug-1088231.t - 4 second 
./tests/bugs/cli/bug-977246.t - 4 second ./tests/bugs/cli/bug-1004218.t - 4 second ./tests/bugs/bug-1138841.t - 4 second ./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t - 4 second ./tests/bugs/access-control/bug-1051896.t - 4 second ./tests/bitrot/bug-1221914.t - 4 second ./tests/basic/ec/ec-internal-xattrs.t - 4 second ./tests/basic/distribute/non-root-unlink-stale-linkto.t - 4 second ./tests/basic/distribute/bug-1265677-use-readdirp.t - 4 second ./tests/basic/changelog/changelog-rename.t - 4 second ./tests/basic/afr/ta-check-locks.t - 4 second ./tests/basic/afr/heal-info.t - 4 second ./tests/performance/quick-read.t - 3 second ./tests/line-coverage/meta-max-coverage.t - 3 second ./tests/bugs/upcall/bug-1422776.t - 3 second ./tests/bugs/upcall/bug-1394131.t - 3 second ./tests/bugs/unclassified/bug-1034085.t - 3 second ./tests/bugs/snapshot/bug-1111041.t - 3 second ./tests/bugs/shard/bug-1256580.t - 3 second ./tests/bugs/shard/bug-1250855.t - 3 second ./tests/bugs/replicate/bug-976800.t - 3 second ./tests/bugs/replicate/bug-880898.t - 3 second ./tests/bugs/read-only/bug-1134822-read-only-default-in-graph.t - 3 second ./tests/bugs/readdir-ahead/bug-1446516.t - 3 second ./tests/bugs/readdir-ahead/bug-1439640.t - 3 second ./tests/bugs/readdir-ahead/bug-1390050.t - 3 second ./tests/bugs/quota/bug-1287996.t - 3 second ./tests/bugs/quick-read/bug-846240.t - 3 second ./tests/bugs/nl-cache/bug-1451588.t - 3 second ./tests/bugs/nfs/bug-1210338.t - 3 second ./tests/bugs/nfs/bug-1166862.t - 3 second ./tests/bugs/md-cache/bug-1632503.t - 3 second ./tests/bugs/md-cache/bug-1476324.t - 3 second ./tests/bugs/glusterfs-server/bug-861542.t - 3 second ./tests/bugs/glusterfs/bug-869724.t - 3 second ./tests/bugs/glusterfs/bug-844688.t - 3 second ./tests/bugs/glusterfs/bug-1482528.t - 3 second ./tests/bugs/glusterd/bug-948729/bug-948729.t - 3 second ./tests/bugs/glusterd/bug-948729/bug-948729-mode-script.t - 3 second ./tests/bugs/fuse/bug-1336818.t - 3 second 
./tests/bugs/fuse/bug-1126048.t - 3 second ./tests/bugs/distribute/bug-907072.t - 3 second ./tests/bugs/core/log-bug-1362520.t - 3 second ./tests/bugs/core/io-stats-1322825.t - 3 second ./tests/bugs/core/bug-913544.t - 3 second ./tests/bugs/core/bug-845213.t - 3 second ./tests/bugs/core/bug-834465.t - 3 second ./tests/bugs/core/bug-1421721-mpx-toggle.t - 3 second ./tests/bugs/core/bug-1135514-allow-setxattr-with-null-value.t - 3 second ./tests/bugs/core/bug-1117951.t - 3 second ./tests/bugs/core/949327.t - 3 second ./tests/bugs/cli/bug-983317-volume-get.t - 3 second ./tests/bugs/cli/bug-961307.t - 3 second ./tests/bugs/access-control/bug-1387241.t - 3 second ./tests/bitrot/bug-internal-xattrs-check-1243391.t - 3 second ./tests/basic/quota-rename.t - 3 second ./tests/basic/glusterd/check-cloudsync-ancestry.t - 3 second ./tests/basic/fops-sanity.t - 3 second ./tests/basic/fencing/test-fence-option.t - 3 second ./tests/basic/ec/ec-fallocate.t - 3 second ./tests/basic/ec/dht-rename.t - 3 second ./tests/basic/distribute/lookup.t - 3 second ./tests/basic/distribute/debug-xattrs.t - 3 second ./tests/line-coverage/some-features-in-libglusterfs.t - 2 second ./tests/bugs/unclassified/bug-991622.t - 2 second ./tests/bugs/shard/bug-1245547.t - 2 second ./tests/bugs/replicate/bug-884328.t - 2 second ./tests/bugs/readdir-ahead/bug-1512437.t - 2 second ./tests/bugs/posix/disallow-gfid-volumeid-removexattr.t - 2 second ./tests/bugs/nfs/bug-970070.t - 2 second ./tests/bugs/nfs/bug-1302948.t - 2 second ./tests/bugs/logging/bug-823081.t - 2 second ./tests/bugs/glusterfs-server/bug-889996.t - 2 second ./tests/bugs/glusterfs/bug-860297.t - 2 second ./tests/bugs/glusterfs/bug-811493.t - 2 second ./tests/bugs/glusterd/bug-1085330-and-bug-916549.t - 2 second ./tests/bugs/fuse/bug-1283103.t - 2 second ./tests/bugs/distribute/bug-924265.t - 2 second ./tests/bugs/distribute/bug-1204140.t - 2 second ./tests/bugs/core/bug-924075.t - 2 second ./tests/bugs/core/bug-903336.t - 2 second 
./tests/bugs/core/bug-1119582.t - 2 second ./tests/bugs/core/bug-1111557.t - 2 second ./tests/bugs/cli/bug-969193.t - 2 second ./tests/bugs/cli/bug-949298.t - 2 second ./tests/bugs/cli/bug-1378842-volume-get-all.t - 2 second ./tests/basic/md-cache/bug-1418249.t - 2 second ./tests/basic/afr/arbiter-cli.t - 2 second ./tests/line-coverage/volfile-with-all-graph-syntax.t - 1 second ./tests/bugs/shard/bug-1261773.t - 1 second ./tests/bugs/replicate/ta-inode-refresh-read.t - 1 second ./tests/bugs/glusterfs/bug-892730.t - 1 second ./tests/bugs/glusterfs/bug-853690.t - 1 second ./tests/bugs/cli/bug-921215.t - 1 second ./tests/bugs/cli/bug-867252.t - 1 second ./tests/bugs/cli/bug-764638.t - 1 second ./tests/bugs/cli/bug-1047378.t - 1 second ./tests/basic/posixonly.t - 1 second ./tests/basic/peer-parsing.t - 1 second ./tests/basic/netgroup_parsing.t - 1 second ./tests/basic/gfapi/sink.t - 1 second ./tests/basic/exports_parsing.t - 1 second ./tests/basic/glusterfsd-args.t - 0 second

4 test(s) failed
./tests/bugs/core/multiplex-limit-issue-151.t
./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t
./tests/bugs/glusterd/brick-mux-validation.t
./tests/bugs/glusterd/bug-1595320.t

0 test(s) generated core

10 test(s) needed retry
./tests/bugs/core/bug-1119582.t
./tests/bugs/core/multiplex-limit-issue-151.t
./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t
./tests/bugs/glusterd/brick-mux-validation.t
./tests/bugs/glusterd/bug-1595320.t
./tests/bugs/glusterd/bug-1696046.t
./tests/bugs/glusterd/optimized-basic-testcases.t
./tests/bugs/replicate/bug-1134691-afr-lookup-metadata-heal.t
./tests/bugs/replicate/bug-976800.t
./tests/bugs/snapshot/bug-1111041.t

Result is 1

tar: Removing leading `/' from member names
kernel.core_pattern = /%e-%p.core
Build step 'Execute shell' marked build as failure
_______________________________________________
maintainers mailing list
maintainers at gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
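[The fix in commit c370c70, quoted in the thread above, bounds a retry loop around bind() to ride out a socket the kernel has not yet released after the previous glusterd was killed. A minimal Python sketch of the same idea; the function name, delay, and demo address are illustrative, not glusterd's actual code:]

```python
import errno
import socket
import time

def bind_with_retry(sock, addr, attempts=10, delay=0.1):
    """Retry bind() up to `attempts` times while the address is still
    in use, mirroring the bounded-retry approach of the fix."""
    for i in range(attempts):
        try:
            sock.bind(addr)
            return True
        except OSError as err:
            # Only EADDRINUSE is worth retrying; anything else is fatal.
            if err.errno != errno.EADDRINUSE or i == attempts - 1:
                raise
            time.sleep(delay)
    return False

# Demo: port 0 lets the kernel pick a free port, so this succeeds.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(bind_with_retry(s, ("127.0.0.1", 0)))
s.close()
```

The key design point is that the retry count is bounded: a genuinely conflicting listener still surfaces EADDRINUSE to the caller, while a socket lingering briefly in the kernel is waited out.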