From bugzilla at redhat.com  Sat Jun  1 13:15:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 01 Jun 2019 13:15:36 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22797

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Sat Jun  1 13:15:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 01 Jun 2019 13:15:38 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

--- Comment #674 from Worker Ant ---
REVIEW: https://review.gluster.org/22797 (glusterd: remove trivial
conditions) posted (#1) for review on master by Sanju Rakonde

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Sat Jun  1 17:27:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 01 Jun 2019 17:27:55 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

--- Comment #675 from Worker Ant ---
REVIEW: https://review.gluster.org/22797 (glusterd: remove trivial
conditions) merged (#1) on master by Sanju Rakonde

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Sat Jun  1 20:20:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 01 Jun 2019 20:20:30 +0000
Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned
 properly after an ec fini
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1703948

Mohammed Rafi KC changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |CLOSED                  |POST
Resolution                 |NEXTRELEASE             |---

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Sat Jun  1 20:23:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 01 Jun 2019 20:23:29 +0000
Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned
 properly after an ec fini
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1703948

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22798

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Sat Jun  1 20:23:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 01 Jun 2019 20:23:29 +0000
Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned
 properly after an ec fini
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1703948

--- Comment #5 from Worker Ant ---
REVIEW: https://review.gluster.org/22798 (ec/fini: Fix race between xlator
cleanup and on going async fop) posted (#1) for review on master by
mohammed rafi kc

--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com  Sat Jun  1 21:01:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 01 Jun 2019 21:01:10 +0000
Subject: [Bugs] [Bug 1716097] New: infra: create
 suse-packing@lists.nfs-ganesha.org alias
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716097

            Bug ID: 1716097
           Summary: infra: create suse-packing at lists.nfs-ganesha.org
                    alias
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: project-infrastructure
          Assignee: bugs at gluster.org
          Reporter: kkeithle at redhat.com
                CC: bugs at gluster.org, gluster-infra at gluster.org
  Target Milestone: ---
    Classification: Community

Description of problem:

Is there an OSAS ticketing system to use instead of this? Anyway, forwarded
to me. Thanks

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Sun Jun  2 09:22:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 02 Jun 2019 09:22:05 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1693692

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22799

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Sun Jun  2 09:22:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 02 Jun 2019 09:22:06 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1693692

--- Comment #50 from Worker Ant ---
REVIEW: https://review.gluster.org/22799 (lcov: run more fops on
translators) posted (#1) for review on master by Amar Tumballi

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Sun Jun  2 18:18:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 02 Jun 2019 18:18:13 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22800

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Sun Jun  2 18:18:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 02 Jun 2019 18:18:13 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

--- Comment #676 from Worker Ant ---
REVIEW: https://review.gluster.org/22800 ([WIP] (multiple files) CALLOC ->
MALLOC when serializing a dictionary) posted (#1) for review on master by
Yaniv Kaul

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
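[For context on the CALLOC -> MALLOC patch above, a minimal standalone
sketch of the underlying idea - the buffer layout and names here are
hypothetical illustrations, not taken from Gluster's dictionary code.
When a serializer writes every byte of the output buffer itself, the
zero-fill performed by calloc() is redundant work and plain malloc() is
sufficient.]

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Serialize a key/value pair as <u32 klen><u32 vlen><key><value>.
     * Every byte of the buffer is written below, so zero-initialising
     * it first (calloc) would be wasted effort; malloc is enough. */
    static char *serialize_pair(const char *key, const char *val,
                                size_t *outlen)
    {
        uint32_t klen = (uint32_t)strlen(key);
        uint32_t vlen = (uint32_t)strlen(val);
        size_t len = 2 * sizeof(uint32_t) + klen + vlen;

        char *buf = malloc(len);        /* was: calloc(1, len) */
        if (!buf)
            return NULL;

        char *p = buf;
        memcpy(p, &klen, sizeof(klen)); p += sizeof(klen);
        memcpy(p, &vlen, sizeof(vlen)); p += sizeof(vlen);
        memcpy(p, key, klen);           p += klen;
        memcpy(p, val, vlen);           /* buffer now fully written */

        *outlen = len;
        return buf;
    }

    int main(void)
    {
        size_t len = 0;
        char *blob = serialize_pair("volname", "mcv02", &len);
        free(blob);
        return 0;
    }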
From bugzilla at redhat.com  Mon Jun  3 02:59:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 02:59:51 +0000
Subject: [Bugs] [Bug 1651445] [RFE] storage.reserve option should take size
 of disk as input instead of percentage
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1651445

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |POST                    |CLOSED
Resolution                 |---                     |NEXTRELEASE
Last Closed                |                        |2019-06-03 02:59:51

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/21686 (posix: add storage.reserve-size
option) merged (#13) on master by Amar Tumballi

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Mon Jun  3 04:01:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 04:01:18 +0000
Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=789278

--- Comment #1631 from Worker Ant ---
REVIEW: https://review.gluster.org/22741 (across: coverity fixes) merged
(#12) on master by Amar Tumballi

--
You are receiving this mail because:
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 04:08:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 04:08:31 +0000
Subject: [Bugs] [Bug 1715012] Failure when glusterd is configured to bind
 specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero
 and it goes to ret = -1 branch, which will cause listen failure eventually
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1715012

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |POST                    |CLOSED
Resolution                 |---                     |NEXTRELEASE
Last Closed                |                        |2019-06-03 04:08:31

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22787 (If bind-address is IPv6 return it
successfully) merged (#2) on release-6 by Sunny Kumar

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 04:08:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 04:08:53 +0000
Subject: [Bugs] [Bug 1714172] ec ignores lock contention notifications for
 partially acquired locks
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1714172

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |POST                    |CLOSED
Resolution                 |---                     |NEXTRELEASE
Last Closed                |                        |2019-06-03 04:08:53

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22774 (cluster/ec: honor contention
notifications for partially acquired locks) merged (#2) on release-6 by
Amar Tumballi

--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com  Mon Jun  3 04:23:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 04:23:03 +0000
Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=789278

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22801

--
You are receiving this mail because:
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 04:23:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 04:23:05 +0000
Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=789278

--- Comment #1632 from Worker Ant ---
REVIEW: https://review.gluster.org/22801 (glusterd: coverity fix) posted
(#1) for review on master by MOHIT AGRAWAL

--
You are receiving this mail because:
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 06:22:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 06:22:17 +0000
Subject: [Bugs] [Bug 1703322] Need to document about fips-mode-rchecksum in
 gluster-7 release notes.
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1703322

Yaniv Kaul changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Flags                      |                        |needinfo?(ravishankar at redhat.com)
Severity                   |unspecified             |medium

--- Comment #1 from Yaniv Kaul ---
https://review.gluster.org/#/c/glusterfs/+/22609/ is merged. Can we now
document it?

When can we remove this option altogether and have it as a default (and
then remove all the gf_rsync_md5_checksum() code and friends)?

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 07:57:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 07:57:57 +0000
Subject: [Bugs] [Bug 1714851] issues with 'list.h' elements in clang-scan
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1714851

Xavi Hernandez changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
CC                         |                        |jahernan at redhat.com

--- Comment #1 from Xavi Hernandez ---
I'm not sure we really have an issue in list_for_each_entry_safe(). Even if
the list is empty and list_first_entry() is used (it's true that it returns
a bad pointer when the list is empty), what we get is a pointer to an
invalid structure. However, the macro only dereferences the 'list' field,
which is guaranteed to be valid even if the list is empty, and in this case
it will exit the loop, so no unsafe pointers will be passed to the body of
the loop.

Additionally, clang-scan complains about the entry pointer being NULL
inside the loop. The only case where this can happen is when the list is
not initialized with INIT_LIST_HEAD() and the memory is cleared with 0's.
However, clang-scan doesn't provide a trace path from allocation to the
list_for_each_entry_safe() call where this can be proved. So my guess is
that clang-scan assumes that any value is possible for a given pointer
passed as an argument. In that case many false positives will appear, since
it's assuming something that is not true in most cases.
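[To make the empty-list argument concrete, a standalone sketch using
generic kernel-style list macros written for this note - structurally
equivalent to, but not copied from, Gluster's list.h. On an empty list the
only memory touched before the first condition check is the head's own
'next' field, which points back at the head, so the loop body never runs
and the invalid 'entry' pointer is never dereferenced.]

    #include <stddef.h>
    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    #define INIT_LIST_HEAD(h) do { (h)->next = (h); (h)->prev = (h); } while (0)

    #define list_entry(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* 'pos' may be an invalid pointer on an empty list, but the macro
     * only reads pos->member.next, which aliases the head itself. */
    #define list_for_each_entry_safe(pos, n, head, member)                 \
        for (pos = list_entry((head)->next, typeof(*pos), member),         \
             n = list_entry(pos->member.next, typeof(*pos), member);       \
             &pos->member != (head);                                       \
             pos = n, n = list_entry(n->member.next, typeof(*n), member))

    struct item { int value; struct list_head list; };

    int main(void)
    {
        struct list_head head;
        struct item *it, *tmp;

        INIT_LIST_HEAD(&head);   /* head.next == &head: empty list */

        list_for_each_entry_safe(it, tmp, &head, list)
            printf("never reached on an empty list: %d\n", it->value);

        return 0;
    }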
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 08:26:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 08:26:06 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1693692

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22803

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 08:26:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 08:26:07 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1693692

--- Comment #51 from Worker Ant ---
REVIEW: https://review.gluster.org/22803 (tests/geo-rep: Add geo-rep
glusterd test cases) posted (#1) for review on master by Kotresh HR

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 08:43:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 08:43:47 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1693692

--- Comment #52 from Worker Ant ---
REVIEW: https://review.gluster.org/22789 (lcov: improve line coverage)
merged (#2) on master by Xavi Hernandez

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 08:51:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 08:51:32 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22804

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 08:51:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 08:51:33 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

--- Comment #677 from Worker Ant ---
REVIEW: https://review.gluster.org/22804 (tests/geo-rep: Fix the comment)
posted (#1) for review on master by Kotresh HR

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com  Mon Jun  3 13:10:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 13:10:37 +0000
Subject: [Bugs] [Bug 1712668] Remove-brick shows warning
 cluster.force-migration enabled whereas cluster.force-migration is disabled
 on the volume
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1712668

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22805

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Mon Jun  3 13:10:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 13:10:39 +0000
Subject: [Bugs] [Bug 1712668] Remove-brick shows warning
 cluster.force-migration enabled whereas cluster.force-migration is disabled
 on the volume
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1712668

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |NEW                     |POST

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22805 (cli: Remove-brick warning seems
unnecessary) posted (#1) for review on master by Shwetha K Acharya

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Mon Jun  3 13:31:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 13:31:57 +0000
Subject: [Bugs] [Bug 1597798] 'mv' of directory on encrypted volume fails
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1597798

Yaniv Kaul changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Flags                      |                        |needinfo?(vbellur at redhat.com)

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Mon Jun  3 13:32:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 13:32:18 +0000
Subject: [Bugs] [Bug 1648169] Fuse mount would crash if features.encryption
 is on in the version from 3.13.0 to 4.1.5
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1648169

Yaniv Kaul changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Flags                      |                        |needinfo?(vbellur at redhat.com)

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Mon Jun  3 13:32:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 13:32:29 +0000
Subject: [Bugs] [Bug 1714973] upgrade after tier code removal results in
 peer rejection.
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1714973

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |POST                    |CLOSED
Resolution                 |---                     |NEXTRELEASE
Last Closed                |                        |2019-06-03 13:32:29

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22785 (glusterd/tier: gluster upgrade
broken because of tier) merged (#5) on master by Atin Mukherjee

--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com  Mon Jun  3 13:33:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 13:33:41 +0000
Subject: [Bugs] [Bug 1705351] glusterfsd crash after days of running
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1705351

Yaniv Kaul changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Flags                      |                        |needinfo?(jahernan at redhat.com)

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 13:34:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 13:34:29 +0000
Subject: [Bugs] [Bug 1635784] brick process segfault
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1635784

Yaniv Kaul changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |NEW                     |CLOSED
Resolution                 |---                     |INSUFFICIENT_DATA
Last Closed                |                        |2019-06-03 13:34:29

--- Comment #7 from Yaniv Kaul ---
(In reply to Yaniv Kaul from comment #6)
> Does it still happen on newer releases?

Closing for the time being. Please re-open if you have more information.

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 14:05:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 14:05:13 +0000
Subject: [Bugs] [Bug 1703322] Need to document about fips-mode-rchecksum in
 gluster-7 release notes.
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1703322

Ravishankar N changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Version                    |4.1                     |mainline
Flags                      |needinfo?(ravishankar at redhat.com) |

--- Comment #2 from Ravishankar N ---
(In reply to Yaniv Kaul from comment #1)
> https://review.gluster.org/#/c/glusterfs/+/22609/ is merged. Can we now
> document it?

I was targeting it for the glusterfs-7 release notes.

> When can we remove this option altogether and have it as a default (and
> then remove all the gf_rsync_md5_checksum() code and friends)?

Technically, we could do it today since 3.x is EOL and 4.1-onwards clients
have the logic to check the dict for what type of checksum the server is
sending and act accordingly. But people might use 3.x clients with 4.x or
later servers 'as long as the mount succeeds', so maybe it is better to
keep it for some more time.

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
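[A rough illustration of the negotiation Ravishankar describes above - all
names and types here are hypothetical stand-ins, not Gluster's dict API:
the server advertises which checksum it computed via a key in the reply,
and the client dispatches on it instead of assuming MD5, falling back to
MD5 for old (3.x) servers that never set the key.]

    #include <stdio.h>

    enum cksum_type { CKSUM_MD5 = 0, CKSUM_SHA256 = 1 };

    /* Stand-in for a reply dict: the key may or may not be present. */
    struct reply { int has_type_key; enum cksum_type type; };

    static const char *verify(const struct reply *r)
    {
        /* Old servers never set the key, so default to MD5. */
        enum cksum_type t = r->has_type_key ? r->type : CKSUM_MD5;
        return t == CKSUM_SHA256 ? "verify with SHA-256"
                                 : "verify with MD5";
    }

    int main(void)
    {
        struct reply old_server  = { 0, CKSUM_MD5 };
        struct reply fips_server = { 1, CKSUM_SHA256 };
        printf("old server:  %s\n", verify(&old_server));
        printf("fips server: %s\n", verify(&fips_server));
        return 0;
    }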
From bugzilla at redhat.com  Mon Jun  3 14:06:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 14:06:06 +0000
Subject: [Bugs] [Bug 1716440] New: SMBD thread panics when connected to from
 OS X machine
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716440

            Bug ID: 1716440
           Summary: SMBD thread panics when connected to from OS X
                    machine
           Product: GlusterFS
           Version: 6
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: libgfapi
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: ryan at magenta.tv
        QA Contact: bugs at gluster.org
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community

Created attachment 1576680
  --> https://bugzilla.redhat.com/attachment.cgi?id=1576680&action=edit
Debug level 10 log of client connection when panic occurs

Description of problem:
When connecting to a share, the SMB thread for that client panics and
constantly restarts. This was tested from a machine running OS X 10.14.4.
I've not been able to test from a Windows machine yet.

Version-Release number of selected component (if applicable):
Gluster = 6.1
Samba = 4.9.6

How reproducible:
Every time

SMB configuration:
[global]
security = user
netbios name = NAS01
clustering = no
server signing = no
max log size = 10000
log file = /var/log/samba/log-%M-test.smbd
logging = file
log level = 10
passdb backend = tdbsam
guest account = nobody
map to guest = bad user
force directory mode = 0777
force create mode = 0777
create mask = 0777
directory mask = 0777
store dos attributes = yes
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
glusterfs:volfile_server = localhost
kernel share modes = No

[VFS]
vfs objects = glusterfs
glusterfs:volume = mcv02
path = /
read only = no
guest ok = yes

Steps to Reproduce:
1. Use provided SMB configuration
2. Restart SMB service
3. Connect to share from client using guest user
4. Tail client logs on server to see panics

Actual results:
SMB thread panics and restarts

Expected results:
Client connects and SMB thread doesn't panic

Additional info:
Tested without Gluster VFS and used the FUSE mount point instead, and the
system did not panic.

--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 14:08:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 14:08:10 +0000
Subject: [Bugs] [Bug 1663519] Memory leak when smb.conf has "store dos
 attributes = yes"
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1663519

ryan at magenta.tv changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Flags                      |needinfo?(ryan at magenta.tv) |

--- Comment #5 from ryan at magenta.tv ---
Hi Anoop,

Sorry for the delay. I've tried to re-test, however we're now using Gluster
6.1 and Samba 4.9.6. Another issue has come up which is preventing me from
testing this issue. I've raised a bug for it here:
https://bugzilla.redhat.com/show_bug.cgi?id=1716440

Once I'm able to re-test I will update this ticket.

Best,
Ryan

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com  Mon Jun  3 14:09:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 14:09:07 +0000
Subject: [Bugs] [Bug 1716440] SMBD thread panics when connected to from OS X
 machine
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716440

ryan at magenta.tv changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Component                  |libgfapi                |gluster-smb

--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 14:13:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 14:13:43 +0000
Subject: [Bugs] [Bug 1709248] [geo-rep]: Non-root - Unable to set up
 mountbroker root directory and group
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1709248

Atin Mukherjee changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
CC                         |                        |amukherj at redhat.com
Blocks                     |                        |1708043
Depends On                 |1708043                 |

Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1708043
[Bug 1708043] [geo-rep]: Non-root - Unable to set up mountbroker root
directory and group

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Mon Jun  3 14:33:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 14:33:05 +0000
Subject: [Bugs] [Bug 1716455] New: OS X error -50 when creating sub-folder
 on Samba share when using Gluster VFS
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716455

            Bug ID: 1716455
           Summary: OS X error -50 when creating sub-folder on Samba
                    share when using Gluster VFS
           Product: GlusterFS
           Version: 6
          Hardware: x86_64
                OS: Mac OS
            Status: NEW
         Component: gluster-smb
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: ryan at magenta.tv
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community

Created attachment 1576693
  --> https://bugzilla.redhat.com/attachment.cgi?id=1576693&action=edit
Debug level 10 log

Description of problem:
OS X Finder produces a -50 error when trying to create a folder anywhere
other than at the top of a share. This occurs when using the Gluster VFS
module.

Version-Release number of selected component (if applicable):
OS X = 10.14.4
Samba = 4.9.6
Gluster = 6.1

How reproducible:
Every time

Steps to Reproduce:
1. Connect to share
2. Create folder at root of share
3. Go into that folder
4. Try to create folder
5. Create fails and produces error -50

Actual results:
Error -50 produced and folder is not created

Expected results:
Folder is created without error

Additional info:
SMB configuration:
[global]
security = ADS
workgroup = DOMAIN
realm = DOMAIN.LOCAL
netbios name = NAS01
max protocol = SMB3
min protocol = SMB2
ea support = yes
clustering = yes
server signing = no
max log size = 10000
glusterfs:loglevel = 5
log file = /var/log/samba/log-%M.smbd
logging = file
log level = 10
template shell = /sbin/nologin
winbind offline logon = false
winbind refresh tickets = yes
winbind enum users = Yes
winbind enum groups = Yes
allow trusted domains = yes
passdb backend = tdbsam
idmap cache time = 604800
idmap negative cache time = 300
winbind cache time = 604800
idmap config magenta:backend = rid
idmap config magenta:range = 10000-999999
idmap config * : backend = tdb
idmap config * : range = 3000-7999
guest account = nobody
map to guest = bad user
force directory mode = 0777
force create mode = 0777
create mask = 0777
directory mask = 0777
hide unreadable = no
store dos attributes = no
unix extensions = no
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
glusterfs:volfile_server = localhost
kernel share modes = No
strict locking = auto
oplocks = yes
durable handles = yes
kernel oplocks = no
posix locking = no
level2 oplocks = no
readdir_attr:aapl_rsize = yes
readdir_attr:aapl_finder_info = no
readdir_attr:aapl_max_access = no

[qc_only]
guest ok = no
read only = no
vfs objects = glusterfs
glusterfs:volume = mcv01
path = "/data/qc_only"
valid users = @"QC_ops"

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 19:22:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 19:22:48 +0000
Subject: [Bugs] [Bug 1716626] New: Invalid memory access while executing
 cleanup_and_exit
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716626

            Bug ID: 1716626
           Summary: Invalid memory access while executing
                    cleanup_and_exit
           Product: Red Hat Gluster Storage
           Version: rhgs-3.5
            Status: NEW
         Component: replicate
          Keywords: Reopened
          Assignee: ksubrahm at redhat.com
          Reporter: rkavunga at redhat.com
        QA Contact: nchilaka at redhat.com
                CC: bugs at gluster.org, pkarampu at redhat.com,
                    rhs-bugs at redhat.com, sankarshan at redhat.com,
                    storage-qa-internal at redhat.com
        Depends On: 1708926
  Target Milestone: ---
    Classification: Red Hat

+++ This bug was initially created as a clone of Bug #1708926 +++

Description of problem:
When executing cleanup_and_exit, the shd daemon crashed. This is because
there is a chance that a parallel graph free thread might be executing
another cleanup.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. run ./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t in a
   loop
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Worker Ant on 2019-05-11 17:59:31 UTC ---

REVIEW: https://review.gluster.org/22709 (glusterfsd/cleanup: Protect graph
object under a lock) posted (#1) for review on master by mohammed rafi kc

--- Additional comment from Pranith Kumar K on 2019-05-14 07:09:23 UTC ---

Rafi,
     Could you share the bt of the core so that it is easier to understand
why exactly it crashed?
Pranith

--- Additional comment from Mohammed Rafi KC on 2019-05-14 16:01:36 UTC ---

Stack trace of thread 30877:
#0  0x0000000000406a07 cleanup_and_exit (glusterfsd)
#1  0x0000000000406b5d glusterfs_sigwaiter (glusterfsd)
#2  0x00007f51000cd58e start_thread (libpthread.so.0)
#3  0x00007f50ffd1d683 __clone (libc.so.6)

Stack trace of thread 30879:
#0  0x00007f51000d3a7a futex_abstimed_wait_cancelable (libpthread.so.0)
#1  0x00007f51003b8616 syncenv_task (libglusterfs.so.0)
#2  0x00007f51003b9240 syncenv_processor (libglusterfs.so.0)
#3  0x00007f51000cd58e start_thread (libpthread.so.0)
#4  0x00007f50ffd1d683 __clone (libc.so.6)

Stack trace of thread 30881:
#0  0x00007f50ffd14cdf __GI___select (libc.so.6)
#1  0x00007f51003ef1cd runner (libglusterfs.so.0)
#2  0x00007f51000cd58e start_thread (libpthread.so.0)
#3  0x00007f50ffd1d683 __clone (libc.so.6)

Stack trace of thread 30880:
#0  0x00007f51000d3a7a futex_abstimed_wait_cancelable (libpthread.so.0)
#1  0x00007f51003b8616 syncenv_task (libglusterfs.so.0)
#2  0x00007f51003b9240 syncenv_processor (libglusterfs.so.0)
#3  0x00007f51000cd58e start_thread (libpthread.so.0)
#4  0x00007f50ffd1d683 __clone (libc.so.6)

Stack trace of thread 30876:
#0  0x00007f51000d7500 __GI___nanosleep (libpthread.so.0)
#1  0x00007f510038a346 gf_timer_proc (libglusterfs.so.0)
#2  0x00007f51000cd58e start_thread (libpthread.so.0)
#3  0x00007f50ffd1d683 __clone (libc.so.6)

Stack trace of thread 30882:
#0  0x00007f50ffd1e06e epoll_ctl (libc.so.6)
#1  0x00007f51003d931e event_handled_epoll (libglusterfs.so.0)
#2  0x00007f50eed9a781 socket_event_poll_in (socket.so)
#3  0x00007f51003d8c9b event_dispatch_epoll_handler (libglusterfs.so.0)
#4  0x00007f51000cd58e start_thread (libpthread.so.0)
#5  0x00007f50ffd1d683 __clone (libc.so.6)

Stack trace of thread 30875:
#0  0x00007f51000cea6d __GI___pthread_timedjoin_ex (libpthread.so.0)
#1  0x00007f51003d8387 event_dispatch_epoll (libglusterfs.so.0)
#2  0x0000000000406592 main (glusterfsd)
#3  0x00007f50ffc44413 __libc_start_main (libc.so.6)
#4  0x00000000004067de _start (glusterfsd)

Stack trace of thread 30878:
#0  0x00007f50ffce97f8 __GI___nanosleep (libc.so.6)
#1  0x00007f50ffce96fe __sleep (libc.so.6)
#2  0x00007f51003a4f5a pool_sweeper (libglusterfs.so.0)
#3  0x00007f51000cd58e start_thread (libpthread.so.0)
#4  0x00007f50ffd1d683 __clone (libc.so.6)

Stack trace of thread 30883:
#0  0x00007f51000d6b8d __lll_lock_wait (libpthread.so.0)
#1  0x00007f51000cfda9 __GI___pthread_mutex_lock (libpthread.so.0)
#2  0x00007f510037cd1f _gf_msg_plain_internal (libglusterfs.so.0)
#3  0x00007f510037ceb3 _gf_msg_plain (libglusterfs.so.0)
#4  0x00007f5100382d43 gf_log_dump_graph (libglusterfs.so.0)
#5  0x00007f51003b514f glusterfs_process_svc_attach_volfp (libglusterfs.so.0)
#6  0x000000000040b16d mgmt_process_volfile (glusterfsd)
#7  0x0000000000410792 mgmt_getspec_cbk (glusterfsd)
#8  0x00007f51003256b1 rpc_clnt_handle_reply (libgfrpc.so.0)
#9  0x00007f5100325a53 rpc_clnt_notify (libgfrpc.so.0)
#10 0x00007f5100322973 rpc_transport_notify (libgfrpc.so.0)
#11 0x00007f50eed9a45c socket_event_poll_in (socket.so)
#12 0x00007f51003d8c9b event_dispatch_epoll_handler (libglusterfs.so.0)
#13 0x00007f51000cd58e start_thread (libpthread.so.0)
#14 0x00007f50ffd1d683 __clone (libc.so.6)

--- Additional comment from Pranith Kumar K on 2019-05-15 05:34:33 UTC ---

(In reply to Mohammed Rafi KC from comment #3)
> Stack trace of thread 30877:
> #0  0x0000000000406a07 cleanup_and_exit (glusterfsd)
> #1  0x0000000000406b5d glusterfs_sigwaiter (glusterfsd)
> #2  0x00007f51000cd58e start_thread (libpthread.so.0)
> #3  0x00007f50ffd1d683 __clone (libc.so.6)
>
> Stack trace of thread 30879:
> #0  0x00007f51000d3a7a futex_abstimed_wait_cancelable (libpthread.so.0)
> #1  0x00007f51003b8616 syncenv_task (libglusterfs.so.0)
> #2  0x00007f51003b9240 syncenv_processor (libglusterfs.so.0)
> #3  0x00007f51000cd58e start_thread (libpthread.so.0)
> #4  0x00007f50ffd1d683 __clone (libc.so.6)
>
> Stack trace of thread 30881:
> #0  0x00007f50ffd14cdf __GI___select (libc.so.6)
> #1  0x00007f51003ef1cd runner (libglusterfs.so.0)
> #2  0x00007f51000cd58e start_thread (libpthread.so.0)
> #3  0x00007f50ffd1d683 __clone (libc.so.6)
>
> Stack trace of thread 30880:
> #0  0x00007f51000d3a7a futex_abstimed_wait_cancelable (libpthread.so.0)
> #1  0x00007f51003b8616 syncenv_task (libglusterfs.so.0)
> #2  0x00007f51003b9240 syncenv_processor (libglusterfs.so.0)
> #3  0x00007f51000cd58e start_thread (libpthread.so.0)
> #4  0x00007f50ffd1d683 __clone (libc.so.6)
>
> Stack trace of thread 30876:
> #0  0x00007f51000d7500 __GI___nanosleep (libpthread.so.0)
> #1  0x00007f510038a346 gf_timer_proc (libglusterfs.so.0)
> #2  0x00007f51000cd58e start_thread (libpthread.so.0)
> #3  0x00007f50ffd1d683 __clone (libc.so.6)
>
> Stack trace of thread 30882:
> #0  0x00007f50ffd1e06e epoll_ctl (libc.so.6)
> #1  0x00007f51003d931e event_handled_epoll (libglusterfs.so.0)
> #2  0x00007f50eed9a781 socket_event_poll_in (socket.so)
> #3  0x00007f51003d8c9b event_dispatch_epoll_handler (libglusterfs.so.0)
> #4  0x00007f51000cd58e start_thread (libpthread.so.0)
> #5  0x00007f50ffd1d683 __clone (libc.so.6)
>
> Stack trace of thread 30875:
> #0  0x00007f51000cea6d __GI___pthread_timedjoin_ex (libpthread.so.0)
> #1  0x00007f51003d8387 event_dispatch_epoll (libglusterfs.so.0)
> #2  0x0000000000406592 main (glusterfsd)
> #3  0x00007f50ffc44413 __libc_start_main (libc.so.6)
> #4  0x00000000004067de _start (glusterfsd)
>
> Stack trace of thread 30878:
> #0  0x00007f50ffce97f8 __GI___nanosleep (libc.so.6)
> #1  0x00007f50ffce96fe __sleep (libc.so.6)
> #2  0x00007f51003a4f5a pool_sweeper (libglusterfs.so.0)
> #3  0x00007f51000cd58e start_thread (libpthread.so.0)
> #4  0x00007f50ffd1d683 __clone (libc.so.6)
>
> Stack trace of thread 30883:
> #0  0x00007f51000d6b8d __lll_lock_wait (libpthread.so.0)
> #1  0x00007f51000cfda9 __GI___pthread_mutex_lock (libpthread.so.0)
> #2  0x00007f510037cd1f _gf_msg_plain_internal (libglusterfs.so.0)
> #3  0x00007f510037ceb3 _gf_msg_plain (libglusterfs.so.0)
> #4  0x00007f5100382d43 gf_log_dump_graph (libglusterfs.so.0)
> #5  0x00007f51003b514f glusterfs_process_svc_attach_volfp
>    (libglusterfs.so.0)
> #6  0x000000000040b16d mgmt_process_volfile (glusterfsd)
> #7  0x0000000000410792 mgmt_getspec_cbk (glusterfsd)
> #8  0x00007f51003256b1 rpc_clnt_handle_reply (libgfrpc.so.0)
> #9  0x00007f5100325a53 rpc_clnt_notify (libgfrpc.so.0)
> #10 0x00007f5100322973 rpc_transport_notify (libgfrpc.so.0)
> #11 0x00007f50eed9a45c socket_event_poll_in (socket.so)
> #12 0x00007f51003d8c9b event_dispatch_epoll_handler (libglusterfs.so.0)
> #13 0x00007f51000cd58e start_thread (libpthread.so.0)
> #14 0x00007f50ffd1d683 __clone (libc.so.6)

Was graph->active NULL? What led to the crash?
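[The fix that eventually merged is titled "glusterfsd/cleanup: Protect
graph object under a lock" (see the next comment). As a rough,
hypothetical illustration of that shape of fix - the names and structure
here are invented for this sketch, not taken from the patch - serializing
teardown and a concurrent graph attach on a shared pointer looks like:]

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct graph { int active; };

    static pthread_mutex_t graph_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct graph *the_graph;

    static void *attacher(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&graph_lock);
        if (the_graph)            /* skip work if cleanup already ran */
            the_graph->active = 1;
        pthread_mutex_unlock(&graph_lock);
        return NULL;
    }

    static void cleanup_and_exit_sketch(void)
    {
        pthread_mutex_lock(&graph_lock);
        free(the_graph);          /* teardown happens under the lock... */
        the_graph = NULL;         /* ...and the pointer is cleared */
        pthread_mutex_unlock(&graph_lock);
    }

    int main(void)
    {
        pthread_t t;
        the_graph = calloc(1, sizeof(*the_graph));
        pthread_create(&t, NULL, attacher, NULL);
        cleanup_and_exit_sketch();
        pthread_join(t, NULL);
        puts("no use-after-free: graph access is serialized");
        return 0;
    }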

--- Additional comment from Worker Ant on 2019-05-17 18:08:44 UTC ---

REVIEW: https://review.gluster.org/22743 (afr/frame: Destroy frame after
afr_selfheal_entry_granular) posted (#1) for review on master by mohammed
rafi kc

--- Additional comment from Worker Ant on 2019-05-21 11:37:12 UTC ---

REVIEW: https://review.gluster.org/22743 (afr/frame: Destroy frame after
afr_selfheal_entry_granular) merged (#3) on master by Pranith Kumar
Karampuri

--- Additional comment from Worker Ant on 2019-05-31 11:28:15 UTC ---

REVIEW: https://review.gluster.org/22709 (glusterfsd/cleanup: Protect graph
object under a lock) merged (#10) on master by Amar Tumballi

Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1708926
[Bug 1708926] Invalid memory access while executing cleanup_and_exit

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Mon Jun  3 19:22:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 19:22:48 +0000
Subject: [Bugs] [Bug 1708926] Invalid memory access while executing
 cleanup_and_exit
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1708926

Mohammed Rafi KC changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Blocks                     |                        |1716626

Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1716626
[Bug 1716626] Invalid memory access while executing cleanup_and_exit

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Mon Jun  3 19:22:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 19:22:50 +0000
Subject: [Bugs] [Bug 1716626] Invalid memory access while executing
 cleanup_and_exit
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716626

RHEL Product and Program Management changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Rule Engine Rule           |                        |Gluster: set proposed
                           |                        |release flag for new BZs
                           |                        |at RHGS

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Mon Jun  3 19:23:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 03 Jun 2019 19:23:36 +0000
Subject: [Bugs] [Bug 1716626] Invalid memory access while executing
 cleanup_and_exit
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716626

Mohammed Rafi KC changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |NEW                     |POST
Assignee                   |ksubrahm at redhat.com  |rkavunga at redhat.com

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 00:07:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 00:07:30 +0000
Subject: [Bugs] [Bug 1716695] New: Fix memory leaks that are present even
 after an xlator fini [client side xlator]
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716695

            Bug ID: 1716695
           Summary: Fix memory leaks that are present even after an
                    xlator fini [client side xlator]
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: core
          Assignee: bugs at gluster.org
          Reporter: rkavunga at redhat.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community

Description of problem:
There are quite a few memory leaks identified for client side xlators:

1) xlators/cluster/afr/src/afr.c    ---> this->local_pool is not freed
2) xlators/cluster/ec/src/ec.c      ---> this->itable is not freed
3) protocol/client/src/client.c     ---> this->local_pool is not freed

I will add more to this list in case I find any other leaks.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 00:19:01 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 00:19:01 +0000
Subject: [Bugs] [Bug 1716695] Fix memory leaks that are present even after
 an xlator fini [client side xlator]
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716695

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22806

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 00:19:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 00:19:02 +0000
Subject: [Bugs] [Bug 1716695] Fix memory leaks that are present even after
 an xlator fini [client side xlator]
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716695

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |NEW                     |POST

--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22806 (afr/fini: Free local_pool data
during an afr fini) posted (#1) for review on master by mohammed rafi kc

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 00:20:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 00:20:14 +0000
Subject: [Bugs] [Bug 1716695] Fix memory leaks that are present even after
 an xlator fini [client side xlator]
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716695

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22807

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 00:20:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 00:20:15 +0000
Subject: [Bugs] [Bug 1716695] Fix memory leaks that are present even after
 an xlator fini [client side xlator]
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716695

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22807 (ec/fini: Free itable during an ec
fini) posted (#1) for review on master by mohammed rafi kc

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
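[The two patches above follow the same fini-time cleanup pattern. A
minimal standalone sketch of that pattern - the types and allocator names
below are hypothetical stand-ins, not Gluster's actual mem-pool or inode
table code: whatever init() allocates into the xlator's private state must
be released and NULLed in fini(), so a later cleanup pass sees NULL
instead of a dangling pointer.]

    #include <stdlib.h>

    /* Hypothetical stand-ins; in GlusterFS these would be
     * this->local_pool (a mem_pool) or this->itable (an inode table). */
    struct mem_pool { void *slab; };
    struct xlator { struct mem_pool *local_pool; };

    static struct mem_pool *mem_pool_new(void)
    {
        return calloc(1, sizeof(struct mem_pool));
    }

    static void mem_pool_destroy(struct mem_pool *p)
    {
        free(p);                  /* free(NULL) is a safe no-op */
    }

    static int init(struct xlator *this)
    {
        this->local_pool = mem_pool_new();
        return this->local_pool ? 0 : -1;
    }

    static void fini(struct xlator *this)
    {
        /* Release the pool and clear the pointer so a second cleanup
         * path (e.g. a parallel graph teardown) cannot double-free. */
        mem_pool_destroy(this->local_pool);
        this->local_pool = NULL;
    }

    int main(void)
    {
        struct xlator xl = { 0 };
        if (init(&xl) == 0)
            fini(&xl);
        return 0;
    }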
From bugzilla at redhat.com  Tue Jun  4 02:59:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 02:59:31 +0000
Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=789278

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22809

--
You are receiving this mail because:
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 02:59:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 02:59:33 +0000
Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=789278

--- Comment #1633 from Worker Ant ---
REVIEW: https://review.gluster.org/22809 (posix: coverity fix) posted (#1)
for review on master by MOHIT AGRAWAL

--
You are receiving this mail because:
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 04:17:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 04:17:20 +0000
Subject: [Bugs] [Bug 1716626] Invalid memory access while executing
 cleanup_and_exit
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716626

Vivek Das changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
CC                         |                        |vdas at redhat.com
Blocks                     |                        |1696809

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 04:17:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 04:17:24 +0000
Subject: [Bugs] [Bug 1716626] Invalid memory access while executing
 cleanup_and_exit
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716626

RHEL Product and Program Management changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Rule Engine Rule           |                        |Gluster: Auto pm_ack for
                           |                        |dev&qe approved
                           |                        |in-flight RHGS3.5 BZs
Rule Engine Rule           |                        |665
Target Release             |---                     |RHGS 3.5.0
Rule Engine Rule           |                        |666
Rule Engine Rule           |                        |327

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:07:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:07:27 +0000
Subject: [Bugs] [Bug 1716760] New: Make debugging hung frames easier
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716760

            Bug ID: 1716760
           Summary: Make debugging hung frames easier
           Product: Red Hat Gluster Storage
           Version: rhgs-3.5
            Status: NEW
         Component: core
          Assignee: atumball at redhat.com
          Reporter: pkarampu at redhat.com
        QA Contact: rhinduja at redhat.com
                CC: bugs at gluster.org, rhs-bugs at redhat.com,
                    sankarshan at redhat.com,
                    storage-qa-internal at redhat.com
        Depends On: 1714098
  Target Milestone: ---
    Classification: Red Hat

+++ This bug was initially created as a clone of Bug #1714098 +++

Description of problem:
At the moment a new stack doesn't populate frame->root->unique in all
cases. This makes it difficult to debug hung frames by examining
successive statedumps. Fuse and server xlators populate it whenever they
can, but other xlators won't be able to assign one when they need to
create a new frame/stack. What we need is for unique to be correct.
If a stack with the same unique is present in successive statedumps, that
means the same operation is still in progress. This makes the
finding-hung-frames part of debugging easier.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Worker Ant on 2019-05-27 06:27:36 UTC ---

REVIEW: https://review.gluster.org/22773 (stack: Make sure to have unique
call-stacks in all cases) posted (#1) for review on master by Pranith
Kumar Karampuri

--- Additional comment from Worker Ant on 2019-05-30 15:55:06 UTC ---

REVIEW: https://review.gluster.org/22773 (stack: Make sure to have unique
call-stacks in all cases) merged (#4) on master by Amar Tumballi

Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1714098
[Bug 1714098] Make debugging hung frames easier

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:07:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:07:27 +0000
Subject: [Bugs] [Bug 1714098] Make debugging hung frames easier
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1714098

Pranith Kumar K changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Blocks                     |                        |1716760

Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1716760
[Bug 1716760] Make debugging hung frames easier

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:07:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:07:30 +0000
Subject: [Bugs] [Bug 1716760] Make debugging hung frames easier
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716760

RHEL Product and Program Management changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Rule Engine Rule           |                        |Gluster: set proposed
                           |                        |release flag for new BZs
                           |                        |at RHGS

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:08:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:08:13 +0000
Subject: [Bugs] [Bug 1716760] Make debugging hung frames easier
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716760

Pranith Kumar K changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Assignee                   |atumball at redhat.com  |pkarampu at redhat.com

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:18:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:18:20 +0000
Subject: [Bugs] [Bug 1716766] New: [Thin-arbiter] TA process is not picking
 24007 as port while starting up
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716766

            Bug ID: 1716766
           Summary: [Thin-arbiter] TA process is not picking 24007 as
                    port while starting up
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: replicate
          Assignee: bugs at gluster.org
          Reporter: aspandey at redhat.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community

Description of problem:
TA process is not picking 24007 as port while starting up.

Problem:
In the unit file of the TA process we have been using ta-vol as the volume
id and also ta-vol-server.transport.socket.listen-port=24007. In our
volume file for the TA process we only consider the volname as "ta" and
not as "ta-vol". That's why it was not able to assign this port number to
our ta process: in the volume file it will try to find the server xlator
as ta-vol.

volume ta-server    <<<<<<<<< not ta-vol
    type protocol/server
    option transport.listen-backlog 10
    option transport.socket.keepalive-count 9
    option transport.socket.keepalive-interval 2
    option transport.socket.keepalive-time 20
    option transport.tcp-user-timeout 0
    option transport.socket.keepalive 1
    option auth.addr./mnt/thin-arbiter.allow *
    option auth-path /mnt/thin-arbiter
    option transport.address-family inet
    option transport-type tcp
    subvolumes ta-io-stats
end-volume

Solution:
We just need to change the command that the unit file executes.

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:25:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:25:41 +0000
Subject: [Bugs] [Bug 1716760] Make debugging hung frames easier
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716760

Pranith Kumar K changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |NEW                     |POST

--- Comment #2 from Pranith Kumar K ---
Patch link: https://code.engineering.redhat.com/gerrit/#/c/172304

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:30:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:30:09 +0000
Subject: [Bugs] [Bug 1716440] SMBD thread panics when connected to from OS X
 machine
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716440

Anoop C S changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Flags                      |                        |needinfo?(ryan at magenta.tv)

--- Comment #1 from Anoop C S ---
(In reply to ryan from comment #0)
> [VFS]
> vfs objects = glusterfs

The 'fruit' and 'streams_xattr' vfs modules are recommended to be loaded
while connecting/accessing/operating on SMB shares using Samba from Mac
OS X clients. Can you re-try connecting to shares with the following
additional settings:

vfs objects = fruit streams_xattr glusterfs
fruit:encoding = native

Also please add the following in the [global] section:

ea support = yes
fruit:aapl = yes

--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:39:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:39:07 +0000
Subject: [Bugs] [Bug 1716626] Invalid memory access while executing
 cleanup_and_exit
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716626

Atin Mukherjee changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |POST                    |MODIFIED

--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com  Tue Jun  4 05:41:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:41:55 +0000
Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep
 log is flooded
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1714536

Kotresh HR changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |NEW                     |ASSIGNED
Version                    |unspecified             |rhgs-3.5
Assignee                   |sunkumar at redhat.com  |khiremat at redhat.com

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:47:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:47:38 +0000
Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned
 properly after an ec fini
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1703948

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22810

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:47:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:47:39 +0000
Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned
 properly after an ec fini
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1703948

--- Comment #6 from Worker Ant ---
REVIEW: https://review.gluster.org/22810 (xlator/log: Add more logging in
xlator_is_cleanup_starting) posted (#1) for review on master by mohammed
rafi kc

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:57:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:57:14 +0000
Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep
 log is flooded
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1714536

Kotresh HR changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
Status                     |ASSIGNED                |POST

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Tue Jun  4 05:57:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 05:57:15 +0000
Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=789278

--- Comment #1634 from Worker Ant ---
REVIEW: https://review.gluster.org/22801 (glusterd: coverity fix) merged
(#4) on master by Atin Mukherjee

--
You are receiving this mail because:
You are the assignee for the bug.

From bugzilla at redhat.com  Tue Jun  4 06:01:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 06:01:28 +0000
Subject: [Bugs] [Bug 1716766] [Thin-arbiter] TA process is not picking 24007
 as port while starting up
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1716766

Worker Ant changed:

           What            |Removed                 |Added
----------------------------------------------------------------------------
External Bug ID            |                        |Gluster.org Gerrit 22811

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Jun 4 06:01:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 06:01:29 +0000 Subject: [Bugs] [Bug 1716766] [Thin-arbiter] TA process is not picking 24007 as port while starting up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716766 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22811 (cluster/replicate: Modify command in unit file to assign port correctly) posted (#1) for review on master by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 06:04:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 06:04:55 +0000 Subject: [Bugs] [Bug 1716766] [Thin-arbiter] TA process is not picking 24007 as port while starting up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716766 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |aspandey at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 06:15:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 06:15:05 +0000 Subject: [Bugs] [Bug 1716790] New: geo-rep: Rename with same name testcase is failing with EV Volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716790 Bug ID: 1716790 Summary: geo-rep: Rename with same name testcase is failing with EV Volume Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Rename with same name testcase is failing with EC Volume Version-Release number of selected component (if applicable): mainline How reproducible: Occasionally Steps to Reproduce: Occasional upstream regression run failures 1. https://build.gluster.org/job/centos7-regression/6281/console 2. https://build.gluster.org/job/centos7-regression/6278/ Actual results: geo-rep EC volume rename testcase failed occasionally Expected results: geo-rep EC volume rename testcase should always pass Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 06:16:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 06:16:09 +0000 Subject: [Bugs] [Bug 1716790] geo-rep: Rename with same name testcase is failing with EV Volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716790 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |sacharya at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 4 06:22:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 06:22:20 +0000 Subject: [Bugs] [Bug 1716790] geo-rep: Rename with same destination name test case occasionally fails on EC Volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716790 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|geo-rep: Rename with same |geo-rep: Rename with same |name testcase is failing |destination name test case |with EV Volume |occasionally fails on EC | |Volume -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 4 06:25:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 06:25:25 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #678 from Worker Ant --- REVIEW: https://review.gluster.org/22804 (tests/geo-rep: Remove a rename test case on EC volume) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 06:25:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 06:25:49 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #53 from Worker Ant --- REVIEW: https://review.gluster.org/22803 (tests/geo-rep: Add geo-rep glusterd test cases) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 06:55:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 06:55:25 +0000 Subject: [Bugs] [Bug 1716440] SMBD thread panics when connected to from OS X machine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716440 ryan at magenta.tv changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(ryan at magenta.tv) | --- Comment #2 from ryan at magenta.tv --- Hi Anoop, Thanks for getting back to me. I've tried your suggestion but unfortunately the issue still remains. Here is my updated smb.conf: [global] security = user netbios name = NAS01 clustering = no server signing = no max log size = 10000 log file = /var/log/samba/log-%M-test.smbd logging = file log level = 10 passdb backend = tdbsam guest account = nobody map to guest = bad user force directory mode = 0777 force create mode = 0777 create mask = 0777 directory mask = 0777 store dos attributes = yes load printers = no printing = bsd printcap name = /dev/null disable spoolss = yes glusterfs:volfile_server = localhost ea support = yes fruit:aapl = yes kernel share modes = No [VFS] vfs objects = fruit streams_xattr glusterfs fruit:encoding = native glusterfs:volume = mcv02 path = / read only = no guest ok = yes This time when creating a new folder at the root of the share, it creates, then disappears, sometimes coming back, sometimes not. When I was able to traverse into a sub-folder, the same error is received. I will attach the debug level 10 logs to the bug. 
Many thanks for your help,
Ryan

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Tue Jun 4 06:56:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 06:56:23 +0000
Subject: [Bugs] [Bug 1716440] SMBD thread panics when connected to from OS X
	machine
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1716440

--- Comment #3 from ryan at magenta.tv ---
Created attachment 1576920
  --> https://bugzilla.redhat.com/attachment.cgi?id=1576920&action=edit
Debug level 10 log of issue after adding streams_xattr and fruit

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Tue Jun 4 07:30:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 07:30:49 +0000
Subject: [Bugs] [Bug 1705884] Image size as reported from the fuse mount is
	incorrect
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1705884

Worker Ant changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|POST                        |CLOSED
         Resolution|---                         |NEXTRELEASE
        Last Closed|                            |2019-06-04 07:30:49

--- Comment #4 from Worker Ant ---
REVIEW: https://review.gluster.org/22681 (features/shard: Fix block-count
accounting upon truncate to lower size) merged (#6) on master by Xavi
Hernandez

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Tue Jun 4 07:52:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 07:52:34 +0000
Subject: [Bugs] [Bug 1716812] New: Failed to create volume which
	transport_type is "tcp, rdma"
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1716812

            Bug ID: 1716812
           Summary: Failed to create volume which transport_type is
                    "tcp,rdma"
           Product: GlusterFS
           Version: 4.1
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: glusterd
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: guol-fnst at cn.fujitsu.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community

Description of problem:
gluster volume create 11 transport tcp,rdma 193.168.141.101:/tmp/11
193.168.141.101:/tmp/12 force
volume create: 11: failed: Failed to create volume files

Version-Release number of selected component (if applicable):
# gluster --version
glusterfs 4.1.8
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
# ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens192: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:50:56:9c:8b:a9 brd ff:ff:ff:ff:ff:ff inet 193.168.141.101/16 brd 193.168.255.255 scope global dynamic ens192 valid_lft 2591093sec preferred_lft 2591093sec inet6 fe80::250:56ff:fe9c:8ba9/64 scope link valid_lft forever preferred_lft forever 3: ens224: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:50:56:9c:53:58 brd ff:ff:ff:ff:ff:ff How reproducible: Steps to Reproduce: 1.rxe_cfg start 2.rxe_cfg add ens192 3.gluster volume create 11 transport tcp,rdma 193.168.141.101:/tmp/11 193.168.141.101:/tmp/12 force Actual results: volume create: 11: failed: Failed to create volume files Expected results: Success to create volume Additional info: [2019-06-04 07:36:45.966125] I [MSGID: 100030] [glusterfsd.c:2741:main] 0-glusterd: Started running glusterd version 4.1.8 (args: glusterd --xlator-option *.upgrade=on -N) [2019-06-04 07:36:45.970884] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536 [2019-06-04 07:36:45.970900] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory [2019-06-04 07:36:45.970906] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory [2019-06-04 07:36:45.973455] E [rpc-transport.c:284:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/4.1.8/rpc-transport/rdma.so: cannot open shared object file: No such file or directory [2019-06-04 07:36:45.973468] W [rpc-transport.c:288:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine [2019-06-04 07:36:45.973473] W [rpcsvc.c:1781:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed [2019-06-04 07:36:45.973478] E [MSGID: 106244] [glusterd.c:1764:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport [2019-06-04 07:36:45.976348] I [MSGID: 106513] [glusterd-store.c:2240:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 31202 [2019-06-04 07:36:45.977372] I [MSGID: 106544] [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: 79e7e129-d041-48b6-b1d0-746c55d148fc [2019-06-04 07:36:45.989706] I [MSGID: 106194] [glusterd-store.c:3850:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list. 
Final graph: +------------------------------------------------------------------------------+ 1: volume management 2: type mgmt/glusterd 3: option rpc-auth.auth-glusterfs on 4: option rpc-auth.auth-unix on 5: option rpc-auth.auth-null on 6: option rpc-auth-allow-insecure on 7: option transport.listen-backlog 10 8: option upgrade on 9: option event-threads 1 10: option ping-timeout 0 11: option transport.socket.read-fail-log off 12: option transport.socket.keepalive-interval 2 13: option transport.socket.keepalive-time 10 14: option transport-type rdma 15: option working-directory /var/lib/glusterd 16: end-volume 17: +------------------------------------------------------------------------------+ [2019-06-04 07:36:46.005401] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-06-04 07:36:46.006879] W [glusterfsd.c:1514:cleanup_and_exit] (-->/usr/lib64/libpthread.so.0(+0x7dd5) [0x7f55547bbdd5] -->glusterd(glusterfs_sigwaiter+0xe5) [0x55c659e7dd65] -->glusterd(cleanup_and_exit+0x6b) [0x55c659e7db8b] ) 0-: received signum (15), shutting down [2019-06-04 07:36:46.006997] E [rpcsvc.c:1536:rpcsvc_program_unregister_portmap] 0-rpc-service: Could not unregister with portmap [2019-06-04 07:36:46.007004] E [rpcsvc.c:1662:rpcsvc_program_unregister] 0-rpc-service: portmap unregistration of program failed [2019-06-04 07:36:46.007008] E [rpcsvc.c:1708:rpcsvc_program_unregister] 0-rpc-service: Program unregistration failed: GlusterD svc cli, Num: 1238463, Ver: 2, Port: 0 [2019-06-04 07:36:46.007061] E [rpcsvc.c:1536:rpcsvc_program_unregister_portmap] 0-rpc-service: Could not unregister with portmap [2019-06-04 07:36:46.007066] E [rpcsvc.c:1662:rpcsvc_program_unregister] 0-rpc-service: portmap unregistration of program failed [2019-06-04 07:36:46.007070] E [rpcsvc.c:1708:rpcsvc_program_unregister] 0-rpc-service: Program unregistration failed: Gluster Handshake, Num: 14398633, Ver: 2, Port: 0 [2019-06-04 07:37:18.784525] I [MSGID: 100030] [glusterfsd.c:2741:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 4.1.8 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO) [2019-06-04 07:37:18.787926] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536 [2019-06-04 07:37:18.787944] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory [2019-06-04 07:37:18.787950] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory [2019-06-04 07:37:18.814752] W [MSGID: 103071] [rdma.c:4629:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device] [2019-06-04 07:37:18.814780] W [MSGID: 103055] [rdma.c:4938:init] 0-rdma.management: Failed to initialize IB Device [2019-06-04 07:37:18.814786] W [rpc-transport.c:351:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed [2019-06-04 07:37:18.814844] W [rpcsvc.c:1781:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed [2019-06-04 07:37:18.814852] E [MSGID: 106244] [glusterd.c:1764:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport [2019-06-04 07:37:19.617049] I [MSGID: 106513] [glusterd-store.c:2240:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 31202 [2019-06-04 07:37:19.617342] I [MSGID: 106544] [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: 79e7e129-d041-48b6-b1d0-746c55d148fc 
[2019-06-04 07:37:19.626546] I [MSGID: 106194]
[glusterd-store.c:3850:glusterd_store_retrieve_missed_snaps_list]
0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 10
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:
+------------------------------------------------------------------------------+
[2019-06-04 07:37:19.626791] I [MSGID: 101190]
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2019-06-04 07:37:20.874611] W [MSGID: 101095]
[xlator.c:181:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/4.1.8/xlator/nfs/server.so: cannot open shared object
file: No such file or directory
[2019-06-04 07:37:20.889571] E [MSGID: 106068]
[glusterd-volgen.c:1034:volgen_write_volfile] 0-management: failed to create
volfile
[2019-06-04 07:37:20.889588] E
[glusterd-volgen.c:6727:glusterd_create_volfiles] 0-management: Could not
generate gfproxy client volfiles
[2019-06-04 07:37:20.889601] E [MSGID: 106122]
[glusterd-syncop.c:1482:gd_commit_op_phase] 0-management: Commit of operation
'Volume Create' failed on localhost : Failed to create volume files
[2019-06-04 07:38:49.194175] W [MSGID: 101095]
[xlator.c:181:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/4.1.8/xlator/nfs/server.so: cannot open shared object
file: No such file or directory
[2019-06-04 07:38:49.211380] E [MSGID: 106068]
[glusterd-volgen.c:1034:volgen_write_volfile] 0-management: failed to create
volfile
[2019-06-04 07:38:49.211407] E
[glusterd-volgen.c:6727:glusterd_create_volfiles] 0-management: Could not
generate gfproxy client volfiles
[2019-06-04 07:38:49.211433] E [MSGID: 106122]
[glusterd-syncop.c:1482:gd_commit_op_phase] 0-management: Commit of operation
'Volume Create' failed on localhost : Failed to create volume files

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Tue Jun 4 07:58:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 07:58:44 +0000
Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type
	is "tcp, rdma"
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1716812

--- Comment #1 from guolei ---
The test passes on glusterfs 3.12.9 but fails on glusterfs 3.13.2 and later
versions.
generate_client_volfiles (glusterd_volinfo_t *volinfo,
                          glusterd_client_type_t client_type)
{
        int                i                  = 0;
        int                ret                = -1;
        char               filepath[PATH_MAX] = {0,};
        char              *types[]            = {NULL, NULL, NULL};
        dict_t            *dict               = NULL;
        xlator_t          *this               = NULL;
        gf_transport_type  type               = GF_TRANSPORT_TCP;

        this = THIS;

        enumerate_transport_reqs (volinfo->transport_type, types);
        dict = dict_new ();
        if (!dict)
                goto out;
        for (i = 0; types[i]; i++) {
                memset (filepath, 0, sizeof (filepath));
                ret = dict_set_str (dict, "client-transport-type", types[i]);
                if (ret)
                        goto out;
                type = transport_str_to_type (types[i]);

                ret = dict_set_uint32 (dict, "trusted-client", client_type);
                if (ret)
                        goto out;

                if (client_type == GF_CLIENT_TRUSTED) {
                        ret = glusterd_get_trusted_client_filepath (filepath,
                                                                    volinfo,
                                                                    type);
                } else if (client_type == GF_CLIENT_TRUSTED_PROXY) {
                        glusterd_get_gfproxy_client_volfile (volinfo,
                                                             filepath,
                                                             PATH_MAX);
                        <---------------------------- Maybe this is the
                        problem? The transport type should be passed to
                        glusterd_get_gfproxy_client_volfile, or filepath is
                        NULL.
                        ret = dict_set_str (dict, "gfproxy-client", "on");
                } else {
                        ret = glusterd_get_client_filepath (filepath,
                                                            volinfo, type);
                }
                if (ret) {
                        gf_msg (this->name, GF_LOG_ERROR, EINVAL,
                                GD_MSG_INVALID_ENTRY,
                                "Received invalid transport-type");
                        goto out;
                }

                ret = generate_single_transport_client_volfile (volinfo,
                                                                filepath,
                                                                dict);
                if (ret)
                        goto out;
        }

        /* Generate volfile for rebalance process */
        glusterd_get_rebalance_volfile (volinfo, filepath, PATH_MAX);
        ret = build_rebalance_volfile (volinfo, filepath, dict);
        if (ret) {
                gf_msg (this->name, GF_LOG_ERROR, 0,
                        GD_MSG_VOLFILE_CREATE_FAIL,
                        "Failed to create rebalance volfile for %s",
                        volinfo->volname);
                goto out;
        }

out:
        if (dict)
                dict_unref (dict);

        gf_msg_trace ("glusterd", 0, "Returning %d", ret);
        return ret;
}

void
glusterd_get_gfproxy_client_volfile (glusterd_volinfo_t *volinfo, char *path,
                                     int path_len)
{
        char             workdir[PATH_MAX] = {0, };
        glusterd_conf_t *priv              = THIS->private;

        GLUSTERD_GET_VOLUME_DIR (workdir, volinfo, priv);

        switch (volinfo->transport_type) {
        case GF_TRANSPORT_TCP:
                snprintf (path, path_len,
                          "%s/trusted-%s.tcp-gfproxy-fuse.vol",
                          workdir, volinfo->volname);
                break;
        case GF_TRANSPORT_RDMA:
                snprintf (path, path_len,
                          "%s/trusted-%s.rdma-gfproxy-fuse.vol",
                          workdir, volinfo->volname);
                break;
        default:
                break;
        }
}

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Tue Jun 4 09:05:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 09:05:10 +0000
Subject: [Bugs] [Bug 1716830] New: DHT: directory permissions are wiped out
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1716830

            Bug ID: 1716830
           Summary: DHT: directory permissions are wiped out
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: distribute
          Assignee: bugs at gluster.org
          Reporter: nbalacha at redhat.com
                CC: bugs at gluster.org, rhs-bugs at redhat.com,
                    sankarshan at redhat.com, saraut at redhat.com,
                    storage-qa-internal at redhat.com
        Depends On: 1716821
  Target Milestone: ---
    Classification: Community

+++ This bug was initially created as a clone of Bug #1716821 +++

Description of problem:
A sequence of steps can wipe out the permissions set on a directory.
Version-Release number of selected component (if applicable):
RHGS 3.5.0

How reproducible:
Consistently

Steps to Reproduce:

[root at rhgs313-6 ~]# gluster volume create vol1
192.168.122.6:/bricks/brick1/vol1-1
volume create: vol1: success: please start the volume to access data
[root at rhgs313-6 ~]# gluster v start vol1
volume start: vol1: success
[root at rhgs313-6 ~]# mount -t glusterfs -s 192.168.122.6:/vol1 /mnt/fuse1
[root at rhgs313-6 fuse]# cd /mnt/fuse1
[root at rhgs313-6 fuse1]# mkdir dir1
[root at rhgs313-6 fuse1]# cd dir1/
[root at rhgs313-6 dir1]# getx /bricks/brick1/vol1-*/dir1
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/vol1-1/dir1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0xbf9444c0f8614d81a5758ed801e9f7e0
trusted.glusterfs.dht=0x000000000000000000000000ffffffff
trusted.glusterfs.mdata=0x010000000000000000000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6

[root at rhgs313-6 dir1]# gluster v add-brick vol1
192.168.122.6:/bricks/brick1/vol1-2 force
volume add-brick: success
[root at rhgs313-6 dir1]# ll
total 0

Check the directory permissions and xattrs on the bricks:

[root at rhgs313-6 dir1]# ll /bricks/brick1/vol1-*
/bricks/brick1/vol1-1:
total 0
drwxr-xr-x. 2 root root 6 Jun 4 13:50 dir1

/bricks/brick1/vol1-2:
total 0
drwxr-xr-x. 2 root root 6 Jun 4 13:50 dir1

[root at rhgs313-6 dir1]# getx /bricks/brick1/vol1-*/dir1
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/vol1-1/dir1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0xbf9444c0f8614d81a5758ed801e9f7e0
trusted.glusterfs.dht=0x000000000000000000000000ffffffff
trusted.glusterfs.mdata=0x010000000000000000000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6

# file: bricks/brick1/vol1-2/dir1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0xbf9444c0f8614d81a5758ed801e9f7e0
trusted.glusterfs.dht=0x00000000000000000000000000000000
trusted.glusterfs.mdata=0x010000000000000000000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6

From the mount point, cd one level up and then back again into dir1.

[root at rhgs313-6 dir1]# cd ..
[root at rhgs313-6 fuse1]# cd dir1
[root at rhgs313-6 dir1]# ll /bricks/brick1/vol1-*

Actual results:

[root at rhgs313-6 dir1]# ll /bricks/brick1/vol1-*
/bricks/brick1/vol1-1:
total 0
d---------. 2 root root 6 Jun 4 13:50 dir1

/bricks/brick1/vol1-2:
total 0
d---------. 2 root root 6 Jun 4 13:50 dir1

Expected results:

/bricks/brick1/vol1-1:
total 0
drwxr-xr-x. 2 root root 6 Jun 4 13:50 dir1

/bricks/brick1/vol1-2:
total 0
drwxr-xr-x. 2 root root 6 Jun 4 13:50 dir1

Additional info:

--- Additional comment from RHEL Product and Program Management on 2019-06-04
08:29:48 UTC ---

This bug is automatically being proposed for the next minor release of Red Hat
Gluster Storage by setting the release flag 'rhgs-3.5.0' to '?'.

If this bug should be proposed for a different release, please manually change
the proposed release flag.

Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1716821
[Bug 1716821] DHT: directory permissions are wiped out

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
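(The reproducer from the report above, condensed into a single script; the
commands, paths and address are taken from the report itself, and 'getx' in
the report is assumed to be an alias for 'getfattr -d -m . -e hex':)

    #!/bin/bash
    # condensed reproducer for the wiped-out directory permissions
    gluster volume create vol1 192.168.122.6:/bricks/brick1/vol1-1
    gluster volume start vol1
    mount -t glusterfs -s 192.168.122.6:/vol1 /mnt/fuse1
    mkdir /mnt/fuse1/dir1
    gluster volume add-brick vol1 192.168.122.6:/bricks/brick1/vol1-2 force
    ls /mnt/fuse1/dir1             # lookup creates dir1 on the new brick
    (cd /mnt/fuse1 && cd dir1)     # revalidation triggers the bad selfheal
    ls -l /bricks/brick1/vol1-*    # both bricks now show d---------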
From bugzilla at redhat.com Tue Jun 4 09:23:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 09:23:40 +0000 Subject: [Bugs] [Bug 1716830] DHT: directory permissions are wiped out In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716830 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 09:28:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 09:28:43 +0000 Subject: [Bugs] [Bug 1716830] DHT: directory permissions are wiped out In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716830 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22813 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 4 09:28:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 09:28:44 +0000 Subject: [Bugs] [Bug 1716830] DHT: directory permissions are wiped out In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716830 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22813 (cluster/dht: Fix directory perms during selfheal) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 4 09:46:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 09:46:17 +0000 Subject: [Bugs] [Bug 1716848] New: DHT: directory permissions are wiped out Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716848 Bug ID: 1716848 Summary: DHT: directory permissions are wiped out Product: GlusterFS Version: 6 Status: NEW Component: distribute Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, saraut at redhat.com, storage-qa-internal at redhat.com Depends On: 1716821, 1716830 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1716830 +++ +++ This bug was initially created as a clone of Bug #1716821 +++ Description of problem: A sequence of steps can wipe out the permissions set on a directory. 
Version-Release number of selected component (if applicable):
RHGS 3.5.0

How reproducible:
Consistently

Steps to Reproduce:

[root at rhgs313-6 ~]# gluster volume create vol1
192.168.122.6:/bricks/brick1/vol1-1
volume create: vol1: success: please start the volume to access data
[root at rhgs313-6 ~]# gluster v start vol1
volume start: vol1: success
[root at rhgs313-6 ~]# mount -t glusterfs -s 192.168.122.6:/vol1 /mnt/fuse1
[root at rhgs313-6 fuse]# cd /mnt/fuse1
[root at rhgs313-6 fuse1]# mkdir dir1
[root at rhgs313-6 fuse1]# cd dir1/
[root at rhgs313-6 dir1]# getx /bricks/brick1/vol1-*/dir1
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/vol1-1/dir1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0xbf9444c0f8614d81a5758ed801e9f7e0
trusted.glusterfs.dht=0x000000000000000000000000ffffffff
trusted.glusterfs.mdata=0x010000000000000000000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6

[root at rhgs313-6 dir1]# gluster v add-brick vol1
192.168.122.6:/bricks/brick1/vol1-2 force
volume add-brick: success
[root at rhgs313-6 dir1]# ll
total 0

Check the directory permissions and xattrs on the bricks:

[root at rhgs313-6 dir1]# ll /bricks/brick1/vol1-*
/bricks/brick1/vol1-1:
total 0
drwxr-xr-x. 2 root root 6 Jun 4 13:50 dir1

/bricks/brick1/vol1-2:
total 0
drwxr-xr-x. 2 root root 6 Jun 4 13:50 dir1

[root at rhgs313-6 dir1]# getx /bricks/brick1/vol1-*/dir1
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/vol1-1/dir1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0xbf9444c0f8614d81a5758ed801e9f7e0
trusted.glusterfs.dht=0x000000000000000000000000ffffffff
trusted.glusterfs.mdata=0x010000000000000000000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6

# file: bricks/brick1/vol1-2/dir1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0xbf9444c0f8614d81a5758ed801e9f7e0
trusted.glusterfs.dht=0x00000000000000000000000000000000
trusted.glusterfs.mdata=0x010000000000000000000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6000000005cf629cf00000000302dadf6

From the mount point, cd one level up and then back again into dir1.

[root at rhgs313-6 dir1]# cd ..
[root at rhgs313-6 fuse1]# cd dir1
[root at rhgs313-6 dir1]# ll /bricks/brick1/vol1-*

Actual results:

[root at rhgs313-6 dir1]# ll /bricks/brick1/vol1-*
/bricks/brick1/vol1-1:
total 0
d---------. 2 root root 6 Jun 4 13:50 dir1

/bricks/brick1/vol1-2:
total 0
d---------. 2 root root 6 Jun 4 13:50 dir1

Expected results:

/bricks/brick1/vol1-1:
total 0
drwxr-xr-x. 2 root root 6 Jun 4 13:50 dir1

/bricks/brick1/vol1-2:
total 0
drwxr-xr-x. 2 root root 6 Jun 4 13:50 dir1

Additional info:

--- Additional comment from RHEL Product and Program Management on 2019-06-04
08:29:48 UTC ---

This bug is automatically being proposed for the next minor release of Red Hat
Gluster Storage by setting the release flag 'rhgs-3.5.0' to '?'.

If this bug should be proposed for a different release, please manually change
the proposed release flag.
--- Additional comment from Worker Ant on 2019-06-04 09:28:44 UTC --- REVIEW: https://review.gluster.org/22813 (cluster/dht: Fix directory perms during selfheal) posted (#1) for review on master by N Balachandran Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1716821 [Bug 1716821] DHT: directory permissions are wiped out https://bugzilla.redhat.com/show_bug.cgi?id=1716830 [Bug 1716830] DHT: directory permissions are wiped out -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 09:46:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 09:46:17 +0000 Subject: [Bugs] [Bug 1716830] DHT: directory permissions are wiped out In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716830 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1716848 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1716848 [Bug 1716848] DHT: directory permissions are wiped out -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 4 09:47:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 09:47:12 +0000 Subject: [Bugs] [Bug 1716848] DHT: directory permissions are wiped out In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716848 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 09:51:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 09:51:37 +0000 Subject: [Bugs] [Bug 1716848] DHT: directory permissions are wiped out In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716848 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22814 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 4 09:51:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 09:51:38 +0000 Subject: [Bugs] [Bug 1716848] DHT: directory permissions are wiped out In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716848 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22814 (cluster/dht: Fix directory perms during selfheal) posted (#1) for review on release-6 by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Jun 4 09:53:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 09:53:36 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1693692

--- Comment #54 from Worker Ant ---
REVIEW: https://review.gluster.org/22799 (lcov: run more fops on translators)
merged (#3) on master by Amar Tumballi

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Tue Jun 4 10:28:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 10:28:42 +0000
Subject: [Bugs] [Bug 1716870] New: Came up with a script to analyze strace
	outputs from bricks
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1716870

            Bug ID: 1716870
           Summary: Came up with a script to analyze strace outputs from
                    bricks
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: scripts
          Assignee: bugs at gluster.org
          Reporter: pkarampu at redhat.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community

Debugging performance issues often involves comparing brick strace files
with previous runs: number of syscalls, maximum latencies per syscall, etc.
This script helps in getting these numbers. Running it creates 3 types of
files:
1) syscalls-summary.txt - Prints per-syscall counts
2) <syscall>-latency.txt - This is an intermediate file where all '<syscall>'
calls from all the strace files will be listed.
3) per-fop-latency.txt - Per syscall, it prints the top maximum latencies
observed.

Assumes the files in the strace directory are created using the following
command:
$ strace -ff -T -p <pid-of-brick> -o <output-prefix>

Sample output of syscalls-summary.txt:
49857 chmod
49906 chown
97542 close
650309 fgetxattr
18 flistxattr
....

Sample output of per-fop-latency.txt:
--chmod--
0.000216
0.000254
0.000266
...
--unlink--
0.020208
0.025084
0.027231
...

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Tue Jun 4 10:29:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 04 Jun 2019 10:29:12 +0000
Subject: [Bugs] [Bug 1716871] New: Image size as reported from the fuse
	mount is incorrect
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1716871

            Bug ID: 1716871
           Summary: Image size as reported from the fuse mount is
                    incorrect
           Product: GlusterFS
           Version: 6
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: sharding
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: kdhananj at redhat.com
        QA Contact: bugs at gluster.org
                CC: bugs at gluster.org, kdhananj at redhat.com, pasik at iki.fi,
                    rhs-bugs at redhat.com, sabose at redhat.com,
                    sankarshan at redhat.com, sasundar at redhat.com,
                    storage-qa-internal at redhat.com
        Depends On: 1705884
            Blocks: 1667998, 1668001
  Target Milestone: ---
    Classification: Community

+++ This bug was initially created as a clone of Bug #1705884 +++

+++ This bug was initially created as a clone of Bug #1668001 +++

Description of problem:
-----------------------
The size of the VM image file as reported from the fuse mount is incorrect.
For the file of size 1 TB, the size of the file on the disk is reported as 8
ZB.
Version-Release number of selected component (if applicable):
-------------------------------------------------------------
upstream master

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. On the Gluster storage domain, create the preallocated disk image of size
1TB
2. Check for the size of the file after its creation has succeeded

Actual results:
---------------
Size of the file is reported as 8 ZB, though the size of the file is 1TB

Expected results:
-----------------
Size of the file should be the same as the size created by the user

Additional info:
----------------
The volume in question is replica 3, sharded.

[root at rhsqa-grafton10 ~]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 7eb49e90-e2b6-4f8f-856e-7108212dbb72
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: rhsqa-grafton10.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick2: rhsqa-grafton11.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick3: rhsqa-grafton12.lab.eng.blr.redhat.com:/gluster_bricks/data/data
(arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable

--- Additional comment from SATHEESARAN on 2019-01-21 16:32:39 UTC ---

Size of the file as reported from the fuse mount:

[root@ ~]# ls -lsah
/rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com\:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b
8.0Z -rw-rw----. 1 vdsm kvm 1.1T Jan 21 17:14
/rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b

[root@ ~]# du -shc
/rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com\:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b
16E
/rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b
16E total

Note that the disk image is preallocated with 1072GB of space.

--- Additional comment from SATHEESARAN on 2019-04-01 19:25:15 UTC ---

(In reply to SATHEESARAN from comment #5)
> (In reply to Krutika Dhananjay from comment #3)
> > Also, do you still have the setup in this state? If so, can I'd like to take
> > a look.
> >
> > -Krutika
>
> Hi Krutika,
>
> The setup is no longer available. Let me recreate the issue and provide you
> the setup

This issue is very easily reproducible. Create a preallocated image on the
replicate volume with sharding enabled. Use 'qemu-img' to create the VM
image.
See the following test:

[root@ ~]# qemu-img create -f raw -o preallocation=falloc /mnt/test/vm1.img 1T
Formatting '/mnt/test/vm1.img', fmt=raw size=1099511627776
preallocation='falloc'
[root@ ]# ls /mnt/test
vm1.img
[root@ ]# ls -lsah vm1.img
8.0Z -rw-r--r--. 1 root root 1.0T Apr 2 00:45 vm1.img

--- Additional comment from Krutika Dhananjay on 2019-04-11 06:07:35 UTC ---

So I tried this locally and I am not hitting the issue -

[root at dhcpxxxxx ~]# qemu-img create -f raw -o preallocation=falloc
/mnt/vm1.img 10G
Formatting '/mnt/vm1.img', fmt=raw size=10737418240 preallocation=falloc
[root at dhcpxxxxx ~]# ls -lsah /mnt/vm1.img
10G -rw-r--r--. 1 root root 10G Apr 11 11:26 /mnt/vm1.img

[root at dhcpxxxxx ~]# qemu-img create -f raw -o preallocation=falloc
/mnt/vm1.img 30G
Formatting '/mnt/vm1.img', fmt=raw size=32212254720 preallocation=falloc
[root at dhcpxxxxx ~]# ls -lsah /mnt/vm1.img
30G -rw-r--r--. 1 root root 30G Apr 11 11:32 /mnt/vm1.img

Of course, I didn't go beyond 30G due to space constraints on my laptop.

If you could share your setup where you're hitting this bug, I'll take a
look.

-Krutika

--- Additional comment from SATHEESARAN on 2019-05-02 05:21:01 UTC ---

(In reply to Krutika Dhananjay from comment #7)
> So I tried this locally and I am not hitting the issue -
>
> [root at dhcpxxxxx ~]# qemu-img create -f raw -o preallocation=falloc
> /mnt/vm1.img 10G
> Formatting '/mnt/vm1.img', fmt=raw size=10737418240 preallocation=falloc
> [root at dhcpxxxxx ~]# ls -lsah /mnt/vm1.img
> 10G -rw-r--r--. 1 root root 10G Apr 11 11:26 /mnt/vm1.img
>
> [root at dhcpxxxxx ~]# qemu-img create -f raw -o preallocation=falloc
> /mnt/vm1.img 30G
> Formatting '/mnt/vm1.img', fmt=raw size=32212254720 preallocation=falloc
> [root at dhcpxxxxx ~]# ls -lsah /mnt/vm1.img
> 30G -rw-r--r--. 1 root root 30G Apr 11 11:32 /mnt/vm1.img
>
> Of course, I didn't go beyond 30G due to space constraints on my laptop.
>
> If you could share your setup where you're hitting this bug, I'll take a
> look.
>
> -Krutika

I could see this very consistently in two ways:

1. Create a VM image >= 1TB
--------------------------
[root at rhsqa-grafton7 test]# qemu-img create -f raw -o preallocation=falloc
vm1.img 10G
Formatting 'vm1.img', fmt=raw size=10737418240 preallocation=falloc
[root@ ]# ls -lsah vm1.img
10G -rw-r--r--. 1 root root 10G May 2 10:30 vm1.img

[root@ ]# qemu-img create -f raw -o preallocation=falloc vm2.img 50G
Formatting 'vm2.img', fmt=raw size=53687091200 preallocation=falloc
[root@ ]# ls -lsah vm2.img
50G -rw-r--r--. 1 root root 50G May 2 10:30 vm2.img

[root@ ]# qemu-img create -f raw -o preallocation=falloc vm3.img 100G
Formatting 'vm3.img', fmt=raw size=107374182400 preallocation=falloc
[root@ ]# ls -lsah vm3.img
100G -rw-r--r--. 1 root root 100G May 2 10:33 vm3.img

[root@ ]# qemu-img create -f raw -o preallocation=falloc vm4.img 500G
Formatting 'vm4.img', fmt=raw size=536870912000 preallocation=falloc
[root@ ]# ls -lsah vm4.img
500G -rw-r--r--. 1 root root 500G May 2 10:33 vm4.img

Once the size reaches 1TB, you will see this issue:

[root@ ]# qemu-img create -f raw -o preallocation=falloc vm6.img 1T
Formatting 'vm6.img', fmt=raw size=1099511627776 preallocation=falloc
[root@ ]# ls -lsah vm6.img
8.0Z -rw-r--r--. 1 root root 1.0T May 2 10:35 vm6.img <-------- size on disk
is far larger than expected

2.
Recreate the image with the same name
-----------------------------------------
Observe that, the second time, the image is created with the same name:

[root@ ]# qemu-img create -f raw -o preallocation=falloc vm1.img 10G
Formatting 'vm1.img', fmt=raw size=10737418240 preallocation=falloc
[root@ ]# ls -lsah vm1.img
10G -rw-r--r--. 1 root root 10G May 2 10:40 vm1.img

[root@ ]# qemu-img create -f raw -o preallocation=falloc vm1.img 20G
<-------- The same file name vm1.img is used
Formatting 'vm1.img', fmt=raw size=21474836480 preallocation=falloc
[root@ ]# ls -lsah vm1.img
30G -rw-r--r--. 1 root root 20G May 2 10:40 vm1.img <---------- size on the
disk is 30G, though the file is created with 20G

I will provide the setup for the investigation.

--- Additional comment from SATHEESARAN on 2019-05-02 05:23:07 UTC ---

The setup details:
-------------------
rhsqa-grafton7.lab.eng.blr.redhat.com ( root/redhat )
volume: data ( replica 3, sharded )
The volume is currently mounted at: /mnt/test

Note: This is the RHVH installation.

@krutika, if you need more info, just ping me in IRC / google chat

--- Additional comment from Krutika Dhananjay on 2019-05-02 10:16:40 UTC ---

Found part of the issue.

It's just a case of integer overflow.
A 32-bit signed int is being used to store the delta between post-stat and
pre-stat block-counts.
The range of numbers for a 32-bit signed int is [-2,147,483,648,
2,147,483,647], whereas the number of blocks allocated
as part of creating a preallocated 1TB file is (1TB/512) = 2,147,483,648,
which is just 1 more than INT_MAX (2,147,483,647),
which spills over to the negative half of the scale making it
-2,147,483,648.
This number, on being copied to an int64, causes the most-significant 32
bits to be filled with 1s, making the block-count equal 554050781183 (or
0xffffffff80000000) in magnitude.
That's the block-count that gets set on the backend in the
trusted.glusterfs.shard.file-size xattr in the block-count segment -

[root at rhsqa-grafton7 data]# getfattr -d -m . -e hex
/gluster_bricks/data/data/vm3.img
getfattr: Removing leading '/' from absolute path names
# file: gluster_bricks/data/data/vm3.img
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x3faffa7142b74e739f3a82b9359d33e6
trusted.gfid2path.6356251b968111ad=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f766d332e696d67

trusted.glusterfs.shard.block-size=0x0000000004000000
trusted.glusterfs.shard.file-size=0x00000100000000000000000000000000ffffffff800000000000000000000000
<-- notice the "ffffffff80000000" in the block-count segment

But ..

[root at rhsqa-grafton7 test]# stat vm3.img
  File: 'vm3.img'
  Size: 1099511627776 Blocks: 18446744071562067968 IO Block: 131072 regular
file
Device: 29h/41d Inode: 11473626732659815398 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:fusefs_t:s0
Access: 2019-05-02 14:11:11.693559069 +0530
Modify: 2019-05-02 14:12:38.245068328 +0530
Change: 2019-05-02 14:15:56.190546751 +0530
Birth: -

stat shows the block-count as 18446744071562067968, which is way bigger than
(554050781183 * 512).

In the response path, it turns out the block-count further gets assigned to
a uint64 number.
The same number, when expressed as uint64, becomes 18446744071562067968.
18446744071562067968 * 512 is a whopping 8.0 Zettabytes!
This bug wasn't seen earlier because the earlier way of preallocating files
never used fallocate, so the original signed 32-bit int variable
delta_blocks would never exceed 131072.

Anyway, I'll soon be sending a fix for this.

Sas,

Do you have a single node with at least 1TB free space that you can lend me
where I can test the fix? The bug will only be hit when the image size is >
1TB.

-Krutika

--- Additional comment from Krutika Dhananjay on 2019-05-02 10:18:26 UTC ---

(In reply to Krutika Dhananjay from comment #10)
> Found part of the issue.

Sorry, this is not part of the issue but THE issue in its entirety. (That
line is from an older draft I'd composed which I forgot to change after
rc'ing the bug)

> 
> It's just a case of integer overflow.
> 32-bit signed int is being used to store delta between post-stat and
> pre-stat block-counts.
> The range of numbers for 32-bit signed int is [-2,147,483,648,
> 2,147,483,647] whereas the number of blocks allocated
> as part of creating a preallocated 1TB file is (1TB/512) = 2,147,483,648
> which is just 1 more than INT_MAX (2,147,483,647)
> which spills over to the negative half the scale making it -2,147,483,648.
> This number, on being copied to int64 causes the most-significant 32 bits to
> be filled with 1 making the block-count equal 554050781183 (or
> 0xffffffff80000000) in magnitude.
> That's the block-count that gets set on the backend in
> trusted.glusterfs.shard.file-size xattr in the block-count segment -
> 
> [root at rhsqa-grafton7 data]# getfattr -d -m . -e hex
> /gluster_bricks/data/data/vm3.img
> getfattr: Removing leading '/' from absolute path names
> # file: gluster_bricks/data/data/vm3.img
> security.
> selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f7
> 43a733000
> trusted.afr.dirty=0x000000000000000000000000
> trusted.gfid=0x3faffa7142b74e739f3a82b9359d33e6
> trusted.gfid2path.
> 6356251b968111ad=0x30303030303030302d303030302d303030302d303030302d3030303030
> 303030303030312f766d332e696d67
> 
> trusted.glusterfs.shard.block-size=0x0000000004000000
> trusted.glusterfs.shard.file-
> size=0x00000100000000000000000000000000ffffffff800000000000000000000000 <--
> notice the "ffffffff80000000" in the block-count segment
> 
> But ..
> 
> [root at rhsqa-grafton7 test]# stat vm3.img
>   File: 'vm3.img'
>   Size: 1099511627776 Blocks: 18446744071562067968 IO Block: 131072
> regular file
> Device: 29h/41d Inode: 11473626732659815398 Links: 1
> Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
> Context: system_u:object_r:fusefs_t:s0
> Access: 2019-05-02 14:11:11.693559069 +0530
> Modify: 2019-05-02 14:12:38.245068328 +0530
> Change: 2019-05-02 14:15:56.190546751 +0530
> Birth: -
> 
> stat shows block-count as 18446744071562067968 which is way bigger than
> (554050781183 * 512).
> 
> In the response path, turns out the block-count further gets assigned to a
> uint64 number.
> The same number, when expressed as uint64 becomes 18446744071562067968.
> 18446744071562067968 * 512 is a whopping 8.0 Zettabytes!
> 
> This bug wasn't seen earlier because the earlier way of preallocating files
> never used fallocate, so the original signed 32 int variable delta_blocks
> would never exceed 131072.
> 
> Anyway, I'll be soon sending a fix for this.
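The arithmetic behind this overflow can be reproduced in a few lines of
standalone C (a sketch based on the analysis above; only the variable name
delta_blocks comes from the comments here, the rest is illustrative):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* fallocate'ing a 1TB file allocates 1TB/512 = 2147483648 blocks,
     * which is INT32_MAX + 1 */
    uint64_t blocks = 1099511627776ULL / 512;

    /* on typical two's-complement systems this wraps to -2147483648 */
    int32_t delta_blocks = (int32_t)blocks;

    /* sign-extension fills the upper 32 bits: 0xffffffff80000000 */
    int64_t stored = delta_blocks;

    /* reinterpreted as unsigned: 18446744071562067968 */
    uint64_t reported = (uint64_t)stored;

    printf("delta_blocks = %" PRId32 "\n", delta_blocks);
    printf("stored       = 0x%016" PRIx64 "\n", (uint64_t)stored);
    /* 18446744071562067968 blocks * 512 bytes ~= 8.0 ZiB -- the "8.0Z"
     * that ls -lsah showed */
    printf("reported     = %" PRIu64 " blocks (~%.1f ZiB)\n", reported,
           (double)reported * 512.0 / 1180591620717411303424.0);
    return 0;
}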
--- Additional comment from Worker Ant on 2019-05-03 06:58:51 UTC --- REVIEW: https://review.gluster.org/22655 (features/shard: Fix integer overflow in block count accounting) posted (#1) for review on master by Krutika Dhananjay --- Additional comment from Worker Ant on 2019-05-06 10:49:43 UTC --- REVIEW: https://review.gluster.org/22655 (features/shard: Fix integer overflow in block count accounting) merged (#2) on master by Xavi Hernandez --- Additional comment from Worker Ant on 2019-05-08 08:46:18 UTC --- REVIEW: https://review.gluster.org/22681 (features/shard: Fix block-count accounting upon truncate to lower size) posted (#1) for review on master by Krutika Dhananjay --- Additional comment from Worker Ant on 2019-06-04 07:30:49 UTC --- REVIEW: https://review.gluster.org/22681 (features/shard: Fix block-count accounting upon truncate to lower size) merged (#6) on master by Xavi Hernandez Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1667998 [Bug 1667998] Image size as reported from the fuse mount is incorrect https://bugzilla.redhat.com/show_bug.cgi?id=1668001 [Bug 1668001] Image size as reported from the fuse mount is incorrect https://bugzilla.redhat.com/show_bug.cgi?id=1705884 [Bug 1705884] Image size as reported from the fuse mount is incorrect -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 10:29:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 10:29:12 +0000 Subject: [Bugs] [Bug 1705884] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705884 Krutika Dhananjay changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1716871 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1716871 [Bug 1716871] Image size as reported from the fuse mount is incorrect -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 10:31:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 10:31:39 +0000 Subject: [Bugs] [Bug 1716870] Came up with a script to analyze strace outputs from bricks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716870 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22816 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 10:31:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 10:31:40 +0000 Subject: [Bugs] [Bug 1716870] Came up with a script to analyze strace outputs from bricks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716870 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22816 (extras: Script to analyze strace of bricks) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
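As a side note on bug 1716870 above, the style of analysis it describes can
be approximated with shell one-liners. The commands below are an
illustrative sketch, not the script posted in review 22816; they assume
strace files named <output-prefix>.<pid> as produced by
'strace -ff -T -p <brick-pid> -o <output-prefix>':

# per-syscall call counts, roughly what syscalls-summary.txt contains
grep -hoE '^[a-z0-9_]+' <output-prefix>.* | sort | uniq -c | sort -n

# slowest calls of one syscall, roughly one column of per-fop-latency.txt;
# strace -T appends the time spent in each call as "<seconds>"
grep -h '^fgetxattr(' <output-prefix>.* | grep -oE '<[0-9.]+>$' |
    tr -d '<>' | sort -gr | head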
From bugzilla at redhat.com Tue Jun 4 10:32:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 10:32:31 +0000 Subject: [Bugs] [Bug 1622665] clang-scan report: glusterfs issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622665 --- Comment #92 from Worker Ant --- REVIEW: https://review.gluster.org/22775 (across: clang-scan: fix NULL dereferencing warnings) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 10:35:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 10:35:38 +0000 Subject: [Bugs] [Bug 1716871] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716871 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22817 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 10:35:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 10:35:39 +0000 Subject: [Bugs] [Bug 1716871] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716871 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22817 (features/shard: Fix integer overflow in block count accounting) posted (#1) for review on release-6 by Krutika Dhananjay -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 10:37:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 10:37:46 +0000 Subject: [Bugs] [Bug 1716875] New: Inode Unref Assertion failed: inode->ref Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716875 Bug ID: 1716875 Summary: Inode Unref Assertion failed: inode->ref Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: gluster-smb Severity: urgent Assignee: bugs at gluster.org Reporter: ryan at magenta.tv CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1577014 --> https://bugzilla.redhat.com/attachment.cgi?id=1577014&action=edit Client log from Gluster VFS client showing high RAM usage Description of problem: Samba using huge amounts of memory (5GB) per client thread. 
Upon checking the Gluster client logs, they are filled with messages such as: [2019-04-24 07:44:33.607834] E [inode.c:484:__inode_unref] (-->/lib64/libglusterfs.so.0(gf_dirent_entry_free+0x2b) [0x7ff0a24d555b] -->/lib64/libglusterfs.so.0(inode_unref+0x21) [0x7ff0a24b9921] -->/lib64/libglusterfs.so.0(+0x35156) [0x7ff0a24b9156] ) 0-: Assertion failed: inode->ref [2019-04-30 13:16:47.169047] E [timer.c:37:gf_timer_call_after] (-->/lib64/libglusterfs.so.0(+0x33bec) [0x7ff09d875bec] -->/lib64/libgfrpc.so.0(+0xde88) [0x7ff09dd7ae88] -->/lib64/libglusterfs.so.0(gf_timer_call_after+0x229) [0x7ff09d875fa9] ) 0-timer: Either ctx is NULL or ctx cleanup started [Invalid argument] [2019-05-28 17:47:28.655550] E [MSGID: 140003] [nl-cache.c:777:nlc_init] 0-mcv01-nl-cache: Initing the global timer wheel failed [2019-05-28 17:47:28.655873] E [MSGID: 101019] [xlator.c:720:xlator_init] 0-mcv01-nl-cache: Initialization of volume 'mcv01-nl-cache' failed, review your volfile again [2019-05-28 17:47:28.655887] E [MSGID: 101066] [graph.c:367:glusterfs_graph_init] 0-mcv01-nl-cache: initializing translator failed [2019-05-28 17:47:28.655894] E [MSGID: 101176] [graph.c:738:glusterfs_graph_activate] 0-graph: init failed [2019-05-28 17:47:28.655972] E [MSGID: 104007] [glfs-mgmt.c:744:glfs_mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch volume file (key:mcv01) [Invalid argument] Version-Release number of selected component (if applicable): Gluster 4.1.7 How reproducible: Unknown Steps to Reproduce: Unsure how to reproduce; only seen in one environment so far Actual results: All system memory and swap is exhausted. SMBD processes do not get killed off when the main SMB service is stopped, whereas usually they do. Expected results: System resources are freed up and the errors are not present in the logs. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 10:54:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 10:54:31 +0000 Subject: [Bugs] [Bug 1705351] glusterfsd crash after days of running In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705351 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(jahernan at redhat.c |needinfo?(waza123 at inbox.lv) |om) | --- Comment #5 from Xavi Hernandez --- Sorry for the late answer. I've checked the core dump and it seems to come from glusterfs 3.10.10. This is a very old version and it's already EOL. Is it possible to upgrade to a newer supported version and check if it works? At first sight I don't see a similar bug, but many things have changed since then. If you are unable to upgrade, let me know which version of the operating system you are using and which source you used to install the gluster packages, so that I can find the appropriate symbols to analyze the core. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 4 11:08:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 11:08:19 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22818 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 11:08:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 11:08:20 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #55 from Worker Ant --- REVIEW: https://review.gluster.org/22818 (tests/geo-rep: Add geo-rep cli testcases) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 11:32:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 11:32:39 +0000 Subject: [Bugs] [Bug 1716871] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716871 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22819 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 11:32:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 11:32:40 +0000 Subject: [Bugs] [Bug 1716871] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716871 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22819 (features/shard: Fix block-count accounting upon truncate to lower size) posted (#1) for review on release-6 by Krutika Dhananjay -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 11:35:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 11:35:04 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22820 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 11:35:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 11:35:05 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #679 from Worker Ant --- REVIEW: https://review.gluster.org/22820 ([WIP][RFC]dict: use fixed 'hash' for keys that are fixed strings.) 
posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 12:27:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 12:27:48 +0000 Subject: [Bugs] [Bug 1468510] Keep all Debug level log in circular in-memory buffer In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1468510 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |aspandey at redhat.com Flags| |needinfo?(aspandey at redhat.c | |om) --- Comment #13 from Yaniv Kaul --- Since it was not implemented for ~2 years, shall we close it? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 13:38:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 13:38:03 +0000 Subject: [Bugs] [Bug 1716979] New: Multiple disconnect events being propagated for the same child Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716979 Bug ID: 1716979 Summary: Multiple disconnect events being propagated for the same child Product: GlusterFS Version: mainline OS: Linux Status: NEW Component: rpc Keywords: Regression Severity: high Priority: high Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, ravishankar at redhat.com, rgowdapp at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com Depends On: 1703423 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703423 [Bug 1703423] Multiple disconnect events being propagated for the same child -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 13:39:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 13:39:06 +0000 Subject: [Bugs] [Bug 1716979] Multiple disconnect events being propagated for the same child In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716979 --- Comment #1 from Raghavendra G --- The issue was reported upstream by a user via https://github.com/gluster/glusterfs/issues/648 I'm seeing that if I kill a brick in a replica 3 system, AFR keeps getting the child_down event repeatedly for the same child. Version-Release number of selected component (if applicable): master (source install) How reproducible: Always. Steps to Reproduce: 1. Create a replica 3 volume and start it. 2. Put a break point in __afr_handle_child_down_event() in the glustershd process. 3. Kill any one brick. Actual results: The break point keeps getting hit once every 3 seconds or so, repeatedly. Expected results: Only one event per disconnect. Additional info: I haven't checked if the same happens for GF_EVENT_CHILD_UP as well. I think this is a regression that needs to be fixed. If this is not a bug, please feel free to close it, stating why. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
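For reference, the break point from step 2 above can be set up along these lines (a sketch only; the pgrep pattern assumes a standard glustershd deployment):

# gdb -p $(pgrep -f glustershd)
(gdb) break __afr_handle_child_down_event
(gdb) continue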
From bugzilla at redhat.com Tue Jun 4 13:52:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 13:52:16 +0000 Subject: [Bugs] [Bug 1716979] Multiple disconnect events being propagated for the same child In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716979 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED --- Comment #2 from Raghavendra G --- The multiple disconnect events are due to reconnect/disconnect to glusterd (port 24007). rpc/clnt has a reconnect feature which tries to reconnect to a disconnected brick, and a client's connection to a brick is a two-step process: 1. connect to glusterd, get the brick port, then disconnect 2. connect to the brick In this case step 1 would be successful and step 2 won't happen, as glusterd wouldn't send back the brick port (as the brick is dead). Nevertheless there is a chain of connect/disconnect (to glusterd) at the rpc layer, and these are valid steps as we need the reconnection logic. However, subsequent disconnect events were prevented from reaching the parents of protocol/client, as it remembered the last event sent and, if the current event was the same as the last one, skipped the notification. Before the Halo replication feature - https://review.gluster.org/16177, last_sent_event for this test case would be GF_EVENT_DISCONNECT, and hence subsequent disconnects were not notified to parent xlators. But Halo replication introduced another event, GF_EVENT_CHILD_PING, which gets notified to the parents of protocol/client whenever there is a successful ping response. In this case, the successful ping response would be from glusterd and would change conf->last_sent_event to GF_EVENT_CHILD_PING. This meant subsequent disconnect events were no longer skipped. A patch to propagate GF_EVENT_CHILD_PING only after a successful handshake prevents spurious CHILD_DOWN events to afr. However, I am not sure whether this breaks Halo replication. I would request AFR team members to comment on the patch (I'll post it shortly). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 13:53:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 13:53:06 +0000 Subject: [Bugs] [Bug 1468510] Keep all Debug level log in circular in-memory buffer In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1468510 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Flags|needinfo?(aspandey at redhat.c | |om) | Last Closed| |2019-06-04 13:53:06 --- Comment #14 from Ashish Pandey --- Yes, we can close it now. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 4 14:10:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 14:10:12 +0000 Subject: [Bugs] [Bug 1716979] Multiple disconnect events being propagated for the same child In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716979 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22821 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
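For reference, the skip logic described in comment #2 amounts to something like the following minimal C sketch (field and function names here are assumptions based on the description, not necessarily the exact source):

/* Sketch: protocol/client forwards an event to its parents only when it
 * differs from the last event sent. A CHILD_PING forwarded in between
 * overwrites last_sent_event, so the next DISCONNECT no longer looks
 * like a duplicate -- which is the regression described above. */
static int
client_notify_parents_uniq(xlator_t *this, int event)
{
    clnt_conf_t *conf = this->private;

    if (conf->last_sent_event == event)
        return 0; /* same as last time: skip the notification */

    conf->last_sent_event = event;
    return default_notify(this, event, NULL);
}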
From bugzilla at redhat.com Tue Jun 4 14:10:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 04 Jun 2019 14:10:13 +0000 Subject: [Bugs] [Bug 1716979] Multiple disconnect events being propagated for the same child In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716979 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22821 (protocol/client: propagte GF_EVENT_CHILD_PING only after a successful handshake) posted (#1) for review on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 5 05:52:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 05:52:32 +0000 Subject: [Bugs] [Bug 1717282] New: ec ignores lock contention notifications for partially acquired locks Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717282 Bug ID: 1717282 Summary: ec ignores lock contention notifications for partially acquired locks Product: GlusterFS Version: 5 Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: jahernan at redhat.com CC: bugs at gluster.org Depends On: 1708156 Blocks: 1714172 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1708156 +++ Description of problem: When an inodelk is being acquired, it could happen that some bricks have already granted the lock while others haven't. From the point of view of ec, the lock is not yet acquired. If at this point one of the bricks that has already granted the lock receives another inodelk request, it will send a contention notification to ec. Currently ec ignores those notifications until the lock is fully acquired. This means that once ec acquires the lock on all bricks, it won't be released immediately when eager-lock is used. Version-Release number of selected component (if applicable): mainline How reproducible: Very frequently when there are multiple concurrent operations on the same directory Steps to Reproduce: 1. Create a disperse volume 2. Mount it from several clients 3. Create a few files in a directory 4. Do 'ls' of that directory at the same time from all clients Actual results: Some 'ls' take several seconds to complete Expected results: All 'ls' should complete in less than a second Additional info: Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708156 [Bug 1708156] ec ignores lock contention notifications for partially acquired locks https://bugzilla.redhat.com/show_bug.cgi?id=1714172 [Bug 1714172] ec ignores lock contention notifications for partially acquired locks -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
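The fix direction implied by the description can be sketched as follows (hypothetical field and function names, not the actual ec source): remember a contention notification that arrives while the lock is only partially acquired, and act on it as soon as acquisition completes.

/* Sketch with hypothetical names. */
static void
ec_handle_contention_notification(ec_lock_t *lock)
{
    if (lock->acquired)
        ec_release_eager_lock(lock); /* release as soon as possible */
    else
        lock->contention_pending = _gf_true; /* acquisition still ongoing */
}

static void
ec_lock_acquisition_done(ec_lock_t *lock)
{
    lock->acquired = _gf_true;
    if (lock->contention_pending)
        ec_release_eager_lock(lock); /* honor the earlier notification */
}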
From bugzilla at redhat.com Wed Jun 5 05:52:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 05:52:32 +0000 Subject: [Bugs] [Bug 1708156] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708156 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1717282 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717282 [Bug 1717282] ec ignores lock contention notifications for partially acquired locks -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 5 05:52:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 05:52:32 +0000 Subject: [Bugs] [Bug 1714172] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714172 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1717282 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717282 [Bug 1717282] ec ignores lock contention notifications for partially acquired locks -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 5 05:54:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 05:54:54 +0000 Subject: [Bugs] [Bug 1717282] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717282 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22822 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 5 05:54:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 05:54:55 +0000 Subject: [Bugs] [Bug 1717282] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717282 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22822 (cluster/ec: honor contention notifications for partially acquired locks) posted (#1) for review on release-5 by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 5 06:04:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 06:04:25 +0000 Subject: [Bugs] [Bug 1697986] GlusterFS 5.7 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697986 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |jahernan at redhat.com Depends On| |1717282 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717282 [Bug 1717282] ec ignores lock contention notifications for partially acquired locks -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Jun 5 06:04:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 06:04:25 +0000 Subject: [Bugs] [Bug 1717282] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717282 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1697986 (glusterfs-5.7) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1697986 [Bug 1697986] GlusterFS 5.7 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 5 07:21:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 07:21:00 +0000 Subject: [Bugs] [Bug 1716830] DHT: directory permissions are wiped out In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716830 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-05 07:21:00 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22813 (cluster/dht: Fix directory perms during selfheal) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 5 07:21:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 07:21:01 +0000 Subject: [Bugs] [Bug 1716848] DHT: directory permissions are wiped out In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716848 Bug 1716848 depends on bug 1716830, which changed state. Bug 1716830 Summary: DHT: directory permissions are wiped out https://bugzilla.redhat.com/show_bug.cgi?id=1716830 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 5 16:34:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 16:34:01 +0000 Subject: [Bugs] [Bug 1693693] GlusterFS 4.1.9 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693693 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22826 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 5 16:34:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 16:34:02 +0000 Subject: [Bugs] [Bug 1693693] GlusterFS 4.1.9 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693693 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22826 (doc: Added release notes for 4.1.9) posted (#1) for review on release-4.1 by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Jun 5 17:19:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 17:19:59 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22827 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 5 17:20:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 05 Jun 2019 17:20:00 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #680 from Worker Ant --- REVIEW: https://review.gluster.org/22827 ([WIP]cli: defer create_frame() (and dict creation) to later stages.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 04:05:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 04:05:33 +0000 Subject: [Bugs] [Bug 1655201] dictionary leak at the time of destroying graph In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655201 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 04:06:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 04:06:39 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 05:18:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 05:18:08 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #56 from Worker Ant --- REVIEW: https://review.gluster.org/22818 (tests/geo-rep: Add geo-rep cli testcases) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 06:28:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:28:32 +0000 Subject: [Bugs] [Bug 1717754] New: Enabled features.locks-notify-contention by default Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717754 Bug ID: 1717754 Summary: Enabled features.locks-notify-contention by default Product: GlusterFS Version: mainline Status: NEW Component: locks Assignee: bugs at gluster.org Reporter: jahernan at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Currently 'features.locks-notify-contention' is disabled by default. 
This option, when enabled, instructs the locks xlator to send an upcall notification to the current owner of a lock whenever another client tries to acquire a conflicting lock. Both AFR and EC support this notification and react by releasing the lock as soon as possible. This is extremely useful when eager-lock is enabled (it is by default) because it allows AFR and EC to keep using eager-lock for performance without losing performance on other clients when access to the same resource is required. Since eager-lock is enabled by default, it doesn't make sense to keep contention notification disabled. Version-Release number of selected component (if applicable): mainline How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 06:29:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:29:01 +0000 Subject: [Bugs] [Bug 1717754] Enabled features.locks-notify-contention by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717754 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |jahernan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 06:31:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:31:14 +0000 Subject: [Bugs] [Bug 1717754] Enabled features.locks-notify-contention by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717754 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22828 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 06:31:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:31:15 +0000 Subject: [Bugs] [Bug 1717754] Enabled features.locks-notify-contention by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717754 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22828 (locks: enable notify-contention by default) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 06:35:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:35:37 +0000 Subject: [Bugs] [Bug 1717754] Enable features.locks-notify-contention by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717754 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|Enabled |Enable |features.locks-notify-conte |features.locks-notify-conte |ntion by default |ntion by default -- You are receiving this mail because: You are on the CC list for the bug. 
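For reference, until the default changes, the option can already be toggled per volume from the CLI; a usage example (the volume name myvol is assumed):

# gluster volume set myvol features.locks-notify-contention on
# gluster volume get myvol features.locks-notify-contention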
From bugzilla at redhat.com Thu Jun 6 06:41:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:41:14 +0000 Subject: [Bugs] [Bug 1717757] New: BItrot: Segmentation Fault if bitrot stub do signature Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 Bug ID: 1717757 Summary: BItrot: Segmentation Fault if bitrot stub do signature Product: GlusterFS Version: 5 Status: NEW Component: bitrot Assignee: bugs at gluster.org Reporter: david.spisla at iternity.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Docs Contact: bugs at gluster.org Created attachment 1577785 --> https://bugzilla.redhat.com/attachment.cgi?id=1577785&action=edit backtrace of /usr/sbin/glusterfsd Description of problem: Setup: 2-Node VM Cluster with a Replica 2 Volume After doing several "wild" write and delete operations from a Win Client, one of the bricks crashes. The crash report says: [2019-06-05 09:05:05.137156] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-archive1-access-control: client: CTX_ID:fcab5e67-b9d9-4b72-8c15-f29de2084af3-GRAPH_ID:0-PID:18916-HOST:fs-detlefh-c1-n2-PC_NAME:archive1-client-0-RECON_NO:-0, gfid: 494b42ad-7e40-4e27-8878-99387a80b5dc, req(uid:2000,gid:2000,perm:3,ngrps:1), ctx(uid:0,gid:0,in-groups:0,perm:755,updated-fop:LOOKUP, acl:-) [Permission denied] pending frames: frame : type(0) op(0) frame : type(0) op(23) patchset: git://git.gluster.org/glusterfs.git signal received: 11 time of crash: 2019-06-05 09:05:05 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 5.5 /usr/lib64/libglusterfs.so.0(+0x2764c)[0x7f89faa7264c] /usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7f89faa7cd26] /lib64/libc.so.6(+0x361a0)[0x7f89f9c391a0] /usr/lib64/glusterfs/5.5/xlator/features/bitrot-stub.so(+0x13441)[0x7f89f22ae441] /usr/lib64/libglusterfs.so.0(default_fsetxattr+0xce)[0x7f89faaf9f8e] /usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0x22636)[0x7f89f1e68636] /usr/lib64/libglusterfs.so.0(default_fsetxattr+0xce)[0x7f89faaf9f8e] /usr/lib64/libglusterfs.so.0(syncop_fsetxattr+0x26b)[0x7f89faab319b] /usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0xa901)[0x7f89f1c3d901] /usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0x11b66)[0x7f89f1e57b66] /usr/lib64/glusterfs/5.5/xlator/features/access-control.so(+0xaebe)[0x7f89f208febe] /usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0xb081)[0x7f89f1e51081] /usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0x8c23)[0x7f89f1c3bc23] /usr/lib64/glusterfs/5.5/xlator/features/read-only.so(+0x4e30)[0x7f89f1a2de30] /usr/lib64/glusterfs/5.5/xlator/features/leases.so(+0xa444)[0x7f89f181b444] /usr/lib64/glusterfs/5.5/xlator/features/upcall.so(+0x10a68)[0x7f89f1600a68] /usr/lib64/libglusterfs.so.0(default_create_resume+0x212)[0x7f89fab10132] /usr/lib64/libglusterfs.so.0(call_resume_wind+0x2cf)[0x7f89faa97e5f] /usr/lib64/libglusterfs.so.0(call_resume+0x75)[0x7f89faa983a5] /usr/lib64/glusterfs/5.5/xlator/performance/io-threads.so(+0x6088)[0x7f89f13e7088] /lib64/libpthread.so.0(+0x7569)[0x7f89f9fc4569] /lib64/libc.so.6(clone+0x3f)[0x7f89f9cfb9af] --------- Version-Release number of selected component (if applicable): v5.5 Additional info: The backtrace shows that there is a null pointer for *fd in br_stub_fsetxattr: Thread 1 (Thread 0x7f89f0099700 (LWP 2171)): #0 br_stub_fsetxattr (frame=0x7f89b846a6e8, this=0x7f89ec015c00, fd=0x0, dict=0x7f89b84e9ad8, flags=0, xdata=0x0) at 
bit-rot-stub.c:1328 ret = 0 val = 0 sign = 0x0 priv = 0x7f89ec07ed60 op_errno = 22 __FUNCTION__ = "br_stub_fsetxattr" This results in a segmentation fault at line 1328 of bit-rot-stub.c: if (!IA_ISREG(fd->inode->ia_type)) goto wind; The bitrot-stub wants to sign a file, but the corresponding fd is a null pointer. The full backtrace is attached. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Thu Jun 6 06:44:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:44:04 +0000 Subject: [Bugs] [Bug 1706842] Hard Failover with Samba and Glusterfs fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706842 --- Comment #4 from david.spisla at iternity.com --- Additional Information: My setup was a 4-Node Cluster with VM machines (VMware) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 06:57:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:57:25 +0000 Subject: [Bugs] [Bug 1717757] BItrot: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED CC| |atumball at redhat.com --- Comment #1 from Amar Tumballi --- Not sure why this happened, because, for bitrot, an fsetxattr() call shouldn't come at all if fd is NULL. It should have been prevented at a higher level. I found the reason after digging a bit. Ideally, in case of failure (here, worm_create_cbk() received -1, which means fd is NULL), one shouldn't consume fd and call fsetxattr(). If there is a need to do a xattr op on failure, then one should call setxattr with the 'loc' passed in the create() call. (You can store it in local.) ---- #0 br_stub_fsetxattr (frame=0x7f89b846a6e8, this=0x7f89ec015c00, fd=0x0, dict=0x7f89b84e9ad8, flags=0, xdata=0x0) at bit-rot-stub.c:1328 ret = 0 val = 0 sign = 0x0 priv = 0x7f89ec07ed60 op_errno = 22 __FUNCTION__ = "br_stub_fsetxattr" #1 0x00007f89faaf9f8e in default_fsetxattr () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00007f89f1e68636 in pl_fsetxattr (frame=0x7f89b825ab48, this=0x7f89ec0194a0, fd=0x0, dict=0x7f89b84e9ad8, flags=0, xdata=0x0) at posix.c:1566 _new = 0x7f89b846a6e8 old_THIS = 0x7f89ec0194a0 next_xl_fn = 0x7f89faaf9ec0 tmp_cbk = 0x7f89f1e56680 op_ret = op_errno = 0 lockinfo_buf = 0x0 len = 0 __FUNCTION__ = "pl_fsetxattr" #3 0x00007f89faaf9f8e in default_fsetxattr () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #4 0x00007f89faab319b in syncop_fsetxattr () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #5 0x00007f89f1c3d901 in worm_create_cbk (frame=frame@entry=0x7f89b8302fe8, cookie=<optimized out>, this=<optimized out>, op_ret=op_ret@entry=-1, op_errno=op_errno@entry=13, fd=fd@entry=0x0, inode=0x0, buf=0x0, preparent=0x0, postparent=0x0, xdata=0x0) at worm.c:492 ret = 0 priv = 0x7f89ec074b38 dict = 0x7f89b84e9ad8 __FUNCTION__ = "worm_create_cbk" ---- Hopefully this helps. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. 
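Independently of fixing the caller, the crash site itself could be made defensive; a minimal sketch (not the actual patch; the unwind label is assumed):

/* Sketch: fail the fop cleanly instead of dereferencing a NULL fd
 * in br_stub_fsetxattr(). */
if (!fd || !fd->inode) {
    op_errno = EINVAL;
    goto unwind; /* assumed error-out label */
}
if (!IA_ISREG(fd->inode->ia_type))
    goto wind;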
From bugzilla at redhat.com Thu Jun 6 06:59:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:59:29 +0000 Subject: [Bugs] [Bug 1717757] BItrot: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 --- Comment #2 from Amar Tumballi --- Can you check if below works? diff --git a/xlators/features/read-only/src/worm.c b/xlators/features/read-only/src/worm.c index cc3d15b8b2..6b44eae966 100644 --- a/xlators/features/read-only/src/worm.c +++ b/xlators/features/read-only/src/worm.c @@ -431,7 +431,7 @@ worm_create_cbk(call_frame_t *frame, void *cookie, xlator_t *this, priv = this->private; GF_ASSERT(priv); - if (priv->worm_file) { + if (priv->worm_file && (op_ret >= 0)) { dict = dict_new(); if (!dict) { gf_log(this->name, GF_LOG_ERROR, ---- Great if you can confirm this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Thu Jun 6 06:59:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 06:59:51 +0000 Subject: [Bugs] [Bug 1717757] BItrot: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Assignee|bugs at gluster.org |atumball at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Thu Jun 6 07:08:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 07:08:49 +0000 Subject: [Bugs] [Bug 1717757] BItrot: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 --- Comment #3 from david.spisla at iternity.com --- I will check it! -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Thu Jun 6 07:12:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 07:12:54 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 Vivek Das changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |vdas at redhat.com Blocks| |1696809 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 07:12:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 07:12:59 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: Auto pm_ack for | |dev&qe approved in-flight | |RHGS3.5 BZs Rule Engine Rule| |665 Target Release|--- |RHGS 3.5.0 Rule Engine Rule| |666 Rule Engine Rule| |327 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Jun 6 07:35:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 07:35:40 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22829 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 07:56:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 07:56:57 +0000 Subject: [Bugs] [Bug 1717782] gluster v get all still showing storage.fips-mode-rchecksum off In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717782 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Status|NEW |ASSIGNED Assignee|bugs at gluster.org |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 08:01:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 08:01:17 +0000 Subject: [Bugs] [Bug 1717782] gluster v get all still showing storage.fips-mode-rchecksum off In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717782 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22830 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 08:01:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 08:01:18 +0000 Subject: [Bugs] [Bug 1717782] gluster v get all still showing storage.fips-mode-rchecksum off In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717782 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22830 (glusterd: store fips-mode-rchecksum option in the info file) posted (#1) for review on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 07:35:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 07:35:41 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #681 from Worker Ant --- REVIEW: https://review.gluster.org/22829 (tests/utils: Fix py2/py3 changelogparser.py compatibility) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 09:21:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:21:37 +0000 Subject: [Bugs] [Bug 1717819] New: Changes to self-heal logic w.r.t. detecting metadata split-brains Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717819 Bug ID: 1717819 Summary: Changes to self-heal logic w.r.t. 
detecting metadata split-brains Product: GlusterFS Version: mainline Status: ASSIGNED Component: replicate Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: We currently don't have a roll-back/undoing of post-ops if quorum is not met. Though the FOP is still unwound with failure, the xattrs remain on the disk. Due to these partial post-ops and partial heals (healing only when 2 bricks are up), we can end up in metadata split-brain purely from the afr xattrs point of view, i.e. each brick is blamed by at least one of the others for metadata. These scenarios are hit when there is frequent connect/disconnect of the client/shd to the bricks. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 09:34:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:34:29 +0000 Subject: [Bugs] [Bug 1717819] Changes to self-heal logic w.r.t. detecting metadata split-brains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717819 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22831 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 09:34:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:34:30 +0000 Subject: [Bugs] [Bug 1717819] Changes to self-heal logic w.r.t. detecting metadata split-brains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717819 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22831 (Cluster/afr: Don't treat all bricks having metadata pending as split-brain) posted (#1) for review on master by Karthik U S -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 09:35:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:35:29 +0000 Subject: [Bugs] [Bug 1717824] New: Fencing: Added the tcmu-runner ALUA feature support but after one of node is rebooted the glfs_file_lock() get stucked Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717824 Bug ID: 1717824 Summary: Fencing: Added the tcmu-runner ALUA feature support but after one of node is rebooted the glfs_file_lock() get stucked Product: GlusterFS Version: mainline Status: NEW Component: locks Assignee: bugs at gluster.org Reporter: xiubli at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: In GlusterFS, we have added the fencing feature support. With this we can now support the ALUA feature in LIO/TCMU. The fencing doc: https://review.gluster.org/#/c/glusterfs-specs/+/21925/6/accepted/fencing.md The fencing test example: https://review.gluster.org/#/c/glusterfs/+/21496/12/tests/basic/fencing/fence-basic.c The LIO/tcmu-runner PR supporting ALUA is: https://github.com/open-iscsi/tcmu-runner/pull/554. 
But currently, when testing it based on the above PR in tcmu-runner by shutting down one of the HA nodes and starting it again after 2-3 minutes, we can see on all the HA nodes that glfs_file_lock() gets stuck. The following is from /var/log/tcmu-runner.log: ==== 2019-06-06 13:50:15.755 1316 [DEBUG] tcmu_acquire_dev_lock:388 glfs/block3: lock call state 2 retries 0. tag 65535 reopen 0 2019-06-06 13:50:15.757 1316 [DEBUG] tcmu_acquire_dev_lock:440 glfs/block3: lock call done. lock state 1 2019-06-06 13:50:55.845 1316 [DEBUG] tcmu_acquire_dev_lock:388 glfs/block4: lock call state 2 retries 0. tag 65535 reopen 0 2019-06-06 13:50:55.847 1316 [DEBUG] tcmu_acquire_dev_lock:440 glfs/block4: lock call done. lock state 1 2019-06-06 13:57:50.102 1315 [DEBUG] tcmu_acquire_dev_lock:388 glfs/block3: lock call state 2 retries 0. tag 65535 reopen 0 2019-06-06 13:57:50.103 1315 [DEBUG] tcmu_acquire_dev_lock:440 glfs/block3: lock call done. lock state 1 2019-06-06 13:57:50.121 1315 [DEBUG] tcmu_acquire_dev_lock:388 glfs/block4: lock call state 2 retries 0. tag 65535 reopen 0 2019-06-06 13:57:50.132 1315 [DEBUG] tcmu_acquire_dev_lock:440 glfs/block4: lock call done. lock state 1 2019-06-06 14:09:03.654 1328 [DEBUG] tcmu_acquire_dev_lock:388 glfs/block3: lock call state 2 retries 0. tag 65535 reopen 0 2019-06-06 14:09:03.662 1328 [DEBUG] tcmu_acquire_dev_lock:440 glfs/block3: lock call done. lock state 1 2019-06-06 14:09:06.700 1328 [DEBUG] tcmu_acquire_dev_lock:388 glfs/block4: lock call state 2 retries 0. tag 65535 reopen 0 ==== The lock operation never returns. I am using the following glusterfs packages, built by myself: # rpm -qa|grep glusterfs glusterfs-extra-xlators-7dev-0.0.el7.x86_64 glusterfs-api-devel-7dev-0.0.el7.x86_64 glusterfs-7dev-0.0.el7.x86_64 glusterfs-server-7dev-0.0.el7.x86_64 glusterfs-cloudsync-plugins-7dev-0.0.el7.x86_64 glusterfs-resource-agents-7dev-0.0.el7.noarch glusterfs-api-7dev-0.0.el7.x86_64 glusterfs-devel-7dev-0.0.el7.x86_64 glusterfs-regression-tests-7dev-0.0.el7.x86_64 glusterfs-gnfs-7dev-0.0.el7.x86_64 glusterfs-client-xlators-7dev-0.0.el7.x86_64 glusterfs-geo-replication-7dev-0.0.el7.x86_64 glusterfs-debuginfo-7dev-0.0.el7.x86_64 glusterfs-fuse-7dev-0.0.el7.x86_64 glusterfs-events-7dev-0.0.el7.x86_64 glusterfs-libs-7dev-0.0.el7.x86_64 glusterfs-cli-7dev-0.0.el7.x86_64 glusterfs-rdma-7dev-0.0.el7.x86_64 How reproducible: 30%. Steps to Reproduce: 1. create one replica volume (HA >= 2) with the mandatory lock enabled 2. create one gluster-blockd target 3. log in and run fio on the client node 4. shut down one of the HA nodes, wait 2-3 minutes, and start it again Actual results: the fio workload never recovers and the read/write bandwidth stays at 0 KB/s, and we can see tons of logs in the /var/log/tcmu-runner.log file: 2019-06-06 15:01:06.641 1328 [DEBUG] alua_implicit_transition:561 glfs/block4: Lock acquisition operation is already in process. 2019-06-06 15:01:06.648 1328 [DEBUG_SCSI_CMD] tcmu_cdb_print_info:353 glfs/block4: 28 0 0 3 1f 80 0 0 8 0 2019-06-06 15:01:06.648 1328 [DEBUG] alua_implicit_transition:561 glfs/block4: Lock acquisition operation is already in process. 2019-06-06 15:01:06.655 1328 [DEBUG_SCSI_CMD] tcmu_cdb_print_info:353 glfs/block4: 28 0 0 3 1f 80 0 0 8 0 2019-06-06 15:01:06.655 1328 [DEBUG] alua_implicit_transition:561 glfs/block4: Lock acquisition operation is already in process. 
2019-06-06 15:01:06.661 1328 [DEBUG_SCSI_CMD] tcmu_cdb_print_info:353 glfs/block4: 28 0 0 3 1f 80 0 0 8 0 2019-06-06 15:01:06.662 1328 [DEBUG] alua_implicit_transition:561 glfs/block4: Lock acquisition operation is already in process. Expected results: soon after the shut-down node comes back up, the fio workload recovers. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 09:36:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:36:21 +0000 Subject: [Bugs] [Bug 1717824] Fencing: Added the tcmu-runner ALUA feature support but after one of node is rebooted the glfs_file_lock() get stucked In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717824 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |spalai at redhat.com Assignee|bugs at gluster.org |spalai at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 09:39:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:39:50 +0000 Subject: [Bugs] [Bug 1717824] Fencing: Added the tcmu-runner ALUA feature support but after one of node is rebooted the glfs_file_lock() get stucked In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717824 --- Comment #1 from Xiubo Li --- Created attachment 1577819 --> https://bugzilla.redhat.com/attachment.cgi?id=1577819&action=edit pstack of node rhel3 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 09:40:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:40:16 +0000 Subject: [Bugs] [Bug 1717824] Fencing: Added the tcmu-runner ALUA feature support but after one of node is rebooted the glfs_file_lock() get stucked In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717824 --- Comment #2 from Xiubo Li --- Created attachment 1577820 --> https://bugzilla.redhat.com/attachment.cgi?id=1577820&action=edit pstack of node rhel1 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 09:41:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:41:50 +0000 Subject: [Bugs] [Bug 1717827] New: tests/geo-rep: Add test case to validate non-root geo-replication setup Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717827 Bug ID: 1717827 Summary: tests/geo-rep: Add test case to validate non-root geo-replication setup Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: sunkumar at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Add test case to validate non-root geo-replication setup. Version-Release number of selected component (if applicable): mainline How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Jun 6 09:42:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:42:06 +0000 Subject: [Bugs] [Bug 1717827] tests/geo-rep: Add test case to validate non-root geo-replication setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717827 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |sunkumar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 09:42:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 09:42:10 +0000 Subject: [Bugs] [Bug 1717824] Fencing: Added the tcmu-runner ALUA feature support but after one of node is rebooted the glfs_file_lock() get stucked In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717824 --- Comment #3 from Xiubo Li --- The bt output from gdb: [root@rhel1 ~]# gdb -p 1325 (gdb) bt #0 0x00007fc7761baf47 in pthread_join () from /lib64/libpthread.so.0 #1 0x00007fc7773de468 in event_dispatch_epoll (event_pool=0x559f03d4b560) at event-epoll.c:847 #2 0x0000559f02419658 in main (argc=21, argv=0x7fff9c6722c8) at glusterfsd.c:2871 (gdb) [root@rhel3 ~]# gdb -p 7669 (gdb) bt #0 0x00007fac80bd9f47 in pthread_join () from /usr/lib64/libpthread.so.0 #1 0x00007fac81dfd468 in event_dispatch_epoll (event_pool=0x55de6f845560) at event-epoll.c:847 #2 0x000055de6f143658 in main (argc=21, argv=0x7ffcafc3eff8) at glusterfsd.c:2871 (gdb) The pl_inode->fop_wind_count is: (gdb) thread 2 [Switching to thread 2 (Thread 0x7fc742184700 (LWP 1829))] #0 0x00007fc7761bd965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 (gdb) frame 2 #2 0x00007fc76379c13b in pl_lk (frame=frame@entry=0x7fc750001128, this=this@entry=0x7fc75c0128f0, fd=fd@entry=0x7fc73c0977d8, cmd=cmd@entry=6, flock=flock@entry=0x7fc73c076938, xdata=xdata@entry=0x7fc73c071828) at posix.c:2637 2637 ret = pl_lock_preempt(pl_inode, reqlock); (gdb) p pl_inode->fop_wind_count $1 = -30 (gdb) For the pstack logs, please see the attachments. Thanks. BRs -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 6 10:19:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 10:19:51 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22832 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 6 10:19:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 10:19:52 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #682 from Worker Ant --- REVIEW: https://review.gluster.org/22832 (glusterd: Fix a typo) posted (#1) for review on master by Anoop C S -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
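For context on bug 1717824 above, the fencing call being exercised is glfs_file_lock() from gfapi; a minimal usage sketch (error handling trimmed, and assuming an already-open glfs_fd_t):

#include <glusterfs/api/glfs.h>
#include <fcntl.h>

/* Sketch: take a mandatory, blocking write lock on the whole file. */
static int
take_mandatory_lock(glfs_fd_t *fd)
{
    struct flock lock = {
        .l_type = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start = 0,
        .l_len = 0, /* 0 = lock the whole file */
    };

    /* F_SETLKW blocks until the lock is granted; this is the call that
     * is reported as stuck after the node reboot. */
    return glfs_file_lock(fd, F_SETLKW, &lock, GLFS_LK_MANDATORY);
}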
From bugzilla at redhat.com Thu Jun 6 11:40:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 06 Jun 2019 11:40:44 +0000
Subject: [Bugs] [Bug 1717876] New: Gluster upstream regression tests are failing with centos 7.7
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1717876

Bug ID: 1717876
Summary: Gluster upstream regression tests are failing with centos 7.7
Product: GlusterFS
Version: mainline
Status: NEW
Component: tests
Assignee: bugs at gluster.org
Reporter: khiremat at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community

Description of problem:
Gluster upstream regression tests are failing with centos 7.7, where the python3 package is delivered. The python test utility files are not python3 compatible. Following are the test utility python files:

./bugs/distribute/overlap.py
./bugs/nfs/socket-as-fifo.py
./features/ipctest.py
./utils/create-files.py
./utils/getfattr.py
./utils/gfid-access.py
./utils/libcxattr.py
./utils/pidof.py
./utils/setfattr.py
./utils/changelogparser.py

Each needs to be tested and made py2/py3 compatible.

Version-Release number of selected component (if applicable): mainline

How reproducible: Always

Failures:
https://build.gluster.org/job/centos7-regression/6317/consoleFull
& https://build.gluster.org/job/centos7-regression/6316/consoleFull

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Thu Jun 6 11:43:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 06 Jun 2019 11:43:55 +0000
Subject: [Bugs] [Bug 1717876] Gluster upstream regression tests are failing with centos 7.7
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1717876

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22833

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Thu Jun 6 11:43:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 06 Jun 2019 11:43:56 +0000
Subject: [Bugs] [Bug 1717876] Gluster upstream regression tests are failing with centos 7.7
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1717876

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST

--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22833 (tests: Use python2 for tests) posted (#1) for review on master by Kotresh HR

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Thu Jun 6 11:57:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 06 Jun 2019 11:57:16 +0000
Subject: [Bugs] [Bug 1708929] Add more test coverage for shd mux
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1708929

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-06-06 11:57:16

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22697 (tests/shd: Add test coverage for shd mux) merged (#15) on master by Pranith Kumar Karampuri

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
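As a quick first pass over the utility files listed in bug 1717876, a syntax-level python3 check can be scripted. A sketch in bash, assuming it is run from the tests/ directory of a glusterfs source tree; note that py_compile only catches syntax errors (e.g. py2-only print statements), not runtime incompatibilities:

    cd tests   # assumption: current directory is the root of the glusterfs source tree
    for f in ./bugs/distribute/overlap.py ./bugs/nfs/socket-as-fifo.py \
             ./features/ipctest.py ./utils/create-files.py ./utils/getfattr.py \
             ./utils/gfid-access.py ./utils/libcxattr.py ./utils/pidof.py \
             ./utils/setfattr.py ./utils/changelogparser.py; do
        # byte-compile under python3; failures point at py2-only syntax
        python3 -m py_compile "$f" && echo "OK   $f" || echo "FAIL $f"
    done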
From bugzilla at redhat.com Thu Jun 6 13:29:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 06 Jun 2019 13:29:44 +0000
Subject: [Bugs] [Bug 1716760] Make debugging hung frames easier
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1716760

Atin Mukherjee changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |amukherj at redhat.com

--- Comment #3 from Atin Mukherjee ---
Vivek/Rahul - we need this patch in 3.5.0 for a better debugging experience.

-- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Thu Jun 6 14:09:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 06 Jun 2019 14:09:30 +0000
Subject: [Bugs] [Bug 1717953] New: SELinux context labels are missing for newly added bricks using add-brick command
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1717953

Bug ID: 1717953
Summary: SELinux context labels are missing for newly added bricks using add-brick command
Product: GlusterFS
Version: mainline
OS: Linux
Status: NEW
Component: scripts
Severity: medium
Assignee: bugs at gluster.org
Reporter: anoopcs at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community

Description of problem:
When we add new bricks to an existing volume using the add-brick command, the "glusterd_brick_t" SELinux context label is not assigned on those new brick paths.

Version-Release number of selected component (if applicable): master

How reproducible: Always

Steps to Reproduce:
1. Create and start a basic distribute-replicate volume
2. Verify that brick paths have "glusterd_brick_t" SELinux labels by running `ls -lZ `
3. Add new bricks to the existing volume
4. Check SELinux labels on newly added brick paths

Actual results:
"glusterd_brick_t" SELinux label is missing on newly added bricks

Expected results:
Following SELinux label is expected: system_u:object_r:glusterd_brick_t:s0

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Thu Jun 6 14:10:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 06 Jun 2019 14:10:00 +0000
Subject: [Bugs] [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1717953

Anoop C S changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |anoopcs at redhat.com

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Thu Jun 6 14:16:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 06 Jun 2019 14:16:17 +0000
Subject: [Bugs] [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1717953

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22834

-- You are receiving this mail because: You are on the CC list for the bug.
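Until the hook-script fix lands, the label expected in bug 1717953 can be applied to a new brick by hand. A sketch, where /bricks/newbrick1 is a hypothetical brick path; semanage requires the policycoreutils-python package:

    # persist the file-context rule, then relabel the brick path
    semanage fcontext -a -t glusterd_brick_t "/bricks/newbrick1(/.*)?"
    restorecon -Rv /bricks/newbrick1
    ls -ldZ /bricks/newbrick1   # should now show system_u:object_r:glusterd_brick_t:s0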
From bugzilla at redhat.com Thu Jun 6 14:16:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 06 Jun 2019 14:16:18 +0000 Subject: [Bugs] [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717953 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22834 (extras/hooks: Add SELinux label on new bricks during add-brick) posted (#1) for review on master by Anoop C S -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 7 05:02:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 05:02:31 +0000 Subject: [Bugs] [Bug 1651445] [RFE] storage.reserve option should take size of disk as input instead of percentage In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651445 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1573077 Depends On|1573077 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1573077 [Bug 1573077] [RFE] storage.reserve option should take size of disk as input instead of percentage -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 7 05:38:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 05:38:38 +0000 Subject: [Bugs] [Bug 1715422] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715422 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(khiremat at redhat.c | |om) --- Comment #2 from Atin Mukherjee --- Kotresh - I believe we need to fix this. Can we have a devel ack on this bug? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 7 07:00:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 07:00:58 +0000 Subject: [Bugs] [Bug 1715422] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715422 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set | |qe_test_coverage flag at QE | |approved BZs -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Jun 7 08:22:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 08:22:54 +0000 Subject: [Bugs] [Bug 1718191] New: Regression: Intermittent test failure for quick-read-with-upcall.t Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718191 Bug ID: 1718191 Summary: Regression: Intermittent test failure for quick-read-with-upcall.t Product: GlusterFS Version: mainline Status: NEW Component: quick-read Severity: urgent Priority: high Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: While running regression, quick-read-with-upcall.t script fails intermittently. Please debug and fix the problem. Version-Release number of selected component (if applicable): master How reproducible: 1/10 Steps to Reproduce: 1. Submit a patch and run regression. Additional info: Error is normally like below: 08:59:24 ok 11 [ 10/ 3] < 36> 'write_to /mnt/glusterfs/0/test.txt test-message1' 08:59:24 ok 12 [ 10/ 6] < 37> 'test-message1 cat /mnt/glusterfs/0/test.txt' 08:59:24 ok 13 [ 10/ 4] < 38> 'test-message0 cat /mnt/glusterfs/1/test.txt' 08:59:24 not ok 14 [ 3715/ 6] < 45> 'test-message1 cat /mnt/glusterfs/1/test.txt' -> 'Got "test-message0" instead of "test-message1"' 08:59:24 ok 15 [ 10/ 162] < 47> 'gluster --mode=script --wignore volume set patchy features.cache-invalidation on' 08:59:24 ok 16 [ 10/ 148] < 48> 'gluster --mode=script --wignore volume set patchy performance.qr-cache-timeout 15' -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 08:29:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 08:29:07 +0000 Subject: [Bugs] [Bug 1718191] Regression: Intermittent test failure for quick-read-with-upcall.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718191 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22836 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 08:29:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 08:29:08 +0000 Subject: [Bugs] [Bug 1718191] Regression: Intermittent test failure for quick-read-with-upcall.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718191 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22836 (tests/quick-read-upcall: mark it bad) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 08:30:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 08:30:08 +0000 Subject: [Bugs] [Bug 1717757] BItrot: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 --- Comment #4 from david.spisla at iternity.com --- @Amar I wrote a patch with debug logs and I will observe the bricks now. During this time I have some questions concerning your patch suggestion: 1. 
According to the crash report in the brick logs, there was a failure in

[2019-06-05 09:05:05.137156] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-archive1-access-control: client: CTX_ID:fcab5e67-b9d9-4b72-8c15-f29de2084af3-GRAPH_ID:0-PID:18916-HOST:fs-detlefh-c1-n2-PC_NAME:archive1-client-0-RECON_NO:-0, gfid: 494b42ad-7e40-4e27-8878-99387a80b5dc, req(uid:2000,gid:2000,perm:3,ngrps:1), ctx(uid:0,gid:0,in-groups:0,perm:755,updated-fop:LOOKUP, acl:-) [Permission denied]

just before the crash. What can be the reason for this?

2. If this LOOKUP for acls fails, is it problematic to do a setxattr with loc? If we skip setting the xattr when fd is NULL, the file on that brick won't have the necessary xattrs like trusted.worm_file and others. See an example directly after the crash:

# file: gluster/brick3/glusterbrick/test/data/BC/storage.log
trusted.gfid=0sag3y6RuoTgqAw//fx3ZB1Q==
trusted.gfid2path.273f2255a25b2961="bd910b86-d51a-4006-a2c4-515ef5f1777a/storage.log"
trusted.pgfid.bd910b86-d51a-4006-a2c4-515ef5f1777a=0sAAAAAQ==

On the healthy brick I got:

# file: gluster/brick3/glusterbrick/test/data/BC/storage.log
trusted.afr.dirty=0sAAAAAAAAAAAAAAAA
trusted.afr.test-client-0=0sAAAABAAAAAMAAAAA
trusted.bit-rot.version=0sAgAAAAAAAABc+P64AAEhGQ==
trusted.gfid=0sag3y6RuoTgqAw//fx3ZB1Q==
trusted.gfid2path.273f2255a25b2961="bd910b86-d51a-4006-a2c4-515ef5f1777a/storage.log"
trusted.glusterfs.mdata=0sAQAAAAAAAAAAAAAAAFz5AJEAAAAAMqdgMwAAAABcRwJEAAAAAAAAAAAAAAAAXPkAkQAAAAAAAAAA
trusted.pgfid.bd910b86-d51a-4006-a2c4-515ef5f1777a=0sAAAAAQ==
trusted.start_time="1559822481"
trusted.worm_file=0sMQA=

After restarting the faulty brick a heal was triggered and afterwards the file on the faulty brick is healed. It should be ensured that the broken file gets all necessary xattrs. What is the better way? Triggering a setxattr with loc in worm_create_cbk or doing a heal afterwards?

-- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug.

From bugzilla at redhat.com Fri Jun 7 08:31:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 08:31:55 +0000
Subject: [Bugs] [Bug 1708929] Add more test coverage for shd mux
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1708929

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22837

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Fri Jun 7 08:31:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 08:31:56 +0000
Subject: [Bugs] [Bug 1708929] Add more test coverage for shd mux
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1708929

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |POST
Resolution|NEXTRELEASE |---
Keywords| |Reopened

--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22837 (tests/volume-scale-shd-mux: mark as bad test) posted (#1) for review on master by Amar Tumballi

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
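The per-brick listings in bug 1717757, comment #4 above can be compared mechanically instead of by eye. A sketch, assuming the brick path shown there on both replica nodes (the two dumps would need to be collected on one host before diffing):

    F=test/data/BC/storage.log
    # on the faulty node:
    getfattr -d -m . /gluster/brick3/glusterbrick/"$F" > /tmp/faulty.xattrs
    # on the healthy node (path assumed to mirror the faulty one):
    getfattr -d -m . /gluster/brick3/glusterbrick/"$F" > /tmp/healthy.xattrs
    diff /tmp/faulty.xattrs /tmp/healthy.xattrs   # missing trusted.worm_file etc. show up here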
From bugzilla at redhat.com Fri Jun 7 08:35:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 08:35:33 +0000
Subject: [Bugs] [Bug 1717757] BItrot: Segmentation Fault if bitrot stub do signature
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1717757

--- Comment #5 from Amar Tumballi ---
1. Permission denied is most probably an issue of missing permissions (uid 2000 trying to create an entry in a directory with mode 755, owned by uid 0 (root)).

2. I think it is better to leave it to heal. If it is a create failure, we should anyway fail the operation, in my opinion.

-- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug.

From bugzilla at redhat.com Fri Jun 7 08:40:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 08:40:14 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22835

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Fri Jun 7 08:40:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 08:40:15 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

--- Comment #683 from Worker Ant ---
REVIEW: https://review.gluster.org/22835 (tests/subdir-mount: give more time for heal to complete) posted (#2) for review on master by Amar Tumballi

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Fri Jun 7 08:41:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 08:41:35 +0000
Subject: [Bugs] [Bug 1708929] Add more test coverage for shd mux
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1708929

Amar Tumballi changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |atumball at redhat.com

--- Comment #4 from Amar Tumballi ---
Reopened because of the test script failure: volume-scale-shd-mux.t

09:09:24 not ok 58 [ 14/ 80343] < 104> '^3$ number_healer_threads_shd patchy_distribute1 __afr_shd_healer_wait' -> 'Got "1" instead of "^3$"'

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Fri Jun 7 08:49:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 08:49:00 +0000
Subject: [Bugs] [Bug 1708929] Add more test coverage for shd mux
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1708929

Amar Tumballi changed:

What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |high
Severity|unspecified |high

--- Comment #5 from Worker Ant ---
REVIEW: https://review.gluster.org/22837 (tests/volume-scale-shd-mux: mark as bad test) merged (#1) on master by Amar Tumballi

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Jun 7 08:49:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 08:49:43 +0000
Subject: [Bugs] [Bug 1718191] Regression: Intermittent test failure for quick-read-with-upcall.t
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1718191

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22836 (tests/quick-read-upcall: mark it bad) merged (#2) on master by Amar Tumballi

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Fri Jun 7 09:05:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 09:05:16 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

--- Comment #684 from Worker Ant ---
REVIEW: https://review.gluster.org/22829 (tests/utils: Fix py2/py3 util python scripts) merged (#4) on master by Amar Tumballi

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Fri Jun 7 09:32:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 09:32:08 +0000
Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1714536

Sunil Kumar Acharya changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |sheggodu at redhat.com
Flags| |needinfo?(khiremat at redhat.com)

-- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Fri Jun 7 09:57:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 09:57:15 +0000
Subject: [Bugs] [Bug 1717757] BItrot: Segmentation Fault if bitrot stub do signature
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1717757

--- Comment #6 from david.spisla at iternity.com ---
Alright, I will stress the system for a while, and if everything is stable I will commit the patch to Gerrit

-- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug.

From bugzilla at redhat.com Fri Jun 7 10:34:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 10:34:53 +0000
Subject: [Bugs] [Bug 1718227] New: SELinux context labels are missing for newly added bricks using add-brick command
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1718227

Bug ID: 1718227
Summary: SELinux context labels are missing for newly added bricks using add-brick command
Product: GlusterFS
Version: 6
OS: Linux
Status: NEW
Component: scripts
Severity: medium
Assignee: bugs at gluster.org
Reporter: anoopcs at redhat.com
CC: bugs at gluster.org
Depends On: 1717953
Target Milestone: ---
Classification: Community

+++ This bug was initially created as a clone of Bug #1717953 +++

Description of problem:
When we add new bricks to an existing volume using the add-brick command, the "glusterd_brick_t" SELinux context label is not assigned on those new brick paths.

Version-Release number of selected component (if applicable): master

How reproducible: Always

Steps to Reproduce:
1. Create and start a basic distribute-replicate volume
2. Verify that brick paths have "glusterd_brick_t" SELinux labels by running `ls -lZ `
3.
Add new bricks to the existing volume 4. Check SELinux labels on newly added brick paths Actual results: "glusterd_brick_t" SELinux label is missing on newly added bricks Expected results: Following SELinux label is expected: system_u:object_r:glusterd_brick_t:s0 --- Additional comment from Worker Ant on 2019-06-06 19:46:18 IST --- REVIEW: https://review.gluster.org/22834 (extras/hooks: Add SELinux label on new bricks during add-brick) posted (#1) for review on master by Anoop C S Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717953 [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 10:34:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 10:34:53 +0000 Subject: [Bugs] [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717953 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1718227 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1718227 [Bug 1718227] SELinux context labels are missing for newly added bricks using add-brick command -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 7 10:36:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 10:36:15 +0000 Subject: [Bugs] [Bug 1718227] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718227 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1686800 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 10:55:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 10:55:40 +0000 Subject: [Bugs] [Bug 1678640] Running 'control-cpu-load.sh' prevents CTDB starting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678640 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |anoopcs at redhat.com Flags| |needinfo?(ryan at magenta.tv) --- Comment #1 from Anoop C S --- (In reply to ryan from comment #0) > Actual results: > CTDB fails to start with following error: > 2019/02/08 20:46:59.612215 ctdbd[2629]: Created PID file > /var/run/ctdb/ctdbd.pid > 2019/02/08 20:46:59.612267 ctdbd[2629]: Listening to ctdb socket > /var/run/ctdb/ctdbd.socket > 2019/02/08 20:46:59.612297 ctdbd[2629]: Unable to set scheduler to > SCHED_FIFO (Operation not permitted) > 2019/02/08 20:46:59.612304 ctdbd[2629]: CTDB daemon shutting down Please use the following CTDB setting in /etc/sysconfig/ctdb: CTDB_NOSETSCHED=yes and try restarting CTDB. -- You are receiving this mail because: You are on the CC list for the bug. 
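Spelled out, the workaround from comment #1 of bug 1678640 above is a one-line configuration change; a sketch, where the systemd unit name 'ctdb' is an assumption:

    # in /etc/sysconfig/ctdb: tell ctdbd not to request SCHED_FIFO at startup
    echo 'CTDB_NOSETSCHED=yes' >> /etc/sysconfig/ctdb
    systemctl restart ctdb    # assumes CTDB is managed as the systemd unit 'ctdb'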
From bugzilla at redhat.com Fri Jun 7 11:01:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 11:01:49 +0000
Subject: [Bugs] [Bug 1678640] Running 'control-cpu-load.sh' prevents CTDB starting
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1678640

--- Comment #2 from Anoop C S ---
(In reply to Anoop C S from comment #1)
> (In reply to ryan from comment #0)
> > Actual results:
> > CTDB fails to start with following error:
> > 2019/02/08 20:46:59.612215 ctdbd[2629]: Created PID file /var/run/ctdb/ctdbd.pid
> > 2019/02/08 20:46:59.612267 ctdbd[2629]: Listening to ctdb socket /var/run/ctdb/ctdbd.socket
> > 2019/02/08 20:46:59.612297 ctdbd[2629]: Unable to set scheduler to SCHED_FIFO (Operation not permitted)
> > 2019/02/08 20:46:59.612304 ctdbd[2629]: CTDB daemon shutting down
>
> Please use the following CTDB setting in /etc/sysconfig/ctdb:
> CTDB_NOSETSCHED=yes
>
> and try restarting CTDB.

Copy-pasting a summary of the reason for the above suggestion from a different bug:

The CTDB daemon, i.e. ctdbd, is a service that by default requests real-time scheduling unless it is instructed not to do so via explicit configuration parameters. By default systemd places all system services into their own control groups in the "cpu" hierarchy. But the "cpu" cgroup controller of the kernel demands that an absolute real-time budget be explicitly specified. A reasonable value for the required real-time cpu cycles is pre-written into the corresponding configuration files. If this value gets overwritten by other components in the system, real-time scheduling is denied to services under this "cpu" hierarchy with error EPERM (Operation not permitted).

ref: https://www.freedesktop.org/wiki/Software/systemd/MyServiceCantGetRealtime/

-- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Fri Jun 7 11:04:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 11:04:59 +0000
Subject: [Bugs] [Bug 1678640] Running 'control-cpu-load.sh' prevents CTDB starting
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1678640

Anoop C S changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Component|core |gluster-smb
Assignee|moagrawa at redhat.com |anoopcs at redhat.com

-- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Fri Jun 7 11:16:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 11:16:19 +0000
Subject: [Bugs] [Bug 1680085] OS X clients disconnect from SMB mount points
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1680085

Anoop C S changed:

What |Removed |Added
----------------------------------------------------------------------------
Version|cns-1.0 |4.1
Component|samba |gluster-smb
CC| |bugs at gluster.org
Assignee|gdeschner at redhat.com |bugs at gluster.org
QA Contact|vdas at redhat.com |
Product|Red Hat Gluster Storage |GlusterFS

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
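The budget situation described in comment #2 can be inspected directly. A sketch assuming a cgroup-v1 layout with RT group scheduling, as on RHEL 7; the exact cgroup paths and the ctdb.service unit name are assumptions:

    # a value of 0 means no real-time budget, so SCHED_FIFO requests fail with EPERM
    cat /sys/fs/cgroup/cpu/system.slice/cpu.rt_runtime_us
    cat /sys/fs/cgroup/cpu/system.slice/ctdb.service/cpu.rt_runtime_us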
From bugzilla at redhat.com Fri Jun 7 11:16:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 11:16:46 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(khiremat at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 7 11:19:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 11:19:10 +0000 Subject: [Bugs] [Bug 1602824] SMBD crashes when streams_attr VFS is used with Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1602824 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |anoopcs at redhat.com Component|libgfapi |gluster-smb QA Contact|bugs at gluster.org | -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 11:20:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 11:20:40 +0000 Subject: [Bugs] [Bug 1717876] Gluster upstream regression tests are failing with centos 7.7 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717876 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED --- Comment #2 from Kotresh HR --- This patch fixed the issue and is merged https://review.gluster.org/#/c/glusterfs/+/22829/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 11:21:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 11:21:07 +0000 Subject: [Bugs] [Bug 1717876] Gluster upstream regression tests are failing with centos 7.7 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717876 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 11:21:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 11:21:29 +0000 Subject: [Bugs] [Bug 1717876] Gluster upstream regression tests are failing with centos 7.7 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717876 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-07 11:21:29 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Jun 7 11:26:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 11:26:59 +0000
Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1714536

Atin Mukherjee changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |MODIFIED
CC| |amukherj at redhat.com

-- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Fri Jun 7 11:49:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 11:49:10 +0000
Subject: [Bugs] [Bug 1718273] New: markdown formatting errors in files present under /doc directory of the project
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1718273

Bug ID: 1718273
Summary: markdown formatting errors in files present under /doc directory of the project
Product: GlusterFS
Version: mainline
Status: NEW
Component: doc
Keywords: Documentation
Assignee: kiyer at redhat.com
Reporter: kiyer at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community

Description of problem:
There are a lot of markdown files present under the /doc directory of the project that have markdown formatting errors, which make these files look really shabby when opened on GitHub.

-- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Fri Jun 7 11:53:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 11:53:06 +0000
Subject: [Bugs] [Bug 1718273] markdown formatting errors in files present under /doc directory of the project
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1718273

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22825

-- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Fri Jun 7 11:53:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 11:53:07 +0000
Subject: [Bugs] [Bug 1718273] markdown formatting errors in files present under /doc directory of the project
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1718273

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST

--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22825 (Fixing formatting errors in markdown files) posted (#2) for review on master by Kshithij Iyer

-- You are receiving this mail because: You are on the CC list for the bug.
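One way to find the formatting errors reported in bug 1718273 mechanically, as a sketch; markdownlint-cli is an external npm tool, not something the project is stated to use:

    npm install -g markdownlint-cli
    find doc -name '*.md' -print0 | xargs -0 markdownlint   # lists rule violations per file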
From bugzilla at redhat.com Fri Jun 7 13:11:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 13:11:43 +0000
Subject: [Bugs] [Bug 1718316] New: Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1718316

Bug ID: 1718316
Summary: Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients
Product: GlusterFS
Version: mainline
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Severity: high
Priority: medium
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org, dang at redhat.com, ffilz at redhat.com, grajoria at redhat.com, jthottan at redhat.com, mbenjamin at redhat.com, msaini at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, skoduri at redhat.com, storage-qa-internal at redhat.com
Depends On: 1717784
Target Milestone: ---
Classification: Community

+++ This bug was initially created as a clone of Bug #1717784 +++

Description of problem:
=========================
Ganesha-gfapi logs are flooded with error messages related to gf_uuid_is_null(gfid) when linux untars and lookups are running from multiple clients:

---------
[2019-06-06 07:56:12.503603] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7f7e91e8b0ae] -->/lib64/libgfapi.so.0(+0x258f1) [0x7f7e91ea28f1] -->/lib64/libgfapi.so.0(+0x257c4) [0x7f7e91ea27c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument]
---------

Version-Release number of selected component (if applicable):
===========================
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 Beta (Maipo)
# rpm -qa | grep ganesha
nfs-ganesha-2.7.3-3.el7rhgs.x86_64
glusterfs-ganesha-6.0-3.el7rhgs.x86_64
nfs-ganesha-debuginfo-2.7.3-3.el7rhgs.x86_64
nfs-ganesha-gluster-2.7.3-3.el7rhgs.x86_64

How reproducible:
=====================
2/2

Steps to Reproduce:
======================
1. Create a 4 node Ganesha cluster
2. Create a 4*3 Distribute-replicate volume. Export the volume via Ganesha
3. Mount the volume on 4 clients via the v4.1 protocol
4. Run the following workload
   Client 1: Run linux untars
   Client 2: du -sh in loop
   Client 3: ls -lRt in loop
   Client 4: find's in loop

Actual results:
==================
While the test is running, ganesha-gfapi logs are flooded with errors related to "gf_uuid_is_null"

======
[2019-06-03 16:54:19.829136] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7ff6902d00ae] -->/lib64/libgfapi.so.0(+0x2594a) [0x7ff6902e794a] -->/lib64/libgfapi.so.0(+0x257c4) [0x7ff6902e77c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument]
[2019-06-03 16:54:20.006163] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7ff6902d00ae] -->/lib64/libgfapi.so.0(+0x2594a) [0x7ff6902e794a] -->/lib64/libgfapi.so.0(+0x257c4) [0x7ff6902e77c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument]
[2019-06-03 16:54:20.320293] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7ff6902d00ae] -->/lib64/libgfapi.so.0(+0x2594a) [0x7ff6902e794a] -->/lib64/libgfapi.so.0(+0x257c4) [0x7ff6902e77c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument]
=====

# cat /var/log/ganesha/ganesha-gfapi.log | grep gf_uuid_is_null | wc -l
605340

Expected results:
===================
There should not be error messages in ganesha-gfapi.logs

Additional info:
===================
On narrowing down the test scenario, it seems the error messages appear when only du -sh and ls -lRt are running in a loop from two different clients.

--- Additional comment from RHEL Product and Program Management on 2019-06-06 08:10:27 UTC ---

This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs?3.5.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Soumya Koduri on 2019-06-06 09:48:36 UTC ---

@Manisha, are these clients connected to different NFS-Ganesha servers? On which machine did you observe these errors? I do not see such messages in the sosreports uploaded.

>>> On narrowing down the test scenario, it seems the error messages appear when only du -sh and ls -lRt are running in a loop from two different clients

Does this mean these messages are not seen with just the linux untar test?

--- Additional comment from Manisha Saini on 2019-06-06 10:16:00 UTC ---

(In reply to Soumya Koduri from comment #3)
> @Manisha,
>
> are these clients connected to different NFS-Ganesha servers? On which
> machine did you observe these errors? I do not see such messages in the
> sosreports uploaded.

Hi Soumya,
All the clients are connected to a single server VIP. I see there is some issue with how sosreport collects Ganesha logs. Not all logs are captured as part of the sosreport.

>
> >>> On narrowing down the test scenario, it seems the error messages appear when only du -sh and ls -lRt are running in a loop from two different clients
>
> Does this mean these messages are not seen with just the linux untar test?

No. Not seen with only untars.

--- Additional comment from Soumya Koduri on 2019-06-07 10:08:03 UTC ---

Thanks Manisha for sharing the setup and logs.

"0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument]"

The above message is logged while processing upcall requests. Somehow the gfid passed has become NULL. IMO there are two issues to be considered here -

> there are so many upcall requests generated even though there is only a single server serving all the clients.

Seems like the data being accessed is huge and hence the server is trying to clean up the inodes from the lru list. While destroying an inode, the upcall xlator sends a cache invalidation request to all its clients to notify that the particular file/inode entry is no longer cached by the server. This logic can be optimized a bit here.

For nameless lookups, the server generates a dummy inode (say inodeD) and later links it to the inode table (if there is no entry already present for that file/dir) in the cbk path. So as part of lookup_cbk, though the inode (inodeD) received is invalid, the upcall xlator creates an inode_ctx entry as it eventually can get linked to the inode table. However, in certain cases, if there is already an inode (say inodeC) present for that particular file, this new inode (inodeD) will be purged, which results in sending upcall notifications to the clients.

In Manisha's testcase, as the data created is huge and being looked up in a loop, there are many such dummy inode entries getting purged, resulting in a huge number of upcall notifications sent to the client.
We can avoid this issue to an extent by checking if the inode is valid or not (i.e., linked or not) before sending callback notifications.

Note - this has been a day-1 issue but is good to fix.

* Another issue is the gfid becoming NULL in the upcall args.

> I couldn't reproduce this issue on my setup.

However, it seems that in the upcall xlator we already check that the gfid is not NULL before sending the notification:

GF_VALIDATE_OR_GOTO("upcall_client_cache_invalidate", !(gf_uuid_is_null(gfid)), out);

So that means somewhere in the client processing the gfid has become NULL. From further code-reading I see a potential issue in the upcall processing callback function. In glfs_cbk_upcall_data(),

--
args->fs = fs;
args->upcall_data = gf_memdup(upcall_data, sizeof(*upcall_data));
--

gf_memdup() may not be the right routine to use here, as the upcall_data structure contains pointers to other data. This definitely needs to be fixed. However, I would like to re-confirm whether this caused the gfid to become NULL. Request Manisha to share the setup (if possible) while the tests are going on, to confirm this theory. Thanks!

Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1717784
[Bug 1717784] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients

-- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Fri Jun 7 13:14:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 13:14:59 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1193929

--- Comment #685 from Worker Ant ---
REVIEW: https://review.gluster.org/22832 (glusterd: Fix typos) merged (#3) on master by Amar Tumballi

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Fri Jun 7 13:59:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 07 Jun 2019 13:59:52 +0000
Subject: [Bugs] [Bug 1718338] New: Upcall: Avoid sending upcalls for invalid Inode
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1718338

Bug ID: 1718338
Summary: Upcall: Avoid sending upcalls for invalid Inode
Product: GlusterFS
Version: mainline
Hardware: All
OS: All
Status: NEW
Component: upcall
Severity: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community

Description of problem:
For nameless LOOKUPs, the server creates a new inode which shall remain invalid until the fop is successfully processed, after which it is linked to the inode table. But in case there is an already linked inode for that entry, it discards the newly created inode, which results in an upcall notification. This may result in the client being bombarded with unnecessary upcalls, affecting performance if the data set is huge.

This issue can be avoided by looking up and storing the upcall context in the original linked inode (if it exists), thus saving up on those extra callbacks.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
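To put a number on the flooding rate discussed in bug 1718316, the gfapi log can be bucketed per minute. A sketch that relies only on the timestamp format shown in the log excerpts above:

    # count glfs_h_find_handle errors per minute; timestamps look like [2019-06-03 16:54:19.829136]
    grep 'glfs_h_find_handle: invalid argument' /var/log/ganesha/ganesha-gfapi.log \
        | awk '{ print substr($0, 2, 16) }' | sort | uniq -c | sort -rn | head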
From bugzilla at redhat.com Fri Jun 7 14:08:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 14:08:38 +0000 Subject: [Bugs] [Bug 1718316] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718316 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1717784 Depends On|1717784 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717784 [Bug 1717784] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 14:08:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 14:08:38 +0000 Subject: [Bugs] [Bug 1718338] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718338 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1717784 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717784 [Bug 1717784] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 14:09:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 14:09:42 +0000 Subject: [Bugs] [Bug 1718316] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718316 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22839 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 14:09:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 14:09:44 +0000 Subject: [Bugs] [Bug 1718316] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718316 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22839 (gfapi: fix incorrect initialization of upcall syncop arguments) posted (#1) for review on master by soumya k -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Jun 7 14:10:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 14:10:25 +0000 Subject: [Bugs] [Bug 1718316] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718316 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Assignee|bugs at gluster.org |skoduri at redhat.com -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 14:10:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 14:10:52 +0000 Subject: [Bugs] [Bug 1718338] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718338 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22840 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 7 14:11:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 07 Jun 2019 14:11:30 +0000 Subject: [Bugs] [Bug 1718338] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718338 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Assignee|bugs at gluster.org |skoduri at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Jun 8 02:13:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 08 Jun 2019 02:13:07 +0000 Subject: [Bugs] [Bug 1717782] gluster v get all still showing storage.fips-mode-rchecksum off In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717782 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-08 02:13:07 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22830 (glusterd: store fips-mode-rchecksum option in the info file) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Jun 8 05:40:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 08 Jun 2019 05:40:05 +0000 Subject: [Bugs] [Bug 1716766] [Thin-arbiter] TA process is not picking 24007 as port while starting up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716766 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-08 05:40:05 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22811 (cluster/replicate: Modify command in unit file to assign port correctly) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat Jun 8 05:42:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 08 Jun 2019 05:42:56 +0000
Subject: [Bugs] [Bug 1715921] uss.t tests times out with brick-mux regression
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1715921

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-06-08 05:42:56

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22728 (uss: Ensure that snapshot is deleted before creating a new snapshot) merged (#10) on master by Amar Tumballi

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Sat Jun 8 05:46:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 08 Jun 2019 05:46:10 +0000
Subject: [Bugs] [Bug 1718273] markdown formatting errors in files present under /doc directory of the project
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1718273

Worker Ant changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-06-08 05:46:10

--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22825 (Fixing formatting errors in markdown files) merged (#3) on master by Amar Tumballi

-- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Sat Jun 8 05:47:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 08 Jun 2019 05:47:06 +0000
Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1703948

--- Comment #7 from Worker Ant ---
REVIEW: https://review.gluster.org/22810 (xlator/log: Add more logging in xlator_is_cleanup_starting) merged (#4) on master by Amar Tumballi

-- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Sat Jun 8 08:14:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 08 Jun 2019 08:14:37 +0000
Subject: [Bugs] [Bug 1688226] Brick Still Died After Restart Glusterd & Glusterfsd Services
In-Reply-To: References: Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1688226

Eng Khalid Jamal changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |NOTABUG
Last Closed| |2019-06-08 08:14:37

--- Comment #4 from Eng Khalid Jamal ---
I think no one can solve this issue for me. When I checked my brick I found the disk was completely offline. I replaced the disk, ran gluster replace-brick, then rebalanced my volume and healed it, and everything is going right. But is there any solution for this issue in the future?

Best regards

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Sat Jun 8 14:40:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 08 Jun 2019 14:40:16 +0000 Subject: [Bugs] [Bug 1697986] GlusterFS 5.7 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697986 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22842 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Jun 8 14:40:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 08 Jun 2019 14:40:17 +0000 Subject: [Bugs] [Bug 1697986] GlusterFS 5.7 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697986 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22842 (doc: Added release notes for 5.7) posted (#1) for review on release-5 by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Jun 8 14:41:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 08 Jun 2019 14:41:50 +0000 Subject: [Bugs] [Bug 1718555] New: (glusterfs-6.3) - GlusterFS 6.3 tracker Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718555 Bug ID: 1718555 Summary: (glusterfs-6.3) - GlusterFS 6.3 tracker Product: GlusterFS Version: 4.1 Status: NEW Component: core Assignee: bugs at gluster.org Reporter: hgowtham at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Tracker bug for 6.3 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Jun 8 15:00:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 08 Jun 2019 15:00:25 +0000 Subject: [Bugs] [Bug 1718555] (glusterfs-6.3) - GlusterFS 6.3 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718555 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22843 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Jun 8 15:00:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 08 Jun 2019 15:00:26 +0000 Subject: [Bugs] [Bug 1718555] (glusterfs-6.3) - GlusterFS 6.3 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718555 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22843 (doc: Added release notes for 6.3) posted (#1) for review on release-6 by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat Jun 8 16:34:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 08 Jun 2019 16:34:10 +0000 Subject: [Bugs] [Bug 1718562] New: flock failure (regression) Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718562 Bug ID: 1718562 Summary: flock failure (regression) Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: locks Severity: urgent Assignee: bugs at gluster.org Reporter: jaco at uls.co.za CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: After a small number of flock rounds the lock remains behind indefinitely until cleared with volume clear-locks, after which normal operation resumes. I suspect this happens when there is contention on the lock. I've got a setup where these locks are used as a synchronization mechanism. So a process on host A will take the lock, and release it on shutdown, at which point another host is likely already trying to obtain the lock, and never manages to do so (clearing granted locks allows the lock to proceed, but randomly clearing locks is a high-risk operation). Version-Release number of selected component (if applicable): glusterfs 6.1 (confirmed working correctly on 3.12.3 and 4.0.2, suspected correct on 4.1.5 but I no longer have a setup with 4.1.5 around). How reproducible: Trivial. In the mentioned application it happens on almost every single lock attempt as far as I can determine. Steps to Reproduce: morpheus ~ # gluster volume info shared Volume Name: shared Type: Replicate Volume ID: a4410662-b6e0-4ed0-b1e0-a1cbf168029c Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: morpheus:/mnt/gluster/shared Brick2: r2d2:/mnt/gluster/shared Options Reconfigured: transport.address-family: inet nfs.disable: on morpheus ~ # mkdir /mnt/t morpheus ~ # mount -t glusterfs localhost:shared /mnt/t morpheus ~ # r2d2 ~ # mkdir /mnt/t r2d2 ~ # mount -t glusterfs localhost:shared /mnt/t r2d2 ~ # morpheus ~ # cd /mnt/t/ morpheus ~ # ls -l total 0 morpheus /mnt/t # exec 3>lockfile; c=0; while flock -w 10 -x 3; do (( c++ )); echo "Iteration $c passed"; exec 3<&-; exec 3>lockfile; done; echo "Failed after $c iterations"; exec 3<&- Iteration 1 passed Iteration 2 passed Iteration 3 passed ... r2d2 /mnt/t # exec 3>lockfile; c=0; while flock -w 10 -x 3; do (( c++ )); echo "Iteration $c passed"; exec 3<&-; exec 3>lockfile; done; echo "Failed after $c iterations"; exec 3<&- Iteration 1 passed Iteration 2 passed Failed after 2 iterations r2d2 /mnt/t # Iteration 100 passed Iteration 101 passed Iteration 102 passed Failed after 102 iterations morpheus /mnt/t # The two mounts failed at the same time; morpheus just passed more iterations due to being started first. When iterating on only one host I had to stop it with ^C at around 10k iterations, which to me is sufficient indication that the failure is contention related. After the above failure, I need to either rm the file, after which it works again, or I need to issue "gluster volume clear-locks shared /lockfile kind granted posix" On /tmp on my local machine I can run as many invocations of the loop above as I want without issues (ext4 filesystem). On glusterfs 3.12.3 and 4.0.2 I tried the above too, and stopped them after 10k iterations. I have not observed the behaviour on glusterfs 4.1.5, which we used for a very long time. I either need a fix for this, or a way (preferably with little to no downtime; total around 1.8TB of data) to downgrade glusterfs back to 4.1.X. Or a way to get around this reliably from within my application code (mostly control scripts written in bash).
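One possible script-side mitigation, given the reporter's observation that removing the lock file restores normal locking. This is a hedged sketch only -- untested against 6.1, and deleting a lock file that another process may still hold narrows rather than removes the race; the path is illustrative.

    #!/bin/bash
    # Bounded lock acquisition: on timeout, assume the stale-lock condition
    # described above, recreate the lock file and retry a few times.
    LOCKFILE=/mnt/t/lockfile    # illustrative path
    acquire() {
        local tries=0
        while [ "$tries" -lt 5 ]; do
            exec 3>"$LOCKFILE"
            flock -w 10 -x 3 && return 0
            exec 3<&-              # close the fd from the failed attempt
            rm -f "$LOCKFILE"      # per the report, rm makes locking work again
            tries=$((tries + 1))
        done
        return 1
    }
    release() { exec 3<&-; }

    acquire || { echo "could not obtain lock" >&2; exit 1; }
    # ... critical section ...
    release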
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat Jun 8 05:47:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 08 Jun 2019 05:47:06 +0000 Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-05-22 13:11:42 |2019-06-09 00:31:15 --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/22798 (ec/fini: Fix race between xlator cleanup and on going async fop) merged (#12) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sun Jun 9 05:34:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 09 Jun 2019 05:34:35 +0000 Subject: [Bugs] [Bug 1703007] The telnet or something would cause high memory usage for glusterd & glusterfsd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703007 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |INSUFFICIENT_DATA Last Closed| |2019-06-09 05:34:35 --- Comment #3 from Atin Mukherjee --- Closing this as we haven't received sufficient information. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sun Jun 9 05:36:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 09 Jun 2019 05:36:26 +0000 Subject: [Bugs] [Bug 1658733] tests/bugs/glusterd/rebalance-operations-in-single-node.t is failing in brick mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1658733 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |amukherj at redhat.com Resolution|--- |WORKSFORME Last Closed| |2019-06-09 05:36:26 --- Comment #5 from Atin Mukherjee --- I do not see a reason to keep this bug open given we don't see this test failing any longer in recent regressions. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sun Jun 9 05:37:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 09 Jun 2019 05:37:29 +0000 Subject: [Bugs] [Bug 1662178] Compilation fails for xlators/mgmt/glusterd/src with error "undefined reference to `dlclose'" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662178 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(vnosov at stonefly.c | |om) --- Comment #1 from Atin Mukherjee --- Does this still happen? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Sun Jun 9 05:38:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 09 Jun 2019 05:38:13 +0000 Subject: [Bugs] [Bug 1668245] gluster(8) - Man page - create gluster example session In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668245 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(skandark at redhat.c | |om) --- Comment #1 from Atin Mukherjee --- Ping! Any work being done on this? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Jun 9 05:39:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 09 Jun 2019 05:39:11 +0000 Subject: [Bugs] [Bug 1679892] assertion failure log in glusterd.log file when a volume start is triggered In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679892 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |srakonde at redhat.com Flags| |needinfo?(srakonde at redhat.c | |om) --- Comment #3 from Atin Mukherjee --- Ping! Any progress on this? Is this still seen with latest master? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Jun 9 10:51:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 09 Jun 2019 10:51:30 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22844 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Jun 9 10:51:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 09 Jun 2019 10:51:32 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #686 from Worker Ant --- REVIEW: https://review.gluster.org/22844 ([WIP]multiple files: another attempt to remove includes) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Jun 9 17:29:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 09 Jun 2019 17:29:06 -0000 Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696136 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22517 (features/shard: Fix extra unref when inode object is lru'd out and added back) merged (#6) on master by Xavi Hernandez -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 10 04:21:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 04:21:37 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22845 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 10 04:21:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 04:21:38 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #687 from Worker Ant --- REVIEW: https://review.gluster.org/22845 (tests: keep glfsxmp in tests directory) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 10 05:33:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 05:33:01 +0000 Subject: [Bugs] [Bug 1716790] geo-rep: Rename with same destination name test case occasionally fails on EC Volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716790 Shwetha K Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 10 06:03:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:03:49 +0000 Subject: [Bugs] [Bug 1718555] (glusterfs-6.3) - GlusterFS 6.3 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718555 hari gowtham changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Tracking Version|4.1 |6 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 10 06:07:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:07:35 +0000 Subject: [Bugs] [Bug 1693693] GlusterFS 4.1.9 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693693 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-10 06:07:35 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22826 (doc: Added release notes for 4.1.9) merged (#2) on release-4.1 by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 06:17:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:17:59 +0000 Subject: [Bugs] [Bug 1718734] New: Memory leak in glusterfsd process Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 Bug ID: 1718734 Summary: Memory leak in glusterfsd process Product: GlusterFS Version: 5 Hardware: mips64 OS: Linux Status: NEW Component: disperse Severity: urgent Assignee: bugs at gluster.org Reporter: abhishpaliwal at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1578935 --> https://bugzilla.redhat.com/attachment.cgi?id=1578935&action=edit Script to see the memory leak Description of problem: We are seeing a memory leak in the glusterfsd process when writing and deleting a specific file at regular intervals. Version-Release number of selected component (if applicable): Glusterfs 5.4 How reproducible: Here are the setup details and the test we are running: One client, two gluster servers. The client writes and deletes one file every 15 minutes via the script test_v4.15.sh. IP Server side: 128.224.98.157 /gluster/gv0/ 128.224.98.159 /gluster/gv0/ Client side: 128.224.98.160 /gluster_mount/ Server side: gluster volume create gv0 replica 2 128.224.98.157:/gluster/gv0/ 128.224.98.159:/gluster/gv0/ force gluster volume start gv0 root at 128:/tmp/brick/gv0# gluster volume info Volume Name: gv0 Type: Replicate Volume ID: 7105a475-5929-4d60-ba23-be57445d97b5 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: 128.224.98.157:/gluster/gv0 Brick2: 128.224.98.159:/gluster/gv0 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off exec script: ./ps_mem.py -p 605 -w 61 > log root at 128:/# ./ps_mem.py -p 605 Private + Shared = RAM used Program 23668.0 KiB + 1188.0 KiB = 24856.0 KiB glusterfsd --------------------------------- 24856.0 KiB ================================= Client side: mount -t glusterfs -o acl -o resolve-gids 128.224.98.157:gv0 /gluster_mount We are using the script below (test_v4.15.sh) to write and delete the file. We also use the script below (ps_mem.py) to watch the memory increase while the above script is running in the background. I am attaching the script files as well as the results obtained after testing the scenario. Actual results: Memory leak is present Expected results: Leak should not be there Additional info: Please see the attached files for more details; also attaching the statedumps -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 06:19:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:19:20 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #1 from Abhishek --- Created attachment 1578937 --> https://bugzilla.redhat.com/attachment.cgi?id=1578937&action=edit logs -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Mon Jun 10 06:19:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:19:41 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #2 from Abhishek --- Created attachment 1578938 --> https://bugzilla.redhat.com/attachment.cgi?id=1578938&action=edit logs -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 06:20:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:20:16 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #3 from Abhishek --- Created attachment 1578939 --> https://bugzilla.redhat.com/attachment.cgi?id=1578939&action=edit Script to write and delete file in gluster mount point -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 06:23:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:23:06 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #4 from Abhishek --- Created attachment 1578940 --> https://bugzilla.redhat.com/attachment.cgi?id=1578940&action=edit Statedumps -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 06:23:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:23:49 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #5 from Abhishek --- Created attachment 1578941 --> https://bugzilla.redhat.com/attachment.cgi?id=1578941&action=edit Graph for the memory increase -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 06:24:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:24:50 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #6 from Abhishek --- Attached are some statedumps taken on the GlusterFS server: an initial dump, then one from an hour or so later, then one from another 3 hours or so. I believe we have looked at the statedumps before and not seen any evidence there of what is going wrong, but please double-check this. I have a system running with gluster 5.4 set up in replicate mode. So an active and a passive server, and one client that has mounted the gluster volume and simply writes and deletes a file on the gluster volume every 15 minutes. That is all that is going on. What we see is that the memory usage of the glusterfsd process is increasing slowly in a linear fashion. I am running a python script every minute to log the memory usage, and then plot the result on a graph. I attach the graph showing glusterfsd private, shared and total memory usage over time (some 78 days running). I also attach two screenshots from 'top' taken at various stages.
This is the graph during the one-file-every-15-minutes write test: Please see the attachment for image.png And BTW, if we 'hammer' the gluster volume with file writes/deletes in a much faster fashion, i.e. many files written/deleted every second, or even every minute, we see that the glusterfsd memory usage increases only for a very short period, then it levels off and stays level forever at around 35MB total. So there is clearly something different happening in the 'slow' file access case, where the total is at nearly 200MB and still increasing. If we run Valgrind, we see that memory allocations are freed up when the process ends, but the problem we have is that this will be on a system where gluster is up and running all the time. So there seems to be a problem that memory is dynamically allocated each time there is a write/read on the gluster volume, but it is not dynamically freed at runtime. The worry is that at some point glusterfsd will completely use up all the memory on the system - it might take a long time, but this is not acceptable. My steps are here: root at board0:/tmp# gluster --version glusterfs 5.4 root at board0:/tmp# gluster volume status gv0 Status of volume: gv0 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick board0:/export/sdb1/brick 49152 0 Y 1702 Brick board1:/export/sdb1/brick 49152 0 Y 1652 Self-heal Daemon on localhost N/A N/A Y 1725 Self-heal Daemon on board1 N/A N/A Y 1675 Task Status of Volume gv0 ------------------------------------------------------------------------------ There are no active volume tasks root at board0:/tmp# jobs [1]+ Running ./ps_mem.py -w 61 > /tmp/ps_mem.log & (wd: ~) root at board0:/tmp# ps -ef | grep gluster root 1608 1 0 May08 ? 00:00:04 /usr/sbin/glusterd -p /var/run/glusterd.pid root 1702 1 0 May08 ? 00:00:14 /usr/sbin/glusterfsd -s board0 --volfile-id gv0.board0.export-sdb1-brick -p /var/run/gluster/vols/gv0/board0-export-sdb1-brick.pid -S /var/run/gluster/6c09da8ec6e017c8.socket --brick-name /export/sdb1/brick -l /var/log/glusterfs/bricks/export-sdb1-brick.log --xlator-option *-posix.glusterd-uuid=336dc4a8-1371-4366-b2f9-003c35e12ca1 --process-name brick --brick-port 49152 --xlator-option gv0-server.listen-port=49152 root 1725 1 0 May08 ? 00:00:03 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/7ae70daf2745f7d4.socket --xlator-option replicate.node-uuid=336dc4a8-1371-4366-b2f9-003c35e12ca1 --process-name glustershd root 3115 1241 0 03:00 ttyS0 00:00:00 grep gluster This is the cmd used to create the gluster volume: gluster volume create gv0 replica 2 board0:/export/sdb1/brick board1:/export/sdb1/brick And on the client I do: mount -t glusterfs board0:gv0 /mnt and then just run the one-file-every-15-min test: ./test_v4.sh To get the data, I run this after some time: grep glusterfsd ps_mem.log | awk '{ print $1 "," $4 "," $7 }' > gluster54-glusterfsd.csv Then plot the points in Excel -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
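The sampling the reporter describes can also be done without ps_mem.py. A minimal sketch follows; it records the resident set size of a brick process once a minute for later plotting. It is cruder than ps_mem.py (no private/shared split) and assumes only standard /proc semantics; the output file name is illustrative.

    #!/bin/bash
    # Log VmRSS of the glusterfsd brick process to CSV once per minute.
    PID=$(pgrep -o -x glusterfsd)    # oldest glusterfsd; adjust if several bricks run
    echo "epoch,vmrss_kb" > glusterfsd-rss.csv
    while kill -0 "$PID" 2>/dev/null; do
        rss=$(awk '/^VmRSS:/ {print $2}' "/proc/$PID/status")
        echo "$(date +%s),$rss" >> glusterfsd-rss.csv
        sleep 60
    done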
From bugzilla at redhat.com Mon Jun 10 06:32:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:32:06 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |nbalacha at redhat.com Component|disperse |core -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 06:43:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 06:43:04 +0000 Subject: [Bugs] [Bug 1718741] New: GlusterFS having high CPU Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718741 Bug ID: 1718741 Summary: GlusterFS having high CPU Product: GlusterFS Version: 4.1 Status: NEW Component: glusterfind Assignee: bugs at gluster.org Reporter: suresh3.mani at gmail.com QA Contact: bugs at gluster.org CC: avishwan at redhat.com, bugs at gluster.org, khiremat at redhat.com Target Milestone: --- Classification: Community Description of problem: GlusterFS is experiencing high CPU usage and memory outages. Version-Release number of selected component (if applicable): Gluster 4.1.5, Red Hat 7.4 Please help identify what the issue might be. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 07:10:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 07:10:33 +0000 Subject: [Bugs] [Bug 1717824] Fencing: Added the tcmu-runner ALUA feature support but after one of node is rebooted the glfs_file_lock() get stucked In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717824 --- Comment #4 from Susant Kumar Palai --- Just a small update: there are cases where fop_wind_count can go negative. A basic fix would be to never decrement its value when it is already zero. I will post more on this later, as I am busy with a few other issues at the moment. Susant -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 10 07:28:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 07:28:37 +0000 Subject: [Bugs] [Bug 1715422] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715422 Vivek Das changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1696809 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 10 07:28:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 07:28:43 +0000 Subject: [Bugs] [Bug 1715422] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715422 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: Auto pm_ack for | |dev&qe approved in-flight | |RHGS3.5 BZs Rule Engine Rule| |665 Target Release|--- |RHGS 3.5.0 Rule Engine Rule| |666 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 10 09:00:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 09:00:30 +0000 Subject: [Bugs] [Bug 1718562] flock failure (regression) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718562 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com, | |jthottan at redhat.com, | |spalai at redhat.com --- Comment #1 from Amar Tumballi --- Hi Jaco, thanks for the report. Will update on this soon. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 10 09:00:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 09:00:44 +0000 Subject: [Bugs] [Bug 1718562] flock failure (regression) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718562 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 10 09:18:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 09:18:12 +0000 Subject: [Bugs] [Bug 1718562] flock failure (regression) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718562 --- Comment #2 from Yaniv Kaul --- Looks like a simple test worth adding to our CI. We can do with 1000 iterations or so. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 10 09:18:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 09:18:37 +0000 Subject: [Bugs] [Bug 1718555] (glusterfs-6.3) - GlusterFS 6.3 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718555 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |urgent Alias| |glusterfs-6.3 Severity|unspecified |urgent -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 10 09:18:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 09:18:52 +0000 Subject: [Bugs] [Bug 1718555] GlusterFS 6.3 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718555 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|(glusterfs-6.3) - GlusterFS |GlusterFS 6.3 tracker |6.3 tracker | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
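Following up on comment #2 on bug 1718562 above, the reproducer lends itself to a regression test. Below is a rough sketch in the style of the tests/*.t harness, assuming the TEST macro, include.rc/volume.rc helpers and the $H0/$B0/$V0/$M0/$CLI conventions used elsewhere in the tree; iteration count per the comment.

    #!/bin/bash
    # Sketch only: bounded flock loop on a replica-2 fuse mount.
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup;
    TEST glusterd
    TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0,1} force
    TEST $CLI volume start $V0
    TEST glusterfs -s $H0 --volfile-id $V0 $M0

    flock_loop () {
        local c=0
        while [ $c -lt 1000 ]; do
            exec 3>$M0/lockfile
            flock -w 10 -x 3 || return 1    # any timeout fails the test
            exec 3<&-
            c=$((c + 1))
        done
        return 0
    }
    TEST flock_loop
    cleanup;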
From bugzilla at redhat.com Mon Jun 10 09:52:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 09:52:41 +0000 Subject: [Bugs] [Bug 1678640] Running 'control-cpu-load.sh' prevents CTDB starting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678640 ryan at magenta.tv changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(ryan at magenta.tv) | --- Comment #3 from ryan at magenta.tv --- Hello, We are currently working around this issue with the configuration option you suggested, 'CTDB_NOSETSCHED=yes', and I can confirm CTDB starts successfully with this. Best, Ryan -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 10 10:04:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 10:04:06 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22847 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 10:04:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 10:04:07 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #688 from Worker Ant --- REVIEW: https://review.gluster.org/22847 (tests: added cleanup for lock files) posted (#1) for review on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 11:26:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 11:26:04 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1700865 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1700865 [Bug 1700865] FUSE mount seems to be hung and not accessible -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 11:26:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 11:26:04 +0000 Subject: [Bugs] [Bug 1679892] assertion failure log in glusterd.log file when a volume start is triggered In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679892 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |DUPLICATE Flags|needinfo?(srakonde at redhat.c | |om) | Last Closed| |2019-06-10 11:26:04 --- Comment #4 from Sanju --- https://review.gluster.org/#/c/glusterfs/+/22600/ has removed this assert condition, so we don't see this assertion failure in the log now. Susant is working on this issue and https://bugzilla.redhat.com/show_bug.cgi?id=1700865 is tracking it. So, I'm closing this bug. 
Thanks, Sanju *** This bug has been marked as a duplicate of bug 1700865 *** -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 10 11:26:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 11:26:05 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1679892, which changed state. Bug 1679892 Summary: assertion failure log in glusterd.log file when a volume start is triggered https://bugzilla.redhat.com/show_bug.cgi?id=1679892 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |DUPLICATE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 12:02:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 12:02:25 +0000 Subject: [Bugs] [Bug 1718848] New: False positive logging of mount failure Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718848 Bug ID: 1718848 Summary: False positive logging of mount failure Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Even on successful mount attempts we are seeing false positive logging: [2019-06-09 18:18:16.079101] E [MSGID: 106176] [glusterd-handshake.c:1038:__server_getspec] 0-management: Failed to mount the volume I believe rsp.op_ret will not be 0 every time, even for successful mounts. This needs to be cross-checked to understand why we're logging this unnecessarily. This log entry was caught while running the subdir-mount.t test. Version-Release number of selected component (if applicable): mainline How reproducible: always Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 12:03:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 12:03:34 +0000 Subject: [Bugs] [Bug 1718848] False positive logging of mount failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718848 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |srakonde at redhat.com --- Comment #1 from Atin Mukherjee --- d42221bec9 (Sanju Rakonde 2019-05-08 07:58:27 +0530 1036) if (rsp.op_ret) d42221bec9 (Sanju Rakonde 2019-05-08 07:58:27 +0530 1037) gf_msg(this->name, GF_LOG_ERROR, 0, GD_MSG_MOUNT_REQ_FAIL, d42221bec9 (Sanju Rakonde 2019-05-08 07:58:27 +0530 1038) "Failed to mount the volume") This was introduced by commit d42221bec9. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
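A quick way to check whether a given glusterd run hit the false positive described in bug 1718848 above is to grep the daemon log for the message (the standard glusterd log location is assumed here):

    # by message ID
    grep 'MSGID: 106176' /var/log/glusterfs/glusterd.log
    # or by the message text itself
    grep 'Failed to mount the volume' /var/log/glusterfs/glusterd.log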
From bugzilla at redhat.com Mon Jun 10 12:18:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 12:18:36 +0000 Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged CC| |amukherj at redhat.com, | |pgurusid at redhat.com Flags| |needinfo?(pgurusid at redhat.c | |om) --- Comment #2 from Atin Mukherjee --- The type GF_TRANSPORT_BOTH_TCP_RDMA isn't handled in the function. Poornima - was this done intentionally, or is it a bug? I feel it's the latter. Looking at glusterd_get_dummy_client_filepath() we just need to club GF_TRANSPORT_TCP & GF_TRANSPORT_BOTH_TCP_RDMA together in the same place. Please confirm. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 14:25:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 14:25:35 +0000 Subject: [Bugs] [Bug 1680085] OS X clients disconnect from SMB mount points In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1680085 ryan at magenta.tv changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(ryan at magenta.tv) | --- Comment #4 from ryan at magenta.tv --- Hi Anoop, I'm currently unable to test this due to issues found on bug 1716440. Best, Ryan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 14:27:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 14:27:01 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #689 from Worker Ant --- REVIEW: https://review.gluster.org/22847 (tests: added cleanup for lock files) merged (#1) on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 14:48:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 14:48:46 +0000 Subject: [Bugs] [Bug 1717819] Changes to self-heal logic w.r.t. detecting metadata split-brains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717819 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-10 14:48:46 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22831 (Cluster/afr: Don't treat all bricks having metadata pending as split-brain) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 10 17:17:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 17:17:57 +0000 Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |srakonde at redhat.com --- Comment #3 from Sanju --- Looking at the code, I feel we missed handling GF_TRANSPORT_BOTH_TCP_RDMA. As we have provided the choice to create a volume using tcp,rdma, we should handle GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile(). This issue exists in the latest master too. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 10 17:45:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 17:45:29 +0000 Subject: [Bugs] [Bug 1718998] New: Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718998 Bug ID: 1718998 Summary: Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure Product: GlusterFS Version: mainline Status: ASSIGNED Component: replicate Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Test case "split-brain-favorite-child-policy.t" is failing to heal files within the $HEAL_TIMEOUT. Failure log: 20:00:59 ok 132, LINENUM:194 20:00:59 not ok 133 Got "2" instead of "^0$", LINENUM:195 20:00:59 FAILED COMMAND: ^0$ get_pending_heal_count patchy 20:00:59 ok 134, LINENUM:197 20:00:59 ok 135, LINENUM:199 20:00:59 ok 136, LINENUM:201 20:00:59 Failed 1/136 subtests -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 10 18:15:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 18:15:21 +0000 Subject: [Bugs] [Bug 1718998] Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718998 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22850 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 10 18:15:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 18:15:22 +0000 Subject: [Bugs] [Bug 1718998] Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718998 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22850 (tests: Fix split-brain-favorite-child-policy.t failure) posted (#1) for review on master by Karthik U S -- You are receiving this mail because: You are on the CC list for the bug. 
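For context on bug 1716812 above: the failing operation, going only by the bug title, is a volume create that requests both transports. The command shape is roughly as follows; host names and brick paths are illustrative, not taken from the report.

    # Volume create with both transports (the operation the bug title describes)
    gluster volume create rdmavol transport tcp,rdma \
        server1:/bricks/rdmavol server2:/bricks/rdmavol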
From bugzilla at redhat.com Mon Jun 10 23:28:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 10 Jun 2019 23:28:56 +0000 Subject: [Bugs] [Bug 1662178] Compilation fails for xlators/mgmt/glusterd/src with error "undefined reference to `dlclose'" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662178 vnosov changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(vnosov at stonefly.c | |om) | --- Comment #2 from vnosov --- Installation of GlusterFS 6.2 does not have this problem. Viktor. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 11 04:23:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 04:23:36 +0000 Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Version|4.1 |mainline -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 11 04:25:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 04:25:24 +0000 Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22851 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 11 04:25:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 04:25:25 +0000 Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22851 (glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 11 04:30:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 04:30:15 +0000 Subject: [Bugs] [Bug 1718998] Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718998 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-11 04:30:15 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22850 (tests: Fix split-brain-favorite-child-policy.t failure) merged (#1) on master by Karthik U S -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Jun 11 05:23:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 05:23:17 +0000 Subject: [Bugs] [Bug 1678640] Running 'control-cpu-load.sh' prevents CTDB starting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678640 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-06-11 05:23:17 --- Comment #4 from Anoop C S --- Closing the bug report as per confirmation in comment #3. See comment #2 for details. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 05:51:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 05:51:39 +0000 Subject: [Bugs] [Bug 1719112] New: Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719112 Bug ID: 1719112 Summary: Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: ASSIGNED Component: replicate Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com QA Contact: nchilaka at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1718998 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1718998 +++ Description of problem: Test case "split-brain-favorite-child-policy.t" is failing to heal files within the $HEAL_TIMEOUT. Failure log: 20:00:59 ok 132, LINENUM:194 20:00:59 not ok 133 Got "2" instead of "^0$", LINENUM:195 20:00:59 FAILED COMMAND: ^0$ get_pending_heal_count patchy 20:00:59 ok 134, LINENUM:197 20:00:59 ok 135, LINENUM:199 20:00:59 ok 136, LINENUM:201 20:00:59 Failed 1/136 subtests Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1718998 [Bug 1718998] Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 05:51:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 05:51:39 +0000 Subject: [Bugs] [Bug 1718998] Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718998 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1719112 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1719112 [Bug 1719112] Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 05:51:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 05:51:40 +0000 Subject: [Bugs] [Bug 1719112] Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719112 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Jun 11 05:58:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 05:58:59 +0000 Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696136 Krutika Dhananjay changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 06:06:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 06:06:02 +0000 Subject: [Bugs] [Bug 1719112] Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719112 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 07:52:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 07:52:08 +0000 Subject: [Bugs] [Bug 1716626] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716626 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0-4 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 07:53:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 07:53:11 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0-5 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 08:00:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 08:00:51 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 errata-xmlrpc changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |ON_QA -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 08:00:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 08:00:58 +0000 Subject: [Bugs] [Bug 1716626] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716626 errata-xmlrpc changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |ON_QA -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Jun 11 08:50:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 08:50:43 +0000 Subject: [Bugs] [Bug 1719174] New: broken regression link? Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719174 Bug ID: 1719174 Summary: broken regression link? Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: The regression job for one of my patches failed at https://build.gluster.org/job/centos7-regression/6404/consoleFull with no indication of which test failed. While accessing the link, the following pops up: Problem accessing //job/centos7-regression/6404/consoleFull. Reason: Not found Version-Release number of selected component (if applicable): mainline How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 11 09:10:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 09:10:41 +0000 Subject: [Bugs] [Bug 1719112] Fix test case "tests/basic/afr/split-brain-favorite-child-policy.t" failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719112 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |amukherj at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-06-11 09:10:41 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Jun 11 11:44:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 11:44:37 +0000 Subject: [Bugs] [Bug 1718555] GlusterFS 6.3 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718555 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-11 11:44:37 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22843 (doc: Added release notes for 6.3) merged (#3) on release-6 by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 11 11:47:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 11:47:39 +0000 Subject: [Bugs] [Bug 1717757] BItrot: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 david.spisla at iternity.com changed: What |Removed |Added ---------------------------------------------------------------------------- Version|5 |mainline -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. 
From bugzilla at redhat.com Tue Jun 11 12:36:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 12:36:24 +0000 Subject: [Bugs] [Bug 1719290] New: Glusterfs mount helper script not working with IPv6 because of regular expression or man is wrong Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719290 Bug ID: 1719290 Summary: Glusterfs mount helper script not working with IPv6 because of regular expression or man is wrong Product: GlusterFS Version: 5 Hardware: All OS: Linux Status: NEW Component: glusterd Severity: high Assignee: bugs at gluster.org Reporter: aga_1990 at hotmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Version-Release number of selected component (if applicable): 5.6 Description of problem: Based on man glusterfs I think this format should work: mount -t glusterfs [-o <options>] <server1>,<server2>,<server3>,..:/<volname>[/<subdir>] <mountpoint> So you list the servers and use ',' as the separator, but as far as I can see this unfortunately doesn't work with IPv6 addresses. Command to try: mount -t glusterfs fd00:16::106,fd00:16::107:/oam /mnt/test1/ In the log I see the following: [2019-06-11 12:25:16.453643] I [MSGID: 100030] [glusterfsd.c:2725:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.6 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=fd00:16: --volfile-id=oam /mnt/test1) So the IPv6 address is wrong. Possible solution: In this file: /usr/sbin/mount.glusterfs the following line is wrong (near line 720): server_ip=$(echo "$volfile_loc" | sed -n 's/\([a-zA-Z0-9:%.\-]*\):.*/\1/p'); you should use this: server_ip=$(echo "$volfile_loc" | sed -n 's/\([a-zA-Z0-9:%,.\-]*\):.*/\1/p'); This is because the ',' is missing from the regex character class after the '%'. After I applied this patch, the relevant lines from the log are: [2019-06-11 12:33:40.345671] I [MSGID: 100030] [glusterfsd.c:2725:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.6 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=fd00:16::106 --volfile-server=fd00:16::107 --volfile-id=/oam /mnt/test1) The IPs now look correct and the mount point is OK. How reproducible: Mount command provided above -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 11 13:28:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 13:28:03 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #690 from Worker Ant --- REVIEW: https://review.gluster.org/22845 (tests: keep glfsxmp in tests directory) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 11 13:28:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 13:28:51 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22796 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
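The effect of the missing comma in the character class (bug 1719290 above) is easy to see by running both sed expressions over the reporter's server list:

    # Original pattern: the comma ends the character-class match, and
    # backtracking truncates the captured server list
    $ echo 'fd00:16::106,fd00:16::107:/oam' | sed -n 's/\([a-zA-Z0-9:%.\-]*\):.*/\1/p'
    fd00:16:
    # Patched pattern: ',' is part of the class, so the full list is captured
    $ echo 'fd00:16::106,fd00:16::107:/oam' | sed -n 's/\([a-zA-Z0-9:%,.\-]*\):.*/\1/p'
    fd00:16::106,fd00:16::107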
From bugzilla at redhat.com Tue Jun 11 13:28:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 13:28:52 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #691 from Worker Ant --- REVIEW: https://review.gluster.org/22796 (libglusterfs: cleanup iovec functions) merged (#12) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 11 14:25:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 14:25:18 +0000 Subject: [Bugs] [Bug 1716626] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716626 nchilaka changed: What |Removed |Added ---------------------------------------------------------------------------- QA Contact|nchilaka at redhat.com |anepatel at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 15:51:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 15:51:47 +0000 Subject: [Bugs] [Bug 1678640] Running 'control-cpu-load.sh' prevents CTDB starting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678640 ryan at magenta.tv changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(anoopcs at redhat.co | |m) --- Comment #5 from ryan at magenta.tv --- Is there any way to undo the changes made by the script to allow real-time scheduling? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 11 16:40:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 11 Jun 2019 16:40:46 +0000 Subject: [Bugs] [Bug 1719388] New: infra: download.gluster.org /var/www/html/... is out of free space Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719388 Bug ID: 1719388 Summary: infra: download.gluster.org /var/www/html/... is out of free space Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: kkeithle at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 04:17:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 04:17:48 +0000 Subject: [Bugs] [Bug 1709248] [geo-rep]: Non-root - Unable to set up mountbroker root directory and group In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709248 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-12 04:17:48 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22716 (geo-rep : fix mountbroker setup) merged (#13) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Jun 12 05:32:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 05:32:37 +0000 Subject: [Bugs] [Bug 1678640] Running 'control-cpu-load.sh' prevents CTDB starting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678640 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(anoopcs at redhat.co | |m) | --- Comment #6 from Anoop C S --- (In reply to ryan from comment #5) > Is there any way to undo the changes made by the script to allow real-time > scheduling? If you are concerned about a performance penalty from disabling real-time scheduling for CTDB, I don't think it will be noticeable. If not, you are good to go with the current setting. Otherwise, you may have to ask whoever came up with this script for more details about it. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 05:46:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 05:46:52 +0000 Subject: [Bugs] [Bug 1716440] SMBD thread panics when connected to from OS X machine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716440 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ryan at magenta.tv) --- Comment #4 from Anoop C S --- (In reply to ryan from comment #2)
> Hi Anoop,
>
> Thanks for getting back to me.
> I've tried your suggestion but unfortunately the issue still remains. Here
> is my updated smb.conf:
>
> [global]
> security = user
> netbios name = NAS01
> clustering = no

How many nodes are there in your cluster? For a Samba cluster with more than one node it is recommended to run CTDB with the clustering parameter set to 'yes'.

> server signing = no
>
> max log size = 10000
> log file = /var/log/samba/log-%M-test.smbd
> logging = file
> log level = 10
>
> passdb backend = tdbsam
> guest account = nobody
> map to guest = bad user
>
> force directory mode = 0777
> force create mode = 0777
> create mask = 0777
> directory mask = 0777
>
> store dos attributes = yes
>
> load printers = no
> printing = bsd
> printcap name = /dev/null
> disable spoolss = yes
>
> glusterfs:volfile_server = localhost
> ea support = yes
> fruit:aapl = yes
> kernel share modes = No
>
> [VFS]
> vfs objects = fruit streams_xattr glusterfs
> fruit:encoding = native
> glusterfs:volume = mcv02
> path = /
> read only = no
> guest ok = yes
>
> This time when creating a new folder at the root of the share, it creates,
> then disappears, sometimes coming back, sometimes not.
> When I was able to traverse into a sub-folder, the same error is received.

Can you re-check after restarting the services with the 'posix locking' parameter set to 'no' in the [global] section of smb.conf? -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug.
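As a sketch of how to apply and verify the suggestion from bug 1716440 above (assuming the --parameter-name option is available in the installed Samba's testparm):

    # After adding 'posix locking = no' to the [global] section of smb.conf,
    # confirm the value Samba actually loads:
    testparm -s --parameter-name="posix locking"
    # Ask running smbd processes to pick up the change without a full restart:
    smbcontrol all reload-config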
From bugzilla at redhat.com Wed Jun 12 06:33:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 06:33:47 +0000 Subject: [Bugs] [Bug 1629877] GlusterFS can be improved (clone for Gluster-5) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1629877 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22855 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 06:33:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 06:33:48 +0000 Subject: [Bugs] [Bug 1629877] GlusterFS can be improved (clone for Gluster-5) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1629877 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|CURRENTRELEASE |--- Keywords| |Reopened --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22855 (tests/utils: Fix py2/py3 util python scripts) posted (#1) for review on release-5 by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 09:00:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 09:00:12 +0000 Subject: [Bugs] [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717953 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-12 09:00:12 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22834 (extras/hooks: Add SELinux label on new bricks during add-brick) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 09:00:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 09:00:12 +0000 Subject: [Bugs] [Bug 1718227] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718227 Bug 1718227 depends on bug 1717953, which changed state. Bug 1717953 Summary: SELinux context labels are missing for newly added bricks using add-brick command https://bugzilla.redhat.com/show_bug.cgi?id=1717953 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
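For context on bug 1717953 above, the hook script automates what would otherwise be a manual SELinux labelling step on each new brick; roughly like this sketch (the brick path is hypothetical; glusterd_brick_t is the brick context type in the Fedora/RHEL policy):

    # Persistently label a newly added brick directory for gluster
    semanage fcontext -a -t glusterd_brick_t "/bricks/brick2(/.*)?"
    # Apply the new context to the existing tree
    restorecon -Rv /bricks/brick2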
From bugzilla at redhat.com Wed Jun 12 10:18:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 10:18:16 +0000 Subject: [Bugs] [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717953 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #3 from Anoop C S --- Re-opening, as the previous patch failed to install and package the new hook script. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 10:18:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 10:18:17 +0000 Subject: [Bugs] [Bug 1718227] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718227 Bug 1718227 depends on bug 1717953, which changed state. Bug 1717953 Summary: SELinux context labels are missing for newly added bricks using add-brick command https://bugzilla.redhat.com/show_bug.cgi?id=1717953 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 10:24:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 10:24:14 +0000 Subject: [Bugs] [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717953 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22856 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 10:24:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 10:24:15 +0000 Subject: [Bugs] [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717953 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22856 (extras/hooks: Install and package newly added post add-brick hook script) posted (#1) for review on master by Anoop C S -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 10:54:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 10:54:39 +0000 Subject: [Bugs] [Bug 1489325] Place to host gerritstats In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1489325 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(mscherer at redhat.c | |om) | --- Comment #5 from M. Scherer --- I would prefer it in the cage, on a separate VM (I already started the playbook). As for the stats and the end results, that should be up to the reporter, not me.
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 10:57:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 10:57:36 +0000 Subject: [Bugs] [Bug 1713391] Access to wordpress instance of gluster.org required for release management In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713391 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #0 is|1 |0 private| | Flags|needinfo?(mscherer at redhat.c |needinfo?(dkhandel at redhat.c |om) |om) --- Comment #3 from M. Scherer --- Deepshika, what kind of info is needed? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 11:02:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 11:02:21 +0000 Subject: [Bugs] [Bug 1504713] Move planet build to be triggered by Jenkins In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1504713 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(mscherer at redhat.c | |om) | --- Comment #3 from M. Scherer --- Nope, nothing changed. That's kinda a lower priority, since the system works well enough most of the time and has a rather low impact. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 11:38:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 11:38:34 +0000 Subject: [Bugs] [Bug 1713391] Access to wordpress instance of gluster.org required for release management In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713391 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(dkhandel at redhat.c | |om) | --- Comment #4 from Deepshikha khandelwal --- Misc, I've no idea how I can give access to the wordpress instance. It would be great if there were any documentation on this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 11:44:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 11:44:27 +0000 Subject: [Bugs] [Bug 1716440] SMBD thread panics when connected to from OS X machine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716440 ryan at magenta.tv changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(ryan at magenta.tv) | --- Comment #5 from ryan at magenta.tv --- Hi Anoop, Usually we have 2 in our development cluster, however for testing I stopped the CTDB services on one node and performed the test. I didn't stop the services on the second node, though. After repeating the test with both stopped, I can't re-create the issue. Was there something in the logs about CTDB? Best, Ryan -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed Jun 12 11:50:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 11:50:25 +0000 Subject: [Bugs] [Bug 1716455] OS X error -50 when creating sub-folder on Samba share when using Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716455 --- Comment #1 from ryan at magenta.tv --- The issue exists on Gluster 5.6 and 6, and can also be reproduced with this smb.conf:

[global]
security = user
netbios name = NAS01
clustering = no
server signing = no

max log size = 10000
log file = /var/log/samba/log-%M-test.smbd
logging = file
log level = 2

passdb backend = tdbsam
guest account = nobody
map to guest = bad user

force directory mode = 0777
force create mode = 0777
create mask = 0777
directory mask = 0777

store dos attributes = yes

load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes

#posix locking = no
glusterfs:volfile_server = localhost
ea support = yes
fruit:aapl = yes
kernel share modes = No

[VFS]
vfs objects = fruit streams_xattr glusterfs
fruit:encoding = native
glusterfs:volume = mcv02
path = /
read only = no
guest ok = yes

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 11:50:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 11:50:43 +0000 Subject: [Bugs] [Bug 1716455] OS X error -50 when creating sub-folder on Samba share when using Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716455 ryan at magenta.tv changed: What |Removed |Added ---------------------------------------------------------------------------- Version|6 |5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 12:09:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 12:09:44 +0000 Subject: [Bugs] [Bug 1711950] Account in download.gluster.org to upload the build packages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711950 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(sacharya at redhat.c | |om) --- Comment #2 from M. Scherer --- Ok, so before opening an account, I would like to discuss the plan for automating that. I am kinda uneasy about the fact that we are still doing everything manually (especially after the nfs ganesha issue that we found internally), and while I personally do not have the resources nor the time to automate it (it was on the TODO list, but after Nigel's departure and the migration to AWS, this was pushed down the line), I would like to take this opportunity to first discuss that, and then open the account. In that order, because experience shows that the reverse order rarely results in any action (curiously, folks listen to me more when they are waiting on me for something, so I hope folks will excuse me for that obvious blackmail, but it should be quick). So, how long would it take to automate the release from Jenkins to download.gluster, and who would be dedicated to it on the gluster side? (Once we agree on a deadline, I will create an account that expires automatically after that time, just to make sure we do not leave a gaping hole open.) -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jun 12 12:16:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 12:16:11 +0000 Subject: [Bugs] [Bug 1348072] Backups for Gerrit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1348072 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(mscherer at redhat.c | |om) | --- Comment #6 from M. Scherer --- I need to review the current status. I know we have database backups, but backups are only one half of the coin; we also need to test the recovery end to end. And testing the recovery requires being able to automate the installation of Gerrit, something that was cautiously delayed due to the criticality of the service (aka, it is still not managed by ansible). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 12:17:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 12:17:18 +0000 Subject: [Bugs] [Bug 1489417] Gerrit shouldn't offer http or git for code download In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1489417 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(mscherer at redhat.c | |om) | --- Comment #5 from M. Scherer --- Dunno, I think Nigel had a specific plan for this, but that's not on my radar. I would however keep it open so we do not forget, until the more urgent stuff is done (or until we get more resources, which would have the side effect of fixing the more urgent stuff). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 12:53:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 12:53:29 +0000 Subject: [Bugs] [Bug 1716440] SMBD thread panics when connected to from OS X machine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716440 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ryan at magenta.tv) --- Comment #6 from Anoop C S --- (In reply to ryan from comment #5)
> Hi Anoop,
>
> Usually we have 2 in our development cluster, however for testing I stopped
> the CTDB services on one node and performed the test.

May I ask why? Also, in that case, how were you accessing the server from the Mac client machine? Using the public IP available on the node (where CTDB is running) or the direct node IP?

> I didn't stop the services on the second node however.
> After repeating the testing with both stopped, I can't re-create the issue.

Now, when CTDB is stopped on both nodes, you must have accessed the shares using the node IP.

> Was there something in the logs about CTDB?

You cannot expect CTDB logging in a client-specific smbd log file; CTDB logs entries in /var/log/log.ctdb. My gut feeling is that the behaviour you are facing is due to a lack of synchronized TDBs across the nodes in the cluster, which is one of the reasons why we run CTDB in a cluster. Therefore I would suggest you run CTDB on both nodes and access the cluster using public IPs, after making sure that the cluster is in a HEALTHY state. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug.
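Following Anoop's advice in bug 1716440 above, a minimal sketch for confirming the CTDB cluster is healthy before re-testing, using the standard ctdb tools on any cluster node:

    # Overall cluster state; every node should report OK/HEALTHY
    ctdb status
    # Show which node currently hosts each public IP address
    ctdb ip
    # Watch CTDB's own log (the path cited in the comment above)
    tail -f /var/log/log.ctdb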
From bugzilla at redhat.com Wed Jun 12 14:21:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 14:21:38 +0000 Subject: [Bugs] [Bug 1719778] New: build fails for every patch on release 5 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719778 Bug ID: 1719778 Summary: build fails for every patch on release 5 Product: GlusterFS Version: 5 Status: NEW Component: core Severity: high Assignee: bugs at gluster.org Reporter: hgowtham at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: The smoke test started to fail on the release 5 branch. Version-Release number of selected component (if applicable): release 5 How reproducible: It has been happening to all release 5 patches lately; even the ones that passed earlier have started to fail. Steps to Reproduce: 1. 2. 3. Actual results: The smoke test fails Expected results: The smoke test has to pass Additional info: The link to the build failure: https://build.gluster.org/job/strfmt_errors/18888/artifact/RPMS/el6/i686/build.log logs: Mock Version: 1.4.16 ENTER ['do_with_status'](['bash', '--login', '-c', '/usr/bin/rpmbuild -bs --target i686 --nodeps /builddir/build/SPECS/glusterfs.spec'], chrootPath='/var/lib/mock/epel-6-i386/root'env={'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8'}shell=Falselogger=timeout=0uid=0gid=135user='mockbuild'nspawn_args=[]unshare_net=TrueprintOutput=False) Executing command: ['bash', '--login', '-c', '/usr/bin/rpmbuild -bs --target i686 --nodeps /builddir/build/SPECS/glusterfs.spec'] with env {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8'} and shell False BUILDSTDERR: warning: Could not canonicalize hostname: builder39.int.rht.gluster.org Building target platforms: i686 Building for target i686 Wrote: /builddir/build/SRPMS/glusterfs-5.6-0.7.gitb71e7b3.el6.src.rpm Child return code was: 0 ENTER ['do_with_status'](['bash', '--login', '-c', '/usr/bin/rpmbuild -bb --target i686 --nodeps /builddir/build/SPECS/glusterfs.spec'], chrootPath='/var/lib/mock/epel-6-i386/root'env={'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8'}shell=Falselogger=timeout=0uid=0gid=135user='mockbuild'nspawn_args=[]unshare_net=TrueprintOutput=False) Executing command: ['bash', '--login', '-c', '/usr/bin/rpmbuild -bb --target i686 --nodeps /builddir/build/SPECS/glusterfs.spec'] with env {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8'} and shell False Building target platforms: i686 Building for target i686 Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.xkCF2q + umask 022 + cd /builddir/build/BUILD + LANG=C + export LANG + unset DISPLAY + cd /builddir/build/BUILD + rm -rf glusterfs-5.6 + /usr/bin/gzip -dc /builddir/build/SOURCES/glusterfs-5.6.tar.gz + /bin/tar -xf - + STATUS=0 + '[' 0 -ne 0 ']' + cd glusterfs-5.6 + /bin/chmod -Rf a+rX,u+w,g-w,o-w . + echo 'fixing python shebangs...' fixing python shebangs...
+ for f in api events extras geo-replication libglusterfs tools xlators + find api -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' '{}' ';' + for f in api events extras geo-replication libglusterfs tools xlators + find events -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' '{}' ';' + for f in api events extras geo-replication libglusterfs tools xlators + find extras -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' '{}' ';' + for f in api events extras geo-replication libglusterfs tools xlators + find geo-replication -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' '{}' ';' + for f in api events extras geo-replication libglusterfs tools xlators + find libglusterfs -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' '{}' ';' + for f in api events extras geo-replication libglusterfs tools xlators + find tools -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' '{}' ';' + for f in api events extras geo-replication libglusterfs tools xlators + find xlators -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' '{}' ';' + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.kGZI5V + umask 022 + cd /builddir/build/BUILD + cd glusterfs-5.6 + LANG=C + export LANG + unset DISPLAY + ./autogen.sh ... GlusterFS autogen ... Running aclocal... Running autoheader... Running libtoolize... Running autoconf... Running automake... BUILDSTDERR: configure.ac:30: required file `xlators/features/glupy/Makefile.in' not found BUILDSTDERR: configure.ac:30: required file `xlators/features/glupy/examples/Makefile.in' not found BUILDSTDERR: configure.ac:30: required file `xlators/features/glupy/src/Makefile.in' not found BUILDSTDERR: configure.ac:30: required file `xlators/features/glupy/src/setup.py.in' not found BUILDSTDERR: configure.ac:30: required file `xlators/features/glupy/src/__init__.py.in' not found BUILDSTDERR: configure.ac:30: required file `xlators/features/glupy/src/glupy/Makefile.in' not found Please proceed with configuring, compiling, and installing. + CFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i686 -mtune=atom -fasynchronous-unwind-tables' + export CFLAGS + CXXFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i686 -mtune=atom -fasynchronous-unwind-tables' + export CXXFLAGS + FFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i686 -mtune=atom -fasynchronous-unwind-tables -I/usr/lib/gfortran/modules' + export FFLAGS + ./configure --build=i686-redhat-linux-gnu --host=i686-redhat-linux-gnu --target=i686-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --without-tmpfilesdir --disable-events --disable-georeplication --without-ocf --without-server --disable-syslog --disable-tiering --without-libtirpc checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking how to create a pax tar archive... gnutar checking build system type... i686-redhat-linux-gnu checking host system type... 
i686-redhat-linux-gnu checking for i686-redhat-linux-gnu-gcc... no checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for a sed that does not truncate output... /bin/sed checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for fgrep... /bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking for /usr/bin/ld option to reload object files... -r checking for i686-redhat-linux-gnu-objdump... no checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for i686-redhat-linux-gnu-ar... no checking for ar... ar checking for i686-redhat-linux-gnu-strip... no checking for strip... strip checking for i686-redhat-linux-gnu-ranlib... no checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... no checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no checking for rpcgen... yes checking for /etc/centos-release... yes CentOS release 6.10 (Final) checking for dlopen in -ldl... yes checking for flex... flex checking lex output file root... lex.yy checking lex library... none needed checking whether yytext is a pointer... no checking size of short... 2 checking size of int... 4 checking size of long... 4 checking size of long long... 8 checking for bison... bison -y checking for i686-redhat-linux-gnu-ld... /usr/bin/ld checking for MD5 in -lcrypto... yes checking for pthread_mutex_init in -lpthread... yes checking for dlopen... yes checking for rl_do_undo in -lreadline... yes checking for gettext in -lintl... 
no checking sys/xattr.h usability... yes checking sys/xattr.h presence... yes checking for sys/xattr.h... yes checking sys/ioctl.h usability... yes checking sys/ioctl.h presence... yes checking for sys/ioctl.h... yes checking sys/extattr.h usability... no checking sys/extattr.h presence... no checking for sys/extattr.h... no checking openssl/dh.h usability... yes checking openssl/dh.h presence... yes checking for openssl/dh.h... yes checking openssl/ecdh.h usability... yes checking openssl/ecdh.h presence... yes checking for openssl/ecdh.h... yes checking for pow in -lm... yes checking for i686-redhat-linux-gnu-pkg-config... no checking for pkg-config... /usr/bin/pkg-config checking pkg-config is at least version 0.9.0... yes checking for UUID... yes checking for uuid.h... yes checking sys/acl.h usability... yes checking sys/acl.h presence... yes checking for sys/acl.h... yes checking acl/libacl.h usability... yes checking acl/libacl.h presence... yes checking for acl/libacl.h... yes checking openssl/md5.h usability... yes checking openssl/md5.h presence... yes checking for openssl/md5.h... yes checking for adler32 in -lz... yes checking linux/falloc.h usability... yes checking linux/falloc.h presence... yes checking for linux/falloc.h... yes checking linux/oom.h usability... yes checking linux/oom.h presence... yes checking for linux/oom.h... yes checking for pthread_spin_init... yes checking for strnlen... yes checking for setfsuid... yes checking for setfsgid... yes checking for umount2... yes checking whether /usr/bin/python2 version >= 2.6... yes checking for /usr/bin/python2 version... 2.6 checking for /usr/bin/python2 platform... linux2 checking for /usr/bin/python2 script directory... ${prefix}/lib/python2.6/site-packages checking for /usr/bin/python2 extension module directory... ${exec_prefix}/lib/python2.6/site-packages checking for TLS_method in -lssl... no checking for TLSv1_2_method in -lssl... yes checking openssl/cmac.h usability... yes checking openssl/cmac.h presence... yes checking for openssl/cmac.h... yes checking sys/epoll.h usability... yes checking sys/epoll.h presence... yes checking for sys/epoll.h... yes checking for ibv_get_device_list in -libverbs... yes checking for rdma_create_id in -lrdmacm... yes checking whether RDMA_OPTION_ID_REUSEADDR is declared... yes checking for ZLIB... yes features requiring zlib enabled: yes checking for XML... yes checking for curl_easy_setopt in -lcurl... yes checking openssl/hmac.h usability... yes checking openssl/hmac.h presence... yes checking for openssl/hmac.h... yes checking openssl/evp.h usability... yes checking openssl/evp.h presence... yes checking for openssl/evp.h... yes checking openssl/bio.h usability... yes checking openssl/bio.h presence... yes checking for openssl/bio.h... yes checking openssl/buffer.h usability... yes checking openssl/buffer.h presence... yes checking for openssl/buffer.h... yes checking execinfo.h usability... yes checking execinfo.h presence... yes checking for execinfo.h... yes checking for malloc_stats... yes checking for struct stat.st_atim.tv_nsec... yes checking for struct stat.st_atimespec.tv_nsec... no checking for linkat... yes checking for clock_gettime in -lrt... yes checking argp.h usability... yes checking argp.h presence... yes checking for argp.h... yes checking for gcc __atomic builtins... no checking for gcc __sync builtins... yes checking malloc.h usability... yes checking malloc.h presence... yes checking for malloc.h... yes checking for llistxattr... 
yes checking for fdatasync... yes checking for fallocate... yes checking for posix_fallocate... yes checking for utimensat... yes checking whether SEEK_HOLE is declared... no checking for /etc/debian_version... no checking for /etc/SuSE-release... no checking for /etc/redhat-release... yes checking whether gcc accepts -Werror=format-security... yes checking whether gcc accepts -Werror=implicit-function-declaration... yes checking if compiling with clang... no checking for readline in -lreadline -lcurses... yes checking for readline in -lreadline -ltermcap... yes checking for readline in -lreadline -lncurses... yes checking for io_setup in -laio... yes building glupy with -isystem -isystem /usr/include/python2.6 -I/usr/include/python2.6 -lpthread -ldl -lutil -lm -lpython2.6 checking for URCU... yes checking for URCU_CDS... no checking for URCU_CDS... yes configure: creating ./config.status config.status: creating Makefile config.status: creating libglusterfs/Makefile config.status: creating libglusterfs/src/Makefile config.status: creating libglusterfs/src/gfdb/Makefile config.status: creating geo-replication/src/peer_gsec_create config.status: creating geo-replication/src/peer_mountbroker config.status: creating geo-replication/src/peer_mountbroker.py config.status: creating geo-replication/src/peer_georep-sshkey.py config.status: creating extras/peer_add_secret_pub config.status: creating geo-replication/syncdaemon/conf.py config.status: creating geo-replication/gsyncd.conf config.status: creating extras/snap_scheduler/conf.py config.status: creating glusterfsd/Makefile config.status: creating glusterfsd/src/Makefile config.status: creating rpc/Makefile config.status: creating rpc/rpc-lib/Makefile config.status: creating rpc/rpc-lib/src/Makefile config.status: creating rpc/rpc-transport/Makefile config.status: creating rpc/rpc-transport/socket/Makefile config.status: creating rpc/rpc-transport/socket/src/Makefile config.status: creating rpc/rpc-transport/rdma/Makefile config.status: creating rpc/rpc-transport/rdma/src/Makefile config.status: creating rpc/xdr/Makefile config.status: creating rpc/xdr/src/Makefile config.status: creating rpc/xdr/gen/Makefile config.status: creating xlators/Makefile config.status: creating xlators/meta/Makefile config.status: creating xlators/meta/src/Makefile config.status: creating xlators/mount/Makefile config.status: creating xlators/mount/fuse/Makefile config.status: creating xlators/mount/fuse/src/Makefile config.status: creating xlators/mount/fuse/utils/mount.glusterfs config.status: creating xlators/mount/fuse/utils/mount_glusterfs config.status: creating xlators/mount/fuse/utils/Makefile config.status: creating xlators/storage/Makefile config.status: creating xlators/storage/posix/Makefile config.status: creating xlators/storage/posix/src/Makefile config.status: creating xlators/storage/bd/Makefile config.status: creating xlators/storage/bd/src/Makefile config.status: creating xlators/cluster/Makefile config.status: creating xlators/cluster/afr/Makefile config.status: creating xlators/cluster/afr/src/Makefile config.status: creating xlators/cluster/stripe/Makefile config.status: creating xlators/cluster/stripe/src/Makefile config.status: creating xlators/cluster/dht/Makefile config.status: creating xlators/cluster/dht/src/Makefile config.status: creating xlators/cluster/ec/Makefile config.status: creating xlators/cluster/ec/src/Makefile config.status: creating xlators/performance/Makefile config.status: creating 
xlators/performance/write-behind/Makefile config.status: creating xlators/performance/write-behind/src/Makefile config.status: creating xlators/performance/read-ahead/Makefile config.status: creating xlators/performance/read-ahead/src/Makefile config.status: creating xlators/performance/readdir-ahead/Makefile config.status: creating xlators/performance/readdir-ahead/src/Makefile config.status: creating xlators/performance/io-threads/Makefile config.status: creating xlators/performance/io-threads/src/Makefile config.status: creating xlators/performance/io-cache/Makefile config.status: creating xlators/performance/io-cache/src/Makefile config.status: creating xlators/performance/symlink-cache/Makefile config.status: creating xlators/performance/symlink-cache/src/Makefile config.status: creating xlators/performance/quick-read/Makefile config.status: creating xlators/performance/quick-read/src/Makefile config.status: creating xlators/performance/open-behind/Makefile config.status: creating xlators/performance/open-behind/src/Makefile config.status: creating xlators/performance/md-cache/Makefile config.status: creating xlators/performance/md-cache/src/Makefile config.status: creating xlators/performance/decompounder/Makefile config.status: creating xlators/performance/decompounder/src/Makefile config.status: creating xlators/performance/nl-cache/Makefile config.status: creating xlators/performance/nl-cache/src/Makefile config.status: creating xlators/debug/Makefile config.status: creating xlators/debug/sink/Makefile config.status: creating xlators/debug/sink/src/Makefile config.status: creating xlators/debug/trace/Makefile config.status: creating xlators/debug/trace/src/Makefile config.status: creating xlators/debug/error-gen/Makefile config.status: creating xlators/debug/error-gen/src/Makefile config.status: creating xlators/debug/delay-gen/Makefile config.status: creating xlators/debug/delay-gen/src/Makefile config.status: creating xlators/debug/io-stats/Makefile config.status: creating xlators/debug/io-stats/src/Makefile config.status: creating xlators/protocol/Makefile config.status: creating xlators/protocol/auth/Makefile config.status: creating xlators/protocol/auth/addr/Makefile config.status: creating xlators/protocol/auth/addr/src/Makefile config.status: creating xlators/protocol/auth/login/Makefile config.status: creating xlators/protocol/auth/login/src/Makefile config.status: creating xlators/protocol/client/Makefile config.status: creating xlators/protocol/client/src/Makefile config.status: creating xlators/protocol/server/Makefile config.status: creating xlators/protocol/server/src/Makefile config.status: creating xlators/features/Makefile config.status: creating xlators/features/arbiter/Makefile config.status: creating xlators/features/arbiter/src/Makefile config.status: creating xlators/features/thin-arbiter/Makefile config.status: creating xlators/features/thin-arbiter/src/Makefile config.status: creating xlators/features/changelog/Makefile config.status: creating xlators/features/changelog/src/Makefile config.status: creating xlators/features/changelog/lib/Makefile config.status: creating xlators/features/changelog/lib/src/Makefile config.status: creating xlators/features/changetimerecorder/Makefile config.status: creating xlators/features/changetimerecorder/src/Makefile BUILDSTDERR: config.status: error: cannot find input file: xlators/features/glupy/Makefile.in RPM build errors: BUILDSTDERR: error: Bad exit status from /var/tmp/rpm-tmp.kGZI5V (%build) BUILDSTDERR: Bad exit 
status from /var/tmp/rpm-tmp.kGZI5V (%build) Child return code was: 1 EXCEPTION: [Error()] Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 96, in trace result = func(*args, **kw) File "/usr/lib/python3.6/site-packages/mockbuild/util.py", line 736, in do_with_status raise exception.Error("Command failed: \n # %s\n%s" % (command, output), child.returncode) mockbuild.exception.Error: Command failed: # bash --login -c /usr/bin/rpmbuild -bb --target i686 --nodeps /builddir/build/SPECS/glusterfs.spec -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 14:53:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 14:53:12 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #7 from Abhishek --- Created attachment 1579854 --> https://bugzilla.redhat.com/attachment.cgi?id=1579854&action=edit statedumps log for 6.1 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 14:54:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 14:54:43 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #8 from Abhishek --- Hi, As requested, we have tested the same on GlusterFS 6.1 and the memory leak is present there as well. Please check the "statedumps log for 6.1" attachment for more details. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:08 +0000 Subject: [Bugs] [Bug 764805] Change documentation for AMI appliance to add port 80 as a requirement In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=764805 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:09 +0000 Subject: [Bugs] [Bug 765500] nfs.mem-factor should be documented In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=765500 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug.
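For leak reports like bug 1718734 above, statedumps are usually captured at intervals and compared; a minimal sketch (the volume name is hypothetical, and the dump directory shown is the common default):

    # Trigger a statedump of the brick processes for a volume
    gluster volume statedump myvol
    # Dumps typically land under /var/run/gluster as <brick>.<pid>.dump.<timestamp>
    ls -lt /var/run/gluster/*.dump.* | head
    # Compare allocation counters between dumps taken some hours apart
    grep -c num_allocs /var/run/gluster/*.dump.*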
From bugzilla at redhat.com Wed Jun 12 21:30:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:10 +0000 Subject: [Bugs] [Bug 784780] Unclear/inconsistent iptables examples In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=784780 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:12 +0000 Subject: [Bugs] [Bug 790173] Copying the .pem info for ssh geo-replication instructions should use ssh-copy-id command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=790173 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:13 +0000 Subject: [Bugs] [Bug 790174] Documentation needs to suggest syncing all gluster nodes to NTP In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=790174 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:14 +0000 Subject: [Bugs] [Bug 790178] Add note for brick creation that states XFS inode size should be 512 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=790178 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:17 -0000 Subject: [Bugs] [Bug 798795] Confusing docs for auth.allow and auth.reject In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=798795 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:17 +0000 Subject: [Bugs] [Bug 803372] 9.5. Rebalancing Volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=803372 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Jun 12 21:30:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:18 +0000 Subject: [Bugs] [Bug 814809] Users don't understand that you cannot write to bricks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=814809 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:19 +0000 Subject: [Bugs] [Bug 816182] [ Sec 7.1.2.3 - Testing Mounted Volumes ] df output needs to be formatted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=816182 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:20 +0000 Subject: [Bugs] [Bug 818902] Confusing example of distributed striped volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=818902 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:21 +0000 Subject: [Bugs] [Bug 819469] Geo-replication doesn't support syncing of mknod and pipe files . In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=819469 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:22 +0000 Subject: [Bugs] [Bug 820292] Installation Guide dependency list needs updating In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=820292 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:24 +0000 Subject: [Bugs] [Bug 841524] subdirectory nfs mount on solaris In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=841524 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Jun 12 21:30:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:25 +0000 Subject: [Bugs] [Bug 848809] Gluster File System 3.3.0 Administration PDF - Heal command incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=848809 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:27 +0000 Subject: [Bugs] [Bug 852556] Need information on retiring bricks/nodes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=852556 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:28 +0000 Subject: [Bugs] [Bug 865696] Publish GlusterFS 3.3 Administration Guide HTML files on a server instead of HTML tarball In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=865696 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:29 +0000 Subject: [Bugs] [Bug 906238] glusterfs client hang when parallel operate the same dir In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=906238 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:35 +0000 Subject: [Bugs] [Bug 950761] forge.gluster.org - No MetaData Pages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=950761 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:36 +0000 Subject: [Bugs] [Bug 951469] Gluster_File_System Administration_Guide-en-US.pdf cluster.quorum-type, log-level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=951469 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Jun 12 21:30:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:37 +0000 Subject: [Bugs] [Bug 951473] cluster.min-free-disk In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=951473 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:41 +0000 Subject: [Bugs] [Bug 1067291] Description of the port numbers are incorrect in Troubleshooting page In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1067291 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:55 +0000 Subject: [Bugs] [Bug 1101757] Out of date instructions for getting started guide In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1101757 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:58 +0000 Subject: [Bugs] [Bug 1135548] Error in quick start: start volume and specify mount point In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1135548 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:30:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:30:59 +0000 Subject: [Bugs] [Bug 1138992] gluster.org broken links In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1138992 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:31:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:31:01 +0000 Subject: [Bugs] [Bug 1154098] Bad debian sources.list configuration in multiarch context In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1154098 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Jun 12 21:31:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:31:02 +0000 Subject: [Bugs] [Bug 1157462] Dead Link - Missing Documentation In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1157462 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:31:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:31:03 +0000 Subject: [Bugs] [Bug 1213061] Guide for setting up GlusterFS with SSL/ TLS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1213061 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 12 21:31:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 12 Jun 2019 21:31:07 +0000 Subject: [Bugs] [Bug 1276483] Unprivileged account used for geo-replication needs access to SSL/TLS private key when using TLS on the Management Path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1276483 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|pyaduvan at redhat.com |scarpent at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 03:14:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 03:14:18 +0000 Subject: [Bugs] [Bug 1712668] Remove-brick shows warning cluster.force-migration enabled where as cluster.force-migration is disabled on the volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712668 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-13 03:14:18 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22805 (cli: Remove-brick warning seems unnecessary) merged (#4) on master by Shwetha K Acharya -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 05:28:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 05:28:12 +0000 Subject: [Bugs] [Bug 1716626] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716626 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rkavunga at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Jun 13 05:29:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 05:29:12 +0000 Subject: [Bugs] [Bug 1716097] infra: create suse-packing@lists.nfs-ganesha.org alias In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716097 Marc Dequènes (Duck) changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |duck at redhat.com --- Comment #2 from Marc Dequènes (Duck) --- Quack, I created the list with Kaleb as owner. Now Kaleb can set up the list of admins and moderators as well as the list description. Please be careful to avoid manual modifications to the infra. I saw while deploying that suse-packaging@ had been manually added to the aliases file for Postfix. We do regular updates of the infra to fix and improve various things, and these changes are going to be overwritten and the expected feature lost. \_o< -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 07:45:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 07:45:54 +0000 Subject: [Bugs] [Bug 1304465] dnscache in libglusterfs returns 127.0.0.1 for 1st non-localhost request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1304465 --- Comment #2 from Rinku --- Glusterfs version used for testing: v6.3 Servers used for testing: Hostname: GlusterNode1.com, IP: 192.168.1.10 Hostname: GlusterNode2.com, IP: 192.168.1.20 The following distributed volume was created: # gluster v info devvol Volume Name: devvol Type: Distribute Volume ID: 2b3a4bb0-f8c2-40e5-b5f5-61b9e89ace45 Status: Started Snapshot Count: 0 Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: 192.168.1.10:/testxfs/brick-a1/brick Brick2: 192.168.1.20:/testxfs/brick-b1/brick Options Reconfigured: transport.address-family: inet nfs.disable: on Ran the following command from the client: # mount -t glusterfs -o backup-volfile-servers=GlusterNode2.com,log-level=DEBUG GlusterNode1.com:/devvol /mnt1 Result: Was successfully able to mount the volume. Logs: [2019-06-13 07:25:32.159993] D [MSGID: 0] [common-utils.c:532:gf_resolve_ip6] 0-resolver: returning ip-192.168.1.10 (port-24007) for hostname: GlusterNode1.com and port: 24007 . . . [2019-06-13 07:25:36.391372] D [MSGID: 0] [common-utils.c:532:gf_resolve_ip6] 0-resolver: returning ip-192.168.1.20 (port-24007) for hostname: GlusterNode2.com and port: 24007 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 08:27:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 08:27:05 +0000 Subject: [Bugs] [Bug 1707081] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707081 Upasana changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1704851 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1704851 [Bug 1704851] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
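A usage note on the verification of bug 1304465 above: the backup-volfile-servers mount option shown in Rinku's test can also be made persistent via /etc/fstab. A minimal sketch, reusing the hostnames, volume, and mount point from the test (the DEBUG log level is only useful while verifying):

# /etc/fstab entry for the devvol volume, with a fallback volfile server
GlusterNode1.com:/devvol  /mnt1  glusterfs  defaults,_netdev,backup-volfile-servers=GlusterNode2.com,log-level=DEBUG  0 0

If the primary server is unreachable at mount time, the mount helper fetches the volfile from the backup server instead, which is the fallback path this test exercises.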
From bugzilla at redhat.com Thu Jun 13 11:29:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 11:29:02 +0000 Subject: [Bugs] [Bug 1711950] Account in download.gluster.org to upload the build packages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711950 Shwetha K Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(sacharya at redhat.c |needinfo?(mscherer at redhat.c |om) |om) --- Comment #3 from Shwetha K Acharya --- Hi Misc, We have built the debian packages for glusterfs 6.2 and are waiting for the creation of accounts to upload the packages. https://github.com/gluster/glusterfs/issues/683 is a github issue asking about the reasons for the delay. It would be helpful if we were unblocked soon. Can you do the needful? About automating the procedure, I will initiate a discussion with the team and get back to you. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 11:31:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 11:31:46 +0000 Subject: [Bugs] [Bug 1720201] New: Healing not proceeding during in-service upgrade on a disperse volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720201 Bug ID: 1720201 Summary: Healing not proceeding during in-service upgrade on a disperse volume Product: GlusterFS Version: mainline Status: NEW Component: ctime Keywords: Regression Severity: high Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: amukherj at redhat.com, aspandey at redhat.com, bugs at gluster.org, jahernan at redhat.com, khiremat at redhat.com, kiyer at redhat.com, nchilaka at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, ubansal at redhat.com, vdas at redhat.com Depends On: 1713664 Blocks: 1696809, 1703434, 1704851 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703434 [Bug 1703434] Post inservice upgrade of first node in the cluster, core generated and heal on EC stuck https://bugzilla.redhat.com/show_bug.cgi?id=1704851 [Bug 1704851] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup https://bugzilla.redhat.com/show_bug.cgi?id=1713664 [Bug 1713664] Healing not proceeding during in-service upgrade on a disperse volume -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 13 11:32:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 11:32:08 +0000 Subject: [Bugs] [Bug 1720201] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720201 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #0 is|1 |0 private| | Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Thu Jun 13 11:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 11:33:17 +0000 Subject: [Bugs] [Bug 1720201] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720201 --- Comment #1 from Kotresh HR --- Description of problem: ======================= Was doing an in-service upgrade from 5.x to 6.x on a 6-node setup, with a distributed-dispersed volume and brick mux enabled. Version-Release number of selected component (if applicable): ============================================================= 2 nodes still on 5.x, 4 nodes on 6.x How reproducible: ================ 1/1 Steps to Reproduce: ================== 1. Create a distributed-dispersed volume with brick mux enabled on a 5.x setup (see the sketch after this message) 2. Mount the volume and start the IOs 3. Upgrade 2 nodes at a time and wait for healing to complete -- this completed successfully 4. Upgrade the next 2 nodes and start healing 5. Healing has not progressed for the past 5 hours (150 files have stayed in heal info since then) Actual results: =============== Healing is not completing Expected results: ================ Healing should complete successfully -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 11:34:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 11:34:04 +0000 Subject: [Bugs] [Bug 1720201] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720201 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Hardware|Unspecified |All Blocks|1696809, 1703434, 1704851 | OS|Unspecified |Linux Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703434 [Bug 1703434] Post inservice upgrade of first node in the cluster, core generated and heal on EC stuck https://bugzilla.redhat.com/show_bug.cgi?id=1704851 [Bug 1704851] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 11:37:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 11:37:57 +0000 Subject: [Bugs] [Bug 1720201] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720201 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22858 -- You are receiving this mail because: You are on the CC list for the bug.
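For readers trying to reproduce bug 1720201 above, a hedged sketch of step 1 of the reproducer; the hostnames (n1..n6), brick paths, and volume name are illustrative, not taken from the report:

# enable brick multiplexing cluster-wide (a cluster option, hence "all")
# gluster volume set all cluster.brick-multiplex on
# create a 2 x (4+2) distributed-dispersed volume across six nodes
# gluster volume create dispvol disperse-data 4 redundancy 2 \
    n1:/bricks/b1 n2:/bricks/b1 n3:/bricks/b1 n4:/bricks/b1 n5:/bricks/b1 n6:/bricks/b1 \
    n1:/bricks/b2 n2:/bricks/b2 n3:/bricks/b2 n4:/bricks/b2 n5:/bricks/b2 n6:/bricks/b2
# gluster volume start dispvol

Supplying twelve bricks to a (4+2) disperse layout makes glusterd build two disperse subvolumes and distribute across them, which matches the "distributed-dispersed" layout in the report.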
From bugzilla at redhat.com Thu Jun 13 11:37:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 11:37:59 +0000 Subject: [Bugs] [Bug 1720201] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720201 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22858 (posix/ctime: Fix ctime upgrade issue) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 11:57:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 11:57:17 +0000 Subject: [Bugs] [Bug 1716626] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716626 Anees Patel changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rkavunga at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 11:58:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 11:58:51 +0000 Subject: [Bugs] [Bug 1711950] Account in download.gluster.org to upload the build packages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711950 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(mscherer at redhat.c | |om) | --- Comment #4 from M. Scherer --- Sure, give me a deadline and I will create the account. I mean, I do not even need a precise one. Would you agree on "We do it in 3 months"? In that case I will create the account right now (set to expire at that time). (I need a public ssh key and a username) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 13:08:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 13:08:57 +0000 Subject: [Bugs] [Bug 1711950] Account in download.gluster.org to upload the build packages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711950 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sankarshan at redhat.com Flags| |needinfo?(sankarshan at redhat | |.com) --- Comment #5 from Kaleb KEITHLEY --- (In reply to M. Scherer from comment #2) > Ok so before opening an account, I would like to discuss the plan for > automating that. > I kinda feel uneasy about the fact we are still doing everything manually > (especially after the nfs ganesha issue that we found internally), and while > I do not personally have the resources nor time to automate (was on the TODO > list, but after Nigel's departure and the migration to AWS, this was pushed > down the line), I would like to take on this opportunity to first discuss > that, and then open the account. > > In that order, because experience shows that the reverse order is not > conducive to any action (curiously, folks listen to me more when they wait > on me for something, so I hope folks will excuse me for that obvious > blackmail, but it should be quick)
> > So, how long would it take to automate the release from Jenkins to > download.gluster, and who would be dedicated to it on the gluster side? > (once we agree on a deadline, I will create an account that expires > automatically after that time, just to make sure we do not leave a gaping > hole open) You, Nigel, and I had a discussion about this in Berlin over two years ago, and Nigel was supposed to automate it in Jenkins. Someone like Sankarshan will have to identify a resource for doing the work now. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 13:17:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 13:17:00 +0000 Subject: [Bugs] [Bug 1711950] Account in download.gluster.org to upload the build packages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711950 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(sankarshan at redhat | |.com) | --- Comment #6 from M. Scherer --- Yup, but clearly, as long as someone was doing the job manually, this was set as a lesser priority than a lot of things (like fixing the fires all over the place). The increasing backlog of tasks does not make me think we can do it without someone taking ownership of that, and as you rightfully point out, that's something we have all wanted for more than 2 years :/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 13:25:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 13:25:51 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22859 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 13 13:25:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 13:25:51 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #692 from Worker Ant --- REVIEW: https://review.gluster.org/22859 ([WIP]glusterd.h: remove unneeded macros or move them to their users.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 13 13:43:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 13:43:38 +0000 Subject: [Bugs] [Bug 1718848] False positive logging of mount failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718848 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22860 -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jun 13 13:43:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 13:43:39 +0000 Subject: [Bugs] [Bug 1718848] False positive logging of mount failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718848 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22860 (glusterd: assign zero to ret on successful sys_read()) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 13:46:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 13:46:11 +0000 Subject: [Bugs] [Bug 1710744] [FUSE] Endpoint is not connected after "Found anomalies" error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710744 --- Comment #3 from Pavel Znamensky --- I have caught it again. Is there any workaround? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 13 14:06:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 14:06:59 +0000 Subject: [Bugs] [Bug 1719388] infra: download.gluster.org /var/www/html/... is out of free space In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719388 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #1 from M. Scherer --- Fixed, I have added 2G (only 5G free without more commands). I will now see if there is some missing cleanup, and why I got no nagios alert for that metric :/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 13 14:12:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 14:12:46 +0000 Subject: [Bugs] [Bug 1719388] infra: download.gluster.org /var/www/html/... is out of free space In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719388 --- Comment #2 from M. Scherer --- So: https://download.gluster.org/pub/gluster/glusterfs/nightly/sources/ is taking 8G, and seems unused and no longer up to date. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Thu Jun 13 15:06:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 15:06:10 +0000 Subject: [Bugs] [Bug 1720290] New: ctime changes: tar still complains file changed as we read it if uss is enabled Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720290 Bug ID: 1720290 Summary: ctime changes: tar still complains file changed as we read it if uss is enabled Product: GlusterFS Version: mainline Status: NEW Component: ctime Severity: high Priority: medium Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, jthottan at redhat.com, khiremat at redhat.com, nchilaka at redhat.com, rabhat at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, rkavunga at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vdas at redhat.com Depends On: 1709301 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709301 [Bug 1709301] ctime changes: tar still complains file changed as we read it if uss is enabled -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 13 15:06:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 15:06:37 +0000 Subject: [Bugs] [Bug 1720290] ctime changes: tar still complains file changed as we read it if uss is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720290 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #0 is|1 |0 private| | Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 13 15:21:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 15:21:31 +0000 Subject: [Bugs] [Bug 1719388] infra: download.gluster.org /var/www/html/... is out of free space In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719388 --- Comment #3 from M. Scherer --- ok, so not only is monitoring out (not sure why, it worked when deployed), but / is full: /var/log is taking 5G (why, I don't know; a lot of requests or something, I guess), and with no compression, this filled the server. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 13 15:30:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 15:30:43 +0000 Subject: [Bugs] [Bug 1719388] infra: download.gluster.org /var/www/html/... is out of free space In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719388 --- Comment #4 from M. Scherer --- So, that was a missing package (optional dep) breaking the acl on munin. Not sure how to clear that cleanly, but alerting should work. Now, to clean stuff... -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Thu Jun 13 17:21:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 17:21:20 +0000 Subject: [Bugs] [Bug 1720290] ctime changes: tar still complains file changed as we read it if uss is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720290 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22861 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 13 17:21:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 13 Jun 2019 17:21:21 +0000 Subject: [Bugs] [Bug 1720290] ctime changes: tar still complains file changed as we read it if uss is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720290 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22861 (uss: Fix tar issue with ctime and uss enabled) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 04:17:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 04:17:38 +0000 Subject: [Bugs] [Bug 1720453] New: Unable to access review.gluster.org Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720453 Bug ID: 1720453 Summary: Unable to access review.gluster.org Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Created attachment 1580537 --> https://bugzilla.redhat.com/attachment.cgi?id=1580537&action=edit browser screenshot Description of problem: Not able to access the website; see the attached screenshot. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 04:35:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 04:35:49 +0000 Subject: [Bugs] [Bug 1105277] Failure to execute gverify.sh. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1105277 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |khiremat at redhat.com, | |sacharya at redhat.com Assignee|sarumuga at redhat.com |sunkumar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 04:36:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 04:36:56 +0000 Subject: [Bugs] [Bug 1105277] Failure to execute gverify.sh. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1105277 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sunkumar at redhat.com Assignee|sunkumar at redhat.com |sacharya at redhat.com --- Comment #9 from Amar Tumballi --- vnosov, thanks for all the details here. We missed this in triaging, as it already had the 'Triaged' keyword. We will look into this. -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri Jun 14 05:39:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 05:39:53 +0000 Subject: [Bugs] [Bug 1720463] New: [Thin-arbiter] : Wait for connection with TA node before sending lookup/create of ta-replica id file Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720463 Bug ID: 1720463 Summary: [Thin-arbiter] : Wait for connection with TA node before sending lookup/create of ta-replica id file Product: GlusterFS Version: mainline Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: aspandey at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: When we mount a ta volume, as soon as the 2 data bricks are connected we consider that the mount is done, and we then send a lookup/create for the ta file on the ta node. However, the connection with the ta node might not have been completed yet. Because of this delay, the ta replica id file will not be created and we will see an ENOTCONN error in the log file. Since the ta node could have a higher latency, we should wait a reasonable time for the connection to happen before sending the lookup/create of the replica id file. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 05:40:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 05:40:25 +0000 Subject: [Bugs] [Bug 1720463] [Thin-arbiter] : Wait for connection with TA node before sending lookup/create of ta-replica id file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720463 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |aspandey at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 07:00:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 07:00:03 +0000 Subject: [Bugs] [Bug 1717754] Enable features.locks-notify-contention by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717754 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1720488 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720488 [Bug 1720488] Enable features.locks-notify-contention by default -- You are receiving this mail because: You are on the CC list for the bug.
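For context on bug 1720463 above: a thin-arbiter (ta) volume has two data bricks plus one lightweight arbiter brick that may sit in a remote, higher-latency location, which is why its connection can lag behind the data bricks at mount time. A hedged sketch of creating and mounting such a volume; hostnames and paths are illustrative, and the thin-arbiter CLI syntax should be checked against your release:

# gluster volume create tavol replica 2 thin-arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b1 ta-node:/bricks/ta
# gluster volume start tavol
# mount -t glusterfs server1:/tavol /mnt

With the behaviour described in the report, the client may declare the mount done once server1 and server2 are connected, and then issue the lookup/create of the replica-id file while the ta-node connection is still pending, producing the ENOTCONN log entries.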
From bugzilla at redhat.com Fri Jun 14 07:00:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 07:00:03 +0000 Subject: [Bugs] [Bug 1720488] New: Enable features.locks-notify-contention by default Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720488 Bug ID: 1720488 Summary: Enable features.locks-notify-contention by default Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: locks Assignee: kdhananj at redhat.com Reporter: aspandey at redhat.com QA Contact: rhinduja at redhat.com CC: bugs at gluster.org, jahernan at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com Depends On: 1717754 Target Milestone: --- Classification: Red Hat Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717754 [Bug 1717754] Enable features.locks-notify-contention by default -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 07:00:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 07:00:04 +0000 Subject: [Bugs] [Bug 1720488] Enable features.locks-notify-contention by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720488 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 07:01:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 07:01:10 +0000 Subject: [Bugs] [Bug 1720488] Enable features.locks-notify-contention by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720488 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST Assignee|kdhananj at redhat.com |jahernan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 08:10:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:10:40 +0000 Subject: [Bugs] [Bug 1716440] SMBD thread panics when connected to from OS X machine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716440 ryan at magenta.tv changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(ryan at magenta.tv) | --- Comment #7 from ryan at magenta.tv --- Hi Anoop, Usually when discovering an issue, we try to reduce as many variables as possible whilst still being able to reproduce the issue. For the tests, we use the node's IP address as CTDB is usually disabled when we carry out the testing. The issue was discovered when using the cluster in it's usual configuration, which is using CTDB, Winbind and Samba, and then connecting via the CTDB IP addresses. I can share our usual configuration with you if this helps? Please let me know if I can gather any more info for you. Best, Ryan -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Jun 14 08:22:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:22:12 +0000 Subject: [Bugs] [Bug 1720453] Unable to access review.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720453 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #2 from M. Scherer --- Yeah, there is a DNS issue. I am on it, and I suspect I have found the exact cause. The postmortem will explain whether that was the fix; right now, I am waiting on DNS propagation. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 08:24:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:24:37 +0000 Subject: [Bugs] [Bug 1720557] New: gfapi: provide an option for changing statedump path in glfs-api. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720557 Bug ID: 1720557 Summary: gfapi: provide an option for changing statedump path in glfs-api. Product: GlusterFS Version: 6 Status: NEW Component: core Keywords: FutureFeature Severity: high Priority: high Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: aliang at redhat.com, areis at redhat.com, atumball at redhat.com, berrange at redhat.com, bugs at gluster.org, chayang at redhat.com, coli at redhat.com, ddepaula at redhat.com, jen at redhat.com, juzhang at redhat.com, knoel at redhat.com, lolyu at redhat.com, mrezanin at redhat.com, ndevos at redhat.com, rcyriac at redhat.com, rhel8-maint at redhat.com, rhinduja at redhat.com, sasundar at redhat.com, virt-maint at redhat.com Depends On: 1689097 Blocks: 1447694, 1720461 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1689097 +++ +++ This bug was initially created as a clone of Bug #1586201 +++ TL;DR: We need the ability to control the gluster logging from libvirt/QEMU when using the gluster driver. +++ This bug was initially created as a clone of Bug #1447694 +++ QEMU is unable to store Gluster specific debugging logs under /var/run/gluster/... This is because QEMU runs as user "qemu" and does not have write permissions there. This debugging (called 'statedump') can be triggered from the Gluster CLI which sends an event to QEMU (when libgfapi is used). When the glusterfs packages make sure that a group "gluster" has write permissions to /var/run/gluster/, could the qemu(-kvm-rhev) package be adjusted to have the "qemu" user in the "gluster" group? This would allow more debugging options that help with potential issues when QEMU runs with disk-images over libgfapi.so. --- Additional comment from Worker Ant on 2019-03-15 07:41:40 UTC --- REVIEW: https://review.gluster.org/22364 (gfapi: provide an api for setting statedump path) posted (#1) for review on master by Amar Tumballi --- Additional comment from Amar Tumballi on 2019-06-14 05:28:24 UTC --- The patch is now merged. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1689097 [Bug 1689097] gfapi: provide an option for changing statedump path in glfs-api. https://bugzilla.redhat.com/show_bug.cgi?id=1720461 [Bug 1720461] gfapi: provide an option for changing statedump path in glfs-api. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
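To show how a gfapi consumer such as QEMU could use the change tracked in bug 1720557 above: a minimal C sketch, assuming the API added by review 22364 is named glfs_set_statedump_path() (the patch title only says "an api for setting statedump path"; check api/src/glfs.h of your release for the exact prototype) and that /var/tmp/gluster-dumps is a directory the unprivileged process can write:

#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("devvol");   /* volume name is illustrative */
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "gluster-server.example.com", 24007);

    /* Redirect statedumps away from /var/run/gluster, which a process
     * running as user "qemu" cannot write to. */
    if (glfs_set_statedump_path(fs, "/var/tmp/gluster-dumps") != 0)
        fprintf(stderr, "could not set statedump path\n");

    if (glfs_init(fs) != 0) {
        glfs_fini(fs);
        return 1;
    }

    /* ... regular glfs_open()/glfs_read() I/O; a statedump triggered
     * from the gluster CLI now lands under /var/tmp/gluster-dumps ... */

    glfs_fini(fs);
    return 0;
}

Something like gcc statedump-path.c $(pkg-config --cflags --libs glusterfs-api) should build it on a system with the gfapi development package installed.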
From bugzilla at redhat.com Fri Jun 14 08:26:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:26:00 +0000 Subject: [Bugs] [Bug 1720557] gfapi: provide an option for changing statedump path in glfs-api. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720557 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22864 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 08:26:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:26:01 +0000 Subject: [Bugs] [Bug 1720557] gfapi: provide an option for changing statedump path in glfs-api. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720557 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22864 (gfapi: provide an api for setting statedump path) posted (#1) for review on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 08:31:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:31:29 +0000 Subject: [Bugs] [Bug 1663519] Memory leak when smb.conf has "store dos attributes = yes" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663519 --- Comment #6 from ryan at magenta.tv --- We're seeing this issue on nearly all of our clusters in production. One common factor is the type of application which is using the share. These applications are Media Asset Management tools which either walk the filesystem or listen for file system notifications, and then process the file. We are seeing the issue on systems that have 'store dos attributes = no' set, although the memory usage pattern is very different. With 'store dos attributes = yes', the issue will cause a system with 64GB of memory to OOM within 24hrs. With 'store dos attributes = no' the same system will not OOM for months. The memory usage is slow and gradual, but we still have multiple SMBD threads with over 6GB of RSS memory usage. The sernet/samba team has assisted us in tracing this back through the stack and has confirmed the issue seems to be within the gluster VFS module. Please let me know if I can get any more data, logs etc to progress this issue. Many thanks, Ryan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 08:33:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:33:33 +0000 Subject: [Bugs] [Bug 1720453] Unable to access review.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720453 --- Comment #3 from M. Scherer --- So, I think the root cause is fixed (at least from my perspective), so DNS propagation should occur quickly and fix it for others. Writing the post-mortem at the moment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
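For readers of the vfs_glusterfs leak discussion in bug 1663519 above, the setting in question lives in the Samba share definition. A minimal smb.conf sketch (share and volume names are illustrative); per Ryan's report the leak is dramatic with the option set to yes and much slower with no:

[gvol]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = devvol
    store dos attributes = no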
From bugzilla at redhat.com Fri Jun 14 08:43:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:43:37 +0000 Subject: [Bugs] [Bug 1263231] [RFE]: Gluster should provide "share mode"/"share reservation" support In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1263231 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED Last Closed| |2019-06-14 08:43:37 --- Comment #9 from Amar Tumballi --- This has not been worked on for the last 3 years, and it is not in the plan for immediate work (at least 6 months). We will revisit after a couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 08:46:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:46:31 +0000 Subject: [Bugs] [Bug 1464639] Possible stale read in afr due to un-notified pending xattr change In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1464639 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(pgurusid at redhat.c | |om) --- Comment #3 from Amar Tumballi --- Any updates? Can this be revised, or closed as not relevant? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 08:47:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:47:54 +0000 Subject: [Bugs] [Bug 1467822] replace-brick command should fail if brick to be replaced is source brick for data to be healed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1467822 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(ravishankar at redha | |t.com) --- Comment #1 from Amar Tumballi --- Any update? Looks important... at least to warn the user. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 08:53:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:53:16 +0000 Subject: [Bugs] [Bug 1473968] rda-cache-limit filled with (null) value after use parallel-readdir In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1473968 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com Severity|unspecified |high --- Comment #12 from Amar Tumballi --- Hi Vitaly, Sorry for being late. (That too, late by a year.) Is the issue still present with glusterfs-6.x releases? We haven't heard of it in bugzilla or on the mailing list in quite some time now. If it is not happening for you with later releases, we would like to close the bug. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
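On bug 1473968 above, for anyone re-testing on 6.x: both knobs are per-volume options. A short usage sketch (volume name illustrative):

# gluster volume set devvol performance.parallel-readdir on
# gluster volume set devvol performance.rda-cache-limit 10MB
# gluster volume get devvol performance.rda-cache-limit

The original report was about the rda-cache-limit value turning into (null) after enabling parallel-readdir, so the final get is the interesting check.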
From bugzilla at redhat.com Fri Jun 14 08:54:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:54:38 +0000 Subject: [Bugs] [Bug 1720566] New: [GSS]Can't rebalance GlusterFS volume because unix socket's path name is too long Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720566 Bug ID: 1720566 Summary: [GSS]Can't rebalance GlusterFS volume because unix socket's path name is too long Product: GlusterFS Version: mainline Hardware: All OS: Linux Status: NEW Component: glusterd Severity: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, jpankaja at redhat.com, moagrawa at redhat.com, nbalacha at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1720192 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720192 [Bug 1720192] [GSS]Can't rebalance GlusterFS volume because unix socket's path name is too long -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 08:54:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:54:45 +0000 Subject: [Bugs] [Bug 1473968] rda-cache-limit filled with (null) value after use parallel-readdir In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1473968 --- Comment #13 from Amar Tumballi --- We also fixed many formatting issues with 32bit compiler, and now a CI job runs to validate that none of our patches break 32bit systems. I hope the issues would have got fixed as of th -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 08:54:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:54:57 +0000 Subject: [Bugs] [Bug 1720566] [GSS]Can't rebalance GlusterFS volume because unix socket's path name is too long In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720566 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 08:56:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:56:21 +0000 Subject: [Bugs] [Bug 1489417] Gerrit shouldn't offer http or git for code download In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1489417 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
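Background on the failure mode named in bug 1720566 above: on Linux, the sun_path field of struct sockaddr_un holds only 108 bytes (including the terminating NUL), so a rebalance socket path derived from a long working-directory or volume path can exceed it. A self-contained check, independent of Gluster:

#include <stdio.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un sa;
    /* 108 on Linux; a longer path cannot be bound or connected to. */
    printf("sun_path capacity: %zu bytes\n", sizeof(sa.sun_path));
    return 0;
}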
From bugzilla at redhat.com Fri Jun 14 08:57:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 08:57:53 +0000 Subject: [Bugs] [Bug 1720566] Can't rebalance GlusterFS volume because unix socket's path name is too long In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720566 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|[GSS]Can't rebalance |Can't rebalance GlusterFS |GlusterFS volume because |volume because unix |unix socket's path name is |socket's path name is too |too long |long --- Comment #1 from Mohit Agrawal --- Can't rebalance GlusterFS volume because unix socket's path name is too long -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 09:00:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:00:37 +0000 Subject: [Bugs] [Bug 1501378] Buffer overflow checks is missing in lock xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1501378 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(jthottan at redhat.c | |om) --- Comment #3 from Amar Tumballi --- Is this a valid bug? The patch is abandoned... Xavi's comment says it's not a valid overflow issue. We would like to take it to closure. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 09:12:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:12:13 +0000 Subject: [Bugs] [Bug 1508025] symbol-check.sh is not failing for legitimate reasons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1508025 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #8 from Amar Tumballi --- An update: While testing https://review.gluster.org/22364 I noticed that 0symbol-check failed when I used access() and not sys_access(). But it didn't fail for stat(). So I suspect only the set of stat() functions is missed out. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 09:16:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:16:28 +0000 Subject: [Bugs] [Bug 1529992] glusterfind session creation/deletion is inconsistent In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1529992 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com, | |sunkumar at redhat.com Assignee|bugs at gluster.org |sacharya at redhat.com QA Contact|bugs at gluster.org | Severity|unspecified |medium -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Jun 14 09:33:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:33:02 +0000 Subject: [Bugs] [Bug 1535511] Gluster CLI shouldn't stop if log file couldn't be opened In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1535511 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com Assignee|bugs at gluster.org |atumball at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 09:34:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:34:15 +0000 Subject: [Bugs] [Bug 1535511] Gluster CLI shouldn't stop if log file couldn't be opened In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1535511 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22865 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 09:34:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:34:16 +0000 Subject: [Bugs] [Bug 1535511] Gluster CLI shouldn't stop if log file couldn't be opened In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1535511 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22865 (cli: don't fail if logging initialize fails) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 09:38:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:38:56 +0000 Subject: [Bugs] [Bug 1546932] systemd units does not stop all gluster daemons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546932 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Severity|unspecified |medium Last Closed|2018-03-06 03:55:06 |2019-06-14 09:38:56 --- Comment #9 from Amar Tumballi --- > systemctl start glusterfsd [root at localhost ~]# systemctl start glusterfsd Failed to start glusterfsd.service: Unit glusterfsd.service not found. With the later releases, glusterfsd is not allowed to be started directly; it is always started from glusterd (and that is the recommended way). With that, we are closing the issue. -- You are receiving this mail because: You are on the CC list for the bug.
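A usage note on bug 1535511 above: until the posted patch lands, an unprivileged run of the CLI can be pointed at a writable log location instead of the default under /var/log/glusterfs. A hedged sketch (the --log-file flag is listed in gluster --help on recent releases; the path is illustrative):

$ gluster --log-file=/tmp/cli.log volume info

The patch itself goes further and lets the CLI continue on a failed logging initialization rather than exiting.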
From bugzilla at redhat.com Fri Jun 14 09:46:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:46:03 +0000 Subject: [Bugs] [Bug 1546932] systemd units does not stop all gluster daemons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546932 Dmitry Melekhov changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |NEW Resolution|WONTFIX |--- --- Comment #10 from Dmitry Melekhov --- I'm talking about stopping, not starting; this is not fixed! -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 09:49:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:49:45 +0000 Subject: [Bugs] [Bug 1546932] systemd units does not stop all gluster daemons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546932 --- Comment #11 from Amar Tumballi --- Dmitry, my point is that the glusterfs project is not supplying a glusterfsd.service file itself, so fixing this issue is not valid in the project: systemd only deals with glusterd, and as we would like to clean up all the processes started by glusterd too, glusterfsd is stopped along with it. https://github.com/gluster/glusterfs/tree/master/extras/systemd -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 09:52:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:52:52 +0000 Subject: [Bugs] [Bug 1546932] systemd units does not stop all gluster daemons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546932 --- Comment #12 from Dmitry Melekhov --- Please fix glusterd.service so that, while stopping glusterd, glusterfsd will be stopped too. Thank you! -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 09:55:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 09:55:49 +0000 Subject: [Bugs] [Bug 1566221] tar: stale file handle observed during untar operation when a brick is added In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1566221 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(spalai at redhat.com | |) --- Comment #1 from Amar Tumballi --- Any update on this? Does this happen on the latest master? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:00:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:00:43 +0000 Subject: [Bugs] [Bug 1566221] tar: stale file handle observed during untar operation when a brick is added In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1566221 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Flags|needinfo?(spalai at redhat.com | |) | Last Closed| |2019-06-14 10:00:43 --- Comment #2 from Susant Kumar Palai --- This is fixed by the patch https://review.gluster.org/#/c/19849. Closing the bug. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
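One detail worth adding to the glusterd.service exchange above: which children die on `systemctl stop glusterd` is governed by systemd's KillMode setting. A sketch of the relevant unit section follows, assuming the upstream unit of that era, which used KillMode=process precisely so brick processes survive a glusterd restart; whether to change that is the trade-off being debated in the thread:

    [Service]
    Type=forking
    # KillMode=process: on stop, systemd kills only the main glusterd
    # process; children such as glusterfsd bricks keep running.
    KillMode=process
    # KillMode=control-group (the systemd default) would kill everything
    # in the unit's cgroup on stop -- bricks included -- but also on every
    # glusterd restart or upgrade.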
From bugzilla at redhat.com Fri Jun 14 10:04:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:04:01 +0000 Subject: [Bugs] [Bug 1589695] Provide a cli cmd to modify max-file-size In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1589695 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:06:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:06:04 +0000 Subject: [Bugs] [Bug 1566221] tar: stale file handle observed during untar operation when a brick is added In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1566221 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-4.1 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:08:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:08:29 +0000 Subject: [Bugs] [Bug 1598769] Brick is not operational when using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598769 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com Severity|unspecified |low --- Comment #3 from Amar Tumballi --- Thanks for the report, but we are not able to look into the RDMA section actively, and are seriously considering dropping it from active support. More on this @ https://lists.gluster.org/pipermail/gluster-devel/2018-July/054990.html > 'RDMA' transport support: > > Gluster started supporting RDMA while ib-verbs was still new, and very high-end infra around that time were using Infiniband. Engineers did work > with Mellanox, and got the technology into GlusterFS for better data migration and data copy. Current-day kernels support very good speed with > the IPoIB module itself, and there is no more bandwidth for experts in this area to maintain the feature, so we recommend migrating over to a TCP (IP > based) network for your volume. > > If you are successfully using the RDMA transport, do get in touch with us to prioritize the migration plan for your volume. The plan is to work on this > after the release, so by version 6.0 we will have cleaner transport code, which just needs to support one type. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:15:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:15:53 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #9 from Abhishek --- Hi Team, Is there any update on this? Regards, Abhishek -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Jun 14 10:16:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:16:54 +0000 Subject: [Bugs] [Bug 1598900] tmpfiles snippet tries to create folder owned by nonexistent user In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598900 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com, | |kkeithle at redhat.com, | |ndevos at redhat.com, | |rkothiya at redhat.com Flags| |needinfo?(kkeithle at redhat.c | |om) Severity|unspecified |medium --- Comment #2 from Amar Tumballi --- Kaleb, Niels, I would like to hear your opinions here. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:17:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:17:18 +0000 Subject: [Bugs] [Bug 1598900] tmpfiles snippet tries to create folder owned by nonexistent user In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598900 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ndevos at redhat.com | |) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:20:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:20:18 +0000 Subject: [Bugs] [Bug 1608305] [glusterfs-3.6.9] Fuse-mount has been forced off In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1608305 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #3 from Amar Tumballi --- It looks like this was introduced in that specific version, where in-place migration was not supported. It was likely caused by upgrading the clients first, while the servers were still on an older version. I would like to mention that we have not seen any such issues in a long time. Please upgrade to a higher glusterfs version. If you have not seen this issue with higher versions, we would like to close the issue as EOL (as 3.6.7 is EOL'd). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:21:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:21:51 +0000 Subject: [Bugs] [Bug 1614275] Fix spurious failures in tests/bugs/ec/bug-1236065.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1614275 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Flags| |needinfo?(pkarampu at redhat.c | |om) Severity|unspecified |high --- Comment #3 from Amar Tumballi --- We noticed that this has not been failing in recent times. Should we close it? If not, we would like to consider focusing on this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Jun 14 10:23:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:23:12 +0000 Subject: [Bugs] [Bug 1615224] Fix spurious failures in tests/bugs/ec/ec-1468261.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1615224 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com Fixed In Version| |Gluster-Spurious-TestFailur | |e Flags| |needinfo?(aspandey at redhat.c | |om) Severity|unspecified |high --- Comment #2 from Amar Tumballi --- The mentioned patch is merged. Should we be closing the issue? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:25:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:25:31 +0000 Subject: [Bugs] [Bug 1615307] Error disabling sockopt IPV6_V6ONLY In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1615307 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com Flags| |needinfo?(kompastver at gmail. | |com) Severity|unspecified |medium --- Comment #1 from Amar Tumballi --- We made a few fixes for IPv6 in the glusterfs-5.x and glusterfs-6.x releases. Can you confirm whether a higher version fixes these issues? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:35:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:35:29 +0000 Subject: [Bugs] [Bug 1628219] High memory consumption depending on volume bricks count In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628219 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Severity|unspecified |high Last Closed| |2019-06-14 10:35:29 --- Comment #2 from Amar Tumballi --- Vladislav, Apologies for the delay, but please note that we do consume some memory per translator definition: the more bricks, the more memory is consumed. Yes, it is a known issue for now; hence we normally claim support for up to 128 nodes/bricks only. For larger counts, one needs to use more RAM for sure. FYI - The structure which gets allocated for each xlator is https://github.com/gluster/glusterfs/blob/v6.0/libglusterfs/src/glusterfs/xlator.h#L767..L864 We won't be able to fix it in the near future, as most of the logic depends on this structure. I will be marking the issue as DEFERRED. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug.
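A back-of-the-envelope illustration of the per-translator cost described in bug 1628219 above. The per-xlator byte count here is an assumption for illustration only (the real xlator_t in the linked header, plus its mem-pools and options, is what actually dominates); the point is the linear growth with brick count:

    #include <stdio.h>

    #define PER_XLATOR_BYTES (64 * 1024) /* assumed, not sizeof(xlator_t) */

    int main(void)
    {
        for (int bricks = 128; bricks <= 2048; bricks *= 2) {
            /* Client-side graph: roughly one protocol/client xlator per
             * brick plus cluster/performance xlators on top (~1.5x). */
            long xlators = bricks + bricks / 2;
            printf("%5d bricks -> ~%5ld xlators -> ~%4ld MiB of xlator state\n",
                   bricks, xlators,
                   xlators * (long)PER_XLATOR_BYTES / (1024 * 1024));
        }
        return 0;
    }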
From bugzilla at redhat.com Fri Jun 14 10:36:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:36:37 +0000 Subject: [Bugs] [Bug 1631247] Issue enabling cluster.use-compound-fops with libgfapi application running In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1631247 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-7.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-06-14 10:36:37 --- Comment #2 from Amar Tumballi --- We have now removed all references to compound-fops. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:38:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:38:20 +0000 Subject: [Bugs] [Bug 1633318] health check fails on restart from crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1633318 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |amukherj at redhat.com, | |atumball at redhat.com, | |rabhat at redhat.com Assignee|bugs at gluster.org |moagrawa at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 10:43:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:43:03 +0000 Subject: [Bugs] [Bug 1642488] ganesha-gfapi.log contain many E [dht-helper.c:90:dht_fd_ctx_set] 0-prod-dht: invalid argument: fd [Invalid argument] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642488 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-14 10:43:03 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 10:15:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 10:15:53 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Assignee|bugs at gluster.org |hgowtham at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 11:17:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:17:32 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 --- Comment #10 from hari gowtham --- (In reply to Abhishek from comment #9) > Hi Team, > > is there any update on this? > > Regards, > Abhishek Hi Abhishek, I'll take a look into it. This will take some time. I will update once I have made some progress. Regards, Hari. -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri Jun 14 11:17:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:17:55 +0000 Subject: [Bugs] [Bug 1720615] New: [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720615 Bug ID: 1720615 Summary: [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 Product: GlusterFS Version: mainline Status: ASSIGNED Component: build Severity: high Priority: urgent Assignee: ndevos at redhat.com Reporter: ndevos at redhat.com CC: bugs at gluster.org Blocks: 1720079 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1720079 +++ Description of problem: yum update fails with the below error for RHEL-8 client packages Error: Problem 1: cannot install both glusterfs-debuginfo-6.0-5.el8.x86_64 and glusterfs-debuginfo-3.12.2-40.2.el8.x86_64 - package glusterfs-cli-debuginfo-3.12.2-40.2.el8.x86_64 requires glusterfs-debuginfo(x86-64) = 3.12.2-40.2.el8, but none of the providers can be installed - cannot install the best update candidate for package glusterfs-debuginfo-3.12.2-40.2.el8.x86_64 - problem with installed package glusterfs-cli-debuginfo-3.12.2-40.2.el8.x86_64 Problem 2: package glusterfs-cli-3.12.2-40.2.el8.x86_64 requires glusterfs-libs(x86-64) = 3.12.2-40.2.el8, but none of the providers can be installed - cannot install both glusterfs-libs-6.0-5.el8.x86_64 and glusterfs-libs-3.12.2-40.2.el8.x86_64 - cannot install the best update candidate for package glusterfs-libs-3.12.2-40.2.el8.x86_64 - cannot install the best update candidate for package glusterfs-cli-3.12.2-40.2.el8.x86_64 Problem 3: problem with installed package glusterfs-cli-3.12.2-40.2.el8.x86_64 - package glusterfs-cli-3.12.2-40.2.el8.x86_64 requires glusterfs-libs(x86-64) = 3.12.2-40.2.el8, but none of the providers can be installed - cannot install both glusterfs-libs-6.0-5.el8.x86_64 and glusterfs-libs-3.12.2-40.2.el8.x86_64 - package glusterfs-6.0-5.el8.x86_64 requires glusterfs-libs(x86-64) = 6.0-5.el8, but none of the providers can be installed - cannot install the best update candidate for package glusterfs-3.12.2-40.2.el8.x86_64 (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages Version-Release number of selected component (if applicable): glusterfs-6.0-5.el8 RHEL-8.1 How reproducible: Always Steps to Reproduce: 1. Have a RHEL-8.1 system with all glusterfs packages built for rhel-8 client 2. Add latest glusterfs rhel-8 client repo 3. 
yum update Actual results: Failed to upgrade Expected results: Should upgrade to "6.0-5.el8" Additional info: Packages already available on the system # rpm -qa | grep gluster glusterfs-libs-3.12.2-40.2.el8.x86_64 glusterfs-rdma-3.12.2-40.2.el8.x86_64 glusterfs-rdma-debuginfo-3.12.2-40.2.el8.x86_64 glusterfs-client-xlators-debuginfo-3.12.2-40.2.el8.x86_64 glusterfs-debuginfo-3.12.2-40.2.el8.x86_64 python2-gluster-3.12.2-40.2.el8.x86_64 glusterfs-fuse-debuginfo-3.12.2-40.2.el8.x86_64 glusterfs-3.12.2-40.2.el8.x86_64 glusterfs-api-3.12.2-40.2.el8.x86_64 glusterfs-api-devel-3.12.2-40.2.el8.x86_64 glusterfs-api-debuginfo-3.12.2-40.2.el8.x86_64 glusterfs-libs-debuginfo-3.12.2-40.2.el8.x86_64 glusterfs-devel-3.12.2-40.2.el8.x86_64 glusterfs-cli-debuginfo-3.12.2-40.2.el8.x86_64 glusterfs-debugsource-3.12.2-40.2.el8.x86_64 glusterfs-client-xlators-3.12.2-40.2.el8.x86_64 glusterfs-fuse-3.12.2-40.2.el8.x86_64 glusterfs-cli-3.12.2-40.2.el8.x86_64 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720079 [Bug 1720079] [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 11:19:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:19:23 +0000 Subject: [Bugs] [Bug 1720615] [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720615 --- Comment #1 from Niels de Vos --- glusterfs-cli has been incorrectly marked as a server-only component. It is still useful for clients that run vdsm, nagios or other management/monitoring solutions that use `gluster --remote-host=...` commands. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 11:21:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:21:10 +0000 Subject: [Bugs] [Bug 1720615] [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720615 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22868 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 11:21:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:21:11 +0000 Subject: [Bugs] [Bug 1720615] [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720615 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22868 (build: always build glusterfs-cli to allow monitoring/managing from clients) posted (#2) for review on master by Niels de Vos -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri Jun 14 11:28:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:28:06 +0000 Subject: [Bugs] [Bug 1720566] Can't rebalance GlusterFS volume because unix socket's path name is too long In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720566 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22869 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 11:28:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:28:07 +0000 Subject: [Bugs] [Bug 1720566] Can't rebalance GlusterFS volume because unix socket's path name is too long In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720566 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22869 (glusterd: Can't run rebalance due to long unix socket) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 11:40:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:40:11 +0000 Subject: [Bugs] [Bug 1720620] New: [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720620 Bug ID: 1720620 Summary: [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 Product: GlusterFS Version: 6 Status: NEW Component: build Severity: high Priority: urgent Assignee: bugs at gluster.org Reporter: sheggodu at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com, storage-qa-internal at redhat.com, vdas at redhat.com Blocks: 1720079 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720079 [Bug 1720079] [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 11:40:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:40:30 +0000 Subject: [Bugs] [Bug 1720620] [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720620 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |sheggodu at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Jun 14 11:47:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 11:47:08 +0000 Subject: [Bugs] [Bug 1720488] Enable features.locks-notify-contention by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720488 nchilaka changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |nchilaka at redhat.com Severity|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 12:02:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:02:33 +0000 Subject: [Bugs] [Bug 1720620] [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720620 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-06-14 12:02:33 --- Comment #1 from Sunil Kumar Acharya --- *** This bug has been marked as a duplicate of bug 1720615 *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 12:02:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:02:33 +0000 Subject: [Bugs] [Bug 1720615] [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720615 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sheggodu at redhat.com --- Comment #3 from Sunil Kumar Acharya --- *** Bug 1720620 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 12:04:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:04:08 +0000 Subject: [Bugs] [Bug 1473968] rda-cache-limit filled with (null) value after use parallel-readdir In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1473968 Vitaly Lipatov changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME Last Closed|2018-06-20 18:26:11 |2019-06-14 12:04:08 --- Comment #14 from Vitaly Lipatov --- (In reply to Amar Tumballi from comment #13) > We also fixed many formatting issues with 32bit compiler, and now a CI job > runs to validate that none of our patches break 32bit systems. I hope the > issues would have got fixed as of th I've dropped all 32-bit instances of glusterfs, so I believe it runs fine now that the formatting issues have been fixed. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
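For readers wondering what the "formatting issues with 32bit compiler" in the quoted comment look like: the classic case is printing a 64-bit value with a long-sized format specifier, which truncates where long is 32 bits. A generic illustration, not a specific glusterfs patch:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t offset = 5368709120LL; /* 5 GiB: wider than a 32-bit long */

        /* printf("bad: %ld\n", offset);  -- truncated/undefined on 32-bit
         * builds, and exactly the kind of format warning a 32-bit CI build
         * with -Wformat catches. */

        printf("good: %" PRId64 "\n", offset); /* portable either way */
        return 0;
    }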
From bugzilla at redhat.com Fri Jun 14 12:07:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:07:46 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22870 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:07:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:07:47 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #693 from Worker Ant --- REVIEW: https://review.gluster.org/22870 (tests: Add missing NFS test tag to the testfile) posted (#1) for review on master by Aravinda VK -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:08:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:08:54 +0000 Subject: [Bugs] [Bug 1720633] New: Upcall: Avoid sending upcalls for invalid Inode Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720633 Bug ID: 1720633 Summary: Upcall: Avoid sending upcalls for invalid Inode Product: GlusterFS Version: 6 Hardware: All OS: All Status: NEW Component: upcall Keywords: Triaged Severity: high Assignee: bugs at gluster.org Reporter: skoduri at redhat.com CC: bugs at gluster.org Depends On: 1718338 Blocks: 1717784 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1718338 +++ Description of problem: For nameless LOOKUPs, the server creates a new inode which remains invalid until the fop is successfully processed, after which it is linked to the inode table. But in case there is an already linked inode for that entry, it discards the newly created inode, which results in an upcall notification. This may result in the client being bombarded with unnecessary upcalls, affecting performance if the data set is huge. This issue can be avoided by looking up and storing the upcall context in the original linked inode (if it exists), thus saving those extra callbacks. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-06-07 14:10:52 UTC --- REVIEW: https://review.gluster.org/22840 (upcall: Avoid sending notifications for invalid inodes) posted (#1) for review on master by soumya k Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717784 [Bug 1717784] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients https://bugzilla.redhat.com/show_bug.cgi?id=1718338 [Bug 1718338] Upcall: Avoid sending upcalls for invalid Inode -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
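A minimal sketch of the optimization bug 1720633 describes: suppress invalidation upcalls for inodes that never made it into the inode table. Every type and helper below is an illustrative stand-in, not the actual upcall xlator API:

    #include <stdbool.h>
    #include <stdio.h>

    struct inode_stub {
        bool linked;      /* linked into the inode table, vs. a dummy
                           * inode created for a nameless LOOKUP */
        const char *gfid;
    };

    static void send_cache_invalidation(const struct inode_stub *inode)
    {
        printf("upcall: invalidate %s on all clients\n", inode->gfid);
    }

    static void upcall_on_inode_destroy(const struct inode_stub *inode)
    {
        /* A dummy inode that lost the race to an already-linked inode was
         * never cached by any client, so notifying about its destruction
         * only floods clients with useless upcalls. */
        if (!inode->linked)
            return;
        send_cache_invalidation(inode);
    }

    int main(void)
    {
        struct inode_stub linked = { true, "aaaa-1111" };
        struct inode_stub dummy = { false, "0000-0000" };
        upcall_on_inode_destroy(&linked); /* upcall sent */
        upcall_on_inode_destroy(&dummy);  /* silently skipped */
        return 0;
    }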
From bugzilla at redhat.com Fri Jun 14 12:08:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:08:54 +0000 Subject: [Bugs] [Bug 1718338] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718338 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1720633 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720633 [Bug 1720633] Upcall: Avoid sending upcalls for invalid Inode -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 12:09:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:09:10 +0000 Subject: [Bugs] [Bug 1720634] New: Upcall: Avoid sending upcalls for invalid Inode Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720634 Bug ID: 1720634 Summary: Upcall: Avoid sending upcalls for invalid Inode Product: GlusterFS Version: 5 Hardware: All OS: All Status: NEW Component: upcall Keywords: Triaged Severity: high Assignee: bugs at gluster.org Reporter: skoduri at redhat.com CC: bugs at gluster.org Depends On: 1718338 Blocks: 1717784, 1720633 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1718338 +++ Description of problem: For nameless LOOKUPs, the server creates a new inode which remains invalid until the fop is successfully processed, after which it is linked to the inode table. But in case there is an already linked inode for that entry, it discards the newly created inode, which results in an upcall notification. This may result in the client being bombarded with unnecessary upcalls, affecting performance if the data set is huge. This issue can be avoided by looking up and storing the upcall context in the original linked inode (if it exists), thus saving those extra callbacks. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-06-07 14:10:52 UTC --- REVIEW: https://review.gluster.org/22840 (upcall: Avoid sending notifications for invalid inodes) posted (#1) for review on master by soumya k Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717784 [Bug 1717784] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients https://bugzilla.redhat.com/show_bug.cgi?id=1718338 [Bug 1718338] Upcall: Avoid sending upcalls for invalid Inode https://bugzilla.redhat.com/show_bug.cgi?id=1720633 [Bug 1720633] Upcall: Avoid sending upcalls for invalid Inode -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Jun 14 12:09:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:09:10 +0000 Subject: [Bugs] [Bug 1718338] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718338 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1720634 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720634 [Bug 1720634] Upcall: Avoid sending upcalls for invalid Inode -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 12:09:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:09:10 +0000 Subject: [Bugs] [Bug 1720633] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720633 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1720634 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720634 [Bug 1720634] Upcall: Avoid sending upcalls for invalid Inode -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:09:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:09:29 +0000 Subject: [Bugs] [Bug 1720635] New: Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720635 Bug ID: 1720635 Summary: Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients Product: GlusterFS Version: 6 Hardware: All OS: All Status: NEW Component: libgfapi Keywords: Triaged Severity: high Priority: medium Assignee: bugs at gluster.org Reporter: skoduri at redhat.com QA Contact: bugs at gluster.org CC: bugs at gluster.org, dang at redhat.com, ffilz at redhat.com, grajoria at redhat.com, jthottan at redhat.com, mbenjamin at redhat.com, msaini at redhat.com, pasik at iki.fi, rhs-bugs at redhat.com, sankarshan at redhat.com, skoduri at redhat.com, storage-qa-internal at redhat.com Depends On: 1718316 Blocks: 1717784 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1718316 +++ +++ This bug was initially created as a clone of Bug #1717784 +++ Description of problem: ========================= Ganesha-gfapi logs are flooded with error messages related to gf_uuid_is_null(gfid) when linux untars and lookups are running from multiple clients- --------- [2019-06-06 07:56:12.503603] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7f7e91e8b0ae] -->/lib64/libgfapi.so.0(+0x258f1) [0x7f7e91ea28f1] -->/lib64/libgfapi.so.0(+0x257c4) [0x7f7e91ea27c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] --------- Version-Release number of selected component (if applicable): =========================== # cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.7 Beta (Maipo) # rpm -qa | grep ganesha nfs-ganesha-2.7.3-3.el7rhgs.x86_64 glusterfs-ganesha-6.0-3.el7rhgs.x86_64 nfs-ganesha-debuginfo-2.7.3-3.el7rhgs.x86_64 nfs-ganesha-gluster-2.7.3-3.el7rhgs.x86_64 How reproducible: ===================== 2/2 Steps to Reproduce: ====================== 1. Create a 4-node Ganesha cluster 2. Create a 4*3 Distribute-replicate volume. Export the volume via Ganesha 3. Mount the volume on 4 clients via the v4.1 protocol 4. Run the following workload Client 1: Run linux untars Client 2: du -sh in loop Client 3: ls -lRt in loop Client 4: find's in loop Actual results: ================== While the test is running, ganesha-gfapi logs are flooded with errors related to "gf_uuid_is_null" ====== [2019-06-03 16:54:19.829136] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7ff6902d00ae] -->/lib64/libgfapi.so.0(+0x2594a) [0x7ff6902e794a] -->/lib64/libgfapi.so.0(+0x257c4) [0x7ff6902e77c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] [2019-06-03 16:54:20.006163] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7ff6902d00ae] -->/lib64/libgfapi.so.0(+0x2594a) [0x7ff6902e794a] -->/lib64/libgfapi.so.0(+0x257c4) [0x7ff6902e77c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] [2019-06-03 16:54:20.320293] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7ff6902d00ae] -->/lib64/libgfapi.so.0(+0x2594a) [0x7ff6902e794a] -->/lib64/libgfapi.so.0(+0x257c4) [0x7ff6902e77c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] ===== # cat /var/log/ganesha/ganesha-gfapi.log | grep gf_uuid_is_null | wc -l 605340 Expected results: =================== There should not be error messages in the ganesha-gfapi logs Additional info: =================== On narrowing down the test scenario, it seems the error messages appear when only du -sh and ls -lRt are running in a loop from two different clients --- Additional comment from RHEL Product and Program Management on 2019-06-06 08:10:27 UTC --- This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs-3.5.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag. --- Additional comment from Soumya Koduri on 2019-06-06 09:48:36 UTC --- @Manisha, are these clients connected to different NFS-Ganesha servers? On which machine did you observe these errors? I do not see such messages in the sosreports uploaded. >>> On narrowing down the test scenario, it seems the error messages appear when only du -sh and ls -lRt are running in a loop from two different clients Does this mean these messages are not seen with just the linux untar test? --- Additional comment from Manisha Saini on 2019-06-06 10:16:00 UTC --- (In reply to Soumya Koduri from comment #3) > @Manisha, > > are these clients connected to different NFS-Ganesha servers? On which > machine did you observe these errors? I do not see such messages in the > sosreports uploaded. Hi Soumya, All the clients are connected to a single server VIP. I see there is some issue with how sosreport collects ganesha logs; not all logs are captured as part of the sosreport. > > >>> On narrowing down the test scenario, it seems the error messages appear when only du -sh and ls -lRt are running in a loop from two different clients > > Does this mean these messages are not seen with just the linux untar test? No. Not seen with only untars. --- Additional comment from Soumya Koduri on 2019-06-07 10:08:03 UTC --- Thanks Manisha for sharing the setup and logs.
"0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] " The above message is logged while processing upcall requests. Somehow the gfid passed has become NULL. IMO there are two issues to be considered here - > there are so many upcall requests generated even though there is only single server serving all the clients. Seems like the data being accessed is huge and hence the server is trying clean up the inodes from the lru list. While destroying a inode, upcall xlator sends cache invalidation request to all its clients to notify that the particular file/inode entry is no more cached by the server. This logic can be optimized a bit here. For nameless lookups, server generates a dummy inode (say inodeD) and later links it to inode (if there is no entry already present for that file/dir) in the cbk path. So as part of lookup_cbk, though the inode (inodeD) received is invalid, upcall xlator creates an inode_ctx entry as it eventually can get linked to the inode table. However in certain cases, if there is already an inode (say inodeC) present for that particular file, this new inode (inodeD) created will be purged, which results in sending upcall notifications to the clients. in Manisha's testcase, as the data created is huge and being looked up in a loop, there are many such dummy inode entries getting purged resulting in huge number of upcall notifications sent to the client. We can avoid this issue to an extent by checking if the inode is valid or not (i.e, linked or not) before sending callback notifications. note - this has been day-1 issue but good to be fixed. * Another issue is gfid becoming NULL in upcall args. > I couldn't reproduce this issue on my setup. However seems like in upcall xlator we already check if the gfid is not NULL before sending notification. GF_VALIDATE_OR_GOTO("upcall_client_cache_invalidate", !(gf_uuid_is_null(gfid)), out); So that means somewhere in the client processing, gfid has become NULL. From further code-reading I see a potential issue in upcall processing callback function - In glfs_cbk_upcall_data(), -- args->fs = fs; args->upcall_data = gf_memdup(upcall_data, sizeof(*upcall_data)); -- gf_memdup() may not be the right routine to use here as upcall_data structure contains pointers to other data. This definitely needs to be fixed. However would like to re-confirm if this caused gfid to become NULL. Request Manisha to share setup (if possible) while the tests going on to confirm this theory. Thanks! --- Additional comment from Worker Ant on 2019-06-07 14:09:44 UTC --- REVIEW: https://review.gluster.org/22839 (gfapi: fix incorrect initialization of upcall syncop arguments) posted (#1) for review on master by soumya k Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717784 [Bug 1717784] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients https://bugzilla.redhat.com/show_bug.cgi?id=1718316 [Bug 1718316] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Jun 14 12:09:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:09:29 +0000 Subject: [Bugs] [Bug 1718316] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718316 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1720635 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720635 [Bug 1720635] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 12:09:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:09:43 +0000 Subject: [Bugs] [Bug 1720636] New: Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720636 Bug ID: 1720636 Summary: Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients Product: GlusterFS Version: 5 Hardware: All OS: All Status: NEW Component: libgfapi Keywords: Triaged Severity: high Priority: medium Assignee: bugs at gluster.org Reporter: skoduri at redhat.com QA Contact: bugs at gluster.org CC: bugs at gluster.org, dang at redhat.com, ffilz at redhat.com, grajoria at redhat.com, jthottan at redhat.com, mbenjamin at redhat.com, msaini at redhat.com, pasik at iki.fi, rhs-bugs at redhat.com, sankarshan at redhat.com, skoduri at redhat.com, storage-qa-internal at redhat.com Depends On: 1718316 Blocks: 1717784, 1720635 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1718316 +++ +++ This bug was initially created as a clone of Bug #1717784 +++ Description of problem: ========================= Ganesha-gfapi logs are flooded with error messages related to gf_uuid_is_null(gfid) when linux untars and lookups are running from multiple clients- --------- [2019-06-06 07:56:12.503603] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7f7e91e8b0ae] -->/lib64/libgfapi.so.0(+0x258f1) [0x7f7e91ea28f1] -->/lib64/libgfapi.so.0(+0x257c4) [0x7f7e91ea27c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] --------- Version-Release number of selected component (if applicable): =========================== # cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.7 Beta (Maipo) # rpm -qa | grep ganesha nfs-ganesha-2.7.3-3.el7rhgs.x86_64 glusterfs-ganesha-6.0-3.el7rhgs.x86_64 nfs-ganesha-debuginfo-2.7.3-3.el7rhgs.x86_64 nfs-ganesha-gluster-2.7.3-3.el7rhgs.x86_64 How reproducible: ===================== 2/2 Steps to Reproduce: ====================== 1. Create a 4-node Ganesha cluster 2. Create a 4*3 Distribute-replicate volume. Export the volume via Ganesha 3. Mount the volume on 4 clients via the v4.1 protocol 4. Run the following workload Client 1: Run linux untars Client 2: du -sh in loop Client 3: ls -lRt in loop Client 4: find's in loop Actual results: ================== While the test is running, ganesha-gfapi
logs are flooded with errors related to "gf_uuid_is_null" ====== [2019-06-03 16:54:19.829136] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7ff6902d00ae] -->/lib64/libgfapi.so.0(+0x2594a) [0x7ff6902e794a] -->/lib64/libgfapi.so.0(+0x257c4) [0x7ff6902e77c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] [2019-06-03 16:54:20.006163] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7ff6902d00ae] -->/lib64/libgfapi.so.0(+0x2594a) [0x7ff6902e794a] -->/lib64/libgfapi.so.0(+0x257c4) [0x7ff6902e77c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] [2019-06-03 16:54:20.320293] E [glfs-handleops.c:1892:glfs_h_find_handle] (-->/lib64/libgfapi.so.0(+0xe0ae) [0x7ff6902d00ae] -->/lib64/libgfapi.so.0(+0x2594a) [0x7ff6902e794a] -->/lib64/libgfapi.so.0(+0x257c4) [0x7ff6902e77c4] ) 0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] ===== # cat /var/log/ganesha/ganesha-gfapi.log | grep gf_uuid_is_null | wc -l 605340 Expected results: =================== There should not be error messages in the ganesha-gfapi logs Additional info: =================== On narrowing down the test scenario, it seems the error messages appear when only du -sh and ls -lRt are running in a loop from two different clients --- Additional comment from RHEL Product and Program Management on 2019-06-06 08:10:27 UTC --- This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs-3.5.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag. --- Additional comment from Soumya Koduri on 2019-06-06 09:48:36 UTC --- @Manisha, are these clients connected to different NFS-Ganesha servers? On which machine did you observe these errors? I do not see such messages in the sosreports uploaded. >>> On narrowing down the test scenario, it seems the error messages appear when only du -sh and ls -lRt are running in a loop from two different clients Does this mean these messages are not seen with just the linux untar test? --- Additional comment from Manisha Saini on 2019-06-06 10:16:00 UTC --- (In reply to Soumya Koduri from comment #3) > @Manisha, > > are these clients connected to different NFS-Ganesha servers? On which > machine did you observe these errors? I do not see such messages in the > sosreports uploaded. Hi Soumya, All the clients are connected to a single server VIP. I see there is some issue with how sosreport collects ganesha logs; not all logs are captured as part of the sosreport. > > >>> On narrowing down the test scenario, it seems the error messages appear when only du -sh and ls -lRt are running in a loop from two different clients > > Does this mean these messages are not seen with just the linux untar test? No. Not seen with only untars. --- Additional comment from Soumya Koduri on 2019-06-07 10:08:03 UTC --- Thanks Manisha for sharing the setup and logs. "0-glfs_h_find_handle: invalid argument: !(gf_uuid_is_null(gfid)) [Invalid argument] " The above message is logged while processing upcall requests. Somehow the gfid passed has become NULL. IMO there are two issues to be considered here - > there are so many upcall requests generated even though there is only a single server serving all the clients. It seems the data being accessed is huge, and hence the server is trying to clean up the inodes from the lru list.
While destroying an inode, the upcall xlator sends a cache invalidation request to all its clients to notify them that the particular file/inode entry is no longer cached by the server. This logic can be optimized a bit here. For nameless lookups, the server generates a dummy inode (say inodeD) and later links it to the inode table (if there is no entry already present for that file/dir) in the cbk path. So as part of lookup_cbk, though the inode (inodeD) received is invalid, the upcall xlator creates an inode_ctx entry, as the inode can eventually get linked to the inode table. However, in certain cases, if there is already an inode (say inodeC) present for that particular file, this newly created inode (inodeD) will be purged, which results in sending upcall notifications to the clients. In Manisha's test case, as the data created is huge and is being looked up in a loop, many such dummy inode entries get purged, resulting in a huge number of upcall notifications sent to the client. We can avoid this issue to an extent by checking whether the inode is valid or not (i.e., linked or not) before sending callback notifications. note - this has been a day-1 issue, but it is good to have it fixed. * Another issue is the gfid becoming NULL in the upcall args. > I couldn't reproduce this issue on my setup. However, it seems that in the upcall xlator we already check that the gfid is not NULL before sending a notification. GF_VALIDATE_OR_GOTO("upcall_client_cache_invalidate", !(gf_uuid_is_null(gfid)), out); So that means somewhere in the client processing the gfid has become NULL. From further code-reading I see a potential issue in the upcall processing callback function - In glfs_cbk_upcall_data(), -- args->fs = fs; args->upcall_data = gf_memdup(upcall_data, sizeof(*upcall_data)); -- gf_memdup() may not be the right routine to use here, as the upcall_data structure contains pointers to other data. This definitely needs to be fixed. However, I would like to re-confirm whether this caused the gfid to become NULL. I request Manisha to share the setup (if possible) while the tests are going on, to confirm this theory. Thanks! --- Additional comment from Worker Ant on 2019-06-07 14:09:44 UTC --- REVIEW: https://review.gluster.org/22839 (gfapi: fix incorrect initialization of upcall syncop arguments) posted (#1) for review on master by soumya k Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1717784 [Bug 1717784] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients https://bugzilla.redhat.com/show_bug.cgi?id=1718316 [Bug 1718316] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients https://bugzilla.redhat.com/show_bug.cgi?id=1720635 [Bug 1720635] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Jun 14 12:09:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:09:43 +0000 Subject: [Bugs] [Bug 1718316] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718316 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1720636 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720636 [Bug 1720636] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 12:09:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:09:43 +0000 Subject: [Bugs] [Bug 1720635] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720635 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1720636 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1720636 [Bug 1720636] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:11:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:11:52 +0000 Subject: [Bugs] [Bug 1720635] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720635 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22871 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:11:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:11:53 +0000 Subject: [Bugs] [Bug 1720635] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720635 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22871 (gfapi: fix incorrect initialization of upcall syncop arguments) posted (#1) for review on release-6 by soumya k -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Jun 14 12:13:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:13:00 +0000 Subject: [Bugs] [Bug 1720636] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720636 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22872 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:13:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:13:01 +0000 Subject: [Bugs] [Bug 1720636] Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720636 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22872 (gfapi: fix incorrect initialization of upcall syncop arguments) posted (#1) for review on release-5 by soumya k -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:48:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:48:33 +0000 Subject: [Bugs] [Bug 1720633] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720633 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22873 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:48:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:48:34 +0000 Subject: [Bugs] [Bug 1720633] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720633 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22873 (upcall: Avoid sending notifications for invalid inodes) posted (#1) for review on release-6 by soumya k -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:49:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:49:51 +0000 Subject: [Bugs] [Bug 1720634] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720634 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22874 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Jun 14 12:49:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:49:52 +0000 Subject: [Bugs] [Bug 1720634] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720634 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22874 (upcall: Avoid sending notifications for invalid inodes) posted (#1) for review on release-5 by soumya k -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 12:53:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 12:53:56 +0000 Subject: [Bugs] [Bug 1598900] tmpfiles snippet tries to create folder owned by nonexistent user In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598900 Niels de Vos changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(kkeithle at redhat.c | |om) | |needinfo?(ndevos at redhat.com | |) | --- Comment #3 from Niels de Vos --- For the Fedora (and CentOS) packaging we follow the Fedora Guidelines. https://fedoraproject.org/wiki/Packaging:UsersAndGroups does not recommend using sysusers.d (https://www.freedesktop.org/software/systemd/man/sysusers.d.html); it recommends the 'manual' getent/groupadd procedure instead. We could include a sysusers.d snippet and install it (see the example below), but it would need to be removed (or not installed) by the included glusterfs.spec.in file. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 16:51:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 16:51:18 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #694 from Worker Ant --- REVIEW: https://review.gluster.org/22844 (multiple files: another attempt to remove includes) merged (#19) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 17:16:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 17:16:26 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22875 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 17:16:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 17:16:26 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #695 from Worker Ant --- REVIEW: https://review.gluster.org/22875 (glfs: add syscall.h after header cleanup) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
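To make the sysusers.d option from the bug 1598900 comment above concrete, here is a minimal sketch of such a snippet in the sysusers.d(5) format; the file path, user/group name, home directory, and shell are illustrative assumptions, not what the package actually ships:

    # /usr/lib/sysusers.d/glusterfs.conf (hypothetical)
    #Type  Name     ID  GECOS                Home          Shell
    g      gluster  -
    u      gluster  -   "GlusterFS daemons"  /run/gluster  /sbin/nologin

systemd-sysusers would then create the group and user declaratively, which is the same step the 'manual' getent/groupadd scriptlets in the Fedora guidelines perform imperatively at package installation time.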
From bugzilla at redhat.com Fri Jun 14 17:17:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 17:17:41 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #696 from Worker Ant --- REVIEW: https://review.gluster.org/22875 (glfs: add syscall.h after header cleanup) merged (#1) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 17:25:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 17:25:05 +0000 Subject: [Bugs] [Bug 1719388] infra: download.gluster.org /var/www/html/... is out of free space In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719388 --- Comment #5 from M. Scherer --- So, / was full; I cleaned things up (especially since that host is now behind the proxy, there is no need to keep logs there). They should be compressed in the future. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 14 17:50:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 17:50:10 +0000 Subject: [Bugs] [Bug 1720733] New: glusterfs 4.1.7 client crash Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720733 Bug ID: 1720733 Summary: glusterfs 4.1.7 client crash Product: GlusterFS Version: 4.1 OS: Linux Status: NEW Component: libglusterfsclient Severity: high Assignee: bugs at gluster.org Reporter: danny.lee at appian.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1580779 --> https://bugzilla.redhat.com/attachment.cgi?id=1580779&action=edit Gluster Client Log Description of problem: During a large write, a 42-second disconnect error occurred in the logs. This happens from time to time, and it normally recovers. But this time, about 10 seconds later, the glusterfs client crashed. The error in the client logs was the following: [2019-06-11 15:31:42.794126] I [MSGID: 114018] [client.c:2254:client_rpc_notify] 0-somecompany-client-1: disconnected from somecompany-client-1.
Client process will keep trying to connect to glusterd until brick's port is available pending frames: frame : type(1) op(LOOKUP) frame : type(0) op(0) frame : type(0) op(0) frame : type(1) op(WRITE) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) frame : type(1) op(LOOKUP) frame : type(1) op(LOOKUP) frame : type(1) op(OPEN) frame : type(1) op(OPEN) frame : type(1) op(OPEN) frame : type(1) op(OPEN) frame : type(1) op(OPEN) frame : type(1) op(OPEN) frame : type(1) op(OPEN) frame : type(1) op(OPEN) frame : type(1) op(OPEN) frame : type(0) op(0) frame : type(0) op(0) frame : type(0) op(0) patchset: git://git.gluster.org/glusterfs.git signal received: 11 time of crash: 2019-06-11 15:31:53 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 4.1.6 /lib64/libglusterfs.so.0(+0x25940)[0x7f66fd4ee940] /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7f66fd4f88a4] /lib64/libc.so.6(+0x36280)[0x7f66fbb53280] /usr/lib64/glusterfs/4.1.6/xlator/protocol/client.so(+0x615e3)[0x7f66f60e35e3] /lib64/libgfrpc.so.0(+0xec20)[0x7f66fd2bbc20] /lib64/libgfrpc.so.0(+0xefb3)[0x7f66fd2bbfb3] /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f66fd2b7e93] /usr/lib64/glusterfs/4.1.6/rpc-transport/socket.so(+0x7636)[0x7f66f83cb636] /usr/lib64/glusterfs/4.1.6/rpc-transport/socket.so(+0xa107)[0x7f66f83ce107] /lib64/libglusterfs.so.0(+0x890c4)[0x7f66fd5520c4] /lib64/libpthread.so.0(+0x7dd5)[0x7f66fc352dd5] /lib64/libc.so.6(clone+0x6d)[0x7f66fbc1aead] Version-Release number of selected component (if applicable): Gluster 4.1.7 CentOS 7.6.1810 (Core) How reproducible: Not really sure, but we believe it has something to do with a very large write (~1-3 GB). During that time, either the IO or the network was busy, causing the 42-second disconnect. This was a 3-brick setup with one of the bricks being an arbiter brick. The primary EC2 instance had one of the data bricks and an arbiter brick, and the secondary had just one of the data bricks. Both had a FUSE client mount connected to the volume. The primary server was the one doing the large write at the time, and the primary's glusterfs client was the one that crashed, after which we could not access the files in the mount (Transport endpoint is not connected). The secondary's glusterfs client was still able to access the files. "gluster volume status" showed that all the bricks were up and running. We were able to unmount and mount the client later, but at that point we were unsure whether the services using the mount were holding stale file pointers, so we restarted the servers to make sure everything was okay. Sadly, the coredump was corrupted and was not recoverable (unrelated). Steps to Reproduce: 1. N/A Actual results: The client glusterfs process crashed and did not recover, so we were unable to access the files on the mount. Expected results: The client glusterfs process does not crash, so that we are able to access the files on the mount. Or it crashes and there is a way to recover the mount without having to remount. Additional info: Servers have been up for a few weeks with similar load, but have had no issues until now.
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Jun 15 03:56:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 15 Jun 2019 03:56:54 +0000 Subject: [Bugs] [Bug 1622665] clang-scan report: glusterfs issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622665 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22863 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Jun 15 03:56:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 15 Jun 2019 03:56:55 +0000 Subject: [Bugs] [Bug 1622665] clang-scan report: glusterfs issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622665 --- Comment #93 from Worker Ant --- REVIEW: https://review.gluster.org/22863 (clang-scan: resolve warning) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Jun 15 03:57:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 15 Jun 2019 03:57:29 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #697 from Worker Ant --- REVIEW: https://review.gluster.org/22870 (tests: Add missing NFS test tag to the testfile) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Jun 15 03:57:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 15 Jun 2019 03:57:58 +0000 Subject: [Bugs] [Bug 1720615] [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720615 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-15 03:57:58 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22868 (build: always build glusterfs-cli to allow monitoring/managing from clients) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Jun 16 14:35:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 16 Jun 2019 14:35:13 +0000 Subject: [Bugs] [Bug 1715422] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715422 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(khiremat at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 17 03:40:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 03:40:21 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1635 from Worker Ant --- REVIEW: https://review.gluster.org/22795 (geo-rep/gsyncd: name is not freed in one of the cases) merged (#6) on master by Amar Tumballi -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 03:55:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 03:55:39 +0000 Subject: [Bugs] [Bug 1535511] Gluster CLI shouldn't stop if log file couldn't be opened In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1535511 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-17 03:55:39 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22865 (cli: don't fail if logging initialize fails) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 04:18:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 04:18:54 +0000 Subject: [Bugs] [Bug 1633318] health check fails on restart from crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1633318 --- Comment #1 from Mohit Agrawal --- Hi, As per the health check code, I don't think the existence of the health check file (.glusterfs/health_check) could be the reason for the brick failure, but I will try to reproduce it. In the health check thread we always open the health_check file with (O_CREAT|O_WRONLY|O_TRUNC, 0644), so even if the file is already present, the open truncates its contents and the health check always writes the latest timestamp into the health_check file. Here in the logs we can see the error occurs at the time of comparing the timestamp with the health_check file; that means the timestamp updated in the health_check file somehow does not match at the time it is read back. Are you sure the brick was stopped after sending the kill signal? If more than one instance is running, this type of scenario can arise. 1) Please check the ps output to confirm the brick was stopped completely. 2) If the brick was stopped completely, kindly share the volume configuration and I will try to reproduce the same. Regards, Mohit Agrawal -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 04:57:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 04:57:51 +0000 Subject: [Bugs] [Bug 1385249] /etc/sysconfig is redhat specific and does not exist in debian or arch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1385249 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|ASSIGNED |NEW CC| |atumball at redhat.com Flags| |needinfo?(kkeithle at redhat.c | |om) Severity|unspecified |medium --- Comment #1 from Amar Tumballi --- I see that the sysconfig dir is now used only in the glusterfs.spec file. Should this issue be closed?
-- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 05:01:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:01:41 +0000 Subject: [Bugs] [Bug 1546732] Bad stat performance after client upgrade from 3.10 to 3.12 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546732 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Performance Priority|unspecified |medium Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |INSUFFICIENT_DATA Severity|unspecified |high Last Closed| |2019-06-17 05:01:41 --- Comment #28 from Amar Tumballi --- Considering there are no updates on the question posted, closing this as INSUFFICIENT_DATA. Please consider reopening the bug if this is still an issue. Also, consider upgrading to a higher version (glusterfs-6.x+) before testing with 3.12 again. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 05:03:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:03:12 +0000 Subject: [Bugs] [Bug 1650017] glustereventsd ImportError: attempted relative import with no known parent package In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1650017 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|NEW |CLOSED CC| |atumball at redhat.com Assignee|bugs at gluster.org |avishwan at redhat.com Resolution|--- |NEXTRELEASE Fixed In Version| |glusterfs-7.0 Severity|unspecified |medium Last Closed| |2019-06-17 05:03:12 --- Comment #4 from Amar Tumballi --- The above patch is merged now. It will be available in glusterfs-7.0. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 05:07:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:07:12 +0000 Subject: [Bugs] [Bug 1655333] OSError: [Errno 116] Stale file handle due to rotated files In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655333 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com, | |sacharya at redhat.com, | |sunkumar at redhat.com Flags| |needinfo?(sunkumar at redhat.c | |om) Severity|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 05:11:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:11:39 +0000 Subject: [Bugs] [Bug 1659378] posix_janitor_thread_proc has bug that can't go into the janitor_walker if change the system time forward and change back In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659378 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com, | |moagrawa at redhat.com, | |rabhat at redhat.com Severity|unspecified |low --- Comment #3 from Amar Tumballi --- Marking the issue as low priority right now, as the use case of changing the system time is not considered critical.
This can happen because we do the timestamp comparison with epoch values, and once the stored value is set to a higher value, the if condition will not pass if the time is reset back. The best option in that case is to restart the brick process. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 05:13:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:13:08 +0000 Subject: [Bugs] [Bug 1663337] Gluster documentation on quorum-reads option is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663337 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |urgent CC| |atumball at redhat.com, | |ksubrahm at redhat.com, | |pkarampu at redhat.com, | |ravishankar at redhat.com, | |rkavunga at redhat.com Assignee|bugs at gluster.org |ravishankar at redhat.com Severity|unspecified |urgent -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 05:16:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:16:59 +0000 Subject: [Bugs] [Bug 1664524] Non-root geo-replication session goes to faulty state, when the session is started In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664524 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com, | |khiremat at redhat.com, | |sacharya at redhat.com Assignee|bugs at gluster.org |sunkumar at redhat.com Severity|unspecified |high --- Comment #1 from Amar Tumballi --- Hi Abhilash, can you please upgrade to a higher version of glusterfs? We have fixed multiple issues in glusterfs since 3.10.12. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 05:18:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:18:47 +0000 Subject: [Bugs] [Bug 1720993] New: tests/features/subdir-mount.t is failing for brick_mux regression Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720993 Bug ID: 1720993 Summary: tests/features/subdir-mount.t is failing for brick_mux regression Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: tests/features/subdir-mount.t is failing for brick_mux regression Version-Release number of selected component (if applicable): How reproducible: Run tests/features/subdir-mount.t in a loop Steps to Reproduce: 1. 2. 3. Actual results: the test case tests/features/subdir-mount.t is failing Expected results: the test case tests/features/subdir-mount.t should not fail Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
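On the janitor timestamp issue described in the comment on bug 1659378 above: comparing against a saved wall-clock (epoch) watermark is not robust when the system time is moved forward and then back. A minimal sketch of the pitfall and one common remedy, using CLOCK_MONOTONIC; this is illustrative only, with hypothetical helper names, not the actual posix xlator code:

    #include <time.h>

    /* Pitfall: a wall-clock watermark. If the clock is set forward,
     * last_run gets a large value; after the clock is set back, the
     * check below stays false until real time catches up, and the
     * periodic work is silently skipped. */
    static time_t last_run; /* epoch seconds of the previous sweep */

    static int
    should_run_wallclock(time_t interval)
    {
        time_t now = time(NULL);
        if (now < last_run + interval) /* stuck after a backwards reset */
            return 0;
        last_run = now;
        return 1;
    }

    /* Remedy: CLOCK_MONOTONIC is immune to wall-clock adjustments, so
     * the interval check keeps working however the system time is set. */
    static struct timespec last_run_mono;

    static int
    should_run_monotonic(time_t interval)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (now.tv_sec < last_run_mono.tv_sec + interval)
            return 0;
        last_run_mono = now;
        return 1;
    }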
From bugzilla at redhat.com Mon Jun 17 05:21:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:21:58 +0000 Subject: [Bugs] [Bug 1720993] tests/features/subdir-mount.t is failing for brick_mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720993 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 05:22:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:22:54 +0000 Subject: [Bugs] [Bug 1672076] chrome / chromium crash on gluster, sqlite issue? In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672076 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com, | |rgowdapp at redhat.com Severity|unspecified |high --- Comment #1 from Amar Tumballi --- While this is not expected behavior, there are multiple things that would have changed: 1. the glusterfs version, from F27 to F29; 2. the access pattern of the applications using the glusterfs mount. We recommend you try upgrading glusterfs to the latest version and then see if this is still happening. Also, try disabling some of the translators (`gluster volume set volume1 read-ahead off`), such as read-ahead, io-cache, md-cache, and write-behind. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 05:24:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:24:46 +0000 Subject: [Bugs] [Bug 1672258] fuse takes memory and doesn't free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672258 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Status|NEW |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Severity|unspecified |high Last Closed| |2019-06-17 05:24:46 --- Comment #3 from Amar Tumballi --- Closing the issue as CURRENTRELEASE based on the data in the above comment. Please upgrade. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 05:29:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:29:53 +0000 Subject: [Bugs] [Bug 1677804] POSIX ACLs are absent on FUSE-mounted volume using tmpfs bricks (posix-acl-autoload usually returns -1) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1677804 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |anoopcs at redhat.com, | |atumball at redhat.com, | |jthottan at redhat.com, | |spalai at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Mon Jun 17 05:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:30:19 +0000 Subject: [Bugs] [Bug 1720993] tests/features/subdir-mount.t is failing for brick_mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720993 --- Comment #1 from Mohit Agrawal --- The test case is failing at the time of executing the stat command just after an add-brick command is executed by the test case. After add-brick, glusterd executes the hook script S13create-subdir-mounts.sh to heal the directories that are expected to be present. The hook script is executed by glusterd asynchronously, so if the script has not run by the time stat is executed in the .t, stat fails. To avoid the error, check the logs just after add-brick and wait for the hook script to run. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 05:32:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:32:29 +0000 Subject: [Bugs] [Bug 1679170] Integer Overflow possible in md-cache.c due to data type inconsistency In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679170 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.2 Resolution|--- |CURRENTRELEASE Severity|unspecified |high Last Closed| |2019-06-17 05:32:29 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 05:34:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:34:29 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(sabose at redhat.com | |) | |needinfo?(ebenahar at redhat.c | |om) | --- Comment #12 from Sahina Bose --- Elad, can you check and update? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 05:34:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:34:45 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ebenahar at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 05:44:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:44:24 +0000 Subject: [Bugs] [Bug 1720993] tests/features/subdir-mount.t is failing for brick_mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720993 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22877 -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Mon Jun 17 05:44:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:44:25 +0000 Subject: [Bugs] [Bug 1720993] tests/features/subdir-mount.t is failing for brick_mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720993 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22877 (tests: subdir-mount.t is failing for brick_mux regression) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 05:49:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 05:49:02 +0000 Subject: [Bugs] [Bug 1304465] dnscache in libglusterfs returns 127.0.0.1 for 1st non-localhost request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1304465 Rinku changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WORKSFORME Last Closed| |2019-06-17 05:49:02 --- Comment #3 from Rinku --- As this is no longer reproducible on the latest version (v6.3), we are closing this. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 06:31:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 06:31:23 +0000 Subject: [Bugs] [Bug 1686461] Quotad.log filled with 0-dict is not sent on wire [Invalid argument] messages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686461 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Severity|unspecified |low Last Closed| |2019-06-17 06:31:23 --- Comment #2 from Amar Tumballi --- Hi Ryan, We have marked these logs as DEBUG level from GlusterFS-6.0 onwards. Please upgrade to get the fixes. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 06:33:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 06:33:24 +0000 Subject: [Bugs] [Bug 1614275] Fix spurious failures in tests/bugs/ec/bug-1236065.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1614275 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |pkarampu at redhat.com Flags|needinfo?(pkarampu at redhat.c | |om) | --- Comment #4 from Pranith Kumar K --- (In reply to Amar Tumballi from comment #3) > Noticed that this is not failing in recent times. Should we close it? If not > would like to consider focusing on this. It is a race, so it may not always happen. I will take this up. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Mon Jun 17 07:06:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 07:06:11 +0000 Subject: [Bugs] [Bug 1716695] Fix memory leaks that are present even after an xlator fini [client side xlator] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716695 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22806 (afr/fini: Free local_pool data during an afr fini) merged (#3) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 07:23:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 07:23:18 +0000 Subject: [Bugs] [Bug 1690454] mount-shared-storage.sh does not implement mount options In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690454 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com Severity|unspecified |medium --- Comment #1 from Amar Tumballi --- I agree that this is an issue. We should honor Arr[3] (the options field) in the mount command. Did you happen to resolve it? Could you possibly send a patch for it? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 08:08:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 08:08:10 +0000 Subject: [Bugs] [Bug 1693184] A brick process(glusterfsd) died with 'memory violation' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693184 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com Flags| |needinfo?(knjeong at growthsof | |t.co.kr) Severity|unspecified |high --- Comment #1 from Amar Tumballi --- > I'm using a volume with two replicas of the 3.6.9 version of GlusterFS. Is it possible to upgrade your version of glusterfs? Current glusterfs versions are at least 2+ years ahead of that version, and we have fixed a **lot** of memory violation errors (coverity/clang-scan etc.). We have not seen any issues similar to this trace in a long time. Let us know how the upgrade goes, and ping us if you need help with any upgrade issues. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 08:52:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 08:52:32 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 Elad changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |aefrat at redhat.com Flags|needinfo?(ebenahar at redhat.c |needinfo?(aefrat at redhat.com |om) |) --- Comment #13 from Elad --- Re-assigning the needinfo to Avihai. -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Mon Jun 17 08:57:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 08:57:03 +0000 Subject: [Bugs] [Bug 1718741] GlusterFS having high CPU In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718741 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com Severity|unspecified |medium --- Comment #1 from Amar Tumballi --- Hi SureshM, Can you provide further details, such as what application was running and what the 'gluster volume info' output is? At the end of the day, Gluster is a process, and if there is enough load on it, there will be high CPU utilization. It needs to be understood whether this is normal or something specific to your workload. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 09:08:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 09:08:28 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 Avihai changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(aefrat at redhat.com |needinfo?(sabose at redhat.com |) |) --- Comment #14 from Avihai --- Hi Sahina, We currently have glusterfs-server-3.12.6-1.el7.x86_64; the last time Elad tried to upgrade, this bug broke our QE gluster mounts. He then needed to downgrade/reinstall gluster back to 3.12 so it could work. I do not want to go through this again. Do you happen to have a gluster 5.1 or higher environment? If so, I'll try to reproduce it from there. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 14 14:07:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 14 Jun 2019 14:07:14 +0000 Subject: [Bugs] [Bug 1720290] ctime changes: tar still complains file changed as we read it if uss is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720290 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22861 (uss: Fix tar issue with ctime and uss enabled) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 10:31:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 10:31:00 +0000 Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-17 10:31:00 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22851 (glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Mon Jun 17 10:31:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 10:31:46 +0000 Subject: [Bugs] [Bug 1718848] False positive logging of mount failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718848 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-17 10:31:46 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22860 (glusterd: log error message only when rsp.op_ret is negative) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Jun 15 03:57:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 15 Jun 2019 03:57:29 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #698 from Worker Ant --- REVIEW: https://review.gluster.org/22625 (core: improve timer accuracy) merged (#6) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 10:37:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 10:37:32 +0000 Subject: [Bugs] [Bug 1365738] Disperse volume, the deleted file show abnormality in the trashcan, and can't be deleted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1365738 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-06-17 10:37:32 --- Comment #2 from Amar Tumballi --- This is not seen in the latest releases. Please upgrade to glusterfs-6.x and let us know how it goes for you. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 10:43:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 10:43:11 +0000 Subject: [Bugs] [Bug 1493656] Storage hiccup (inaccessible a short while) when a single brick go down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1493656 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #16 from Amar Tumballi --- > Does it resume after 30s? Can you attach glusterfs client logs after it resumed? It has been a year since the last question. We recommend upgrading glusterfs to 6.x, testing the behavior, and reporting back here. If there are no further updates in the next month, we are inclined to close the issue as WORKSFORME/WONTFIX. -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Mon Jun 17 10:52:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 10:52:22 +0000 Subject: [Bugs] [Bug 1534453] Reading over than the file size on dispersed volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1534453 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |aspandey at redhat.com, | |atumball at redhat.com, | |jahernan at redhat.com, | |pgurusid at redhat.com, | |pkarampu at redhat.com, | |skoduri at redhat.com Assignee|bugs at gluster.org |aspandey at redhat.com Flags| |needinfo?(aspandey at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 10:56:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 10:56:57 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22879 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 10:56:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 10:56:58 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #699 from Worker Ant --- REVIEW: https://review.gluster.org/22879 (core: fedora 30 compiler warnings) posted (#1) for review on master by Sheetal Pamecha -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 10:58:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 10:58:15 +0000 Subject: [Bugs] [Bug 1534453] Reading over than the file size on dispersed volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1534453 --- Comment #5 from Amar Tumballi --- Hi Jenny, thanks for the reports. We have not seen this in the latest glusterfs codebase (glusterfs-6.x). We will update you after testing it on the glusterfs-7.0 base. Also, considering this should be easy to automate with our regression tests, we will consider automating it. It would be great if you could help by contributing this test to the codebase (https://github.com/gluster/glusterfs/tree/master/tests). -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 17 11:02:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:02:55 +0000 Subject: [Bugs] [Bug 1539680] RDMA transport bricks crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1539680 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-06-17 11:02:55 --- Comment #3 from Amar Tumballi --- Jiri, Apologies for the delay. Thanks for the report, but we are not able to look into the RDMA section actively, and are seriously considering dropping it from active support.
More on this @ https://lists.gluster.org/pipermail/gluster-devel/2018-July/054990.html > 'RDMA' transport support: > > Gluster started supporting RDMA while ib-verbs was still new, and very high-end infra around that time were using Infiniband. Engineers did work > with Mellanox, and got the technology into GlusterFS for better data migration, data copy. While current-day kernels support very good speed with > the IPoIB module itself, and there is no more bandwidth for experts in this area to maintain the feature, we recommend migrating over to TCP (IP > based) network for your volume. > > If you are successfully using RDMA transport, do get in touch with us to prioritize the migration plan for your volume. Plan is to work on this > after the release, so by version 6.0, we will have a cleaner transport code, which just needs to support one type. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:08:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:08:11 +0000 Subject: [Bugs] [Bug 1576190] RMAN backups to GFS is throwing (ORA-19510: failed to set size of 1531352 blocks for file and ORA-27046: file size is not a multiple of logical block size). In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1576190 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com --- Comment #2 from Amar Tumballi --- Hi Somasekhar, Is there any update from you on the questions Vijay asked in this bug? Otherwise we will consider closing this as INSUFFICIENT_DATA/WORKSFORME. Also, if possible, consider upgrading to glusterfs-6.x before validating, as we have many fixes in this area. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:10:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:10:54 +0000 Subject: [Bugs] [Bug 1593079] IO performance is slow In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593079 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-06-17 11:10:54 --- Comment #4 from Amar Tumballi --- Thanks for the report, but we are not able to look into the RDMA section actively, and are seriously considering dropping it from active support. More on this @ https://lists.gluster.org/pipermail/gluster-devel/2018-July/054990.html > 'RDMA' transport support: > > Gluster started supporting RDMA while ib-verbs was still new, and very high-end infra around that time were using Infiniband. Engineers did work > with Mellanox, and got the technology into GlusterFS for better data migration, data copy. While current-day kernels support very good speed with > the IPoIB module itself, and there is no more bandwidth for experts in this area to maintain the feature, we recommend migrating over to TCP (IP > based) network for your volume. > > If you are successfully using RDMA transport, do get in touch with us to prioritize the migration plan for your volume. Plan is to work on this > after the release, so by version 6.0, we will have a cleaner transport code, which just needs to support one type.
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:17:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:17:47 +0000 Subject: [Bugs] [Bug 1721105] New: Failed to create volume which transport_type is "tcp, rdma" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721105 Bug ID: 1721105 Summary: Failed to create volume which transport_type is "tcp,rdma" Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Keywords: Triaged Severity: high Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, guol-fnst at cn.fujitsu.com, pgurusid at redhat.com, srakonde at redhat.com Depends On: 1716812 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1716812 +++ Description of problem: gluster volume create 11 transport tcp,rdma 193.168.141.101:/tmp/11 193.168.141.101:/tmp/12 force volume create: 11: failed: Failed to create volume files Version-Release number of selected component (if applicable): # gluster --version glusterfs 4.1.8 Repository revision: git://git.gluster.org/glusterfs.git Copyright (c) 2006-2016 Red Hat, Inc. GlusterFS comes with ABSOLUTELY NO WARRANTY. It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation. # ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens192: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:50:56:9c:8b:a9 brd ff:ff:ff:ff:ff:ff inet 193.168.141.101/16 brd 193.168.255.255 scope global dynamic ens192 valid_lft 2591093sec preferred_lft 2591093sec inet6 fe80::250:56ff:fe9c:8ba9/64 scope link valid_lft forever preferred_lft forever 3: ens224: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:50:56:9c:53:58 brd ff:ff:ff:ff:ff:ff How reproducible: Steps to Reproduce: 1.rxe_cfg start 2.rxe_cfg add ens192 3.gluster volume create 11 transport tcp,rdma 193.168.141.101:/tmp/11 193.168.141.101:/tmp/12 force Actual results: volume create: 11: failed: Failed to create volume files Expected results: Success to create volume Additional info: [2019-06-04 07:36:45.966125] I [MSGID: 100030] [glusterfsd.c:2741:main] 0-glusterd: Started running glusterd version 4.1.8 (args: glusterd --xlator-option *.upgrade=on -N) [2019-06-04 07:36:45.970884] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536 [2019-06-04 07:36:45.970900] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory [2019-06-04 07:36:45.970906] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory [2019-06-04 07:36:45.973455] E [rpc-transport.c:284:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/4.1.8/rpc-transport/rdma.so: cannot open shared object file: No such file or directory [2019-06-04 07:36:45.973468] W [rpc-transport.c:288:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine [2019-06-04 
07:36:45.973473] W [rpcsvc.c:1781:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed [2019-06-04 07:36:45.973478] E [MSGID: 106244] [glusterd.c:1764:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport [2019-06-04 07:36:45.976348] I [MSGID: 106513] [glusterd-store.c:2240:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 31202 [2019-06-04 07:36:45.977372] I [MSGID: 106544] [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: 79e7e129-d041-48b6-b1d0-746c55d148fc [2019-06-04 07:36:45.989706] I [MSGID: 106194] [glusterd-store.c:3850:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list. Final graph: +------------------------------------------------------------------------------+ 1: volume management 2: type mgmt/glusterd 3: option rpc-auth.auth-glusterfs on 4: option rpc-auth.auth-unix on 5: option rpc-auth.auth-null on 6: option rpc-auth-allow-insecure on 7: option transport.listen-backlog 10 8: option upgrade on 9: option event-threads 1 10: option ping-timeout 0 11: option transport.socket.read-fail-log off 12: option transport.socket.keepalive-interval 2 13: option transport.socket.keepalive-time 10 14: option transport-type rdma 15: option working-directory /var/lib/glusterd 16: end-volume 17: +------------------------------------------------------------------------------+ [2019-06-04 07:36:46.005401] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-06-04 07:36:46.006879] W [glusterfsd.c:1514:cleanup_and_exit] (-->/usr/lib64/libpthread.so.0(+0x7dd5) [0x7f55547bbdd5] -->glusterd(glusterfs_sigwaiter+0xe5) [0x55c659e7dd65] -->glusterd(cleanup_and_exit+0x6b) [0x55c659e7db8b] ) 0-: received signum (15), shutting down [2019-06-04 07:36:46.006997] E [rpcsvc.c:1536:rpcsvc_program_unregister_portmap] 0-rpc-service: Could not unregister with portmap [2019-06-04 07:36:46.007004] E [rpcsvc.c:1662:rpcsvc_program_unregister] 0-rpc-service: portmap unregistration of program failed [2019-06-04 07:36:46.007008] E [rpcsvc.c:1708:rpcsvc_program_unregister] 0-rpc-service: Program unregistration failed: GlusterD svc cli, Num: 1238463, Ver: 2, Port: 0 [2019-06-04 07:36:46.007061] E [rpcsvc.c:1536:rpcsvc_program_unregister_portmap] 0-rpc-service: Could not unregister with portmap [2019-06-04 07:36:46.007066] E [rpcsvc.c:1662:rpcsvc_program_unregister] 0-rpc-service: portmap unregistration of program failed [2019-06-04 07:36:46.007070] E [rpcsvc.c:1708:rpcsvc_program_unregister] 0-rpc-service: Program unregistration failed: Gluster Handshake, Num: 14398633, Ver: 2, Port: 0 [2019-06-04 07:37:18.784525] I [MSGID: 100030] [glusterfsd.c:2741:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 4.1.8 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO) [2019-06-04 07:37:18.787926] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536 [2019-06-04 07:37:18.787944] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory [2019-06-04 07:37:18.787950] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory [2019-06-04 07:37:18.814752] W [MSGID: 103071] [rdma.c:4629:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device] [2019-06-04 07:37:18.814780] W [MSGID: 103055] [rdma.c:4938:init] 0-rdma.management: Failed to 
initialize IB Device [2019-06-04 07:37:18.814786] W [rpc-transport.c:351:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed [2019-06-04 07:37:18.814844] W [rpcsvc.c:1781:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed [2019-06-04 07:37:18.814852] E [MSGID: 106244] [glusterd.c:1764:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport [2019-06-04 07:37:19.617049] I [MSGID: 106513] [glusterd-store.c:2240:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 31202 [2019-06-04 07:37:19.617342] I [MSGID: 106544] [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: 79e7e129-d041-48b6-b1d0-746c55d148fc [2019-06-04 07:37:19.626546] I [MSGID: 106194] [glusterd-store.c:3850:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list. Final graph: +------------------------------------------------------------------------------+ 1: volume management 2: type mgmt/glusterd 3: option rpc-auth.auth-glusterfs on 4: option rpc-auth.auth-unix on 5: option rpc-auth.auth-null on 6: option rpc-auth-allow-insecure on 7: option transport.listen-backlog 10 8: option event-threads 1 9: option ping-timeout 0 10: option transport.socket.read-fail-log off 11: option transport.socket.keepalive-interval 2 12: option transport.socket.keepalive-time 10 13: option transport-type rdma 14: option working-directory /var/lib/glusterd 15: end-volume 16: +------------------------------------------------------------------------------+ [2019-06-04 07:37:19.626791] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-06-04 07:37:20.874611] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.8/xlator/nfs/server.so: cannot open shared object file: No such file or directory [2019-06-04 07:37:20.889571] E [MSGID: 106068] [glusterd-volgen.c:1034:volgen_write_volfile] 0-management: failed to create volfile [2019-06-04 07:37:20.889588] E [glusterd-volgen.c:6727:glusterd_create_volfiles] 0-management: Could not generate gfproxy client volfiles [2019-06-04 07:37:20.889601] E [MSGID: 106122] [glusterd-syncop.c:1482:gd_commit_op_phase] 0-management: Commit of operation 'Volume Create' failed on localhost : Failed to create volume files [2019-06-04 07:38:49.194175] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.8/xlator/nfs/server.so: cannot open shared object file: No such file or directory [2019-06-04 07:38:49.211380] E [MSGID: 106068] [glusterd-volgen.c:1034:volgen_write_volfile] 0-management: failed to create volfile [2019-06-04 07:38:49.211407] E [glusterd-volgen.c:6727:glusterd_create_volfiles] 0-management: Could not generate gfproxy client volfiles [2019-06-04 07:38:49.211433] E [MSGID: 106122] [glusterd-syncop.c:1482:gd_commit_op_phase] 0-management: Commit of operation 'Volume Create' failed on localhost : Failed to create volume files --- Additional comment from guolei on 2019-06-04 07:58:44 UTC --- The test is OK on glusterfs 3.12.9; it fails on glusterfs 3.13.2 and later versions.
generate_client_volfiles (glusterd_volinfo_t *volinfo,
                          glusterd_client_type_t client_type)
{
        int               i = 0;
        int               ret = -1;
        char              filepath[PATH_MAX] = {0,};
        char             *types[] = {NULL, NULL, NULL};
        dict_t           *dict = NULL;
        xlator_t         *this = NULL;
        gf_transport_type type = GF_TRANSPORT_TCP;

        this = THIS;

        enumerate_transport_reqs (volinfo->transport_type, types);
        dict = dict_new ();
        if (!dict)
                goto out;
        for (i = 0; types[i]; i++) {
                memset (filepath, 0, sizeof (filepath));
                ret = dict_set_str (dict, "client-transport-type", types[i]);
                if (ret)
                        goto out;
                type = transport_str_to_type (types[i]);

                ret = dict_set_uint32 (dict, "trusted-client", client_type);
                if (ret)
                        goto out;

                if (client_type == GF_CLIENT_TRUSTED) {
                        ret = glusterd_get_trusted_client_filepath (filepath,
                                                                    volinfo,
                                                                    type);
                } else if (client_type == GF_CLIENT_TRUSTED_PROXY) {
                        glusterd_get_gfproxy_client_volfile (volinfo,
                                                             filepath,
                                                             PATH_MAX);
                        <---------------------------- Maybe this is the
                        problem? The transport type should be passed to
                        glusterd_get_gfproxy_client_volfile(). Or filepath is
                        left empty.
                        ret = dict_set_str (dict, "gfproxy-client", "on");
                } else {
                        ret = glusterd_get_client_filepath (filepath,
                                                            volinfo, type);
                }
                if (ret) {
                        gf_msg (this->name, GF_LOG_ERROR, EINVAL,
                                GD_MSG_INVALID_ENTRY,
                                "Received invalid transport-type");
                        goto out;
                }

                *ret = generate_single_transport_client_volfile (volinfo,
                                                                 filepath,
                                                                 dict);*
                if (ret)
                        goto out;
        }

        /* Generate volfile for rebalance process */
        glusterd_get_rebalance_volfile (volinfo, filepath, PATH_MAX);
        ret = build_rebalance_volfile (volinfo, filepath, dict);
        if (ret) {
                gf_msg (this->name, GF_LOG_ERROR, 0,
                        GD_MSG_VOLFILE_CREATE_FAIL,
                        "Failed to create rebalance volfile for %s",
                        volinfo->volname);
                goto out;
        }

out:
        if (dict)
                dict_unref (dict);

        gf_msg_trace ("glusterd", 0, "Returning %d", ret);
        return ret;
}

void
glusterd_get_gfproxy_client_volfile (glusterd_volinfo_t *volinfo,
                                     char *path, int path_len)
{
        char             workdir[PATH_MAX] = {0, };
        glusterd_conf_t *priv = THIS->private;

        GLUSTERD_GET_VOLUME_DIR (workdir, volinfo, priv);

        switch (volinfo->transport_type) {
        case GF_TRANSPORT_TCP:
                snprintf (path, path_len,
                          "%s/trusted-%s.tcp-gfproxy-fuse.vol",
                          workdir, volinfo->volname);
                break;
        case GF_TRANSPORT_RDMA:
                snprintf (path, path_len,
                          "%s/trusted-%s.rdma-gfproxy-fuse.vol",
                          workdir, volinfo->volname);
                break;
        default:
                break;
        }
}

--- Additional comment from Atin Mukherjee on 2019-06-10 12:18:36 UTC ---

This happens since the type GF_TRANSPORT_BOTH_TCP_RDMA isn't handled in the function. Poornima - was this done intentionally, or is it a bug? I feel it's the latter. Looking at glusterd_get_dummy_client_filepath() we just need to club GF_TRANSPORT_TCP & GF_TRANSPORT_BOTH_TCP_RDMA in the same place. Please confirm.

--- Additional comment from Sanju on 2019-06-10 17:17:57 UTC ---

Looking at the code, I feel we missed handling GF_TRANSPORT_BOTH_TCP_RDMA. As we have provided the choice to create a volume using tcp,rdma, we should handle GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile(). This issue exists in the latest master too.
Thanks, Sanju --- Additional comment from Worker Ant on 2019-06-11 04:25:25 UTC --- REVIEW: https://review.gluster.org/22851 (glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile) posted (#1) for review on master by Atin Mukherjee --- Additional comment from Worker Ant on 2019-06-17 10:31:00 UTC --- REVIEW: https://review.gluster.org/22851 (glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile) merged (#5) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 [Bug 1716812] Failed to create volume which transport_type is "tcp,rdma" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:17:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:17:47 +0000 Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1721105 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721105 [Bug 1721105] Failed to create volume which transport_type is "tcp,rdma" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:19:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:19:11 +0000 Subject: [Bugs] [Bug 1721105] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721105 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22880 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:19:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:19:12 +0000 Subject: [Bugs] [Bug 1721105] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721105 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22880 (glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile) posted (#1) for review on release-6 by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
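For reference, a minimal sketch of the approach discussed in the comments above: club GF_TRANSPORT_BOTH_TCP_RDMA together with GF_TRANSPORT_TCP in glusterd_get_gfproxy_client_volfile(), the same way glusterd_get_dummy_client_filepath() treats the two. This only illustrates the idea; the change actually merged as review 22851 may differ in its details.

void
glusterd_get_gfproxy_client_volfile (glusterd_volinfo_t *volinfo,
                                     char *path, int path_len)
{
        char             workdir[PATH_MAX] = {0, };
        glusterd_conf_t *priv = THIS->private;

        GLUSTERD_GET_VOLUME_DIR (workdir, volinfo, priv);

        switch (volinfo->transport_type) {
        /* A volume created with "transport tcp,rdma" now falls through
         * to the tcp volfile name instead of leaving 'path' empty, so
         * generate_single_transport_client_volfile() no longer fails
         * with an empty filepath. */
        case GF_TRANSPORT_TCP:
        case GF_TRANSPORT_BOTH_TCP_RDMA:
                snprintf (path, path_len,
                          "%s/trusted-%s.tcp-gfproxy-fuse.vol",
                          workdir, volinfo->volname);
                break;
        case GF_TRANSPORT_RDMA:
                snprintf (path, path_len,
                          "%s/trusted-%s.rdma-gfproxy-fuse.vol",
                          workdir, volinfo->volname);
                break;
        default:
                break;
        }
}

With a change along these lines, `gluster volume create 11 transport tcp,rdma ... force` gets a valid gfproxy client volfile path for the tcp transport, which is exactly what the "Could not generate gfproxy client volfiles" errors in the logs above were tripping over.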
From bugzilla at redhat.com Mon Jun 17 11:19:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:19:25 +0000 Subject: [Bugs] [Bug 1721106] New: Failed to create volume which transport_type is "tcp, rdma" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721106 Bug ID: 1721106 Summary: Failed to create volume which transport_type is "tcp,rdma" Product: GlusterFS Version: 5 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Keywords: Triaged Severity: high Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, guol-fnst at cn.fujitsu.com, pgurusid at redhat.com, srakonde at redhat.com Depends On: 1716812 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1716812; the description, logs, and comments are identical to those quoted under Bug 1721105 above. +++
Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 [Bug 1716812] Failed to create volume which transport_type is "tcp,rdma" https://bugzilla.redhat.com/show_bug.cgi?id=1721105 [Bug 1721105] Failed to create volume which transport_type is "tcp,rdma" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 11:19:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:19:25 +0000 Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1721106 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721106 [Bug 1721106] Failed to create volume which transport_type is "tcp,rdma" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 11:19:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:19:25 +0000 Subject: [Bugs] [Bug 1721105] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721105 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1721106 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721106 [Bug 1721106] Failed to create volume which transport_type is "tcp,rdma" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 11:24:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:24:15 +0000 Subject: [Bugs] [Bug 1648169] Fuse mount would crash if features.encryption is on in the version from 3.13.0 to 4.1.5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648169 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #4 from Amar Tumballi --- We have dropped support for the encryption translator from the codebase as of the glusterfs-5.x releases, as it was not maintained. Please do a `gluster volume reset test_vol features.encrypt` and then upgrade. -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Mon Jun 17 11:24:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:24:35 +0000 Subject: [Bugs] [Bug 1648169] Fuse mount would crash if features.encryption is on in the version from 3.13.0 to 4.1.5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648169 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Assignee|vbellur at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 11:25:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:25:00 +0000 Subject: [Bugs] [Bug 1721109] New: Failed to create volume which transport_type is "tcp, rdma" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721109 Bug ID: 1721109 Summary: Failed to create volume which transport_type is "tcp,rdma" Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Keywords: Triaged Severity: high Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, guol-fnst at cn.fujitsu.com, pgurusid at redhat.com, srakonde at redhat.com Depends On: 1716812 Blocks: 1721105, 1721106 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1716812; the description, logs, and comments are identical to those quoted under Bug 1721105 above. +++ Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 [Bug 1716812] Failed to create volume which transport_type is "tcp,rdma" https://bugzilla.redhat.com/show_bug.cgi?id=1721105 [Bug 1721105] Failed to create volume which transport_type is "tcp,rdma" https://bugzilla.redhat.com/show_bug.cgi?id=1721106 [Bug 1721106] Failed to create volume which transport_type is "tcp,rdma" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Mon Jun 17 11:25:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:25:00 +0000 Subject: [Bugs] [Bug 1716812] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716812 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1721109 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721109 [Bug 1721109] Failed to create volume which transport_type is "tcp,rdma" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:25:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:25:00 +0000 Subject: [Bugs] [Bug 1721105] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721105 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1721109 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721109 [Bug 1721109] Failed to create volume which transport_type is "tcp,rdma" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:25:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:25:00 +0000 Subject: [Bugs] [Bug 1721106] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721106 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1721109 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721109 [Bug 1721109] Failed to create volume which transport_type is "tcp,rdma" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:25:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:25:44 +0000 Subject: [Bugs] [Bug 1648169] Fuse mount would crash if features.encryption is on in the version from 3.13.0 to 4.1.5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648169 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22882 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 11:25:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:25:45 +0000 Subject: [Bugs] [Bug 1648169] Fuse mount would crash if features.encryption is on in the version from 3.13.0 to 4.1.5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648169 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22882 (encryption/crypt: remove from volume file) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 17 11:26:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:26:36 +0000 Subject: [Bugs] [Bug 1655901] glusterfsd 5.1 and 5.2 crashes in socket.so In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655901 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |urgent Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x, glusterfs-5.5 Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-17 11:26:36 --- Comment #5 from Amar Tumballi --- Fixed with https://review.gluster.org/#/q/I911b0e0b2060f7f41ded0b05db11af6f9b7c09c5 (in glusterfs-5.4 and beyond, and glusterfs-6.1 and beyond). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 11:28:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:28:56 +0000 Subject: [Bugs] [Bug 1657202] Possible memory leak in 5.1 brick process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657202 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x, glusterfs-5.5 Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-17 11:28:56 --- Comment #2 from Amar Tumballi --- robdewit, apologies for the delay in getting back on this. Yes, there were some serious memory leaks which got fixed in the glusterfs-5.5 timeframe and around glusterfs-6.1. We recommend you upgrade to and test a newer version to get the fixes. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 11:30:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:30:56 +0000 Subject: [Bugs] [Bug 1662557] glusterfs process crashes, causing "Transport endpoint not connected". In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662557 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(rob.dewit at coosto.com) --- Comment #9 from Amar Tumballi --- Hi robdewit, We did fix some of the crashes by glusterfs-5.5 (from 5.3 -> 5.5); please upgrade and let us know. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 11:31:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:31:21 +0000 Subject: [Bugs] [Bug 1385249] /etc/sysconfig is redhat specific and does not exist in debian or arch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1385249 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(kkeithle at redhat.com) | --- Comment #2 from Kaleb KEITHLEY --- (In reply to Amar Tumballi from comment #1) > I see that sysconfig dir is now used only in glusterfs.spec file. Should > this issue be closed? It's still wrong in the glusterd.service(.in) file, so probably not. The glusterd.service(.in) on debian/ubuntu/arch is modified during .deb package building.
(E.g. line 31 of https://github.com/gluster/glusterfs-debian/blob/bionic-glusterfs-6/debian/rules) That's fine for people who install from .deb packages. But anyone who just builds and installs from source will have a broken glusterd.service file. It would be better to really fix it (in autoconf/configure); then the package-build edit step can be removed. -- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Mon Jun 17 11:37:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:37:19 +0000 Subject: [Bugs] [Bug 1662557] glusterfs process crashes, causing "Transport endpoint not connected". In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662557 robdewit changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Flags|needinfo?(rob.dewit at coosto.com) | Last Closed| |2019-06-17 11:37:19 --- Comment #10 from robdewit --- Hi Amar, We've been running the 6.1 release for some time now and there have been no crashes since then. Thanks! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 11:39:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:39:35 +0000 Subject: [Bugs] [Bug 1657202] Possible memory leak in 5.1 brick process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657202 --- Comment #3 from robdewit --- Hi Amar, We've been running the 6.1 release for some time now and the memory consumption is back at the previous level. Close to 1GB, but not more than that. Thanks! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 11:40:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:40:35 +0000 Subject: [Bugs] [Bug 1720993] tests/features/subdir-mount.t is failing for brick_mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720993 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-17 11:40:35 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22877 (tests: subdir-mount.t is failing for brick_mux regression) merged (#2) on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Mon Jun 17 11:48:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:48:51 +0000 Subject: [Bugs] [Bug 1721106] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721106 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22881 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Mon Jun 17 11:48:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 11:48:53 +0000 Subject: [Bugs] [Bug 1721106] Failed to create volume which transport_type is "tcp, rdma" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721106 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22881 (glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile) posted (#2) for review on release-5 by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 12:05:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 12:05:56 +0000 Subject: [Bugs] [Bug 1672076] chrome / chromium crash on gluster, sqlite issue? In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672076 --- Comment #2 from Michael J. Chudobiak --- It is no longer an issue on Fedora 30, with: chromium-73.0.3683.86-2.fc30.x86_64 glusterfs-6.2-1.fc30.x86_64 It works fine now. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 12:15:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 12:15:13 +0000 Subject: [Bugs] [Bug 1686461] Quotad.log filled with 0-dict is not sent on wire [Invalid argument] messages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686461 --- Comment #3 from ryan at magenta.tv --- Hi Amar, Thanks! Do you know when/if these will be backported into the 4.1 branch? Best, Ryan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.

From bugzilla at redhat.com Mon Jun 17 12:19:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 12:19:37 +0000 Subject: [Bugs] [Bug 1716760] Make debugging hung frames easier In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716760 Vivek Das changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |vdas at redhat.com Blocks| |1696809 -- You are receiving this mail because: You are on the CC list for the bug.

From bugzilla at redhat.com Mon Jun 17 12:19:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 12:19:40 +0000 Subject: [Bugs] [Bug 1716760] Make debugging hung frames easier In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716760 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: Auto pm_ack for dev&qe approved in-flight RHGS3.5 BZs Rule Engine Rule| |665 Target Release|--- |RHGS 3.5.0 Rule Engine Rule| |666 Rule Engine Rule| |327 -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Mon Jun 17 12:29:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 12:29:59 +0000 Subject: [Bugs] [Bug 1719290] Glusterfs mount helper script not working with IPv6 because of regular expression or man is wrong In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719290 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |srakonde at redhat.com Version|5 |mainline -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 12:41:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 12:41:22 +0000 Subject: [Bugs] [Bug 1719290] Glusterfs mount helper script not working with IPv6 because of regular expression or man is wrong In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719290 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22884 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 12:41:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 12:41:23 +0000 Subject: [Bugs] [Bug 1719290] Glusterfs mount helper script not working with IPv6 because of regular expression or man is wrong In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1719290 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22884 (core: use multiple servers while mounting a volume using ipv6) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 13:00:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 13:00:51 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22885 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 17 13:00:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 13:00:52 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #700 from Worker Ant --- REVIEW: https://review.gluster.org/22885 (core: fedora 29 compiler warnings) posted (#1) for review on master by Sheetal Pamecha -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
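On the IPv6 mount-helper regression tracked above (bug 1719290): mount.glusterfs has to split "host:/volume" without tripping over the colons inside an IPv6 address, which is why bracketed literals matter. A minimal parsing sketch, not the actual script:

    #!/bin/sh
    # Split a volfile spec into server and volume, tolerating [v6] literals.
    spec="[2001:db8::1]:/gv0"   # example input; placeholder address
    case "$spec" in
        \[*\]:*)
            server=${spec%%]*}; server=${server#\[}
            volume=${spec#*]:}
            ;;
        *:*)
            server=${spec%%:*}
            volume=${spec#*:}
            ;;
    esac
    echo "server=$server volume=$volume"   # server=2001:db8::1 volume=/gv0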
From bugzilla at redhat.com Mon Jun 17 14:49:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 17 Jun 2019 14:49:11 +0000 Subject: [Bugs] [Bug 1633318] health check fails on restart from crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1633318 Joe Julian changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-06-17 14:49:11 --- Comment #2 from Joe Julian --- I'll just close this. I filed this 10 months ago and have turned off health checking and upgraded several times since then. I am quite sure that no more than one brick instance was running at the time. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 02:49:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 02:49:46 +0000 Subject: [Bugs] [Bug 1716097] infra: create suse-packing@lists.nfs-ganesha.org alias In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716097 --- Comment #3 from Marc Dequènes (Duck) --- Is this ok? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 03:55:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 03:55:36 +0000 Subject: [Bugs] [Bug 1686461] Quotad.log filled with 0-dict is not sent on wire [Invalid argument] messages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686461 --- Comment #4 from Amar Tumballi --- https://review.gluster.org/#/q/I4157a7ec7d5ec9c2948b2bbc1e4cb8317f28d6b8 is the patch I am talking about. We had not ported the changes to 4.1 branch then. Considering we are getting good feedback about 6.x release (both performance, and stability), I would request you to consider an upgrade if that is possible. Because 4.1 would go out of active support in a month after glusterfs-7.0 release. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 03:59:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 03:59:38 +0000 Subject: [Bugs] [Bug 1710744] [FUSE] Endpoint is not connected after "Found anomalies" error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710744 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #4 from Amar Tumballi --- 2 requests: 1. By upgrading to glusterfs-6.x version you won't even hit the 'dht' (or distribute) code in this scenario. So, the log should go away, and if the reason for such log is the cause of crash, that should get avoided too. 2. please send 'thread apply all bt full' from the coredump. That should help us to see what caused the problem. (You can send the bt to me directly). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
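For anyone asked above for 'thread apply all bt full': the command runs inside gdb against the core dump. A typical session, with placeholder binary and core paths:

    # gdb /usr/sbin/glusterfs /var/core/core.12345
    (gdb) thread apply all bt full
    (gdb) quit

To capture the output straight to a file for attaching to the bug: gdb -batch -ex 'thread apply all bt full' /usr/sbin/glusterfs /var/core/core.12345 > bt.txt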
From bugzilla at redhat.com Tue Jun 18 04:04:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 04:04:02 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 Rochelle changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |rallan at redhat.com Flags| |needinfo?(khiremat at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 04:05:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 04:05:48 +0000 Subject: [Bugs] [Bug 1716875] Inode Unref Assertion failed: inode->ref In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716875 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |anoopcs at redhat.com, | |atumball at redhat.com, | |gdeschner at redhat.com, | |pgurusid at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 04:21:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 04:21:08 +0000 Subject: [Bugs] [Bug 1494654] Failure to compile glusterfs with glibc 2.25, exempt sys/sysmacro.h from pragma poisoning. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1494654 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22886 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 04:21:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 04:21:09 +0000 Subject: [Bugs] [Bug 1494654] Failure to compile glusterfs with glibc 2.25, exempt sys/sysmacro.h from pragma poisoning. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1494654 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22886 (Compat: fix an Pragma poisoning error) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 04:49:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 04:49:37 +0000 Subject: [Bugs] [Bug 1721353] New: Run 'line-coverage' regression runs on a latest fedora machine (say fedora30). Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721353 Bug ID: 1721353 Summary: Run 'line-coverage' regression runs on a latest fedora machine (say fedora30). Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Severity: high Priority: high Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: I suspect the code coverage tool with centos7 is not covering all the details. 
Check https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/libglusterfs/src/glusterfs/stack.h.gcov.html for example, and you can see that it says, 17/41 functions are covered. But if you notice there are only 17 inline functions, and all of them are actually covered. If it reported it properly, we should have had 100% coverage there. With that detail, I hope having newer version would get this sorted. Also note, we recently fixed all the issues with python3 in regression runs too, so moving to fedora should help us identify issues sooner with python3 (if any). Version-Release number of selected component (if applicable): master How reproducible: 100% Expected results: Nightly line-coverage runs to run on fedora systems. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 05:02:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 05:02:05 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(khiremat at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 05:14:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 05:14:04 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 Rochelle changed: What |Removed |Added ---------------------------------------------------------------------------- QA Contact|rhinduja at redhat.com |rallan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 05:15:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 05:15:34 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 Rochelle changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ON_QA |VERIFIED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 05:17:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 05:17:15 +0000 Subject: [Bugs] [Bug 1655333] OSError: [Errno 116] Stale file handle due to rotated files In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655333 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(sunkumar at redhat.c |needinfo?(mrxlazuardin at gmai |om) |l.com) --- Comment #3 from Sunny Kumar --- I think this should solve this issue: https://bugzilla.redhat.com/show_bug.cgi?id=1694820. Patch is merged upstream. Can you please verify and let us know whether this solved this problem. -Sunny -- You are receiving this mail because: You are on the CC list for the bug. 
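On the line-coverage job above (bug 1721353): the report is produced by lcov/genhtml, and both gcov and lcov differ between CentOS 7 and recent Fedora, which is consistent with the differing function counts seen for inline-heavy headers such as stack.h. The underlying commands a job like lcov.sh wraps are the stock ones -- run from a build tree after the regression tests have produced .gcda files; output names are arbitrary:

    # lcov --capture --directory . --output-file gluster.info
    # genhtml gluster.info --output-directory coverage-html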
From bugzilla at redhat.com Tue Jun 18 05:33:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 05:33:25 +0000 Subject: [Bugs] [Bug 1711400] Dispersed volumes leave open file descriptors on nodes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711400 --- Comment #1 from Ashish Pandey --- Hi, I tried to reproduce this issue on latest master but could not see any issue with rising number of open fd's. There was nothing suspicious on my setup. Could you please try to reproduce it with latest release of gluster and let us know if you can see the issue? --- Ashish -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 06:23:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 06:23:14 +0000 Subject: [Bugs] [Bug 1721353] Run 'line-coverage' regression runs on a latest fedora machine (say fedora30). In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721353 --- Comment #1 from Amar Tumballi --- Ok, when I used the lcov tool with the same commands as that of lcov.sh from build-jobs repo, I got below numbers for stack.h (which I used as an example above). on current builder : stack.h - lines(262/497 - 52.7%), functions(17/41 - 41.5%) on fedora29 (local): stack.h - lines(94/111 - 84.7%), functions(6/7 - 85.7%) I hope just by running the regression on fedora, we would get more up-to-date information, and more coverage details. Just note that I suspect this to be more of an header file specific details, and even then, up-to-date information is better than stale info. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 06:47:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 06:47:48 +0000 Subject: [Bugs] [Bug 1615307] Error disabling sockopt IPV6_V6ONLY In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1615307 Pavel Znamensky changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(kompastver at gmail. | |com) | --- Comment #2 from Pavel Znamensky --- Looks like issue still persists in v5.5: in cli.log and heal.log: /var/log/glusterfs/cli.log:[2019-06-16 00:29:26.797461] W [socket.c:3367:socket_connect] 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Operation not supported" /var/log/glusterfs/glfsheal-st.log:[2019-06-16 00:31:53.930463] W [socket.c:3367:socket_connect] 0-gfapi: Error disabling sockopt IPV6_V6ONLY: "Operation not supported" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 06:53:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 06:53:10 +0000 Subject: [Bugs] [Bug 1560564] autoreconf aborts In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1560564 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com, | |kkeithle at redhat.com, | |ndevos at redhat.com, | |sacharya at redhat.com Assignee|vbellur at redhat.com |spamecha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
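On the IPV6_V6ONLY warnings quoted above (bug 1615307): IPV6_V6ONLY is an IPv6-level socket option, so attempting to clear it on a plain IPv4 socket fails, and a client that tries it on every socket will log the warning on each IPv4 connect. A minimal stand-alone reproduction of that pattern -- not the glusterfs socket.c code, and the exact errno text varies by platform:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0); /* IPv4, not IPv6 */
        int off = 0;
        /* IPV6_V6ONLY only applies to AF_INET6 sockets, so this fails */
        if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off)) < 0)
            fprintf(stderr, "Error disabling sockopt IPV6_V6ONLY: \"%s\"\n",
                    strerror(errno));
        close(s);
        return 0;
    }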
From bugzilla at redhat.com Tue Jun 18 07:26:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:26:23 +0000 Subject: [Bugs] [Bug 1566352] posix-acl not synchronized between glusterfs clients In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1566352 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-18 07:26:23 --- Comment #3 from Amar Tumballi --- https://review.gluster.org/#/c/glusterfs/+/19867/ fixes the part which caused this issue. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 07:31:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:31:30 +0000 Subject: [Bugs] [Bug 1721353] Run 'line-coverage' regression runs on a latest fedora machine (say fedora30). In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721353 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |khiremat at redhat.com --- Comment #2 from Kotresh HR --- I have also witnessed the small difference when I was working to improve lcov of glusterd-georep.c. On fedora 30, I used to see 70.1 % but on centos 69.9 %. I didn't spend time debugging that though. Didn't expect it was platform dependent. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 07:36:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:36:49 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 Avihai changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ybenshim at redhat.com Flags| |needinfo?(ybenshim at redhat.c | |om) --- Comment #15 from Avihai --- Yossi, please upgrade our(Raanana site) gluster to latest upstream (V6) and see if this issue reproduces. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 07:44:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:44:29 +0000 Subject: [Bugs] [Bug 1721385] New: glusterfs-libs: usage of inet_addr() may impact IPv6 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721385 Bug ID: 1721385 Summary: glusterfs-libs: usage of inet_addr() may impact IPv6 Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: rkothiya at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, mchangir at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, rkothiya at redhat.com, sankarshan at redhat.com, sasundar at redhat.com, storage-qa-internal at redhat.com Depends On: 1698435 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698435 [Bug 1698435] glusterfs-libs: usage of inet_addr() may impact IPv6 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 18 07:46:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:46:02 +0000 Subject: [Bugs] [Bug 1721385] glusterfs-libs: usage of inet_addr() may impact IPv6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721385 --- Comment #1 from Rinku --- Description of problem: usr/lib64/libglusterfs.so.0.0.1 on x86_64 uses function inet_addr, which may impact IPv6 support -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 07:49:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:49:25 +0000 Subject: [Bugs] [Bug 1672076] chrome / chromium crash on gluster, sqlite issue? In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672076 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Fixed In Version| |glusterfs-6.2 Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-18 07:49:25 --- Comment #3 from Amar Tumballi --- Thanks for the update Michael. Very helpful. Will close the bug with CURRENTRELEASE then. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 07:54:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:54:20 +0000 Subject: [Bugs] [Bug 1721385] glusterfs-libs: usage of inet_addr() may impact IPv6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721385 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22866 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 07:54:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:54:21 +0000 Subject: [Bugs] [Bug 1721385] glusterfs-libs: usage of inet_addr() may impact IPv6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721385 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22866 (core: replace inet_addr with inet_pton) posted (#2) for review on master by Rinku Kothiya -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 07:58:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:58:00 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22887 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
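On the inet_addr concern above (bug 1721385): inet_addr() parses only IPv4 dotted-quad strings and overloads its error value (INADDR_NONE is also the valid address 255.255.255.255), while inet_pton() handles both address families with an unambiguous return code, which is what the posted patch title describes. A small demonstration:

    #include <stdio.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct in_addr v4;
        struct in6_addr v6;

        /* inet_pton returns 1 on success, 0 on unparsable input */
        printf("IPv4 '192.0.2.1'   -> %d\n",
               inet_pton(AF_INET, "192.0.2.1", &v4));
        printf("IPv6 '2001:db8::1' -> %d\n",
               inet_pton(AF_INET6, "2001:db8::1", &v6));
        /* inet_addr cannot parse IPv6 at all */
        printf("inet_addr on IPv6 -> %u (INADDR_NONE, i.e. failure)\n",
               (unsigned)inet_addr("2001:db8::1"));
        return 0;
    }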
From bugzilla at redhat.com Tue Jun 18 07:58:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 07:58:01 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #57 from Worker Ant --- REVIEW: https://review.gluster.org/22887 (lcov: add more tests to glfsxmp-coverage) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 08:39:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 08:39:18 +0000 Subject: [Bugs] [Bug 1597798] 'mv' of directory on encrypted volume fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1597798 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |CANTFIX Assignee|vbellur at redhat.com |bugs at gluster.org Last Closed| |2019-06-18 08:39:18 --- Comment #3 from Amar Tumballi --- Hi Chris, With glusterfs-6.0, we have removed encryption feature of glusterfs, and hence this bug can't be worked on further. Hence we will be closing the bug with CANTFIX/WONTFIX. Please note that you can encrypt the protocol layer with tls, but volume encryption is not supported any more, and we recommend one to secure at rest data using features like dmcrypt etc. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 08:43:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 08:43:21 +0000 Subject: [Bugs] [Bug 1603576] glusterfs dying with SIGSEGV In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1603576 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com --- Comment #3 from Amar Tumballi --- Glenn, Carlos, apologies for delay in getting to this. Can you upgrade to glusterfs-6.2 and above? And see if the issue is still happening? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 08:49:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 08:49:01 +0000 Subject: [Bugs] [Bug 1622814] kvm lock problem In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622814 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(dm at belkam.com) --- Comment #18 from Amar Tumballi --- Dmitry, Apologies for delay. Did you get a chance to upgrade to glusterfs-6.x and try if the issue is fixed? I would request you to do try upgrading and testing. We noticed many issues when ovirt community tried to use glusterfs-5.2 version, and since then fixed many issues seen with ovirt community. Would like to hear feedback. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
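On the at-rest encryption pointer in bug 1597798 above: with the volume-encryption xlator gone, the suggested dmcrypt route means encrypting the block device under each brick before making the filesystem. Generic cryptsetup usage with placeholder device and mount paths -- not a gluster-specific recipe:

    # cryptsetup luksFormat /dev/sdb1
    # cryptsetup luksOpen /dev/sdb1 brick1_crypt
    # mkfs.xfs /dev/mapper/brick1_crypt
    # mount /dev/mapper/brick1_crypt /bricks/brick1

In-flight encryption remains available separately through the volume's SSL/TLS options.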
From bugzilla at redhat.com Tue Jun 18 08:49:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 08:49:22 +0000 Subject: [Bugs] [Bug 1622814] kvm lock problem In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622814 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 09:02:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:02:28 +0000 Subject: [Bugs] [Bug 1663583] Geo-replication fails to open logfile "/var/log/glusterfs/cli.log" on slave. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663583 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-7.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-06-18 09:02:28 --- Comment #2 from Amar Tumballi --- https://review.gluster.org/22865 should fix the issue. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 09:03:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:03:13 +0000 Subject: [Bugs] [Bug 1622814] kvm lock problem In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622814 Dmitry Melekhov changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(moagrawa at redhat.c | |om) | |needinfo?(dm at belkam.com) | --- Comment #19 from Dmitry Melekhov --- Hello! We decided that the problem is in the arbiter, so we replaced the arbiter with a 3rd node and have not seen this problem since. Currently we are running 5.6. Sorry, I don't know whether this problem still exists in the current arbiter code. Thank you! I think it is better to close this issue. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 09:12:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:12:02 +0000 Subject: [Bugs] [Bug 1666634] nfs client cannot compile files on dispersed volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1666634 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Priority|unspecified |low CC| |aspandey at redhat.com, | |atumball at redhat.com Component|protocol |disperse Flags| |needinfo?(hxj_lucky at 163.com | |) --- Comment #1 from Amar Tumballi --- Is the issue reproducible with a glusterfs-6.x version? Asking because disperse volumes have received many improvements since 3.9.0. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Tue Jun 18 09:18:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:18:18 +0000 Subject: [Bugs] [Bug 1622814] kvm lock problem In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622814 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ksubrahm at redhat.com, | |pkarampu at redhat.com, | |rkavunga at redhat.com Fixed In Version| |glusterfs-5.5 (!arbiter) Resolution|--- |WORKSFORME Last Closed| |2019-06-18 09:18:18 --- Comment #20 from Amar Tumballi --- Thanks for the update Dmitry! First of all, as it is working for you, we are happy. I will close the bug for now. Will discuss with the team about if any such issues are fixed in Arbiter since 4.x time frame. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 09:23:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:23:44 +0000 Subject: [Bugs] [Bug 1689981] OSError: [Errno 1] Operation not permitted - failing with socket files? In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689981 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com, | |khiremat at redhat.com, | |sunkumar at redhat.com Assignee|bugs at gluster.org |sacharya at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 09:29:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:29:21 +0000 Subject: [Bugs] [Bug 1694637] Geo-rep: Rename to an existing file name destroys its content on slave In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694637 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-18 09:29:21 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 09:32:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:32:26 +0000 Subject: [Bugs] [Bug 1696633] GlusterFs v4.1.5 Tests from /tests/bugs/ module failing on Intel In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696633 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #3 from Amar Tumballi --- All of our regressions (per patch) happens on intel arch (x64). So, it is surprising to see something failing. Feel free to test it using latest upstream code/releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Jun 18 09:33:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:33:32 +0000 Subject: [Bugs] [Bug 1696721] geo-replication failing after upgrade from 5.5 to 6.0 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696721 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #2 from Amar Tumballi --- Hi Chad, did you get a chance to try Sunny's suggestion? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 09:38:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:38:38 +0000 Subject: [Bugs] [Bug 1716455] OS X error -50 when creating sub-folder on Samba share when using Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716455 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |anoopcs at redhat.com, | |atumball at redhat.com, | |gdeschner at redhat.com, | |pgurusid at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 09:40:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:40:53 +0000 Subject: [Bugs] [Bug 1720733] glusterfs 4.1.7 client crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720733 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(danny.lee at appian. | |com) --- Comment #1 from Amar Tumballi --- Appreciate if you can provide output of 'thread apply all bt full' from `$ gdb -c ` Also, there were many stability fixes which happened in glusterfs in glusterfs-5 and glusterfs-6 series. It would be great if you can upgrade to latest. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 09:47:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:47:56 +0000 Subject: [Bugs] [Bug 1529768] Disk size is incorrect according to df when an arbiter brick and data brick live on the same server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1529768 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WORKSFORME Last Closed| |2019-06-18 09:47:56 --- Comment #3 from Amar Tumballi --- Hi Ben, When the feature got introduced, and later when we upgraded, we found that there were multiple issues with 'shared-brick-count' option. We did fix them up by glusterfs-5.x timeframe, and haven't faced any issue ever since. Closing this as WORKSFORME to highlight that we have not seen it in newer releases. Please upgrade to glusterfs-6.x and beyond and see if things are working for you. (Feel free to reopen if not). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
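On the df-size closure above (bug 1529768): the shared-brick-count mentioned there is recorded in the brick volfiles glusterd generates, so a quick way to confirm a node is past the old bug is to look for that option there. The workdir path assumes a default install, and VOLNAME is a placeholder:

    # grep -rn shared-brick-count /var/lib/glusterd/vols/VOLNAME/

Bricks that share a filesystem should carry a consistent count; a wrong value is what skews the sizes df reports through the mount.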
From bugzilla at redhat.com Tue Jun 18 09:59:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 09:59:48 +0000 Subject: [Bugs] [Bug 1721435] New: DHT: Internal xattrs visible on the mount Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721435 Bug ID: 1721435 Summary: DHT: Internal xattrs visible on the mount Product: GlusterFS Version: mainline Status: NEW Component: distribute Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1721357 Description of problem: The new pass-thru feature which is enabled by default in master causes internal dht xattrs to be visible on the mount point. Version-Release number of selected component (if applicable): How reproducible: Consistently Steps to Reproduce: 1. Install release 3.12 and create a 1x3 volume 2. Fuse mount the volume and create some directories (dir-1 to dir-10) and files on the volume 3. Upgrade the node to the latest master and fuse mount the volume 4. Run getfattr -e hex -m . -d /mnt/fuse/dir* Actual results: # file: dir-10 security.selinux=0x73797374656d5f753a6f626a6563745f723a6675736566735f743a733000 trusted.glusterfs.dht.mds=0x00000000 user.dirtest=0x64697274657374 # file: dir-2 security.selinux=0x73797374656d5f753a6f626a6563745f723a6675736566735f743a733000 trusted.glusterfs.dht.mds=0x00000000 user.dirtest=0x64697274657374 # file: dir-3 security.selinux=0x73797374656d5f753a6f626a6563745f723a6675736566735f743a733000 trusted.glusterfs.dht.mds=0x00000000 user.dirtest=0x64697274657374 ... Expected results: trusted.glusterfs.dht.mds is an internal xattr and should not be visible on the mount point. Additional info: This was introduced by the pass-through feature. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:00:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:00:12 +0000 Subject: [Bugs] [Bug 1721435] DHT: Internal xattrs visible on the mount In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721435 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:02:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:02:12 +0000 Subject: [Bugs] [Bug 1640109] Default ACL cannot be removed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1640109 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-18 10:02:12 --- Comment #3 from Amar Tumballi --- Hi Homma, we are focusing on glusterfs-6.0 and beyond for further validation of bugs, as this release and beyond has many stability fixes. Please upgrade to glusterfs-6.x and we would be happy to help further. https://review.gluster.org/#/c/glusterfs/+/21411/ -- You are receiving this mail because: You are on the CC list for the bug. 
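On the pass-through xattr leak filed above (bug 1721435): the expected fix direction is to drop internal keys from the xattr dict before it is returned to the client. A hypothetical fragment in that spirit -- dict_del() is a real libglusterfs helper, but the surrounding context here is assumed, not the eventual patch:

    /* in a getxattr reply path, before unwinding to the mount */
    if (xattr)
        dict_del(xattr, "trusted.glusterfs.dht.mds");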
From bugzilla at redhat.com Tue Jun 18 10:02:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:02:51 +0000 Subject: [Bugs] [Bug 1599275] Default ACL cannot be removed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1599275 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-18 10:02:51 --- Comment #3 from Amar Tumballi --- https://review.gluster.org/#/c/glusterfs/+/21411/ fixes the issue. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:12:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:12:46 +0000 Subject: [Bugs] [Bug 1721441] New: geo-rep: Fix permissions for CLI_LOG in non-root setup Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721441 Bug ID: 1721441 Summary: geo-rep: Fix permissions for CLI_LOG in non-root setup Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: sunkumar at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: While setting up a non-root geo-rep session, we are unable to set appropriate permissions on the gluster log dir. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Set up mountbroker 2. Validate permissions on the gluster log dir. Expected results: Should set appropriate permissions on the gluster log dir. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:13:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:13:05 +0000 Subject: [Bugs] [Bug 1721441] geo-rep: Fix permissions for CLI_LOG in non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721441 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |sunkumar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:20:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:20:19 +0000 Subject: [Bugs] [Bug 1540478] Change quota option of many volumes concurrently, some commit operation failed. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1540478 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com --- Comment #3 from Amar Tumballi --- Reopened the patch (it was abandoned due to inactivity). Needs a rebase, as there are merge conflicts. -- You are receiving this mail because: You are on the CC list for the bug.
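For step 1 of the reproducer in bug 1721441 above: a non-root session's mountbroker is normally prepared with the gluster-mountbroker helper, after which the log directory's mode and ownership can be inspected directly. The group, user, and volume names are examples, and the log path assumes a default prefix:

    # gluster-mountbroker setup /var/mountbroker-root geogroup
    # gluster-mountbroker add slavevol geoaccount
    # stat -c '%a %U:%G' /var/log/glusterfs/geo-replication-slaves

The bug is that this last check comes back with permissions the unprivileged geo-rep user cannot work with.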
From bugzilla at redhat.com Tue Jun 18 10:23:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:23:29 +0000 Subject: [Bugs] [Bug 1529842] Read-only listxattr syscalls seem to translate to non-read-only FOPs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1529842 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|high |urgent CC| |khiremat at redhat.com, | |sacharya at redhat.com, | |sunkumar at redhat.com Assignee|vbellur at redhat.com |avishwan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 10:26:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:26:03 +0000 Subject: [Bugs] [Bug 1319045] memory increase of glusterfsd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1319045 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(vpolakis at gmail.co | |m) --- Comment #15 from Amar Tumballi --- Been a while, Can we try the tests with latest glusterfs releases? We made some of the critical enhancements to memory related issues. Would like to hear more on how glusterfs-6.x or upstream/master works for your usecase. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 10:26:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:26:39 +0000 Subject: [Bugs] [Bug 1721435] DHT: Internal xattrs visible on the mount In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721435 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22889 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 10:26:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:26:40 +0000 Subject: [Bugs] [Bug 1721435] DHT: Internal xattrs visible on the mount In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721435 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22889 (cluster/dht: Strip out dht xattrs) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 10:28:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:28:49 +0000 Subject: [Bugs] [Bug 1476992] inode table lru list leak with glusterfs fuse mount In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1476992 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-18 10:28:49 --- Comment #6 from Amar Tumballi --- We did fix the issues in latest releases: please use glusterfs-6.x release Patch: https://review.gluster.org/19778 -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:29:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:29:22 +0000 Subject: [Bugs] [Bug 1489610] glusterfind saves var data under $prefix instead of localstatedir In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1489610 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sunkumar at redhat.com Assignee|bugs at gluster.org |sacharya at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:31:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:31:33 +0000 Subject: [Bugs] [Bug 1430360] glusterfsd segfault in trash_truncate_stat_cbk In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1430360 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Assignee|bugs at gluster.org |anoopcs at redhat.com Last Closed|2017-11-07 10:41:28 |2019-06-18 10:31:33 --- Comment #6 from Amar Tumballi --- Closing with WORKSFORME as the option. Please try with latest releases (glusterfs-6.x +) and see if this is fixed for you. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:38:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:38:52 +0000 Subject: [Bugs] [Bug 1654138] Optimize for virt store fails with distribute volume type In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654138 SATHEESARAN changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1721457 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721457 [Bug 1721457] [Dalton] Optimize for virt store fails with distribute volume type -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:39:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:39:13 +0000 Subject: [Bugs] [Bug 1654138] Optimize for virt store fails with distribute volume type In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654138 SATHEESARAN changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1721457 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721457 [Bug 1721457] [Dalton] Optimize for virt store fails with distribute volume type -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:54:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:54:29 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #701 from Worker Ant --- REVIEW: https://review.gluster.org/22879 (core: fedora 30 compiler warnings) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 10:56:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:56:12 +0000 Subject: [Bugs] [Bug 1721441] geo-rep: Fix permissions for GEOREP_DIR in non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721441 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|geo-rep: Fix permissions |geo-rep: Fix permissions |for CLI_LOG in non-root |for GEOREP_DIR in non-root |setup |setup -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 10:59:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 10:59:44 +0000 Subject: [Bugs] [Bug 1721462] New: Quota limits not honored; writes allowed past quota limit. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721462 Bug ID: 1721462 Summary: Quota limits not honored; writes allowed past quota limit. Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: quota Severity: high Assignee: bugs at gluster.org Reporter: kiyer at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: On a six-node cluster, we created a distributed-replicated volume, enabled quota, and set the hard and soft timeouts to zero. We created files and directories from the mount point until the limit was reached. We then performed an add-brick and started a rebalance, after which we killed one of the bricks and restarted the volume while rebalance and self-heal were in progress. When we created some more files and directories, writes were allowed past the quota limit. Version-Release number of selected component (if applicable): Whatever is the latest version in upstream How reproducible: Consistently Steps to Reproduce: 1. Enable quota on the volume. 2. Set hard and soft timeouts to zero. 3. Create some files and directories from the mount point so that the limits are reached. 4. Perform an add-brick operation on the volume. 5. Start rebalance on the volume. 6. While rebalance is running, kill one of the bricks of the volume and start it again after a while. 7. While rebalance + self-heal are in progress, create some more files and directories from the mount point until the limit is hit. Actual results: Writes allowed past the quota limit. Expected results: Writes shouldn't be allowed past the quota limit. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 11:06:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 11:06:55 +0000 Subject: [Bugs] [Bug 1721441] geo-rep: Fix permissions for GEOREP_DIR in non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721441 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22890 -- You are receiving this mail because: You are on the CC list for the bug.
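The reproducer in bug 1721462 above maps onto the standard quota CLI; the volume and brick names below are placeholders, and the brick list for add-brick has to match the volume's replica count:

    # gluster volume quota testvol enable
    # gluster volume quota testvol limit-usage / 1GB
    # gluster volume quota testvol hard-timeout 0
    # gluster volume quota testvol soft-timeout 0
    # gluster volume add-brick testvol server5:/bricks/b7 server6:/bricks/b8 server7:/bricks/b9
    # gluster volume rebalance testvol start

With both timeouts at zero, every write should be checked against the accounted usage, which is what makes the observed over-quota writes an enforcement bug rather than stale-accounting slack.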
From bugzilla at redhat.com Tue Jun 18 11:06:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 11:06:56 +0000 Subject: [Bugs] [Bug 1721441] geo-rep: Fix permissions for GEOREP_DIR in non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721441 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22890 (geo-rep: Fix permissions for GEOREP_DIR in non-root setup) posted (#1) for review on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 11:08:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 11:08:19 +0000 Subject: [Bugs] [Bug 1531457] hard Link file A to B error if A is just created In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1531457 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22891 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 11:08:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 11:08:20 +0000 Subject: [Bugs] [Bug 1531457] hard Link file A to B error if A is just created In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1531457 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/22891 (zero_fill_stat: use only ctime to determine the zero'd stat) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 11:22:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 11:22:32 +0000 Subject: [Bugs] [Bug 1721474] New: posix: crash in posix_cs_set_state on fallocate Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721474 Bug ID: 1721474 Summary: posix: crash in posix_cs_set_state on fallocate Product: GlusterFS Version: mainline Status: ASSIGNED Component: posix Assignee: spalai at redhat.com Reporter: spalai at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Crash in posix_cs_set_state when fallocate is called. Crash link: https://build.gluster.org/job/centos7-regression/6513/ Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: - enable cloudsync - Issue fallocate on a file. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 11:22:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 11:22:46 +0000 Subject: [Bugs] [Bug 1710744] [FUSE] Endpoint is not connected after "Found anomalies" error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710744 --- Comment #5 from Pavel Znamensky --- Amar, thank you for the reply. We're not going to upgrade to glusterfs-6.x in the nearest future. But we'll keep in mind. As for threads backtraces, I've just sent them to you. Thanks. 
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 11:23:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 11:23:23 +0000 Subject: [Bugs] [Bug 1716626] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716626 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(rkavunga at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 11:26:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 11:26:22 +0000 Subject: [Bugs] [Bug 1721474] posix: crash in posix_cs_set_state on fallocate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721474 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22892 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 11:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 11:32:14 +0000 Subject: [Bugs] [Bug 1721474] posix: crash in posix_cs_set_state on fallocate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721474 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1721477 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721477 [Bug 1721477] posix: crash in posix_cs_set_state on fallocate -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 12:09:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 12:09:47 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #702 from Worker Ant --- REVIEW: https://review.gluster.org/22772 (glusterd-volgen.c: remove BD xlator from the graph) merged (#15) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 13:29:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 13:29:39 +0000 Subject: [Bugs] [Bug 1529842] Read-only listxattr syscalls seem to translate to non-read-only FOPs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1529842 --- Comment #1 from Aravinda VK --- I tested in my setup with fresh volume and changelog enabled. The issue is not reproducible. Please try again and provide the reproducer. 
[root at f29 ~]# gluster volume create gv1 replica 3 f29.sonne:/bricks/gv1/b1 f29.sonne:/bricks/gv1/b2 f29.sonne:/bricks/gv1/b3 force
[root at f29 ~]# gluster volume start gv1
[root at f29 ~]# gluster volume set gv1 changelog.changelog on
[root at f29 ~]# mount -t glusterfs localhost:gv1 /mnt/gv1
[root at f29 ~]# echo "Hello World" > /mnt/gv1/myfile
[root at f29 ~]# sleep 20 # Sleep 20 to wait for changelog rollover
[root at f29 ~]# for f in `ls /bricks/gv1/b1/.glusterfs/changelogs/CHANGELOG.*`;do echo $f;./gluster-changelog-parser $f; done
/bricks/gv1/b1/.glusterfs/changelogs/CHANGELOG.1560862886
E d4b3e7c7-8ef1-4327-8117-0a01af0bf1ed CREATE 33188 0 0 00000000-0000-0000-0000-000000000001/myfile
D d4b3e7c7-8ef1-4327-8117-0a01af0bf1ed
[root at f29 ~]# getfattr -d -m . /mnt/gv1/myfile
getfattr: Removing leading '/' from absolute path names
# file: mnt/gv1/myfile
security.selinux="system_u:object_r:fusefs_t:s0"
[root at f29 ~]# sleep 20
[root at f29 ~]# for f in `ls /bricks/gv1/b1/.glusterfs/changelogs/CHANGELOG.*`;do echo $f;./gluster-changelog-parser $f; done
/bricks/gv1/b1/.glusterfs/changelogs/CHANGELOG.1560862886
E d4b3e7c7-8ef1-4327-8117-0a01af0bf1ed CREATE 33188 0 0 00000000-0000-0000-0000-000000000001/myfile
D d4b3e7c7-8ef1-4327-8117-0a01af0bf1ed
I used gluster-changelog-parser (https://github.com/aravindavk/gluster-changelog-parser) to read the changelogs. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 14:48:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 14:48:24 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 --- Comment #16 from Yosi Ben Shimon --- Gluster upgraded to version 6.3: glusterfs-libs-6.3-1.el7.x86_64 glusterfs-fuse-6.3-1.el7.x86_64 glusterfs-client-xlators-6.3-1.el7.x86_64 glusterfs-api-6.3-1.el7.x86_64 glusterfs-cli-6.3-1.el7.x86_64 glusterfs-6.3-1.el7.x86_64 glusterfs-server-6.3-1.el7.x86_64 Tried to reproduce this scenario and all went fine. From the VDSM log:
2019-06-18 17:37:31,886+0300 INFO (jsonrpc/2) [storage.Mount] mounting gluster01.scl.lab.tlv.redhat.com:/storage_local_ge6_volume_3 at /rhev/data-center/mnt/glusterSD/gluster01.scl.lab.tlv.redhat.com:_storage__local__ge6__volume__3 (mount:204)
2019-06-18 17:37:32,395+0300 DEBUG (check/loop) [storage.check] START check '/dev/c5f7e0ee-b117-4f62-8d2d-bcda1f61bd08/metadata' (delay=0.00) (check:289)
2019-06-18 17:37:32,435+0300 DEBUG (jsonrpc/2) [storage.Mount] /rhev/data-center/mnt/glusterSD/gluster01.scl.lab.tlv.redhat.com:_storage__local__ge6__volume__3 mounted: 0.55 seconds (utils:454)
-- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Tue Jun 18 15:51:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 15:51:26 +0000 Subject: [Bugs] [Bug 1721590] New: tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t is failing Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721590 Bug ID: 1721590 Summary: tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t is failing Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t is failing Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Run test case tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t 2. 3. Actual results: test case tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t is failing Expected results: test case should not fail Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 15:51:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 15:51:58 +0000 Subject: [Bugs] [Bug 1721590] tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721590 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 15:56:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 15:56:44 +0000 Subject: [Bugs] [Bug 1721590] tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721590 --- Comment #1 from Mohit Agrawal --- The .t is failing with a "transport endpoint is not connected" error when the stat command is run on the mount point immediately after the volume is started. The logs show that the client has not yet been able to establish a connection with the server, so the stat command fails. To avoid the error, the test needs to wait after starting the volume. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 16:01:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 16:01:09 +0000 Subject: [Bugs] [Bug 1721590] tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721590 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22893 -- You are receiving this mail because: You are on the CC list for the bug.
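A minimal sketch of the wait Mohit describes for bug 1721590, in the style of gluster's .t harness; it assumes the stock helpers from tests/include.rc and tests/volume.rc (EXPECT_WITHIN, PROCESS_UP_TIMEOUT, brick_up_status) and the usual $CLI/$V0/$H0/$B0/$M0 variables, so treat it as an illustration rather than the actual patch:

TEST $CLI volume start $V0
# wait until the brick is actually up before touching the mount
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status $V0 $H0 $B0/${V0}1
TEST stat $M0

This avoids the race where the client has not yet connected to the brick at the moment stat runs.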
From bugzilla at redhat.com Tue Jun 18 16:01:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 16:01:10 +0000 Subject: [Bugs] [Bug 1721590] tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721590 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22893 (test: test bug-1040275-brick-uid-reset-on-volume-restart.t is failing) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 16:33:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 16:33:07 +0000 Subject: [Bugs] [Bug 1721601] New: [SHD] : logs of one volume are going to log file of other volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721601 Bug ID: 1721601 Summary: [SHD] : logs of one volume are going to log file of other volume Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: aspandey at redhat.com, bmekala at redhat.com, bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1721351 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721351 [Bug 1721351] [SHD] : logs of one volume are going to log file of other volume -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 16:40:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 16:40:33 +0000 Subject: [Bugs] [Bug 1707731] [Upgrade] Config files are not upgraded to new version In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707731 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22894 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 16:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 16:40:34 +0000 Subject: [Bugs] [Bug 1707731] [Upgrade] Config files are not upgraded to new version In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707731 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22894 (geo-rep: Upgrading config file to new version) posted (#1) for review on master by Shwetha K Acharya -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Jun 18 16:41:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 16:41:54 +0000 Subject: [Bugs] [Bug 1720733] glusterfs 4.1.7 client crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720733 Danny Lee changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(danny.lee at appian. 
| |com) | --- Comment #2 from Danny Lee --- (In reply to Amar Tumballi from comment #1) > Appreciate if you can provide output of 'thread apply all bt full' from `$ > gdb -c ` > > > Also, there were many stability fixes which happened in glusterfs in > glusterfs-5 and glusterfs-6 series. It would be great if you can upgrade to > latest. Sadly, we corrupted our core dump and we restarted the site, so a good portion of our logs were removed; we don't really have much for debugging. We weren't sure if there was anything in the stack trace that could be used to tell us why it crashed. We usually upgrade to the latest long-term release unless there is a CVE or there is a good chance that a critical bug has been fixed in the short-term releases and not in the long-term release (which hasn't happened yet). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 16:51:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 16:51:18 +0000 Subject: [Bugs] [Bug 1721601] [SHD] : logs of one volume are going to log file of other volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721601 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22895 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 16:51:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 16:51:19 +0000 Subject: [Bugs] [Bug 1721601] [SHD] : logs of one volume are going to log file of other volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721601 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22895 (glusterd/shd: Change shd logfile to a unique name) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Jun 18 17:37:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 17:37:27 +0000 Subject: [Bugs] [Bug 1593542] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593542 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
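For anyone following bug 1720733: the backtrace Amar asked for can be captured non-interactively as soon as a core appears, before it is lost or corrupted. A sketch, with a hypothetical binary and core path:

# gdb -batch -ex 'thread apply all bt full' /usr/sbin/glusterfs /path/to/core > backtrace-full.txt 2>&1

Saving the text output somewhere safe means the analysis survives even if the core file itself is later removed.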
From bugzilla at redhat.com Tue Jun 18 21:28:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 18 Jun 2019 21:28:14 +0000 Subject: [Bugs] [Bug 1721686] New: Remove usage of obsolete function usleep() Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721686 Bug ID: 1721686 Summary: Remove usage of obsolete function usleep() Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: vbellur at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Excerpt from man 3 usleep: "4.3BSD, POSIX.1-2001. POSIX.1-2001 declares this function obsolete; use nanosleep(2) instead. POSIX.1-2008 removes the specification of usleep()." Alternate functions like nanosleep() can be used instead of usleep(). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 02:59:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 02:59:19 +0000 Subject: [Bugs] [Bug 1566935] Dbase file access issue In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1566935 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.2 Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-19 02:59:19 --- Comment #22 from Amar Tumballi --- We had other reports saying with glusterfs-6.2 and higher versions of fedora, this usecase works properly. Please upgrade and let us know. (ref: bz1672076) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 03:04:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 03:04:40 +0000 Subject: [Bugs] [Bug 1721601] [SHD] : logs of one volume are going to log file of other volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721601 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Assignee|bugs at gluster.org |rkavunga at redhat.com --- Comment #2 from Atin Mukherjee --- Please have a public description of the bug. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 03:11:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 03:11:57 +0000 Subject: [Bugs] [Bug 1190877] Options incorrectly parsed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1190877 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-19 03:11:57 --- Comment #1 from Amar Tumballi --- This is fixed in latest releases. (https://review.gluster.org/#/c/glusterfs/+/21295/) -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Jun 19 03:20:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 03:20:57 +0000 Subject: [Bugs] [Bug 1721441] geo-rep: Fix permissions for GEOREP_DIR in non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721441 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-19 03:20:57 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22890 (geo-rep: Fix permissions for GEOREP_DIR in non-root setup) merged (#2) on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 03:21:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 03:21:53 +0000 Subject: [Bugs] [Bug 1222678] backupvolfile-server, backup-volfile-servers options in /etc/fstab / list of volfile-server options on command line ignored when mounting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1222678 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed|2016-08-30 12:55:06 |2019-06-19 03:21:53 --- Comment #4 from Amar Tumballi --- I had the below entry in /etc/fstab > example.com:/demo /mnt/glusterfs glusterfs backup-volfile-servers=tumballi.in:local 0 0 where only 'local' was a proper valid hostname where the volume demo was hosted. It did work when I did `mount /mnt/glusterfs`. It took some time (~20 seconds in this case), but it worked fine. Considering a lot of time has passed since the bug was opened, I would say you may need to upgrade to a newer version to see that it is fixed. (I don't have the exact patch link which fixed this.) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 03:21:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 03:21:54 +0000 Subject: [Bugs] [Bug 1216965] GlusterFS 3.6.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1216965 Bug 1216965 depends on bug 1222678, which changed state. Bug 1222678 Summary: backupvolfile-server, backup-volfile-servers options in /etc/fstab / list of volfile-server options on command line ignored when mounting https://bugzilla.redhat.com/show_bug.cgi?id=1222678 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 03:30:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 03:30:53 +0000 Subject: [Bugs] [Bug 1433829] Nodeid changed due to write-behind option changed online will lead to unexpected umount by kernel In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1433829 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #1 from Amar Tumballi --- Thanks George for an awesome, detailed bug report. Apologies that this slipped our attention.
Changing an option online while the mount continues to work properly is the expectation we have, and it does work fine even on the latest fedora29 and fedora30. We have not seen any issues. Can you check if the issue still persists for you in your distro? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 04:25:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 04:25:56 +0000 Subject: [Bugs] [Bug 1721783] New: ctime changes: tar still complains file changed as we read it if uss is enabled Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721783 Bug ID: 1721783 Summary: ctime changes: tar still complains file changed as we read it if uss is enabled Product: GlusterFS Version: mainline Status: NEW Component: ctime Severity: high Priority: medium Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, jthottan at redhat.com, khiremat at redhat.com, nchilaka at redhat.com, pasik at iki.fi, rabhat at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, rkavunga at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vdas at redhat.com Depends On: 1709301, 1720290 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1720290 +++ Description of problem: ================== With uss enabled on a gluster volume, when compressing a directory of files into a tarball, I still see tar complaining "file changed as we read it" How reproducible: ================== always on my testbed Steps to Reproduce: =============== 1. Created a 4x3 volume 2. Enabled uss and quotas 3. Mounted the volume on 4 clients, started to untar a kernel image, and then tarred the image back up. When tarring the files back, I see the above issue consistently Volume Name: nfnas Type: Distributed-Replicate Volume ID: 61b5239a-e275-4a1a-b02e-65625c4dc3fd Status: Started Snapshot Count: 0 Number of Bricks: 4 x 3 = 12 Transport-type: tcp Bricks: Brick1: rhs-gp-srv7.lab.eng.blr.redhat.com:/gluster/brick1/nfnas Brick2: rhs-gp-srv8.lab.eng.blr.redhat.com:/gluster/brick1/nfnas Brick3: rhs-gp-srv9.lab.eng.blr.redhat.com:/gluster/brick1/nfnas Brick4: rhs-gp-srv8.lab.eng.blr.redhat.com:/gluster/brick2/nfnas Brick5: rhs-gp-srv9.lab.eng.blr.redhat.com:/gluster/brick2/nfnas Brick6: rhs-gp-srv10.lab.eng.blr.redhat.com:/gluster/brick1/nfnas Brick7: rhs-gp-srv9.lab.eng.blr.redhat.com:/gluster/brick3/nfnas Brick8: rhs-gp-srv10.lab.eng.blr.redhat.com:/gluster/brick3/nfnas Brick9: rhs-gp-srv7.lab.eng.blr.redhat.com:/gluster/brick3/nfnas Brick10: rhs-gp-srv10.lab.eng.blr.redhat.com:/gluster/brick4/nfnas Brick11: rhs-gp-srv7.lab.eng.blr.redhat.com:/gluster/brick4/nfnas Brick12: rhs-gp-srv8.lab.eng.blr.redhat.com:/gluster/brick4/nfnas Options Reconfigured: diagnostics.client-log-level: DEBUG performance.stat-prefetch: on features.uss: disable features.quota-deem-statfs: on features.inode-quota: on features.quota: on transport.address-family: inet nfs.disable: on performance.client-io-threads: off --- Additional comment from Worker Ant on 2019-06-13 17:21:21 UTC --- REVIEW: https://review.gluster.org/22861 (uss: Fix tar issue with ctime and uss enabled) posted (#1) for review on master by Kotresh HR --- Additional comment from Worker Ant on 2019-06-17 10:29:50 UTC --- REVIEW: https://review.gluster.org/22861 (uss: Fix tar issue with ctime and uss enabled) merged (#3) on master by Amar Tumballi Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1709301 [Bug 1709301] ctime changes: tar still complains file changed as we read it if uss is enabled https://bugzilla.redhat.com/show_bug.cgi?id=1720290 [Bug 1720290] ctime changes: tar still complains file changed as we read it if uss is enabled -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 04:25:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 04:25:56 +0000 Subject: [Bugs] [Bug 1720290] ctime changes: tar still complains file changed as we read it if uss is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720290 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1721783 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721783 [Bug 1721783] ctime changes: tar still complains file changed as we read it if uss is enabled -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 04:26:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 04:26:37 +0000 Subject: [Bugs] [Bug 1721783] ctime changes: tar still complains file changed as we read it if uss is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721783 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 04:28:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 04:28:04 +0000 Subject: [Bugs] [Bug 1721783] ctime changes: tar still complains file changed as we read it if uss is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721783 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Version|mainline |6 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 04:32:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 04:32:17 +0000 Subject: [Bugs] [Bug 1721783] ctime changes: tar still complains file changed as we read it if uss is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721783 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22896 -- You are receiving this mail because: You are on the CC list for the bug. 
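The reproducer in bug 1721783 boils down to a tar round-trip on a volume with ctime and uss enabled; a sketch, with hypothetical mount point and tarball names:

# mount -t glusterfs rhs-gp-srv7.lab.eng.blr.redhat.com:/nfnas /mnt/nfnas
# cd /mnt/nfnas
# tar xf /tmp/linux-kernel.tar
# tar cf /tmp/linux-kernel-repack.tar linux-kernel/
tar: linux-kernel/somefile: file changed as we read it

GNU tar prints that warning when a file's size or ctime moves between the initial stat and the read, which is why the ctime handling on the snapshot (uss) path matters here.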
From bugzilla at redhat.com Wed Jun 19 04:32:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 04:32:18 +0000 Subject: [Bugs] [Bug 1721783] ctime changes: tar still complains file changed as we read it if uss is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721783 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22896 (uss: Fix tar issue with ctime and uss enabled) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 06:28:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 06:28:13 +0000 Subject: [Bugs] [Bug 1721435] DHT: Internal xattrs visible on the mount In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721435 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-19 06:28:13 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22889 (cluster/dht: Strip out dht xattrs) merged (#2) on master by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 06:57:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 06:57:31 +0000 Subject: [Bugs] [Bug 1721842] New: Spelling errors in 6.3 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721842 Bug ID: 1721842 Summary: Spelling errors in 6.3 Product: GlusterFS Version: 6 Hardware: All OS: All Status: NEW Component: core Severity: medium Assignee: bugs at gluster.org Reporter: pmatthaei at debian.org CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1582100 --> https://bugzilla.redhat.com/attachment.cgi?id=1582100&action=edit Patch 1 Hello, please apply the attached patches to fix some spelling errors in the source code / command output. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 06:58:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 06:58:18 +0000 Subject: [Bugs] [Bug 1721842] Spelling errors in 6.3 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721842 --- Comment #1 from Patrick Matthäi --- Created attachment 1582101 --> https://bugzilla.redhat.com/attachment.cgi?id=1582101&action=edit Patch 2 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 06:58:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 06:58:38 +0000 Subject: [Bugs] [Bug 1721842] Spelling errors in 6.3 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721842 --- Comment #2 from Patrick Matthäi --- Created attachment 1582102 --> https://bugzilla.redhat.com/attachment.cgi?id=1582102&action=edit Patch 3 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed Jun 19 07:11:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 07:11:48 +0000 Subject: [Bugs] [Bug 1718562] flock failure (regression) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718562 --- Comment #3 from Susant Kumar Palai --- The issue is always reproducible. Will update once I find the RCA. Susant -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 08:40:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 08:40:43 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 Avihai changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(sabose at redhat.com |needinfo- needinfo- |) | |needinfo?(ybenshim at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 09:09:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 09:09:56 +0000 Subject: [Bugs] [Bug 1718562] flock failure (regression) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718562 --- Comment #4 from Jaco Kroon --- I've managed to implement a workaround for this in php/bash (C/C++ will be similar). This "work around" is perhaps how locking should have been implemented on our end in the first place (lock files get removed post use). The code uses a small-ish (1s) timeout per flock() call due to the bug; a more global timeout would be better, but given the bug it doesn't work as well as it could. Recursion can (and should) be eliminated, but I haven't spent a lot of time on this (getting it out the door was more urgent than making it optimal). This code does have the single advantage that lock files get removed post use again (it's based on discussions with other parties). The other option for folks running into this is to look at dotlockfile(1), which doesn't rely on flock() but has other major timing-gap issues (retries are atomic, but waiting is a simple sleep + retry, so if other processes grab locks at the wrong time the invoking process could starve/fail without the need to do so). Bash:

#!/bin/bash
function getlock() {
    local fd="$1"
    local lockfile="$2"
    local waittime="$3"
    eval "exec $fd>\"\${lockfile}\"" || return $?
    local inum=$(stat -c%i - <&$fd)
    local lwait="-w1"
    [ "${waittime}" -le 0 ] && lwait=-n
    while ! flock ${lwait} -x "$fd"; do
        # the lock file may have been removed and recreated by another
        # process; reopen if our fd no longer matches the file on disk
        if [ "$(stat -c%i "${lockfile}" 2>/dev/null)" != "${inum}" ]; then
            eval "exec $fd>\"\${lockfile}\"" || return $?
            inum=$(stat -c%i - <&$fd)
            continue
        fi
        (( waittime-- ))
        if [ $waittime -le 0 ]; then
            eval "exec $fd<&-"
            return 1
        fi
    done
    if [ "$(stat -c%i "${lockfile}" 2>/dev/null)" != "${inum}" ]; then
        eval "exec $fd<&-"
        getlock "$fd" "$lockfile" "${waittime}"
        return $?
    fi
    return 0
}

function releaselock() {
    local fd="$1"
    local lockfile="$2"
    rm "${lockfile}"
    eval "exec $fd<&-"
}

PHP:

<?php
// NB: the opening of getlock() was mangled in the archive (everything from
// "<?php" up to the first "$lock->" was stripped as if it were an HTML tag);
// the signature is reconstructed from the recursive call further down.
function getlock($filename, $lockwait) {
    $lock = new stdClass();
    $lock->filename = $filename;
    $lock->fp = fopen($filename, "w");
    if (!$lock->fp)
        return NULL;
    $lstat = fstat($lock->fp);
    if (!$lstat) {
        fclose($lock->fp);
        return NULL;
    }
    // interrupt flock() every second so the lock file can be re-checked
    pcntl_signal(SIGALRM, function() {}, false);
    pcntl_alarm(1);
    while (!flock($lock->fp, LOCK_EX)) {
        pcntl_alarm(0);
        clearstatcache(true, $filename);
        $nstat = stat($filename);
        if (!$nstat || $nstat['ino'] != $lstat['ino']) {
            // lock file was removed/recreated by another process; reopen
            fclose($lock->fp);
            $lock->fp = fopen($filename, "w");
            if (!$lock->fp)
                return NULL;
            $lstat = fstat($lock->fp);
            if (!$lstat) {
                fclose($lock->fp);
                return NULL;
            }
        }
        if (--$lockwait < 0) {
            fclose($lock->fp);
            return NULL;
        }
        pcntl_alarm(1);
    }
    pcntl_alarm(0);
    clearstatcache(true, $filename);
    $nstat = stat($filename);
    if (!$nstat || $nstat['ino'] != $lstat['ino']) {
        fclose($lock->fp);
        return getlock($filename, $lockwait);
    }
    return $lock;
}

function releaselock($lock) {
    unlink($lock->filename);
    fclose($lock->fp);
}
?>

-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 09:13:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 09:13:46 +0000 Subject: [Bugs] [Bug 1683594] nfs ltp ftest* fstat gets mismatch size as except after turn on md-cache In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683594 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22273 (md-cache: only update generation for inode at upcall and NULL stat) merged (#2) on master by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 11:10:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 11:10:55 +0000 Subject: [Bugs] [Bug 1717757] WORM: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22898 -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Wed Jun 19 11:10:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 11:10:56 +0000 Subject: [Bugs] [Bug 1717757] WORM: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/22898 (WORM-Xlator: Avoid performing fsetxattr if fd is NULL) posted (#1) for review on master by David Spisla -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Wed Jun 19 11:26:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 11:26:35 +0000 Subject: [Bugs] [Bug 1717757] WORM: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 --- Comment #9 from david.spisla at iternity.com --- I have sent a patch to gerrit.
@Amar, if there is any other place in the WORM Xlator which can cause a segfault, please tell me. I will write some patches soon. At the moment worm_create_cbk is the only callback function in this xlator. -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Wed Jun 19 11:44:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 11:44:13 +0000 Subject: [Bugs] [Bug 1651445] [RFE] storage.reserve option should take size of disk as input instead of percentage In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651445 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22900 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 11:44:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 11:44:14 +0000 Subject: [Bugs] [Bug 1651445] [RFE] storage.reserve option should take size of disk as input instead of percentage In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651445 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22900 (posix: modify storage.reserve option to take size and percent) posted (#1) for review on master by Sheetal Pamecha -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 13:08:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 13:08:33 +0000 Subject: [Bugs] [Bug 1717953] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717953 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-06-12 09:00:12 |2019-06-19 13:08:33 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22856 (extras/hooks: Install and package newly added post add-brick hook script) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 13:08:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 13:08:33 +0000 Subject: [Bugs] [Bug 1718227] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718227 Bug 1718227 depends on bug 1717953, which changed state. Bug 1717953 Summary: SELinux context labels are missing for newly added bricks using add-brick command https://bugzilla.redhat.com/show_bug.cgi?id=1717953 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
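Returning to Jaco's workaround in bug 1718562 comment #4, the bash helpers would be used along these lines; the fd number, lock path and timeout below are illustrative only, not taken from the report:

if getlock 3 /mnt/gluster/app.lock 30; then
    # ... critical section ...
    releaselock 3 /mnt/gluster/app.lock
else
    echo "could not acquire lock" >&2
    exit 1
fi

The same pattern applies to the PHP version: treat a NULL return from getlock() as a timeout, and either retry or fail.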
From bugzilla at redhat.com Wed Jun 19 15:53:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 15:53:24 +0000 Subject: [Bugs] [Bug 1722187] New: Glusterd Seg faults (sig 11) when RDMA used with MLNX_OFED Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722187 Bug ID: 1722187 Summary: Glusterd Seg faults (sig 11) when RDMA used with MLNX_OFED Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Severity: high Assignee: bugs at gluster.org Reporter: ryan at magenta.tv CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1582317 --> https://bugzilla.redhat.com/attachment.cgi?id=1582317&action=edit Glusterd --debug output Description of problem: Glusterd service fails with signal 11 after installing MLNX_OFED packages when RDMA used for transport. Version-Release number of selected component (if applicable): Gluster 6.1 How reproducible: 100% on 2/2 nodes Steps to Reproduce: 1. Install Glusterfs-server & glusterfs-rdma (6.1) 2. Install MLNX_OFED packages with './mlnxofedinstall --all' Installing with --all flag installs the following packages: libibverbs, libibumad, librdmacm, mft, mstflint, diagnostic tools, OpenSM, ib-bonding, MVAPICH, Open MPI, MPI tests, MPI selector, perftest, sdpnetstat and libsdp srptools, rdstools, static and dynamic libraries 3.Create Gluster volume with RDMA transport 4. Restart Glusterd service Actual results: Service fails with segmentation fault and core dumps Expected results: Service starts successfully Additional info: Debug log attached Core dump on this link: https://drive.google.com/file/d/10TNUtnTjpXGe1AaJzW4CAg9dTAe6hX_U/view?usp=sharing -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 16:00:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 16:00:39 +0000 Subject: [Bugs] [Bug 1721686] Remove usage of obsolete function usleep() In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721686 Vijay Bellur changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |vbellur at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 16:02:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 16:02:37 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 ryan at magenta.tv changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ryan at magenta.tv --- Comment #11 from ryan at magenta.tv --- We are also seeing this issue, with memory slowly increasing over time until all system resources (SWAP and RAM) are exhausted. -- You are receiving this mail because: You are on the CC list for the bug. 
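Step 3 of the reproducer in bug 1722187 corresponds to creating a volume with the RDMA transport; a sketch, with hypothetical host and brick names:

# gluster volume create rdmavol transport rdma node1:/bricks/rdma1 node2:/bricks/rdma1 force
# gluster volume start rdmavol
# systemctl restart glusterd   # step 4: the restart is where the segfault is reported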
From bugzilla at redhat.com Wed Jun 19 16:06:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 16:06:41 +0000 Subject: [Bugs] [Bug 1319045] memory increase of glusterfsd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1319045 ryan at magenta.tv changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ryan at magenta.tv --- Comment #16 from ryan at magenta.tv --- Hi Amar, We're seeing issues with Glusterfsd memory consumption too. I'll try and test this issue against 6.1 within the next week. Best, Ryan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Jun 19 16:12:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 16:12:49 +0000 Subject: [Bugs] [Bug 1602824] SMBD crashes when streams_attr VFS is used with Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1602824 Guenther Deschner changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |abokovoy at redhat.com, | |asn at redhat.com, | |jrivera at redhat.com, | |jstephen at redhat.com, | |lmohanty at redhat.com, | |madam at redhat.com, | |sbose at redhat.com, | |ssorce at redhat.com Component|gluster-smb |samba Version|mainline |30 Assignee|bugs at gluster.org |gdeschner at redhat.com Product|GlusterFS |Fedora QA Contact| |extras-qa at fedoraproject.org --- Comment #7 from Guenther Deschner --- Converting this bug to Samba (where it belongs) and to Fedora because there we can provide an updated package containing a bugfix. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Jun 19 17:19:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 19 Jun 2019 17:19:08 +0000 Subject: [Bugs] [Bug 1716455] OS X error -50 when creating sub-folder on Samba share when using Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716455 --- Comment #2 from Guenther Deschner --- This has been addressed in Samba upstream and will be part of Samba 4.9.10 and 4.10.6 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 04:39:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:39:36 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22901 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 04:39:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:39:36 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #703 from Worker Ant --- REVIEW: https://review.gluster.org/22901 (fix template file after clang-format) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Jun 20 04:44:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:44:35 +0000 Subject: [Bugs] [Bug 1419870] inode_dump is not generated in statedump for gluster/nfs process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1419870 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-7.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-06-20 04:44:35 --- Comment #5 from Amar Tumballi --- https://review.gluster.org/22846 (is merged now) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 04:44:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:44:35 +0000 Subject: [Bugs] [Bug 1427394] inode_dump is not generated in statedump for gluster/nfs process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1427394 Bug 1427394 depends on bug 1419870, which changed state. Bug 1419870 Summary: inode_dump is not generated in statedump for gluster/nfs process https://bugzilla.redhat.com/show_bug.cgi?id=1419870 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 04:47:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:47:33 +0000 Subject: [Bugs] [Bug 1515748] Callbacks should be sent to only those clients which register for upcall events In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1515748 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(skoduri at redhat.co | |m) --- Comment #5 from Amar Tumballi --- Soumya, do you think we can close this? I see that these patches are not merged :-/ What should be our next step? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 04:49:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:49:57 +0000 Subject: [Bugs] [Bug 1716760] Make debugging hung frames easier In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716760 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 04:52:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:52:59 +0000 Subject: [Bugs] [Bug 1654205] Regression tests for non-root geo-replication is not available. It should be added. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654205 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Status|POST |NEW CC| |atumball at redhat.com, | |khiremat at redhat.com, | |sunkumar at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Jun 20 04:53:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:53:51 +0000 Subject: [Bugs] [Bug 1648205] Thin-arbiter: Have the state of volume in memory and use it for shd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648205 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-5.5 Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-20 04:53:51 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 04:57:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:57:02 +0000 Subject: [Bugs] [Bug 1365085] Enable accessing internals of transport through meta In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1365085 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-06-20 04:57:02 --- Comment #1 from Amar Tumballi --- Not picked up in a long time, closing as DEFERRED to indicate the status. Will come back to this after couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 04:59:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:59:06 +0000 Subject: [Bugs] [Bug 1367265] Excessing logging - 'trying duplicate remote fd set' on fuse mount logfile - after rebalance completion In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1367265 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-06-20 04:59:06 --- Comment #7 from Amar Tumballi --- Not updated in last 3 years. Marking as DEFERRED to indicate the status. Will pick it up if found to be critical in next couple of months. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 04:59:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:59:07 +0000 Subject: [Bugs] [Bug 1367283] Excessing logging - 'trying duplicate remote fd set' on fuse mount logfile - after rebalance completion In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1367283 Bug 1367283 depends on bug 1367265, which changed state. Bug 1367265 Summary: Excessing logging - 'trying duplicate remote fd set' on fuse mount logfile - after rebalance completion https://bugzilla.redhat.com/show_bug.cgi?id=1367265 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |DEFERRED -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Jun 20 05:00:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:00:59 +0000 Subject: [Bugs] [Bug 1335010] posix/locks: Make flush work on destination post lock-migration In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335010 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-06-20 05:00:59 --- Comment #6 from Amar Tumballi --- Marking as DEFERRED to indicate the status. Will revisit after couple of months to check if this is critical. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:05:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:05:03 +0000 Subject: [Bugs] [Bug 1394063] glfd offset should not be updated for positional read/write operations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1394063 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|ASSIGNED |NEW CC| |atumball at redhat.com, | |jthottan at redhat.com, | |pgurusid at redhat.com, | |skoduri at redhat.com Assignee|rtalur at redhat.com |bugs at gluster.org QA Contact|sdharane at redhat.com | Severity|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 05:08:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:08:29 +0000 Subject: [Bugs] [Bug 1501029] setting storage.owner-gid should also change the mode to have setgid In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1501029 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-5.x Resolution|--- |CURRENTRELEASE Severity|unspecified |medium Last Closed| |2019-06-20 05:08:29 --- Comment #6 from Amar Tumballi --- https://review.gluster.org/18955 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:12:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:12:20 +0000 Subject: [Bugs] [Bug 1523094] to detect EOF, avoid sending readv fop to backend when offset is within byte range of file size In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1523094 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-06-20 05:12:20 --- Comment #2 from Amar Tumballi --- marking as DEFERRED to indicate the status. Will revisit it after couple of months. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Jun 20 05:13:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:13:21 +0000 Subject: [Bugs] [Bug 1537065] [Disperse]: client side heal using getfattr command throwing ENOTCONN and wrong status In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1537065 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-06-20 05:13:21 --- Comment #2 from Amar Tumballi --- Marking as DEFERRED to indicate the status. Will pick this up later. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:14:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:14:57 +0000 Subject: [Bugs] [Bug 1541032] Races in network communications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1541032 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |UPSTREAM Last Closed| |2019-06-20 05:14:57 --- Comment #1 from Amar Tumballi --- https://github.com/gluster/glusterfs/issues/391 tracks the same. Will keep it open there; please track it on GitHub. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:15:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:15:18 +0000 Subject: [Bugs] [Bug 1518150] GlusterFS not available for Fedora 27 Modular Server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1518150 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |EOL Last Closed| |2019-06-20 05:15:18 --- Comment #2 from Amar Tumballi --- Fedora 27 is EOL. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:17:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:17:10 +0000 Subject: [Bugs] [Bug 1489513] read-ahead underperforms expectations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1489513 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-06-20 05:17:10 --- Comment #1 from Amar Tumballi --- read-ahead has been shown not to perform as well as kernel read-ahead, and there is a bug to disable it by default (ref: bz1676479). -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Jun 20 05:19:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:19:38 +0000 Subject: [Bugs] [Bug 1507002] Read-only option is ignored and volume mounted in r/w mode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1507002 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-06-20 05:19:38 --- Comment #1 from Amar Tumballi --- with glusterfs-6.x version: # glusterfs --volfile-server local --volfile-id demo --read-only /mnt/glusterfs # mount | grep '/mnt/glusterfs' local:demo on /mnt/glusterfs type fuse.glusterfs (ro,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:19:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:19:39 +0000 Subject: [Bugs] [Bug 1496964] Read-only option is ignored and volume mounted in r/w mode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1496964 Bug 1496964 depends on bug 1507002, which changed state. Bug 1507002 Summary: Read-only option is ignored and volume mounted in r/w mode https://bugzilla.redhat.com/show_bug.cgi?id=1507002 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WORKSFORME -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 05:19:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:19:39 +0000 Subject: [Bugs] [Bug 1507006] Read-only option is ignored and volume mounted in r/w mode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1507006 Bug 1507006 depends on bug 1507002, which changed state. Bug 1507002 Summary: Read-only option is ignored and volume mounted in r/w mode https://bugzilla.redhat.com/show_bug.cgi?id=1507002 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WORKSFORME -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 05:19:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:19:40 +0000 Subject: [Bugs] [Bug 1507007] Read-only option is ignored and volume mounted in r/w mode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1507007 Bug 1507007 depends on bug 1507002, which changed state. Bug 1507002 Summary: Read-only option is ignored and volume mounted in r/w mode https://bugzilla.redhat.com/show_bug.cgi?id=1507002 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WORKSFORME -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
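For reference, a mount(8) form of the verification shown above for bug 1507002 would be the following sketch; it assumes mount.glusterfs translates the 'ro' option into --read-only, which is exactly the behavior that bug questioned:

    # mount -t glusterfs -o ro local:/demo /mnt/glusterfs
    # mount | grep '/mnt/glusterfs'

The options field in the second command's output should again begin with 'ro,'.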
From bugzilla at redhat.com Thu Jun 20 05:22:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:22:08 +0000 Subject: [Bugs] [Bug 1599203] Compilation failure due to python-devel dependency rather than configure time failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1599203 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-06-20 05:22:08 --- Comment #1 from Amar Tumballi --- glupy is now removed from the codebase. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 05:23:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:23:00 +0000 Subject: [Bugs] [Bug 1595088] Gluster clients seem to be reading blocks from server multiple times In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1595088 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-06-20 05:23:00 --- Comment #1 from Amar Tumballi --- Marking DEFERRED to indicate the status. Will revisit this later. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 05:31:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:31:50 +0000 Subject: [Bugs] [Bug 1717827] tests/geo-rep: Add test case to validate non-root geo-replication setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717827 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22902 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:31:51 +0000 Subject: [Bugs] [Bug 1717827] tests/geo-rep: Add test case to validate non-root geo-replication setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717827 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22902 (tests: test case for non-root geo-rep setup) posted (#1) for review on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:34:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:34:30 +0000 Subject: [Bugs] [Bug 1654205] Regression tests for non-root geo-replication is not available. It should be added. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654205 --- Comment #2 from Sunny Kumar --- The basic test for non-root can be tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1717827 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Jun 20 05:36:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:36:53 +0000 Subject: [Bugs] [Bug 1722331] New: geo-rep: Fix permissions for GEOREP_DIR in non-root setup Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722331 Bug ID: 1722331 Summary: geo-rep: Fix permissions for GEOREP_DIR in non-root setup Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: geo-replication Assignee: sunkumar at redhat.com Reporter: sunkumar at redhat.com QA Contact: rhinduja at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1721441 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1721441 +++ Description of problem: While setting up a non-root geo-rep session, we are unable to set the appropriate permissions for the gluster log dir. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Set up the mountbroker 2. Validate the permissions on the gluster log dir. Expected results: The appropriate permissions should be set on the gluster log dir. --- Additional comment from Worker Ant on 2019-06-18 11:06:56 UTC --- REVIEW: https://review.gluster.org/22890 (geo-rep: Fix permissions for GEOREP_DIR in non-root setup) posted (#1) for review on master by Sunny Kumar --- Additional comment from Worker Ant on 2019-06-19 03:20:57 UTC --- REVIEW: https://review.gluster.org/22890 (geo-rep: Fix permissions for GEOREP_DIR in non-root setup) merged (#2) on master by Kotresh HR Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721441 [Bug 1721441] geo-rep: Fix permissions for GEOREP_DIR in non-root setup -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:36:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:36:53 +0000 Subject: [Bugs] [Bug 1721441] geo-rep: Fix permissions for GEOREP_DIR in non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721441 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1722331 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1722331 [Bug 1722331] geo-rep: Fix permissions for GEOREP_DIR in non-root setup -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:36:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:36:56 +0000 Subject: [Bugs] [Bug 1722331] geo-rep: Fix permissions for GEOREP_DIR in non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722331 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. 
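A sketch of the reproduction steps from bug 1722331 above, using illustrative names (geogroup, geoaccount, and slavevol are assumptions, not taken from the bug):

    # step 1: set up the mountbroker for a non-root geo-rep session
    gluster-mountbroker setup /var/mountbroker-root geogroup
    gluster-mountbroker add slavevol geoaccount
    gluster-mountbroker status
    # step 2: check the owner and mode of the gluster log dir used by the session
    ls -ld /var/log/glusterfs/geo-replication-slaves/

If the log directory's permissions are wrong, the non-root session fails once the slave-side worker tries to write its logs there.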
From bugzilla at redhat.com Thu Jun 20 05:40:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:40:56 +0000 Subject: [Bugs] [Bug 1722331] geo-rep: Fix permissions for GEOREP_DIR in non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722331 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Sunny Kumar --- Upstream Patch: https://review.gluster.org/22890 --- Comment #3 from Sunny Kumar --- Upstream Patch: https://review.gluster.org/22890 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:41:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:41:16 +0000 Subject: [Bugs] [Bug 1654205] Regression tests for non-root geo-replication is not available. It should be added. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654205 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-06-20 05:41:16 --- Comment #3 from Amar Tumballi --- *** This bug has been marked as a duplicate of bug 1717827 *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:41:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:41:16 +0000 Subject: [Bugs] [Bug 1717827] tests/geo-rep: Add test case to validate non-root geo-replication setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717827 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sacharya at redhat.com --- Comment #2 from Amar Tumballi --- *** Bug 1654205 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 05:41:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 05:41:21 +0000 Subject: [Bugs] [Bug 1722331] geo-rep: Fix permissions for GEOREP_DIR in non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722331 --- Comment #4 from Sunny Kumar --- Upstream Patch: https://review.gluster.org/22890 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 08:38:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 08:38:02 +0000 Subject: [Bugs] [Bug 1716695] Fix memory leaks that are present even after an xlator fini [client side xlator] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716695 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22904 (graph/shd: Use glusterfs_graph_deactivate to free the xl rec) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Jun 20 08:38:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 08:38:01 +0000 Subject: [Bugs] [Bug 1716695] Fix memory leaks that are present even after an xlator fini [client side xlator] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1716695 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22904 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 08:49:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 08:49:19 +0000 Subject: [Bugs] [Bug 1515748] Callbacks should be sent to only those clients which register for upcall events In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1515748 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(skoduri at redhat.co | |m) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 08:50:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 08:50:00 +0000 Subject: [Bugs] [Bug 1515748] Callbacks should be sent to only those clients which register for upcall events In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1515748 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #6 is|1 |0 private| | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 08:52:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 08:52:48 +0000 Subject: [Bugs] [Bug 1515748] Callbacks should be sent to only those clients which register for upcall events In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1515748 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix Status|NEW |CLOSED Resolution|--- |DEFERRED Assignee|skoduri at redhat.com |bugs at gluster.org Last Closed| |2019-06-20 08:52:48 --- Comment #7 from Amar Tumballi --- Thanks for the update Soumya. Will mark it as DEFERRED, and we can pick up when we get a chance. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 08:55:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 08:55:10 +0000 Subject: [Bugs] [Bug 1722390] New: "All subvolumes are down" when all bricks are online Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722390 Bug ID: 1722390 Summary: "All subvolumes are down" when all bricks are online Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Severity: medium Assignee: bugs at gluster.org Reporter: ryan at magenta.tv CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1582578 --> https://bugzilla.redhat.com/attachment.cgi?id=1582578&action=edit Samba client log Description of problem: Gluster VFS client logs print ' E [MSGID: 108006] [afr-common.c:5413:__afr_handle_child_down_event] 0-mcv01-replicate-8: All subvolumes are down. 
Going offline until at least one of them comes back up.' repeatedly. When checking 'gluster vol status', all bricks are online. There are no obvious errors in the glusterd logs. The errors go back months, from when we had a failed storage array where all the bricks had to be removed and replaced. The volume is a distributed-replicated (x2) type. In some of the brick logs, there are a few RPC errors. Other than the logs being flooded with this message, the cluster seems to be operating normally. Iperf tests between the nodes do not flag any issues, with 0 TCP retries on all. Version-Release number of selected component (if applicable): Gluster 4.1.8 Samba 4.9.4 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 08:55:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 08:55:32 +0000 Subject: [Bugs] [Bug 1722390] "All subvolumes are down" when all bricks are online In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722390 --- Comment #1 from ryan at magenta.tv --- Created attachment 1582579 --> https://bugzilla.redhat.com/attachment.cgi?id=1582579&action=edit Brick log -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 11:52:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 11:52:04 +0000 Subject: [Bugs] [Bug 1648169] Fuse mount would crash if features.encryption is on in the version from 3.13.0 to 4.1.5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648169 --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22882 (encryption/crypt: remove from volume file) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 11:52:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 11:52:48 +0000 Subject: [Bugs] [Bug 1721474] posix: crash in posix_cs_set_state on fallocate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721474 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-20 11:52:48 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22892 (posix: fix crash in posix_cs_set_state) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Jun 20 11:54:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 11:54:06 +0000 Subject: [Bugs] [Bug 1598900] tmpfiles snippet tries to create folder owned by nonexistent user In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598900 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Status|NEW |ASSIGNED Assignee|bugs at gluster.org |kkeithle at redhat.com --- Comment #4 from Kaleb KEITHLEY --- Since Eli has an archlinux email address, I'd venture this is a Debian/Ubuntu/Arch issue, and the Debian packaging bits need something similar to the 'gluster' user and group creation in the %pre section of the glusterfs.spec(.in). The GlusterFS packaging files for Debian are at https://github.com/gluster/glusterfs-debian if someone would like to send a PR for such a change. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 12:11:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 12:11:26 +0000 Subject: [Bugs] [Bug 1593224] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593224 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22907 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 12:11:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 12:11:27 +0000 Subject: [Bugs] [Bug 1593224] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593224 --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22907 (cluster/ec: Prevent double pre-op xattrops) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 12:40:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 12:40:32 +0000 Subject: [Bugs] [Bug 1598900] tmpfiles snippet tries to create folder owned by nonexistent user In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598900 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(eschwartz at archlin | |ux.org) -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Jun 20 13:54:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 13:54:30 +0000 Subject: [Bugs] [Bug 1722507] New: Incorrect reporting of type/gfid mismatch Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722507 Bug ID: 1722507 Summary: Incorrect reporting of type/gfid mismatch Product: GlusterFS Version: mainline Status: ASSIGNED Component: replicate Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com CC: bugs at gluster.org Blocks: 1715447 Target Milestone: --- Classification: Community Description of problem: When checking for a type and gfid mismatch, if the type or gfid is unknown because of a missing gfid handle and gfid xattr, it will be reported as a type or gfid mismatch and the heal will not complete. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1715447 [Bug 1715447] Files in entry split-brain with "type mismatch" -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 14:04:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 14:04:30 +0000 Subject: [Bugs] [Bug 1722507] Incorrect reporting of type/gfid mismatch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722507 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22908 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 14:04:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 14:04:31 +0000 Subject: [Bugs] [Bug 1722507] Incorrect reporting of type/gfid mismatch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722507 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22908 (cluster/afr: Fix incorrect reporting of gfid & type mismatch) posted (#1) for review on master by Karthik U S -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 14:24:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 14:24:44 +0000 Subject: [Bugs] [Bug 1598900] tmpfiles snippet tries to create folder owned by nonexistent user In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598900 Eli Schwartz changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(eschwartz at archlin | |ux.org) | --- Comment #5 from Eli Schwartz --- Yeah, I added those bits right after submitting this ticket: https://git.archlinux.org/svntogit/community.git/commit/trunk?h=packages/glusterfs&id=89503fa1665343a1724e7dc0dc3733b57f4e92c9 I still figure it probably makes sense to ship the paired files: e.g., what happens if you use 'sudo make install'? That does not run the Fedora %pre either. If nothing else, a sysusers.d file serves as documentation for what to do, and for distros that do expect to use them, it's always best to have a canonical one. -- You are receiving this mail because: You are on the CC list for the bug. 
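A hypothetical sysusers.d snippet of the kind being discussed in comments #5 and #6 (the 'gluster' name mirrors the account the RPM %pre creates; the description and home directory here are illustrative):

    # /usr/lib/sysusers.d/glusterfs.conf
    u gluster - "GlusterFS daemons" /run/gluster

With such a file shipped, systemd-sysusers creates the account at package install or early boot, so a tmpfiles.d snippet that assigns directories to 'gluster' no longer references a nonexistent user.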
From bugzilla at redhat.com Thu Jun 20 14:33:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 14:33:28 +0000 Subject: [Bugs] [Bug 1598900] tmpfiles snippet tries to create folder owned by nonexistent user In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598900 Eli Schwartz changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ndevos at redhat.com | |) --- Comment #6 from Eli Schwartz --- BTW: Arch Linux decided to take advantage of sysusers.d for account creation, because it is more convenient than running lots of post-install shellscript fragments and because it can recover from a wiped passwd db in time to make the corresponding tmpfiles.d succeed. I am curious why Fedora guidelines say not to use them. It looks like that page is very, very old: its current form dates to 2013-04-17, while systemd added the sysusers.d file format and the systemd-sysusers tool with systemd 215, released 2014-07-03. Perhaps it has simply never been updated since then? I know systemd ships with its own sysusers.d snippets internally, and according to https://rpmfind.net/linux/RPM/fedora/devel/rawhide/x86_64/s/systemd-242-3.git7a6d834.fc31.x86_64.html it also packages them. The specfile at https://src.fedoraproject.org/rpms/systemd/blob/master/f/systemd.spec does not seem to delete the sysusers.d files, but it does *additionally* include %pre macros which execute useradd/groupadd. This would indicate, perhaps, that glusterfs should both (optionally) install a sysusers.d file and (mandatorily) run useradd in %pre. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 15:17:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 15:17:17 +0000 Subject: [Bugs] [Bug 1722541] New: stale shd process files leading to heal timing out and heal daemon not coming up for all volumes Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722541 Bug ID: 1722541 Summary: stale shd process files leading to heal timing out and heal daemon not coming up for all volumes Product: GlusterFS Version: mainline Status: NEW Component: replicate Keywords: Regression Severity: high Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, nchilaka at redhat.com, rhs-bugs at redhat.com, rkavunga at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1721802 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1721802 +++ Description of problem: ======================= I have a 3-node brick-mux-enabled cluster. 3 volumes exist, as below: a 12x(6+2) ec vol named cvlt-ecv, and 2 1x3 afr vols, namely testvol and logvol. IOs are being done on the cvlt-ecv volume (just DDs and appends). Two of the nodes have been upgraded over the past few days. As part of upgrading the last node of the 3-node cluster to 6.0.5 (including the kernel), I did a node reboot. After that, the bricks were not coming up due to some bad entries in fstab, and on resolving them I also noticed that the cluster went into the rejected state. When checking the cksums of the cvlt-ecv volume, I noticed a difference in the cksum value between n3 (the node being upgraded) and n1 and n2. Hence, to fix that, we deleted the cvlt-ecv directory under /var/lib/glusterd so that glusterd would heal it. 
Did a restart of glusterd, and the peer-rejected issue was fixed. However, we noticed that the shd was not showing online for the 2 afr volumes. Tried a restart of glusterd (including deleting the glusterfsd, shd, and fs procs), but the shd is not coming up for the 2 afr volumes. Based on the logs, we noticed that /var/run/gluster/testvol and logvol still have stale pid entries, hence blocking the shd start on these volumes. I went ahead and deleted the old stale pid files, and shd came up on all the volumes. While I thought it was a one-off thing, I now see the same behavior on another node too, which is quite concerning, as we see the below problems: 1) the manual index heal command is timing out; 2) the heal daemon is not running on the other volumes (as stale pidfiles exist in /var/run/gluster). Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1721802 [Bug 1721802] stale shd process files leading to heal timing out and heal daemon not coming up for all volumes -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 15:19:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 15:19:41 +0000 Subject: [Bugs] [Bug 1722541] stale shd process files leading to heal timing out and heal daemon not coming up for all volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722541 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22909 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 15:19:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 15:19:42 +0000 Subject: [Bugs] [Bug 1722541] stale shd process files leading to heal timing out and heal daemon not coming up for all volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722541 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22909 (shd/mux: Fix race between mux_proc unlink and stop) posted (#2) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 15:28:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 15:28:23 +0000 Subject: [Bugs] [Bug 1722546] New: do not assert in inode_unref if the inode table cleanup has started Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722546 Bug ID: 1722546 Summary: do not assert in inode_unref if the inode table cleanup has started Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: rabhat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: There is a good chance that the inode on which the unref came has already been zero-refed and added to the purge list. This can happen when the inode table is being destroyed (glfs_fini is something which destroys the inode table). Consider a directory 'a' which has a file 'b'. Now, as part of inode table destruction, zero-refing of inodes does not happen from leaf to root. 
It happens in the order inodes are present in the list. So, in this example, the dentry of 'b' would have its parent set to the inode of 'a'. So if 'a' gets zero-refed first (as part of inode table cleanup) and then 'b' has to be zero-refed, then dentry_unset is called on the dentry of 'b', and it further goes on to call inode_unref on b's parent, which is 'a'. In this situation, GF_ASSERT would fire, as the refcount of 'a' has already been set to zero. So, return the inode (in the function inode_unref, without doing anything) if the inode table cleanup has already started and the inode's refcount is zero. Version-Release number of selected component (if applicable): How reproducible: This might happen when glfs_fini is called from a gfapi-based process. Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 15:30:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 15:30:07 +0000 Subject: [Bugs] [Bug 1722546] do not assert in inode_unref if the inode table cleanup has started In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722546 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22650 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 15:30:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 15:30:08 +0000 Subject: [Bugs] [Bug 1722546] do not assert in inode_unref if the inode table cleanup has started In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722546 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22650 (* core: do not assert in inode_unref if the inode table cleanup has started) posted (#4) for review on master by Raghavendra Bhat -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 15:55:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 15:55:42 +0000 Subject: [Bugs] [Bug 1711400] Dispersed volumes leave open file descriptors on nodes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711400 --- Comment #2 from Scott Hubbard --- We are using the Docker version of Gluster, and the latest version is 4.1.7. The image was last updated 4 months ago. https://hub.docker.com/r/gluster/gluster-centos/tags. I see there is also a glusterd2-nightly Docker image. I presume this is the version you used? Is this a drop-in replacement for the gluster-centos Docker image? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
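A minimal sketch in C of the guard described in bug 1722546 above, assuming a cleanup_started flag on the inode table (the field names are assumptions, not quoted from the posted patch; table locking is elided):

    inode_t *
    inode_unref(inode_t *inode)
    {
            if (!inode)
                    return NULL;

            /* If table destruction (e.g. from glfs_fini) is underway and this
             * inode has already been zero-refed onto the purge list, an unref
             * arriving via dentry_unset of a child must not trip GF_ASSERT;
             * hand the inode back untouched. */
            if (inode->table->cleanup_started && inode->ref == 0)
                    return inode;

            /* ... normal path: take the table lock, assert on the refcount,
             * decrement it, and move the inode to the lru/purge list ... */
            return inode;
    }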
From bugzilla at redhat.com Thu Jun 20 17:40:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 17:40:53 +0000 Subject: [Bugs] [Bug 1722598] New: dump the min and max latency of each xlator in statedump Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722598 Bug ID: 1722598 Summary: dump the min and max latency of each xlator in statedump Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: rabhat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: When latency monitoring is enabled at a per-xlator level for a gluster process (via the signal SIGUSR2), taking a statedump won't dump the minimum and maximum latencies seen by each xlator (for each fop). Dumping that information helps in debugging performance- and latency-related issues, as the statedump helps in identifying the xlator where the latency is seen. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 17:42:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 17:42:23 +0000 Subject: [Bugs] [Bug 1722598] dump the min and max latency of each xlator in statedump In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722598 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22910 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 17:42:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 17:42:24 +0000 Subject: [Bugs] [Bug 1722598] dump the min and max latency of each xlator in statedump In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722598 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22910 (statedump: dump the minimum and maximum latency seen by each xlator) posted (#1) for review on master by Raghavendra Bhat -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 20:04:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 20:04:48 +0000 Subject: [Bugs] [Bug 1602824] SMBD crashes when streams_attr VFS is used with Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1602824 Fedora Update System changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |MODIFIED --- Comment #8 from Fedora Update System --- FEDORA-2019-8015e5dc40 has been submitted as an update to Fedora 30. https://bodhi.fedoraproject.org/updates/FEDORA-2019-8015e5dc40 -- You are receiving this mail because: You are on the CC list for the bug. 
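For context on bug 1722598 above, the usual way to exercise this path is via signals; a sketch, assuming the default paths and a brick process:

    kill -USR2 $(pidof glusterfsd)    # toggle per-xlator latency measurement
    # ... run the workload of interest ...
    kill -USR1 $(pidof glusterfsd)    # write a statedump under /var/run/gluster
    grep latency /var/run/gluster/*.dump.*

With the proposed change, each fop's latency entry in the dump would carry the minimum and maximum observed values alongside the mean.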
From bugzilla at redhat.com Thu Jun 20 04:39:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 04:39:37 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #704 from Worker Ant --- REVIEW: https://review.gluster.org/22901 (fix template file after clang-format) merged (#2) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 21:10:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 21:10:40 +0000 Subject: [Bugs] [Bug 1529842] Read-only listxattr syscalls seem to translate to non-read-only FOPs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1529842 --- Comment #2 from nh2 --- Did you use the same version as I was using, 3.12.3? Unfortunately I won't be able to put time into re-reproducing this, as we switched to Ceph a year ago. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Jun 20 21:24:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 21:24:51 +0000 Subject: [Bugs] [Bug 1702131] The source file is left in EC volume after rename when glusterfsd out of service In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702131 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-20 21:24:51 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22602 (ec-heal: check file's gfid when deleting stale name) merged (#4) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 21:29:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 21:29:23 +0000 Subject: [Bugs] [Bug 1665361] Alerts for offline nodes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665361 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|narekuma at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Jun 20 23:00:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 23:00:59 +0000 Subject: [Bugs] [Bug 1721686] Remove usage of obsolete function usleep() In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721686 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22911 -- You are receiving this mail because: You are on the CC list for the bug. 
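A sketch of the kind of replacement tracked in bug 1721686 above (the helper name is hypothetical; the posted patch may structure this differently):

    #include <time.h>

    /* usleep()-compatible sleep built on nanosleep(), which is not marked
     * obsolete by POSIX and reports interruption via EINTR. */
    static int
    gf_nanosleep_usec(unsigned long usec)
    {
            struct timespec req = {
                    .tv_sec = usec / 1000000UL,
                    .tv_nsec = (long)(usec % 1000000UL) * 1000L,
            };
            return nanosleep(&req, NULL);
    }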
From bugzilla at redhat.com Thu Jun 20 23:01:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 20 Jun 2019 23:01:00 +0000 Subject: [Bugs] [Bug 1721686] Remove usage of obsolete function usleep() In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721686 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22911 (Replace usleep() with nanosleep()) posted (#1) for review on master by Vijay Bellur -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 03:36:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 03:36:31 +0000 Subject: [Bugs] [Bug 1722698] New: DHT: severe memory leak in dht rename Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722698 Bug ID: 1722698 Summary: DHT: severe memory leak in dht rename Product: GlusterFS Version: mainline Status: NEW Component: distribute Keywords: Regression Severity: high Priority: high Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, tdesala at redhat.com Depends On: 1722512 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1722512 +++ Description of problem: The dht rename codepath has a severe leak if it needs to create a linkto file. Version-Release number of selected component (if applicable): How reproducible: Consistently Steps to Reproduce: 1. Create a 2x3 distribute replicate volume and fuse mount it. 2. Create 2 directories, dir1 and dir1-new in the root of the volume 3. Find 2 filenames which will hash to different subvols when created in these directories. For example, in my setup dir1/file-1 and dir1-new/newfile-1 hash to different subvols. This is necessary as the leak is in the path which creates a linkto file. 4. Run the following script and watch the memory usage for the mount process using top. Actual results: Memory rises steadily. Statedumps show that the number of active inodes keeps increasing. Expected results: Memory should not increase as there is a single file on the volume. Additional info: This is a regression introduced by https://code.engineering.redhat.com/gerrit/#/c/154933/ in RHGS 3.4.2 --- Additional comment from RHEL Product and Program Management on 2019-06-20 14:02:20 UTC --- This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs?3.5.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag. --- Additional comment from RHEL Product and Program Management on 2019-06-20 14:05:13 UTC --- This bug report has Keywords: Regression or TestBlocker. Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP. --- Additional comment from Nithya Balachandran on 2019-06-20 14:10:03 UTC --- (In reply to Nithya Balachandran from comment #0) > Description of problem: > > The dht rename codepath has a severe leak if it needs to create a linkto > file. > Version-Release number of selected component (if applicable): > > > How reproducible: > Consistently > > Steps to Reproduce: > 1. Create a 2x3 distribute replicate volume and fuse mount it. 
> 2. Create 2 directories, dir1 and dir1-new in the root of the volume > 3. Find 2 filenames which will hash to different subvols when created in > these directories. For example, in my setup dir1/file-1 and > dir1-new/newfile-1 hash to different subvols. This is necessary as the leak > is in the path which creates a linkto file. > 4. Run the following script and watch the memory usage for the mount process > using top. > > Forgot to mention the script in the description: while true; do for i in {1..20000}; do touch /mnt/fuse1/dir1/file-1; mv -f /mnt/fuse1/dir1/file-1 /mnt/fuse1/dir1-new/newfile-1; done; rm -rf /mnt/fuse1/dir1-new/*; done Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1722512 [Bug 1722512] DHT: severe memory leak in dht rename -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 03:36:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 03:36:58 +0000 Subject: [Bugs] [Bug 1722698] DHT: severe memory leak in dht rename In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722698 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords|Regression | Assignee|bugs at gluster.org |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 03:38:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 03:38:30 +0000 Subject: [Bugs] [Bug 1722698] DHT: severe memory leak in dht rename In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722698 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22912 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 03:38:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 03:38:32 +0000 Subject: [Bugs] [Bug 1722698] DHT: severe memory leak in dht rename In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722698 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22912 (cluster/dht: Fixed a memleak in dht_rename_cbk) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 04:21:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 04:21:26 +0000 Subject: [Bugs] [Bug 1717757] WORM: Segmentation Fault if bitrot stub does signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-21 04:21:26 --- Comment #10 from Worker Ant --- REVIEW: https://review.gluster.org/22898 (WORM-Xlator: Avoid performing fsetxattr if fd is NULL) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. 
From bugzilla at redhat.com Fri Jun 21 04:28:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 04:28:09 +0000 Subject: [Bugs] [Bug 1718227] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718227 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22913 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 04:28:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 04:28:10 +0000 Subject: [Bugs] [Bug 1718227] SELinux context labels are missing for newly added bricks using add-brick command In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718227 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22913 (extras/hooks: Add SELinux label on new bricks during add-brick) posted (#1) for review on release-6 by Anoop C S -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 05:15:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 05:15:44 +0000 Subject: [Bugs] [Bug 1722708] New: WORM: Segmentation Fault if bitrot stub does signature Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722708 Bug ID: 1722708 Summary: WORM: Segmentation Fault if bitrot stub does signature Product: GlusterFS Version: 5 Status: NEW Component: bitrot Severity: high Assignee: bugs at gluster.org Reporter: david.spisla at iternity.com CC: atumball at redhat.com, bugs at gluster.org, pasik at iki.fi, risjain at redhat.com, vpandey at redhat.com Target Milestone: --- Classification: Community Docs Contact: bugs at gluster.org +++ This bug was initially created as a clone of Bug #1717757 +++ Description of problem: Setup: 2-node VM cluster with a replica 2 volume. After doing several "wild" write and delete operations from a Windows client, one of the bricks crashes. 
The crash report says: [2019-06-05 09:05:05.137156] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-archive1-access-control: client: CTX_ID:fcab5e67-b9d9-4b72-8c15-f29de2084af3-GRAPH_ID:0-PID:18916-HOST:fs-detlefh-c1-n2-PC_NAME:archive1-client-0-RECON_NO:-0, gfid: 494b42ad-7e40-4e27-8878-99387a80b5dc, req(uid:2000,gid:2000,perm:3,ngrps:1), ctx(uid:0,gid:0,in-groups:0,perm:755,updated-fop:LOOKUP, acl:-) [Permission denied] pending frames: frame : type(0) op(0) frame : type(0) op(23) patchset: git://git.gluster.org/glusterfs.git signal received: 11 time of crash: 2019-06-05 09:05:05 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 5.5 /usr/lib64/libglusterfs.so.0(+0x2764c)[0x7f89faa7264c] /usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7f89faa7cd26] /lib64/libc.so.6(+0x361a0)[0x7f89f9c391a0] /usr/lib64/glusterfs/5.5/xlator/features/bitrot-stub.so(+0x13441)[0x7f89f22ae441] /usr/lib64/libglusterfs.so.0(default_fsetxattr+0xce)[0x7f89faaf9f8e] /usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0x22636)[0x7f89f1e68636] /usr/lib64/libglusterfs.so.0(default_fsetxattr+0xce)[0x7f89faaf9f8e] /usr/lib64/libglusterfs.so.0(syncop_fsetxattr+0x26b)[0x7f89faab319b] /usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0xa901)[0x7f89f1c3d901] /usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0x11b66)[0x7f89f1e57b66] /usr/lib64/glusterfs/5.5/xlator/features/access-control.so(+0xaebe)[0x7f89f208febe] /usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0xb081)[0x7f89f1e51081] /usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0x8c23)[0x7f89f1c3bc23] /usr/lib64/glusterfs/5.5/xlator/features/read-only.so(+0x4e30)[0x7f89f1a2de30] /usr/lib64/glusterfs/5.5/xlator/features/leases.so(+0xa444)[0x7f89f181b444] /usr/lib64/glusterfs/5.5/xlator/features/upcall.so(+0x10a68)[0x7f89f1600a68] /usr/lib64/libglusterfs.so.0(default_create_resume+0x212)[0x7f89fab10132] /usr/lib64/libglusterfs.so.0(call_resume_wind+0x2cf)[0x7f89faa97e5f] /usr/lib64/libglusterfs.so.0(call_resume+0x75)[0x7f89faa983a5] /usr/lib64/glusterfs/5.5/xlator/performance/io-threads.so(+0x6088)[0x7f89f13e7088] /lib64/libpthread.so.0(+0x7569)[0x7f89f9fc4569] /lib64/libc.so.6(clone+0x3f)[0x7f89f9cfb9af] --------- Version-Release number of selected component (if applicable): v5.5 Additional info: The backtrace shows that there is a NULL pointer for *fd in br_stub_fsetxattr: Thread 1 (Thread 0x7f89f0099700 (LWP 2171)): #0 br_stub_fsetxattr (frame=0x7f89b846a6e8, this=0x7f89ec015c00, fd=0x0, dict=0x7f89b84e9ad8, flags=0, xdata=0x0) at bit-rot-stub.c:1328 ret = 0 val = 0 sign = 0x0 priv = 0x7f89ec07ed60 op_errno = 22 __FUNCTION__ = "br_stub_fsetxattr" This results in a segmentation fault at line 1328 of bit-rot-stub.c: if (!IA_ISREG(fd->inode->ia_type)) goto wind; The bitrot-stub wants to sign a file, but the corresponding fd is a NULL pointer. The full backtrace is attached. --- Additional comment from Amar Tumballi on 2019-06-06 06:57:25 UTC --- Not sure why this happened, because, for bitrot, an fsetxattr() call shouldn't come at all if fd is NULL. It should have been prevented at a higher level itself. I found the reason after digging a bit. Ideally, in case of failure (here, worm_create_cbk() received -1, which means fd is NULL), one shouldn't consume fd and call fsetxattr(). If there is a need to do an xattr op on failure, then one should call setxattr with the 'loc' passed in the create() call. (you can store it in local). 
---- #0 br_stub_fsetxattr (frame=0x7f89b846a6e8, this=0x7f89ec015c00, fd=0x0, dict=0x7f89b84e9ad8, flags=0, xdata=0x0) at bit-rot-stub.c:1328 ret = 0 val = 0 sign = 0x0 priv = 0x7f89ec07ed60 op_errno = 22 __FUNCTION__ = "br_stub_fsetxattr" #1 0x00007f89faaf9f8e in default_fsetxattr () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00007f89f1e68636 in pl_fsetxattr (frame=0x7f89b825ab48, this=0x7f89ec0194a0, fd=0x0, dict=0x7f89b84e9ad8, flags=0, xdata=0x0) at posix.c:1566 _new = 0x7f89b846a6e8 old_THIS = 0x7f89ec0194a0 next_xl_fn = 0x7f89faaf9ec0 tmp_cbk = 0x7f89f1e56680 op_ret = op_errno = 0 lockinfo_buf = 0x0 len = 0 __FUNCTION__ = "pl_fsetxattr" #3 0x00007f89faaf9f8e in default_fsetxattr () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #4 0x00007f89faab319b in syncop_fsetxattr () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #5 0x00007f89f1c3d901 in worm_create_cbk (frame=frame at entry=0x7f89b8302fe8, cookie=, this=, op_ret=op_ret at entry=-1, op_errno=op_errno at entry=13, fd=fd at entry=0x0, inode=0x0, buf=0x0, preparent=0x0, postparent=0x0, xdata=0x0) at worm.c:492 ret = 0 priv = 0x7f89ec074b38 dict = 0x7f89b84e9ad8 __FUNCTION__ = "worm_create_cbk" ---- Hopefully this helps. --- Additional comment from Amar Tumballi on 2019-06-06 06:59:29 UTC --- Can you check if the following works? diff --git a/xlators/features/read-only/src/worm.c b/xlators/features/read-only/src/worm.c index cc3d15b8b2..6b44eae966 100644 --- a/xlators/features/read-only/src/worm.c +++ b/xlators/features/read-only/src/worm.c @@ -431,7 +431,7 @@ worm_create_cbk(call_frame_t *frame, void *cookie, xlator_t *this, priv = this->private; GF_ASSERT(priv); - if (priv->worm_file) { + if (priv->worm_file && (op_ret >= 0)) { dict = dict_new(); if (!dict) { gf_log(this->name, GF_LOG_ERROR, ---- Great if you can confirm this. --- Additional comment from on 2019-06-06 07:08:49 UTC --- I will check it! --- Additional comment from on 2019-06-07 08:30:08 UTC --- @Amar I wrote a patch with debug logs and I will observe the bricks now. During this time I have some questions concerning your patch suggestion: 1. According to the crash report from the brick logs, there was a failure in [2019-06-05 09:05:05.137156] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-archive1-access-control: client: CTX_ID:fcab5e67-b9d9-4b72-8c15-f29de2084af3-GRAPH_ID:0-PID:18916-HOST:fs-detlefh-c1-n2-PC_NAME:archive1-client-0-RECON_NO:-0, gfid: 494b42ad-7e40-4e27-8878-99387a80b5dc, req(uid:2000,gid:2000,perm:3,ngrps:1), ctx(uid:0,gid:0,in-groups:0,perm:755,updated-fop:LOOKUP, acl:-) [Permission denied] just before the crash. What can be the reason for this? 2. If this LOOKUP for acls fails, is it problematic to do a setxattr with loc? If we skip setting the xattr when fd is NULL, the file on that brick won't have the necessary xattrs like trusted.worm_file and others. 
See an example directly after the crash: # file: gluster/brick3/glusterbrick/test/data/BC/storage.log trusted.gfid=0sag3y6RuoTgqAw//fx3ZB1Q== trusted.gfid2path.273f2255a25b2961="bd910b86-d51a-4006-a2c4-515ef5f1777a/storage.log" trusted.pgfid.bd910b86-d51a-4006-a2c4-515ef5f1777a=0sAAAAAQ== On the healthy brick I got: # file: gluster/brick3/glusterbrick/test/data/BC/storage.log trusted.afr.dirty=0sAAAAAAAAAAAAAAAA trusted.afr.test-client-0=0sAAAABAAAAAMAAAAA trusted.bit-rot.version=0sAgAAAAAAAABc+P64AAEhGQ== trusted.gfid=0sag3y6RuoTgqAw//fx3ZB1Q== trusted.gfid2path.273f2255a25b2961="bd910b86-d51a-4006-a2c4-515ef5f1777a/storage.log" trusted.glusterfs.mdata=0sAQAAAAAAAAAAAAAAAFz5AJEAAAAAMqdgMwAAAABcRwJEAAAAAAAAAAAAAAAAXPkAkQAAAAAAAAAA trusted.pgfid.bd910b86-d51a-4006-a2c4-515ef5f1777a=0sAAAAAQ== trusted.start_time="1559822481" trusted.worm_file=0sMQA= After restarting the faulty brick a heal was triggered, and afterwards the file on the faulty brick is healed. It should be ensured that the broken file gets all necessary xattrs. What is the better way? Triggering a setxattr with loc in worm_create_cbk, or doing a heal afterwards? --- Additional comment from Amar Tumballi on 2019-06-07 08:35:33 UTC --- 1. Permission denied is most probably an issue of missing permissions (uid 2000 trying to create an entry in a directory with mode 755, owned by uid 0 (root)). 2. I think it is better to leave it to heal. If it is a create failure, we should fail the operation anyway, in my opinion. --- Additional comment from on 2019-06-07 09:57:15 UTC --- All right, I will stress the system for a while, and if everything is stable I will commit the patch to Gerrit --- Additional comment from Amar Tumballi on 2019-06-18 13:38:51 UTC --- Looks like we need to check for 'op_ret' in most of the places in the WORM code. --- Additional comment from Worker Ant on 2019-06-19 11:10:56 UTC --- REVIEW: https://review.gluster.org/22898 (WORM-Xlator: Avoid performing fsetxattr if fd is NULL) posted (#1) for review on master by David Spisla --- Additional comment from on 2019-06-19 11:26:35 UTC --- I have sent a patch to Gerrit. @Amar, if there is any other place in the WORM xlator which can cause a segfault, please tell me. I will write some patches soon. At the moment worm_create_cbk is the only callback function in this xlator --- Additional comment from Worker Ant on 2019-06-21 04:21:26 UTC --- REVIEW: https://review.gluster.org/22898 (WORM-Xlator: Avoid performing fsetxattr if fd is NULL) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. 
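To make the fix pattern from the comments above concrete: a create callback must not dereference fd when op_ret signals failure, because fd is NULL on that path. Below is a minimal, self-contained sketch of that guard; the types are stand-ins (the real callback uses GlusterFS's call_frame_t, fd_t and dict_t), and the printf calls stand in for the actual fsetxattr and unwind logic.

----
#include <stdio.h>

/* Stand-in for GlusterFS's fd_t; only used to show pointer validity. */
typedef struct { int unused; } fd_t;

/* Sketch of the guard from the posted diff: only touch fd (e.g. to set
 * trusted.worm_file via fsetxattr) when the create actually succeeded. */
static void create_cbk(int op_ret, int op_errno, fd_t *fd, int worm_file)
{
    if (worm_file && op_ret >= 0) {
        /* Safe: the create succeeded, so fd is a valid descriptor. */
        printf("would fsetxattr trusted.worm_file on fd %p\n", (void *)fd);
    } else if (op_ret < 0) {
        /* Failure path: fd is NULL, so skip the xattr work and just
         * propagate the error; self-heal repairs the xattrs later. */
        printf("create failed (errno %d), skipping xattr work\n", op_errno);
    }
}

int main(void)
{
    fd_t good = { 0 };
    create_cbk(0, 0, &good, 1);  /* success: xattr work is safe */
    create_cbk(-1, 13, NULL, 1); /* EACCES failure: fd is NULL */
    return 0;
}
----

As agreed in the comments, the missing xattrs on the failed brick are then left to self-heal rather than being set via a loc-based setxattr.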
From bugzilla at redhat.com Fri Jun 21 05:17:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 05:17:13 +0000 Subject: [Bugs] [Bug 1722709] New: WORM: Segmentation Fault if bitrot stub do signature Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722709 Bug ID: 1722709 Summary: WORM: Segmentation Fault if bitrot stub do signature Product: GlusterFS Version: 6 Status: NEW Component: bitrot Severity: high Assignee: bugs at gluster.org Reporter: david.spisla at iternity.com CC: atumball at redhat.com, bugs at gluster.org, pasik at iki.fi, risjain at redhat.com, vpandey at redhat.com Target Milestone: --- Classification: Community Docs Contact: bugs at gluster.org +++ This bug was initially created as a clone of Bug #1717757 +++ Description of problem: Setup: 2-Node VM Cluster with a Replica 2 Volume After doing several "wild" write and delete operations from a Win Client, one of the bricks crashes. The crash report says: [2019-06-05 09:05:05.137156] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-archive1-access-control: client: CTX_ID:fcab5e67-b9d9-4b72-8c15-f29de2084af3-GRAPH_ID:0-PID:18916-HOST:fs-detlefh-c1-n2-PC_NAME:archive1-client-0-RECON_NO:-0, gfid: 494b42ad-7e40-4e27-8878-99387a80b5dc, req(uid:2000,gid:2000,perm:3,ngrps:1), ctx(uid:0,gid:0,in-groups:0,perm:755,updated-fop:LOOKUP, acl:-) [Permission denied] pending frames: frame : type(0) op(0) frame : type(0) op(23) patchset: git://git.gluster.org/glusterfs.git signal received: 11 time of crash: 2019-06-05 09:05:05 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 5.5 /usr/lib64/libglusterfs.so.0(+0x2764c)[0x7f89faa7264c] /usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7f89faa7cd26] /lib64/libc.so.6(+0x361a0)[0x7f89f9c391a0] /usr/lib64/glusterfs/5.5/xlator/features/bitrot-stub.so(+0x13441)[0x7f89f22ae441] /usr/lib64/libglusterfs.so.0(default_fsetxattr+0xce)[0x7f89faaf9f8e] /usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0x22636)[0x7f89f1e68636] /usr/lib64/libglusterfs.so.0(default_fsetxattr+0xce)[0x7f89faaf9f8e] /usr/lib64/libglusterfs.so.0(syncop_fsetxattr+0x26b)[0x7f89faab319b] /usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0xa901)[0x7f89f1c3d901] /usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0x11b66)[0x7f89f1e57b66] /usr/lib64/glusterfs/5.5/xlator/features/access-control.so(+0xaebe)[0x7f89f208febe] /usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0xb081)[0x7f89f1e51081] /usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0x8c23)[0x7f89f1c3bc23] /usr/lib64/glusterfs/5.5/xlator/features/read-only.so(+0x4e30)[0x7f89f1a2de30] /usr/lib64/glusterfs/5.5/xlator/features/leases.so(+0xa444)[0x7f89f181b444] /usr/lib64/glusterfs/5.5/xlator/features/upcall.so(+0x10a68)[0x7f89f1600a68] /usr/lib64/libglusterfs.so.0(default_create_resume+0x212)[0x7f89fab10132] /usr/lib64/libglusterfs.so.0(call_resume_wind+0x2cf)[0x7f89faa97e5f] /usr/lib64/libglusterfs.so.0(call_resume+0x75)[0x7f89faa983a5] /usr/lib64/glusterfs/5.5/xlator/performance/io-threads.so(+0x6088)[0x7f89f13e7088] /lib64/libpthread.so.0(+0x7569)[0x7f89f9fc4569] /lib64/libc.so.6(clone+0x3f)[0x7f89f9cfb9af] --------- Version-Release number of selected component (if applicable): v5.5 Additional info: The backtrace shows that there is a null pointer for *fd in br_stub_fsetxattr: Thread 1 (Thread 0x7f89f0099700 (LWP 2171)): #0 br_stub_fsetxattr (frame=0x7f89b846a6e8, this=0x7f89ec015c00, fd=0x0, 
dict=0x7f89b84e9ad8, flags=0, xdata=0x0) at bit-rot-stub.c:1328 ret = 0 val = 0 sign = 0x0 priv = 0x7f89ec07ed60 op_errno = 22 __FUNCTION__ = "br_stub_fsetxattr" This results in a segmentation fault at line 1328 of bit-rot-stub.c: if (!IA_ISREG(fd->inode->ia_type)) goto wind; The bitrot stub wants to sign a file, but the corresponding fd is a null pointer. The full backtrace is attached. --- Additional comment from Amar Tumballi on 2019-06-06 06:57:25 UTC --- Not sure why this happened, because, for bitrot, an fsetxattr() call shouldn't come at all if fd is NULL. It should have been prevented at a higher level itself. I found the reason after digging a bit. Ideally, in case of failure (here, worm_create_cbk() received -1, which means fd is NULL), one shouldn't consume fd and call fsetxattr(). If there is a need to do an xattr op on failure, then one should call setxattr with the 'loc' passed in the create() call (you can store it in local). ---- #0 br_stub_fsetxattr (frame=0x7f89b846a6e8, this=0x7f89ec015c00, fd=0x0, dict=0x7f89b84e9ad8, flags=0, xdata=0x0) at bit-rot-stub.c:1328 ret = 0 val = 0 sign = 0x0 priv = 0x7f89ec07ed60 op_errno = 22 __FUNCTION__ = "br_stub_fsetxattr" #1 0x00007f89faaf9f8e in default_fsetxattr () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00007f89f1e68636 in pl_fsetxattr (frame=0x7f89b825ab48, this=0x7f89ec0194a0, fd=0x0, dict=0x7f89b84e9ad8, flags=0, xdata=0x0) at posix.c:1566 _new = 0x7f89b846a6e8 old_THIS = 0x7f89ec0194a0 next_xl_fn = 0x7f89faaf9ec0 tmp_cbk = 0x7f89f1e56680 op_ret = op_errno = 0 lockinfo_buf = 0x0 len = 0 __FUNCTION__ = "pl_fsetxattr" #3 0x00007f89faaf9f8e in default_fsetxattr () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #4 0x00007f89faab319b in syncop_fsetxattr () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #5 0x00007f89f1c3d901 in worm_create_cbk (frame=frame at entry=0x7f89b8302fe8, cookie=, this=, op_ret=op_ret at entry=-1, op_errno=op_errno at entry=13, fd=fd at entry=0x0, inode=0x0, buf=0x0, preparent=0x0, postparent=0x0, xdata=0x0) at worm.c:492 ret = 0 priv = 0x7f89ec074b38 dict = 0x7f89b84e9ad8 __FUNCTION__ = "worm_create_cbk" ---- Hopefully this helps. --- Additional comment from Amar Tumballi on 2019-06-06 06:59:29 UTC --- Can you check if the following works? diff --git a/xlators/features/read-only/src/worm.c b/xlators/features/read-only/src/worm.c index cc3d15b8b2..6b44eae966 100644 --- a/xlators/features/read-only/src/worm.c +++ b/xlators/features/read-only/src/worm.c @@ -431,7 +431,7 @@ worm_create_cbk(call_frame_t *frame, void *cookie, xlator_t *this, priv = this->private; GF_ASSERT(priv); - if (priv->worm_file) { + if (priv->worm_file && (op_ret >= 0)) { dict = dict_new(); if (!dict) { gf_log(this->name, GF_LOG_ERROR, ---- Great if you can confirm this. --- Additional comment from on 2019-06-06 07:08:49 UTC --- I will check it! --- Additional comment from on 2019-06-07 08:30:08 UTC --- @Amar I wrote a patch with debug logs and I will observe the bricks now. During this time I have some questions concerning your patch suggestion: 1. 
According to the crash report from the brick logs, there was a failure in [2019-06-05 09:05:05.137156] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-archive1-access-control: client: CTX_ID:fcab5e67-b9d9-4b72-8c15-f29de2084af3-GRAPH_ID:0-PID:18916-HOST:fs-detlefh-c1-n2-PC_NAME:archive1-client-0-RECON_NO:-0, gfid: 494b42ad-7e40-4e27-8878-99387a80b5dc, req(uid:2000,gid:2000,perm:3,ngrps:1), ctx(uid:0,gid:0,in-groups:0,perm:755,updated-fop:LOOKUP, acl:-) [Permission denied] just before the crash. What can be the reason for this? 2. If this LOOKUP for acls fails, is it problematic to do a setxattr with loc? If we skip setting the xattr when fd is NULL, the file on that brick won't have the necessary xattrs like trusted.worm_file and others. See an example directly after the crash: # file: gluster/brick3/glusterbrick/test/data/BC/storage.log trusted.gfid=0sag3y6RuoTgqAw//fx3ZB1Q== trusted.gfid2path.273f2255a25b2961="bd910b86-d51a-4006-a2c4-515ef5f1777a/storage.log" trusted.pgfid.bd910b86-d51a-4006-a2c4-515ef5f1777a=0sAAAAAQ== On the healthy brick I got: # file: gluster/brick3/glusterbrick/test/data/BC/storage.log trusted.afr.dirty=0sAAAAAAAAAAAAAAAA trusted.afr.test-client-0=0sAAAABAAAAAMAAAAA trusted.bit-rot.version=0sAgAAAAAAAABc+P64AAEhGQ== trusted.gfid=0sag3y6RuoTgqAw//fx3ZB1Q== trusted.gfid2path.273f2255a25b2961="bd910b86-d51a-4006-a2c4-515ef5f1777a/storage.log" trusted.glusterfs.mdata=0sAQAAAAAAAAAAAAAAAFz5AJEAAAAAMqdgMwAAAABcRwJEAAAAAAAAAAAAAAAAXPkAkQAAAAAAAAAA trusted.pgfid.bd910b86-d51a-4006-a2c4-515ef5f1777a=0sAAAAAQ== trusted.start_time="1559822481" trusted.worm_file=0sMQA= After restarting the faulty brick a heal was triggered, and afterwards the file on the faulty brick is healed. It should be ensured that the broken file gets all necessary xattrs. What is the better way? Triggering a setxattr with loc in worm_create_cbk, or doing a heal afterwards? --- Additional comment from Amar Tumballi on 2019-06-07 08:35:33 UTC --- 1. Permission denied is most probably an issue of missing permissions (uid 2000 trying to create an entry in a directory with mode 755, owned by uid 0 (root)). 2. I think it is better to leave it to heal. If it is a create failure, we should fail the operation anyway, in my opinion. --- Additional comment from on 2019-06-07 09:57:15 UTC --- All right, I will stress the system for a while, and if everything is stable I will commit the patch to Gerrit --- Additional comment from Amar Tumballi on 2019-06-18 13:38:51 UTC --- Looks like we need to check for 'op_ret' in most of the places in the WORM code. --- Additional comment from Worker Ant on 2019-06-19 11:10:56 UTC --- REVIEW: https://review.gluster.org/22898 (WORM-Xlator: Avoid performing fsetxattr if fd is NULL) posted (#1) for review on master by David Spisla --- Additional comment from on 2019-06-19 11:26:35 UTC --- I have sent a patch to Gerrit. @Amar, if there is any other place in the WORM xlator which can cause a segfault, please tell me. I will write some patches soon. At the moment worm_create_cbk is the only callback function in this xlator --- Additional comment from Worker Ant on 2019-06-21 04:21:26 UTC --- REVIEW: https://review.gluster.org/22898 (WORM-Xlator: Avoid performing fsetxattr if fd is NULL) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. 
From bugzilla at redhat.com Fri Jun 21 05:19:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 05:19:27 +0000 Subject: [Bugs] [Bug 1717757] WORM: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22915 -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Fri Jun 21 05:19:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 05:19:28 +0000 Subject: [Bugs] [Bug 1717757] WORM: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #11 from Worker Ant --- REVIEW: https://review.gluster.org/22915 (WORM-Xlator: Avoid performing fsetxattr if fd is NULL) posted (#1) for review on release-5 by David Spisla -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Fri Jun 21 05:20:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 05:20:52 +0000 Subject: [Bugs] [Bug 1717757] WORM: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22916 -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Fri Jun 21 05:20:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 05:20:53 +0000 Subject: [Bugs] [Bug 1717757] WORM: Segmentation Fault if bitrot stub do signature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1717757 --- Comment #12 from Worker Ant --- REVIEW: https://review.gluster.org/22916 (WORM-Xlator: Avoid performing fsetxattr if fd is NULL) posted (#1) for review on release-6 by David Spisla -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Fri Jun 21 05:25:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 05:25:28 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22918 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Jun 21 05:25:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 05:25:29 +0000 Subject: [Bugs] [Bug 1718734] Memory leak in glusterfsd process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1718734 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #12 from Worker Ant --- REVIEW: https://review.gluster.org/22918 (Detach iot_worker to release its resources) posted (#1) for review on master by Liguang Li -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 06:57:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 06:57:03 +0000 Subject: [Bugs] [Bug 1722740] New: [GSS] geo-replication sessions going faulty Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722740 Bug ID: 1722740 Summary: [GSS] geo-replication sessions going faulty Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Severity: high Priority: high Assignee: bugs at gluster.org Reporter: sunkumar at redhat.com CC: amanzane at redhat.com, amukherj at redhat.com, avishwan at redhat.com, bkunal at redhat.com, ccalhoun at redhat.com, csaba at redhat.com, davie.desmet at unifiedpost.com, jfindysz at redhat.com, khiremat at redhat.com, olim at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, sunkumar at redhat.com, vdas at redhat.com Depends On: 1712591 Blocks: 1696809 Target Milestone: --- Group: redhat Classification: Community Description of problem: gluster command not found. How reproducible: Always Steps to Reproduce: 1. Set up non-root geo-replication 2. Start geo-replication 3. Session goes faulty Expected results: Session should not go faulty -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 07:20:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 07:20:05 +0000 Subject: [Bugs] [Bug 1722740] [GSS] geo-replication sessions going faulty In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722740 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Sunny Kumar --- Upstream Patch: https://review.gluster.org/#/c/glusterfs/+/22920/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 11:10:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:10:16 +0000 Subject: [Bugs] [Bug 1720201] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720201 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-21 11:10:16 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22858 (posix/ctime: Fix ctime upgrade issue) merged (#5) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Jun 21 11:11:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:11:28 +0000 Subject: [Bugs] [Bug 1722802] New: Incorrect power of two calculation in mem_pool_get_fn Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722802 Bug ID: 1722802 Summary: Incorrect power of two calculation in mem_pool_get_fn Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: atumball at redhat.com, bugs at gluster.org, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1722801 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1722801 +++ Description of problem: The method used to calculate the power of two value for a type is off by 1, causing twice the required amount of memory to be allocated. For example, comparing the information for inode_t in statedumps from 3.4.4 and 3.5.0: 3.4.4: ------ pool-name=inode_t active-count=15408 sizeof-type=168 padded-sizeof=256 size=3944448 shared-pool=0x7fac27a7b468 -----=----- 3.5.0: ------ pool-name=inode_t active-count=2 sizeof-type=255 <--- actual sizeof inode_t is 168 padded-sizeof=512 <--- padded size is twice the required amount size=1024 shared-pool=0x7f1103b5b6d0 Version-Release number of selected component (if applicable): 3.5.0 How reproducible: Steps to Reproduce: 1. Create a volume, fuse-mount it and create some files and dirs on it 2. Take a statedump of the gluster mount process (kill -SIGUSR1 ) 3. Compare the sizeof-type and padded-sizeof values in the statedumps from the two releases. Actual results: The padded-sizeof is twice the smallest power of two value for sizeof-type + sizeof(obj header) Expected results: The padded-sizeof should be the smallest power of two value for sizeof-type + sizeof(obj header) Additional info: --- Additional comment from RHEL Product and Program Management on 2019-06-21 11:10:14 UTC --- This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs?3.5.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1722801 [Bug 1722801] Incorrect power of two calculation in mem_pool_get_fn -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 11:15:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:15:28 +0000 Subject: [Bugs] [Bug 1722802] Incorrect power of two calculation in mem_pool_get_fn In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722802 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22921 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
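To illustrate the arithmetic behind the report: the padded size should be the smallest power of two that fits sizeof(type) plus the per-object header, and subtracting one before shifting avoids doubling values that are already exact powers of two. The sketch below is illustrative only, not the actual mem-pool code; the header size is an assumption chosen so that the 168-byte inode_t rounds to the 256 seen in the 3.4.4 statedump.

----
#include <stdio.h>
#include <stddef.h>

#define OBJ_HEADER_SIZE 40 /* hypothetical; the real header size differs */

/* Smallest power of two >= n, for n > 0. Working on (n - 1) prevents
 * the off-by-one that doubles an n which is already a power of two. */
static size_t round_up_pow2(size_t n)
{
    size_t p = 1;
    n -= 1;
    while (n) {
        n >>= 1;
        p <<= 1;
    }
    return p;
}

int main(void)
{
    size_t type_size = 168; /* sizeof(inode_t) from the 3.4.4 statedump */
    size_t padded = round_up_pow2(type_size + OBJ_HEADER_SIZE);
    printf("padded-sizeof = %zu\n", padded); /* 256, not the buggy 512 */
    return 0;
}
----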
From bugzilla at redhat.com Fri Jun 21 11:15:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:15:29 +0000 Subject: [Bugs] [Bug 1722802] Incorrect power of two calculation in mem_pool_get_fn In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722802 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22921 (core: fix memory allocation issues) posted (#4) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 11:16:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:16:27 +0000 Subject: [Bugs] [Bug 1722805] New: Healing not proceeding during in-service upgrade on a disperse volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722805 Bug ID: 1722805 Summary: Healing not proceeding during in-service upgrade on a disperse volume Product: GlusterFS Version: 6 Hardware: All OS: Linux Status: NEW Component: ctime Keywords: Regression Severity: high Priority: high Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: amukherj at redhat.com, aspandey at redhat.com, bugs at gluster.org, jahernan at redhat.com, khiremat at redhat.com, kiyer at redhat.com, nchilaka at redhat.com, pasik at iki.fi, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, ubansal at redhat.com, vdas at redhat.com Depends On: 1713664, 1720201 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1713664 [Bug 1713664] Healing not proceeding during in-service upgrade on a disperse volume https://bugzilla.redhat.com/show_bug.cgi?id=1720201 [Bug 1720201] Healing not proceeding during in-service upgrade on a disperse volume -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 11:16:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:16:27 +0000 Subject: [Bugs] [Bug 1720201] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720201 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1722805 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1722805 [Bug 1722805] Healing not proceeding during in-service upgrade on a disperse volume -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 11:17:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:17:01 +0000 Subject: [Bugs] [Bug 1722805] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722805 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #0 is|1 |0 private| | Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Jun 21 11:20:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:20:27 +0000 Subject: [Bugs] [Bug 1722802] Incorrect power of two calculation in mem_pool_get_fn In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722802 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |jahernan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 11:37:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:37:17 +0000 Subject: [Bugs] [Bug 1722805] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722805 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22922 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 11:37:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 11:37:18 +0000 Subject: [Bugs] [Bug 1722805] Healing not proceeding during in-service upgrade on a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722805 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22922 (posix/ctime: Fix ctime upgrade issue) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 13:57:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 13:57:11 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22924 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 13:57:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 13:57:12 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #705 from Worker Ant --- REVIEW: https://review.gluster.org/22924 ([WIP]glusterd.h: align structs) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Jun 21 21:47:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 21:47:28 +0000 Subject: [Bugs] [Bug 1722977] New: ESTALE change in fuse breaks get_real_filename implementation Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722977 Bug ID: 1722977 Summary: ESTALE change in fuse breaks get_real_filename implementation Product: GlusterFS Version: mainline Status: NEW Component: posix Severity: high Assignee: bugs at gluster.org Reporter: madam at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: The change of ENOENT to ESTALE in the fuse bridge in 59629f1da9dca670d5dcc6425f7f89b3e96b46bf has broken the get_real_filename implementation over fuse: get_real_filename is implemented as a virtual extended attribute to help Samba implement the case-insensitive but case-preserving SMB protocol more efficiently. It is implemented as a getxattr call on the parent directory with the virtual key of "get_real_filename:" by looking for a spelling with different case for the provided file/dir name () and returning this correct spelling as a result if the entry is found. Originally (05aaec645a6262d431486eb5ac7cd702646cfcfb), the implementation used the ENOENT errno to return the authoritative answer that the requested name does not exist in any case folding. Now this implementation is actually a violation or misuse of the defined API for the getxattr call, which returns ENOENT for the case that the dir the call is made against does not exist, and ENOATTR (or the synonym ENODATA) for the case that the xattr does not exist. This was not a problem until the gluster fuse-bridge was changed to map ENOENT to ESTALE in 59629f1da9dca670d5dcc6425f7f89b3e96b46bf, after which the getxattr call for get_real_filename returned ESTALE instead of ENOENT, breaking the expectation in Samba. (It is an independent problem that ESTALE should not leak out to user space but is intended to trigger retries between fuse and gluster. My theory is that the leaking happens because of the wrong use of ESTALE here: the parent directory exists in this case, and there is nothing stale...) But nevertheless, the semantics seem to be incorrect here and should be changed. Version-Release number of selected component (if applicable): master and version 6 How reproducible: Always. Steps to Reproduce: On a gluster fuse mount, run `getfattr -n glusterfs.get_real_filename:file-that-does-not-exist /path/to/fuse/mount/some-subdir`. Actual results: This shows the ESTALE error. Expected results: It shows ENOENT. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Jun 21 21:47:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 21:47:42 +0000 Subject: [Bugs] [Bug 1722977] ESTALE change in fuse breaks get_real_filename implementation In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722977 Michael Adam changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |madam at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
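Once the xlator returns ENOATTR for "no spelling exists in any case" and keeps ENOENT for a missing parent directory, a caller can tell the two situations apart. Below is a hypothetical sketch of such a caller, not Samba's actual code, using the virtual xattr key from the reproduction step above; the helper name and paths are made up for illustration.

----
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/xattr.h>

#ifndef ENOATTR
#define ENOATTR ENODATA /* Linux spells the "no such xattr" errno ENODATA */
#endif

/* Ask the parent directory for the case-correct spelling of `name`.
 * Returns 0 on success (buf holds the spelling), 1 if no entry exists
 * under any case folding, -1 on real errors such as ENOENT (the parent
 * directory itself is missing). Hypothetical helper for illustration. */
static int get_real_filename(const char *parent, const char *name,
                             char *buf, size_t buflen)
{
    char key[512];
    snprintf(key, sizeof(key), "glusterfs.get_real_filename:%s", name);

    ssize_t ret = getxattr(parent, key, buf, buflen);
    if (ret >= 0)
        return 0;
    if (errno == ENOATTR)
        return 1;  /* authoritative: no such file under any spelling */
    return -1;     /* parent missing (ENOENT), or an unexpected error */
}

int main(void)
{
    char buf[256];
    /* Placeholder paths, matching the reproduction step in the report. */
    int rc = get_real_filename("/path/to/fuse/mount/some-subdir",
                               "file-that-does-not-exist", buf, sizeof(buf));
    printf("rc = %d\n", rc);
    return 0;
}
----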
From bugzilla at redhat.com Fri Jun 21 22:01:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 22:01:04 +0000 Subject: [Bugs] [Bug 1722977] ESTALE change in fuse breaks get_real_filename implementation In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722977 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22925 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 22:01:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 22:01:05 +0000 Subject: [Bugs] [Bug 1722977] ESTALE change in fuse breaks get_real_filename implementation In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722977 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22925 ([RFC] change get_real_filename implementation to use ENOATTR instead of ENOENT) posted (#1) for review on master by Michael Adam -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Jun 21 22:03:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 21 Jun 2019 22:03:50 +0000 Subject: [Bugs] [Bug 1722977] ESTALE change in fuse breaks get_real_filename implementation In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722977 --- Comment #2 from Michael Adam --- The corresponding changes to Samba have already been written and will be posted upstream. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Jun 22 05:06:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 22 Jun 2019 05:06:51 +0000 Subject: [Bugs] [Bug 1593224] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593224 --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22907 (cluster/ec: Prevent double pre-op xattrops) merged (#4) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Jun 22 06:04:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 22 Jun 2019 06:04:26 +0000 Subject: [Bugs] [Bug 1602824] SMBD crashes when streams_attr VFS is used with Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1602824 Fedora Update System changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |ON_QA --- Comment #9 from Fedora Update System --- samba-4.10.5-1.fc30 has been pushed to the Fedora 30 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-8015e5dc40 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Jun 24 04:48:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 04:48:34 +0000 Subject: [Bugs] [Bug 1720566] Can't rebalance GlusterFS volume because unix socket's path name is too long In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720566 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22929 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 24 04:48:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 04:48:35 +0000 Subject: [Bugs] [Bug 1720566] Can't rebalance GlusterFS volume because unix socket's path name is too long In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720566 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22929 (Test patch for the cluster.rc logdir changes) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 24 05:02:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 05:02:21 +0000 Subject: [Bugs] [Bug 1722541] stale shd process files leading to heal timing out and heal deamon not coming up for all volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722541 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-24 05:02:21 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22909 (shd/mux: Fix race between mux_proc unlink and stop) merged (#4) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 24 07:25:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 07:25:50 +0000 Subject: [Bugs] [Bug 1723280] New: windows cannot access mountpoint exportd from a disperse volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1723280 Bug ID: 1723280 Summary: windows cannot access mountpoint exportd from a disperse volume Product: GlusterFS Version: mainline Status: NEW Component: libgfapi Assignee: bugs at gluster.org Reporter: kinglongmee at gmail.com QA Contact: bugs at gluster.org CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Windows cannot access a mountpoint exported from a disperse volume; there are error messages in the smbd log file such as: [ec-helpers.c:400:ec_loc_gfid_check] 0-test1-disperse-0: Mismatching GFID's in loc [dht-common.c:1574:dht_revalidate_cbk] 0-test1-dht: Revalidate: subvolume test1-disperse-0 for /cifsshare (gfid = 3346d752-dd4e-42de-9829-8a0cc3a4e30b) returned -1 [Invalid argument] Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. create a 4+2 disperse volume, 2. export /cifsshare from smbd, 3. access the exported dir from Windows 7 Actual results: Win7 reports a bad disk volume. Expected results: Win7 accesses the exported dir and lists files correctly. Additional info: -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Jun 24 07:27:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 07:27:52 +0000 Subject: [Bugs] [Bug 1723280] windows cannot access mountpoint exportd from a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1723280 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22930 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 24 07:27:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 07:27:53 +0000 Subject: [Bugs] [Bug 1723280] windows cannot access mountpoint exportd from a disperse volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1723280 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22930 (gfapi: set right pargfid according to parent's inode) posted (#1) for review on master by Kinglong Mee -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 24 08:26:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 08:26:50 +0000 Subject: [Bugs] [Bug 1721385] glusterfs-libs: usage of inet_addr() may impact IPv6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721385 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-24 08:26:50 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22866 (core: replace inet_addr with inet_pton) merged (#7) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 24 09:55:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 09:55:02 +0000 Subject: [Bugs] [Bug 1720633] Upcall: Avoid sending upcalls for invalid Inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1720633 --- Comment #2 from Pasi Karkkainen --- Hmm.. it seems the merge to the 6.x branch is "blocked" due to a build failure regression? https://review.gluster.org/#/c/glusterfs/+/22873/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 24 11:17:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 11:17:15 +0000 Subject: [Bugs] [Bug 1663519] Memory leak when smb.conf has "store dos attributes = yes" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663519 --- Comment #7 from ryan at magenta.tv --- Hello, I'm trying to gather more information on this memory ballooning, by following the steps documented here: https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md#mempools Is there a way to find the PID of a Gluster VFS client based on the Samba PID that is ballooning/showing the memory leak issue? 
Many thanks, Ryan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 24 11:53:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 11:53:38 +0000 Subject: [Bugs] [Bug 1722187] Glusterd Seg faults (sig 11) when RDMA used with MLNX_OFED In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1722187 --- Comment #1 from ryan at magenta.tv --- After some more testing, I've found that: - The issue goes away if the MLNX_OFED package is uninstalled - The issue exists even if all Gluster volumes are set to TCP transport -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 24 12:19:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 12:19:11 +0000 Subject: [Bugs] [Bug 1721601] [SHD] : logs of one volume are going to log file of other volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1721601 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-06-24 12:19:11 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22895 (glusterd/shd: Change shd logfile to a unique name) merged (#5) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Jun 24 13:06:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 13:06:17 +0000 Subject: [Bugs] [Bug 1723402] New: Brick multiplexing is not working. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1723402 Bug ID: 1723402 Summary: Brick multiplexing is not working. Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bmekala at redhat.com, jmulligan at redhat.com, mmuench at redhat.com, rhs-bugs at redhat.com, rsevilla at redhat.com, rtalur at redhat.com, sankarshan at redhat.com, sarora at redhat.com, srakonde at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Blocks: 1722509 Target Milestone: --- Group: private Classification: Community The brick multiplexing feature breaks when volumes have distinct user.heketi.id values configured. The expectation is to have bricks coming up as part of the same process, but they come up with a 1:1 brick-to-process ratio. -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Jun 24 14:36:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 24 Jun 2019 14:36:25 +0000 Subject: [Bugs] [Bug 1723455] New: volume set group description missing space leading to words being merged in help output Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1723455 Bug ID: 1723455 Summary: volume set group description missing space leading to words being merged in help output Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: cli Severity: low Assignee: kiyer at redhat.com Reporter: kiyer at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: A volume set group description is missing a space, leading to words being merged. This can be seen easily in the # gluster v help output. 
################################################################################ # gluster v help gluster volume commands ======================== volume add-brick [ [arbiter ]] ... [force] - add brick to volume volume barrier {enable|disable} - Barrier/unbarrier file operations on a volume volume clear-locks kind {blocked|granted|all}{inode [range]|entry [basename]|posix [range]} - Clear locks held on path volume create [stripe ] [replica [arbiter ]] [disperse []] [disperse-data ] [redundancy ] [transport ] ... [force] - create a new volume of specified type with mentioned bricks volume delete - delete volume specified by volume geo-replication [] []::[] {\ create [[ssh-port n] [[no-verify] \ | [push-pem]]] [force] \ | start [force] \ | stop [force] \ | pause [force] \ | resume [force] \ | config [[[\!]