From bugzilla at redhat.com Mon Apr 1 02:41:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 02:41:14 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(amukherj at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 03:45:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 03:45:02 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-03-25 16:32:41 |2019-04-01 03:45:02 --- Comment #14 from Worker Ant --- REVIEW: https://review.gluster.org/22075 (mgmt/shd: Implement multiplexing in self heal daemon) merged (#25) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 04:35:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 04:35:10 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22455 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 04:35:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 04:35:11 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22455 (posix-acl: remove default functions, and use library fn instead) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 05:32:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:32:03 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22458 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 1 05:32:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:32:04 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22458 (tests: enhance the auth.allow test to validate all failures of 'login' module) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 05:59:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:59:09 +0000 Subject: [Bugs] [Bug 1694561] New: gfapi: do not block epoll thread for upcall notifications Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Bug ID: 1694561 Summary: gfapi: do not block epoll thread for upcall notifications Product: GlusterFS Version: 6 Hardware: All OS: All Status: NEW Component: libgfapi Severity: high Assignee: bugs at gluster.org Reporter: skoduri at redhat.com QA Contact: bugs at gluster.org CC: bugs at gluster.org, pasik at iki.fi Depends On: 1693575 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1693575 +++ Description of problem: With https://review.gluster.org/#/c/glusterfs/+/21783/, we have made changes to offload processing upcall notifications to synctask so as not to block epoll threads. However seems like the purpose wasnt fully resolved. In "glfs_cbk_upcall_data" -> "synctask_new1" after creating synctask if there is no callback defined, the thread waits on synctask_join till the syncfn is finished. So that way even with those changes, epoll threads are blocked till the upcalls are processed. Hence the right fix now is to define a callback function for that synctask "glfs_cbk_upcall_syncop" so as to unblock epoll/notify threads completely and the upcall processing can happen in parallel by synctask threads. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Soumya Koduri on 2019-03-28 09:28:58 UTC --- Users have complained about nfs-ganesha process getting stuck here - https://github.com/nfs-ganesha/nfs-ganesha/issues/335 --- Additional comment from Worker Ant on 2019-03-28 09:34:11 UTC --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on master by soumya k --- Additional comment from Worker Ant on 2019-03-29 07:25:10 UTC --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) merged (#4) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 [Bug 1693575] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 1 05:59:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:59:09 +0000 Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1694561 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 [Bug 1694561] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 05:59:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:59:32 +0000 Subject: [Bugs] [Bug 1694562] New: gfapi: do not block epoll thread for upcall notifications Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 Bug ID: 1694562 Summary: gfapi: do not block epoll thread for upcall notifications Product: GlusterFS Version: 5 Hardware: All OS: All Status: NEW Component: libgfapi Severity: high Assignee: bugs at gluster.org Reporter: skoduri at redhat.com QA Contact: bugs at gluster.org CC: bugs at gluster.org, pasik at iki.fi Depends On: 1693575 Blocks: 1694561 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1693575 +++ Description of problem: With https://review.gluster.org/#/c/glusterfs/+/21783/, we have made changes to offload processing upcall notifications to synctask so as not to block epoll threads. However seems like the purpose wasnt fully resolved. In "glfs_cbk_upcall_data" -> "synctask_new1" after creating synctask if there is no callback defined, the thread waits on synctask_join till the syncfn is finished. So that way even with those changes, epoll threads are blocked till the upcalls are processed. Hence the right fix now is to define a callback function for that synctask "glfs_cbk_upcall_syncop" so as to unblock epoll/notify threads completely and the upcall processing can happen in parallel by synctask threads. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Soumya Koduri on 2019-03-28 09:28:58 UTC --- Users have complained about nfs-ganesha process getting stuck here - https://github.com/nfs-ganesha/nfs-ganesha/issues/335 --- Additional comment from Worker Ant on 2019-03-28 09:34:11 UTC --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on master by soumya k --- Additional comment from Worker Ant on 2019-03-29 07:25:10 UTC --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) merged (#4) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 [Bug 1693575] gfapi: do not block epoll thread for upcall notifications https://bugzilla.redhat.com/show_bug.cgi?id=1694561 [Bug 1694561] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 1 05:59:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:59:32 +0000 Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1694562 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 [Bug 1694562] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 05:59:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:59:32 +0000 Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1694562 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 [Bug 1694562] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 05:59:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:59:50 +0000 Subject: [Bugs] [Bug 1694563] New: gfapi: do not block epoll thread for upcall notifications Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694563 Bug ID: 1694563 Summary: gfapi: do not block epoll thread for upcall notifications Product: GlusterFS Version: 4.1 Hardware: All OS: All Status: NEW Component: libgfapi Severity: high Assignee: bugs at gluster.org Reporter: skoduri at redhat.com QA Contact: bugs at gluster.org CC: bugs at gluster.org, pasik at iki.fi Depends On: 1693575 Blocks: 1694561, 1694562 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1693575 +++ Description of problem: With https://review.gluster.org/#/c/glusterfs/+/21783/, we have made changes to offload processing upcall notifications to synctask so as not to block epoll threads. However seems like the purpose wasnt fully resolved. In "glfs_cbk_upcall_data" -> "synctask_new1" after creating synctask if there is no callback defined, the thread waits on synctask_join till the syncfn is finished. So that way even with those changes, epoll threads are blocked till the upcalls are processed. Hence the right fix now is to define a callback function for that synctask "glfs_cbk_upcall_syncop" so as to unblock epoll/notify threads completely and the upcall processing can happen in parallel by synctask threads. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. 
Actual results: Expected results: Additional info: --- Additional comment from Soumya Koduri on 2019-03-28 09:28:58 UTC --- Users have complained about nfs-ganesha process getting stuck here - https://github.com/nfs-ganesha/nfs-ganesha/issues/335 --- Additional comment from Worker Ant on 2019-03-28 09:34:11 UTC --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on master by soumya k --- Additional comment from Worker Ant on 2019-03-29 07:25:10 UTC --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) merged (#4) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 [Bug 1693575] gfapi: do not block epoll thread for upcall notifications https://bugzilla.redhat.com/show_bug.cgi?id=1694561 [Bug 1694561] gfapi: do not block epoll thread for upcall notifications https://bugzilla.redhat.com/show_bug.cgi?id=1694562 [Bug 1694562] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 05:59:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:59:50 +0000 Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1694563 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694563 [Bug 1694563] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 05:59:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:59:50 +0000 Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1694563 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694563 [Bug 1694563] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 05:59:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 05:59:50 +0000 Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1694563 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694563 [Bug 1694563] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 1 06:02:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:02:09 +0000 Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22459 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 06:02:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:02:10 +0000 Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22459 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on release-6 by soumya k -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 06:02:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:02:17 +0000 Subject: [Bugs] [Bug 1694565] New: gfapi: do not block epoll thread for upcall notifications Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694565 Bug ID: 1694565 Summary: gfapi: do not block epoll thread for upcall notifications Product: Red Hat Gluster Storage Version: rhgs-3.5 Hardware: All OS: All Status: NEW Component: libgfapi Severity: high Assignee: pgurusid at redhat.com Reporter: skoduri at redhat.com QA Contact: vdas at redhat.com CC: bugs at gluster.org, jthottan at redhat.com, pasik at iki.fi, rhs-bugs at redhat.com, sankarshan at redhat.com, skoduri at redhat.com, storage-qa-internal at redhat.com Depends On: 1693575 Blocks: 1694561, 1694562, 1694563 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1693575 +++ Description of problem: With https://review.gluster.org/#/c/glusterfs/+/21783/, we have made changes to offload processing upcall notifications to synctask so as not to block epoll threads. However seems like the purpose wasnt fully resolved. In "glfs_cbk_upcall_data" -> "synctask_new1" after creating synctask if there is no callback defined, the thread waits on synctask_join till the syncfn is finished. So that way even with those changes, epoll threads are blocked till the upcalls are processed. Hence the right fix now is to define a callback function for that synctask "glfs_cbk_upcall_syncop" so as to unblock epoll/notify threads completely and the upcall processing can happen in parallel by synctask threads. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. 
Actual results: Expected results: Additional info: --- Additional comment from Soumya Koduri on 2019-03-28 09:28:58 UTC --- Users have complained about nfs-ganesha process getting stuck here - https://github.com/nfs-ganesha/nfs-ganesha/issues/335 --- Additional comment from Worker Ant on 2019-03-28 09:34:11 UTC --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on master by soumya k --- Additional comment from Worker Ant on 2019-03-29 07:25:10 UTC --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) merged (#4) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 [Bug 1693575] gfapi: do not block epoll thread for upcall notifications https://bugzilla.redhat.com/show_bug.cgi?id=1694561 [Bug 1694561] gfapi: do not block epoll thread for upcall notifications https://bugzilla.redhat.com/show_bug.cgi?id=1694562 [Bug 1694562] gfapi: do not block epoll thread for upcall notifications https://bugzilla.redhat.com/show_bug.cgi?id=1694563 [Bug 1694563] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 06:02:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:02:17 +0000 Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1694565 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694565 [Bug 1694565] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 06:02:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:02:17 +0000 Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1694565 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694565 [Bug 1694565] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 06:02:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:02:17 +0000 Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1694565 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694565 [Bug 1694565] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 1 06:02:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:02:17 +0000 Subject: [Bugs] [Bug 1694563] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694563 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1694565 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694565 [Bug 1694565] gfapi: do not block epoll thread for upcall notifications -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 06:03:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:03:19 +0000 Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22460 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 06:03:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:03:20 +0000 Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22460 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on release-5 by soumya k -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 06:19:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:19:43 +0000 Subject: [Bugs] [Bug 1694565] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694565 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|pgurusid at redhat.com |skoduri at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 06:33:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:33:33 +0000 Subject: [Bugs] [Bug 1694563] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694563 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22461 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 1 06:33:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:33:34 +0000 Subject: [Bugs] [Bug 1694563] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694563 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22461 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on release-4.1 by soumya k -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 06:50:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 06:50:09 +0000 Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling upgrade. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |nbalacha at redhat.com Flags| |needinfo?(hgowtham at redhat.c | |om) --- Comment #2 from Nithya Balachandran --- Please explain why this happens and how the workaround solves the issue. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 07:58:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 07:58:40 +0000 Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling upgrade. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(hgowtham at redhat.c | |om) --- Comment #3 from Atin Mukherjee --- > When we do a rolling upgrade of the cluster from 3.12, 4.1 or 5.5 to 6, the upgraded node goes into disconnected state. Isn't this only seen from 3.12 to 6 upgrade? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 08:01:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 08:01:25 +0000 Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling upgrade. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010 hari gowtham changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(hgowtham at redhat.c | |om) | |needinfo?(hgowtham at redhat.c | |om) | --- Comment #4 from hari gowtham --- Hi Nithya, The RCA for this is yet to be done. I didn't find anything fishy in the logs. As I had to move forward with the testing, I tried the usual way of flushing the iptables to check if it fixes the disconnects, and yes, it did connect the peers back. The reason why this is happening is yet to be discovered. Regards, Hari. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
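A minimal shell sketch of the workaround described in comment #4 above (flushing iptables, then restarting glusterd on the upgraded node), assuming systemd manages the glusterd service and that temporarily dropping the firewall rules is acceptable on that node:

  iptables -F                   # flush firewall rules on the upgraded node
  systemctl restart glusterd    # restart the management daemon
  gluster peer status           # verify the peers return to the Connected state
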
From bugzilla at redhat.com Mon Apr 1 08:04:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 08:04:41 +0000 Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling upgrade. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010 --- Comment #5 from hari gowtham --- (In reply to Atin Mukherjee from comment #3) > > When we do a rolling upgrade of the cluster from 3.12, 4.1 or 5.5 to 6, the upgraded node goes into disconnected state. > > Isn't this only seen from 3.12 to 6 upgrade? No, Atin. The issue happened with all the versions. It could as well be some network issue with the machines I tried it on. Not sure of it. The point to note here is: sometimes just a glusterd restart fixed it, and in some scenarios it needed an iptables flush followed by a glusterd restart. But I found that the iptables flush with glusterd restart fixed it in every scenario I tried. I could find time to debug this further. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 08:22:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 08:22:32 +0000 Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling upgrade. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010 --- Comment #6 from Atin Mukherjee --- FYI.. I tested the rolling upgrade from glusterfs 3.12 latest to glusterfs-6 without any issues. Can someone else please try as well? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 08:41:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 08:41:27 +0000 Subject: [Bugs] [Bug 1694565] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694565 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Soumya Koduri --- Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/166586/1 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 09:03:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:03:01 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/22441 (tests: add statedump to playground) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:04:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:04:00 +0000 Subject: [Bugs] [Bug 1694610] New: glusterd leaking memory when issued gluster vol status all tasks continuosly Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694610 Bug ID: 1694610 Summary: glusterd leaking memory when issued gluster vol status all tasks continuosly Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Severity: high Priority: medium Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, nchilaka at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, srakonde at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1691164 Blocks: 1686255 Target Milestone: --- Classification: Community Description of problem: glusterd is leaking memory when issused "gluster vol status tasks" continuosly for 12 hours. The memory increase is from 250MB to 1.1GB. The increase have been 750 MB. Version-Release number of selected component (if applicable): glusterfs-3.12.2 How reproducible: 1/1 Steps to Reproduce: 1. On a six node cluster with brick-multiplexing enabled 2. Created 150 disperse volumes and 250 replica volumes and started them 3. Taken memory footprint from all the nodes 4. Issued "while true; do gluster volume status all tasks; sleep 2; done" with a time gap of 2 seconds Actual results: Seen a memory increase of glusterd on Node N1 from 260MB to 1.1GB Expected results: glusterd memory shouldn't leak Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686255 [Bug 1686255] glusterd leaking memory when issued gluster vol status all tasks continuosly https://bugzilla.redhat.com/show_bug.cgi?id=1691164 [Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks continuosly -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 09:04:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:04:00 +0000 Subject: [Bugs] [Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691164 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1694610 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694610 [Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks continuosly -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
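The reproduction in bug 1694610 above watches glusterd grow from roughly 250 MB to 1.1 GB over many hours. A minimal shell sketch, assuming pidof and ps are available on the nodes, of how that resident memory can be recorded alongside the status loop so the growth shows up in the memory footprints taken in step 3:

  # status loop from the reproduction steps, run against the cluster
  while true; do gluster volume status all tasks; sleep 2; done &

  # log glusterd resident memory (RSS, in kB) once a minute on each node
  while true; do
      echo "$(date +%F_%T) $(ps -o rss= -p "$(pidof glusterd)")"
      sleep 60
  done >> /var/tmp/glusterd-rss.log
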
From bugzilla at redhat.com Mon Apr 1 09:05:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:05:13 +0000 Subject: [Bugs] [Bug 1694612] New: glusterd leaking memory when issued gluster vol status all tasks continuosly Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694612 Bug ID: 1694612 Summary: glusterd leaking memory when issued gluster vol status all tasks continuosly Product: GlusterFS Version: 5 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Severity: high Priority: medium Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, nchilaka at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, srakonde at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1691164 Blocks: 1686255, 1694610 Target Milestone: --- Classification: Community Description of problem: glusterd is leaking memory when issused "gluster vol status tasks" continuosly for 12 hours. The memory increase is from 250MB to 1.1GB. The increase have been 750 MB. Version-Release number of selected component (if applicable): glusterfs-3.12.2 How reproducible: 1/1 Steps to Reproduce: 1. On a six node cluster with brick-multiplexing enabled 2. Created 150 disperse volumes and 250 replica volumes and started them 3. Taken memory footprint from all the nodes 4. Issued "while true; do gluster volume status all tasks; sleep 2; done" with a time gap of 2 seconds Actual results: Seen a memory increase of glusterd on Node N1 from 260MB to 1.1GB Expected results: glusterd memory shouldn't leak Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686255 [Bug 1686255] glusterd leaking memory when issued gluster vol status all tasks continuosly https://bugzilla.redhat.com/show_bug.cgi?id=1691164 [Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks continuosly https://bugzilla.redhat.com/show_bug.cgi?id=1694610 [Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks continuosly -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 09:05:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:05:13 +0000 Subject: [Bugs] [Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691164 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1694612 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694612 [Bug 1694612] glusterd leaking memory when issued gluster vol status all tasks continuosly -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 1 09:05:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:05:13 +0000 Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694610 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1694612 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694612 [Bug 1694612] glusterd leaking memory when issued gluster vol status all tasks continuosly -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 09:07:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:07:40 +0000 Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694610 --- Comment #1 from Sanju --- Root cause: There's a leak of a key setting in the dictionary priv->glusterd_txn_opinfo in every volume status all transaction when cli fetches the list of volume names as part of the first transaction. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 09:07:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:07:54 +0000 Subject: [Bugs] [Bug 1694612] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694612 --- Comment #1 from Sanju --- Root cause: There's a leak of a key setting in the dictionary priv->glusterd_txn_opinfo in every volume status all transaction when cli fetches the list of volume names as part of the first transaction. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 09:12:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:12:51 +0000 Subject: [Bugs] [Bug 1694612] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694612 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22466 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 09:12:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:12:52 +0000 Subject: [Bugs] [Bug 1694612] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694612 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22466 (glusterd: fix txn-id mem leak) posted (#1) for review on release-5 by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
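Since the leaked txn-id entries root-caused in the comments above accumulate inside the glusterd process itself, a hedged sketch (assuming the usual statedump mechanism, where gluster processes dump state on SIGUSR1 under /var/run/gluster by default) of how the growing allocations can be captured before and after the status loop for comparison:

  kill -USR1 "$(pidof glusterd)"            # baseline statedump
  # ... run 'gluster volume status all tasks' in a loop for a while ...
  kill -USR1 "$(pidof glusterd)"            # second statedump
  ls -lt /var/run/gluster/glusterdump.*     # diff the two dumps to spot growing allocation counts
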
From bugzilla at redhat.com Mon Apr 1 09:14:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:14:31 +0000 Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694610 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22467 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 09:14:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:14:32 +0000 Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694610 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22467 (glusterd: fix txn-id mem leak) posted (#1) for review on release-6 by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 09:26:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:26:48 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 09:29:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:29:54 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22468 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 09:29:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:29:55 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 --- Comment #15 from Worker Ant --- REVIEW: https://review.gluster.org/22468 (client/fini: return fini after rpc cleanup) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 1 09:37:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 09:37:41 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 Netbulae changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(amukherj at redhat.c | |om) | |needinfo?(info at netbulae.com | |) | --- Comment #26 from Netbulae --- (In reply to Atin Mukherjee from comment #24) > [2019-03-18 11:29:01.000279] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] > 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.14 > > Why did we get a disconnect. Was glusterd service at *.14 not running? > > [2019-03-18 11:29:01.000330] I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify] > 0-glusterfsd-mgmt: connecting to next volfile server *.*.*.15 > [2019-03-18 11:29:01.002495] E [rpc-clnt.c:346:saved_frames_unwind] (--> > /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fb4beddbfbb] (--> > /lib64/libgfrpc.so.0(+0xce11)[0x7fb4beba4e11] (--> > /lib64/libgfrpc.so.0(+0xcf2e)[0x7fb4beba4f2e] (--> > /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7fb4beba6531] (--> > /lib64/libgfrpc.so.0(+0xf0d8)[0x7fb4beba70d8] ))))) 0-glusterfs: forced > unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at > 2019-03-18 11:13:29.445101 (xid=0x2) > > The above log seems to be the culprit here. > > [2019-03-18 11:29:01.002517] E [glusterfsd-mgmt.c:2136:mgmt_getspec_cbk] > 0-mgmt: failed to fetch volume file (key:/ssd9) > > And the above log is the after effect. > > > I have few questions: > > 1. Does the mount fail everytime? Yes. It also stays the same when we switch the primary storage domain to another one. > 2. Do you see any change in the behaviour when the primary volfile server is > changed? No I have different primary volfile server across volumes to spread the load a bit more. Same effect always. > 3. What are the gluster version in the individual peers? All nodes and servers are on 3.12.15 > > (Keeping the needinfo intact for now, but request Sahina to get us these > details to work on). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 10:03:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 10:03:52 +0000 Subject: [Bugs] [Bug 1694637] New: Geo-rep: Rename to an existing file name destroys its content on slave Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694637 Bug ID: 1694637 Summary: Geo-rep: Rename to an existing file name destroys its content on slave Product: GlusterFS Version: 5 OS: Linux Status: NEW Component: geo-replication Severity: high Assignee: bugs at gluster.org Reporter: homma at allworks.co.jp CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Renaming a file to an existing file name on master results in an empty file on slave. Version-Release number of selected component (if applicable): glusterfs 5.5-1.el7 from centos-gluster5 repository How reproducible: Always Steps to Reproduce: 1. On geo-rep master, create a temporary files and rename them to existing files repeatedly: for n in {0..9}; do for i in {0..9}; do printf "%04d\n" $n > file$i.tmp; mv file$i.tmp file$i; done; done 2. List the created files on master and slave. 
Actual results: On master $ ls -l total 6 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file0 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file1 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file2 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file3 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file4 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file5 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file6 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file7 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file8 -rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file9 On slave $ ls -l total 1 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file0 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file0.tmp -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file1 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file1.tmp -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file2 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file2.tmp -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file3 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file3.tmp -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file4 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file4.tmp -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file5 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file5.tmp -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file6 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file6.tmp -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file7 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file7.tmp -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file8 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file8.tmp -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file9 -rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file9.tmp Expected results: Files are successfully renamed with correct contents on slave. Additional info: I have a 2-node replicated volume on master, and a single-node volume on slave. Master volume: Volume Name: www Type: Replicate Volume ID: bc99bbd2-20f9-4440-b51e-a1e105adfdf3 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: fs01.localdomain:/glusterfs/www/brick1/brick Brick2: fs02.localdomain:/glusterfs/www/brick1/brick Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet storage.build-pgfid: on server.manage-gids: on network.ping-timeout: 3 geo-replication.indexing: on geo-replication.ignore-pid-check: on changelog.changelog: on Slave volume: Volume Name: www Type: Distribute Volume ID: 026a58f5-9696-4d9e-9674-74771526e880 Status: Started Snapshot Count: 0 Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: fs21.localdomain:/glusterfs/www/brick1/brick Options Reconfigured: storage.build-pgfid: on server.manage-gids: on network.ping-timeout: 3 transport.address-family: inet nfs.disable: on Many messages as follows appear in gsyncd.log on master: [2019-04-01 09:08:06.994154] I [master(worker /glusterfs/www/brick1/brick):813:fix_possible_entry_failures] _GMaster: Entry not present on master. Fixing gfid mismatch in slave. Deleting the entry retry_count=1 entry=({'stat': {}, 'entry1': '.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0', 'gfid': '54ff5e4c-8565-4246-aa1d-0b2b59a8d577', 'link': None, 'entry': '.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0.tmp', 'op': 'RENAME'}, 17, {'slave_isdir': False, 'gfid_mismatch': True, 'slave_name': None, 'slave_gfid': 'df891073-b19c-481c-9916-f96790ff4d31', 'name_mismatch': False, 'dst': True}) [2019-04-01 09:08:07.33778] I [master(worker /glusterfs/www/brick1/brick):813:fix_possible_entry_failures] _GMaster: Entry not present on master. Fixing gfid mismatch in slave. 
Deleting the entry retry_count=1 entry=({'uid': 1000, 'gfid': 'c2836641-1000-48b0-865e-2c9ea6815baf', 'gid': 1000, 'mode': 4294934964, 'entry': '.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0.tmp', 'op': 'CREATE'}, 17, {'slave_isdir': False, 'gfid_mismatch': True, 'slave_name': None, 'slave_gfid': '54ff5e4c-8565-4246-aa1d-0b2b59a8d577', 'name_mismatch': False, 'dst': False}) [2019-04-01 09:08:07.319814] I [master(worker /glusterfs/www/brick1/brick):904:fix_possible_entry_failures] _GMaster: Fixing ENOENT error in slave. Create parent directory on slave. retry_count=1 entry=({'stat': {'atime': 1554109682.6345513, 'gid': 1000, 'mtime': 1554109682.6455512, 'mode': 33204, 'uid': 1000}, 'entry1': '.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0', 'gfid': '5755b878-9ba6-4da4-aa27-28cf6defd06e', 'link': None, 'entry': '.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0.tmp', 'op': 'RENAME'}, 2, {'slave_isdir': False, 'gfid_mismatch': False, 'slave_name': None, 'slave_gfid': None, 'name_mismatch': False, 'dst': False}) [2019-04-01 09:08:13.855005] E [master(worker /glusterfs/www/brick1/brick):784:log_failures] _GMaster: ENTRY FAILED data=({'uid': 1000, 'gfid': '5755b878-9ba6-4da4-aa27-28cf6defd06e', 'gid': 1000, 'mode': 4294934964, 'entry': '.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0.tmp', 'op': 'CREATE'}, 17, {'slave_isdir': False, 'gfid_mismatch': True, 'slave_name': None, 'slave_gfid': '54ff5e4c-8565-4246-aa1d-0b2b59a8d577', 'name_mismatch': False, 'dst': False}) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 10:13:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 10:13:42 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22469 (cluster/afr: Send inodelk/entrylk with non-zero lk-owner) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 10:13:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 10:13:41 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22469 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
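A short shell sketch of how the master/slave divergence reported in bug 1694637 above can be checked after the rename loop finishes, assuming the master and slave volumes are mounted at the hypothetical paths /mnt/master and /mnt/slave on the same test host:

  # rename loop from the reproduction steps, run on the master mount
  cd /mnt/master
  for n in {0..9}; do
      for i in {0..9}; do
          printf "%04d\n" "$n" > "file$i.tmp"
          mv "file$i.tmp" "file$i"
      done
  done

  # once geo-replication has had time to sync, the two sides should match:
  # no leftover *.tmp files on the slave and identical contents for file0..file9
  ls /mnt/slave/*.tmp 2>/dev/null
  md5sum /mnt/master/file? /mnt/slave/file?
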
From bugzilla at redhat.com Mon Apr 1 12:14:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 12:14:53 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(info at netbulae.com | |) --- Comment #27 from Atin Mukherjee --- Since I'm unable to reproduce this even after multiple attempts, the only possibility I have to make some progress on this is by asking you to test different combinations, I understand that this might make you frustrated but I have no other way at this point of time to pinpoint this. In my local setup I tried every possible options to simulate this but had no success. As I explained in comment 24, it seems like that client couldn't get the volfile from glusterd running in *.15 instance. However since there's no log entry in INFO mode in glusterd which could indicate the possibility of this failure can I request you to do the following if possible? 1. Run 'pkill glusterd; glusterd ' on *.15 node 2. Attempt to mount the client. 3. Find out the failure log of 'failed to fetch volume file' and see the timestamp. From glusterd log map this timestamp and send us the snippet of the log entries around this timestamp. 4. Run gluster v info command from all the nodes and paste back the output 5. Provide the output of 'gluster v get all cluster.op-version' from one of the nodes -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 12:54:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 12:54:54 +0000 Subject: [Bugs] [Bug 1694139] Error waiting for job 'heketi-storage-copy-job' to complete on one-node k3s deployment. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694139 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(it.sergm at gmail.co | |m) --- Comment #1 from Atin Mukherjee --- Could you elaborate the problem bit more? Are you seeing volume mount failing or something wrong with the clustering? From the quick scan through of the bug report, I don't see anything problematic from glusterd end. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 12:55:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 12:55:50 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-01 12:55:50 -- You are receiving this mail because: You are on the CC list for the bug. 
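The five diagnostic steps requested in bug 1672318 comment #27 above, gathered into one hedged shell sketch for the *.15 node; the volume key /ssd9 comes from the quoted client log, while the volfile server address, mount point and exact client log file name are placeholders to be adjusted to the actual setup:

  # 1. restart glusterd
  pkill glusterd; glusterd

  # 2. attempt the mount again (placeholders: server address and mount point)
  mount -t glusterfs <volfile-server>:/ssd9 /mnt/ssd9

  # 3. find the failure message and match its timestamp against glusterd's log
  grep -n "failed to fetch volume file" /var/log/glusterfs/*.log
  grep -n "<timestamp from the line above>" /var/log/glusterfs/glusterd.log

  # 4. and 5. volume and op-version information from the nodes
  gluster v info
  gluster v get all cluster.op-version
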
From bugzilla at redhat.com Mon Apr 1 13:10:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 13:10:20 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22471 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 13:10:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 13:10:21 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #605 from Worker Ant --- REVIEW: https://review.gluster.org/22471 (build: conditional rpcbind for gnfs in glusterd.service) posted (#1) for review on master by Kaleb KEITHLEY -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 13:39:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 13:39:22 +0000 Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling upgrade. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |srakonde at redhat.com Resolution|--- |NOTABUG Last Closed| |2019-04-01 13:39:22 --- Comment #7 from Sanju --- I've tested rolling upgrade from 3.12 to 6, but haven't seen any issue. The cluster is in a healthy state and all peers are in connected state. Based on my experience and comment 6, I'm closing this as not a bug. Please, feel free to re-open the bug if you face it. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 14:30:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 14:30:01 +0000 Subject: [Bugs] [Bug 1690254] Volume create fails with "Commit failed" message if volumes is created using 3 nodes with glusterd restarts on 4th node. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690254 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |amukherj at redhat.com Resolution|--- |NOTABUG Last Closed| |2019-04-01 14:30:01 --- Comment #1 from Atin Mukherjee --- The current behavior is as per design. Please remember in GD1, every nodes have to participate in the transaction and the commit phase should succeed irrespective of if the bricks are hosted on m out of n nodes in the trusted storage pool. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 1 14:33:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 14:33:42 +0000 Subject: [Bugs] [Bug 1690753] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged CC| |amukherj at redhat.com Assignee|bugs at gluster.org |risjain at redhat.com --- Comment #1 from Atin Mukherjee --- This looks like a bug and should be an easy fix. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 17:45:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 17:45:28 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22473 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 17:45:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 17:45:30 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #606 from Worker Ant --- REVIEW: https://review.gluster.org/22473 ([WIP][RFC]mem-pool: set ptr to 0x0 after free'ed.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 19:01:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 19:01:58 +0000 Subject: [Bugs] [Bug 1694820] New: Issue in heavy rename workload Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 Bug ID: 1694820 Summary: Issue in heavy rename workload Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: sunkumar at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: This problem only exists in heavy RENAME workload where parallel rename are frequent or doing RENAME with existing destination. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Run frequent RENAME on master mount and check for sync in slave. Ex - while true; do uuid="`uuidgen`"; echo "some data" > "test$uuid"; mv "test$uuid" "test" -f; done Actual results: Does not syncs renames properly and creates multiples files in slave. Expected results: Should sync renames. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 1 19:02:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 19:02:18 +0000 Subject: [Bugs] [Bug 1694820] Issue in heavy rename workload In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |sunkumar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 1 19:10:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 19:10:44 +0000 Subject: [Bugs] [Bug 1694820] Issue in heavy rename workload In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22474 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 1 19:10:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 01 Apr 2019 19:10:45 +0000 Subject: [Bugs] [Bug 1694820] Issue in heavy rename workload In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22474 (geo-rep: fix rename with existing gfid) posted (#1) for review on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 2 04:37:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 04:37:39 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 asender at testlabs.com.au changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |asender at testlabs.com.au --- Comment #11 from asender at testlabs.com.au --- You can try this simple test to reproduce the problem. On Master [svc_sp_st_script at hplispnfs30079 conf]$ touch test.txt [svc_sp_st_script at hplispnfs30079 conf]$ vi test.txt a b c d [svc_sp_st_script at hplispnfs30079 conf]$ ll test.txt -rw-r----- 1 svc_sp_st_script domain users 8 Apr 2 14:59 test.txt On Slave [root at hplispnfs40079 conf]# ll test.txt -rw-r----- 1 svc_sp_st_script domain users 8 Apr 2 14:59 test.txt [root at hplispnfs40079 conf]# cat test.txt a b c d On Master [svc_sp_st_script at hplispnfs30079 conf]$ mv test.txt test-moved.txt [svc_sp_st_script at hplispnfs30079 conf]$ ll test-moved.txt -rw-r----- 1 svc_sp_st_script domain users 8 Apr 2 14:59 test-moved.txt On Slave File is not deleted, test-moved.txt does not exist and is not replicated. [root at hplispnfs40079 conf]# ll testfile -rw-r----- 1 svc_sp_st_script domain users 6 Apr 2 14:52 testfile -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 2 04:38:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 04:38:35 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 --- Comment #12 from asender at testlabs.com.au --- I also tried setting use_tarssh:true but this did not change the behavior. [root at hplispnfs30079 conf]# gluster volume geo-replication common hplispnfs40079::common config access_mount:false allow_network: change_detector:changelog change_interval:5 changelog_archive_format:%Y%m changelog_batch_size:727040 changelog_log_file:/var/log/glusterfs/geo-replication/common_hplispnfs40079_common/changes-${local_id}.log changelog_log_level:INFO checkpoint:0 chnagelog_archive_format:%Y%m cli_log_file:/var/log/glusterfs/geo-replication/cli.log cli_log_level:INFO connection_timeout:60 georep_session_working_dir:/var/lib/glusterd/geo-replication/common_hplispnfs40079_common/ gluster_cli_options: gluster_command:gluster gluster_command_dir:/usr/sbin gluster_log_file:/var/log/glusterfs/geo-replication/common_hplispnfs40079_common/mnt-${local_id}.log gluster_log_level:INFO gluster_logdir:/var/log/glusterfs gluster_params:aux-gfid-mount acl gluster_rundir:/var/run/gluster glusterd_workdir:/var/lib/glusterd gsyncd_miscdir:/var/lib/misc/gluster/gsyncd ignore_deletes:false isolated_slaves: log_file:/var/log/glusterfs/geo-replication/common_hplispnfs40079_common/gsyncd.log log_level:INFO log_rsync_performance:false master_disperse_count:1 master_replica_count:1 max_rsync_retries:10 meta_volume_mnt:/var/run/gluster/shared_storage pid_file:/var/run/gluster/gsyncd-common-hplispnfs40079-common.pid remote_gsyncd:/usr/libexec/glusterfs/gsyncd replica_failover_interval:1 rsync_command:rsync rsync_opt_existing:true rsync_opt_ignore_missing_args:true rsync_options: rsync_ssh_options: slave_access_mount:false slave_gluster_command_dir:/usr/sbin slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/common_hplispnfs40079_common/mnt-${master_node}-${master_brick_id}.log slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/common_hplispnfs40079_common/mnt-mbr-${master_node}-${master_brick_id}.log slave_gluster_log_level:INFO slave_gluster_params:aux-gfid-mount acl slave_log_file:/var/log/glusterfs/geo-replication-slaves/common_hplispnfs40079_common/gsyncd.log slave_log_level:INFO slave_timeout:120 special_sync_mode: ssh_command:ssh ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem ssh_port:22 state_file:/var/lib/glusterd/geo-replication/common_hplispnfs40079_common/monitor.status state_socket_unencoded: stime_xattr_prefix:trusted.glusterfs.bb691a2e-801c-435b-a905-11ad249d43a7.ab3b208f-8cd1-4a2d-bf56-4a98434605c5 sync_acls:true sync_jobs:3 sync_xattrs:true tar_command:tar use_meta_volume:true use_rsync_xattrs:false use_tarssh:true working_dir:/var/lib/misc/gluster/gsyncd/common_hplispnfs40079_common/ -- You are receiving this mail because: You are on the CC list for the bug. 
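For completeness, the use_tarssh:true value shown in the config dump above is normally toggled through the same geo-replication config interface, and the session state can be checked alongside it. The commands below are only the generic form for this session's master and slave names, shown as an illustration rather than quoted from the report.

    gluster volume geo-replication common hplispnfs40079::common config use_tarssh true
    gluster volume geo-replication common hplispnfs40079::common status detail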
From bugzilla at redhat.com Tue Apr 2 05:09:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 05:09:18 +0000 Subject: [Bugs] [Bug 1694920] New: Inconsistent locking in presence of disconnects Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694920 Bug ID: 1694920 Summary: Inconsistent locking in presence of disconnects Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: protocol Severity: high Priority: high Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: bkunal at redhat.com, ccalhoun at redhat.com, james.c.buckley at vumc.org, kdhananj at redhat.com, nchilaka at redhat.com, pkarampu at redhat.com, ravishankar at redhat.com, rgowdapp at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, rkavunga at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1689375 Target Milestone: --- Group: redhat Classification: Community -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 05:32:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 05:32:05 +0000 Subject: [Bugs] [Bug 1694925] New: GF_LOG_OCCASSIONALLY API doesn't log at first instance Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694925 Bug ID: 1694925 Summary: GF_LOG_OCCASSIONALLY API doesn't log at first instance Product: GlusterFS Version: mainline Status: NEW Component: logging Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: GF_LOG_OCCASSIONALLY doesn't log on the first instance rather at every 42nd iterations which isn't effective as in some cases we might not have the code flow hitting the same log for as many as 42 times and we'd end up suppressing the log. Version-Release number of selected component (if applicable): Mainline How reproducible: Always -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 05:35:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 05:35:14 +0000 Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694925 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22475 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 05:35:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 05:35:15 +0000 Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694925 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY API) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
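The intent behind the fix for bug 1694925 above is that an occasional log should fire on the very first occurrence and then only once every N further occurrences, instead of staying silent until the Nth hit. A minimal shell illustration of that counting behaviour (this is not GlusterFS code; 42 is just the interval mentioned in the report):

    count=0
    maybe_log() {
        # fires when count is 0, 42, 84, ... i.e. including the first hit
        if [ $((count % 42)) -eq 0 ]; then
            echo "suppressed code path hit, count=$count"
        fi
        count=$((count + 1))
    }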
From bugzilla at redhat.com Tue Apr 2 06:38:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:38:07 +0000 Subject: [Bugs] [Bug 1694943] New: parallel-readdir slows down directory listing Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694943 Bug ID: 1694943 Summary: parallel-readdir slows down directory listing Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: While running tests with the upstream master (HEAD at commit dfa255ae7f2dab4fb3d84c67a0452c5b32455877), I noticed that enabling parallel-readdir seems to increase the time taken for a directory listing: Numbers from a pure distribute 3 brick volume: Volume Name: pvol Type: Distribute Volume ID: c39c8c16-82d3-4b0b-8050-9c3d22c800ea Status: Started Snapshot Count: 0 Number of Bricks: 3 Transport-type: tcp Bricks: Brick1: server1:/mnt/bricks/fsgbench0002/brick-0 Brick2: server1:/mnt/bricks/fsgbench0003/brick-0 Brick3: server1:/mnt/bricks/fsgbench0004/brick-0 Options Reconfigured: transport.address-family: inet nfs.disable: on The volume was mounted on /mnt/nithya and I created 10K directories and 10K files in the volume root: With readdir-ahead enabled: ---------------------------- [root at server2 nithya]# time ll |wc -l 20001 real 0m11.434s user 0m0.116s sys 0m0.241s [root at server2 nithya]# time ll |wc -l 20001 real 0m6.825s user 0m0.111s sys 0m0.265s With readdir-ahead and parallel-readdir enabled: ------------------------------------------------ [root at server2 nithya]# time ll |wc -l 20001 real 0m15.609s user 0m0.148s sys 0m0.379s [root at server2 nithya]# time ll |wc -l 20001 real 0m9.930s user 0m0.107s sys 0m0.295s -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 06:38:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:38:27 +0000 Subject: [Bugs] [Bug 1694943] parallel-readdir slows down directory listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694943 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |rgowdapp at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 06:48:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:48:08 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |khiremat at redhat.com Depends On| |1583018 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1583018 [Bug 1583018] changelog: Changelog is not capturing rename of files -- You are receiving this mail because: You are on the CC list for the bug. 
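To repeat the parallel-readdir comparison from bug 1694943 above, the two options can be toggled on the same volume and the listing timed again; the volume and mount names are the ones used in that report, and this is only a sketch of the measurement, not an instruction from the reporter. Parallel-readdir takes effect only while readdir-ahead is also enabled.

    gluster volume set pvol performance.readdir-ahead on
    time ls -l /mnt/nithya | wc -l    # readdir-ahead only
    gluster volume set pvol performance.parallel-readdir on
    time ls -l /mnt/nithya | wc -l    # readdir-ahead + parallel-readdir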
From bugzilla at redhat.com Tue Apr 2 06:48:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:48:08 +0000 Subject: [Bugs] [Bug 1583018] changelog: Changelog is not capturing rename of files In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1583018 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1660225 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 [Bug 1660225] geo-rep does not replicate mv or rename of file -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 2 06:48:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:48:46 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 --- Comment #13 from Kotresh HR --- This issue is fixed in upstream and 5.x and 6.x series Patch: https://review.gluster.org/#/c/glusterfs/+/20093/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 2 06:49:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:49:40 +0000 Subject: [Bugs] [Bug 1694139] Error waiting for job 'heketi-storage-copy-job' to complete on one-node k3s deployment. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694139 it.sergm at gmail.com changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(it.sergm at gmail.co | |m) | --- Comment #2 from it.sergm at gmail.com --- The thing is i don't see any exception there also and heketidbstorage volume is manually mounting(no files inside), but still its not working with k3s and pod seems cannot mount needed stuff before starting. K3s itself works fine and can deploy stuff with no errors. I could be wrong, but there is only one volume listed from gluster pod: [root at k3s-gluster /]# gluster volume list heketidbstorage but regarding to pod's error there should be more: 3d12h Warning FailedMount Pod Unable to mount volumes for pod "heketi-storage-copy-job-qzpr7_kube-system(36e1b013-5200-11e9-a826-227e2ba50104)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"heketi-storage-copy-job-qzpr7". list of unmounted volumes=[heketi-storage]. 
list of unattached volumes=[heketi-storage heketi-storage-secret default-token-98jvk] Here is the list of volumes on gluster pod: [root at k3s-gluster /]# df -h Filesystem Size Used Avail Use% Mounted on overlay 9.8G 6.9G 2.5G 74% / udev 3.9G 0 3.9G 0% /dev /dev/vda2 9.8G 6.9G 2.5G 74% /run tmpfs 798M 1.3M 797M 1% /run/lvm tmpfs 3.9G 0 3.9G 0% /dev/shm tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup tmpfs 3.9G 12K 3.9G 1% /run/secrets/kubernetes.io/serviceaccount /dev/mapper/vg_fef96eab984d116ab3815e7479781110-brick_65d5aa6369e265d641f3557e6c9736b7 2.0G 33M 2.0G 2% /var/lib/heketi/mounts/vg_fef96eab984d116ab3815e7479781110/brick_65d5aa6369e265d641f3557e6c9736b7 [root at k3s-gluster /]# blkid /dev/loop0: TYPE="squashfs" /dev/loop1: TYPE="squashfs" /dev/loop2: TYPE="squashfs" /dev/vda1: PARTUUID="258e4699-a592-442c-86d7-3d7ee4a0dfb7" /dev/vda2: UUID="b394d2be-6b9e-11e8-82ca-22c5fe683ae4" TYPE="ext4" PARTUUID="97104384-f79f-4a39-b3d4-56d717673a18" /dev/vdb: UUID="RUR8Cw-eVYg-H26e-yQ4g-7YCe-NzNg-ocJazb" TYPE="LVM2_member" /dev/mapper/vg_fef96eab984d116ab3815e7479781110-brick_65d5aa6369e265d641f3557e6c9736b7: UUID="ab0e969f-ae85-459c-914f-b008aeafb45e" TYPE="xfs" Also here what i've found on main node - host ip is NULL (btw i've changed topology before with external and private ips - nothing changed for this): root at k3s-gluster:~# cat /var/log/glusterfs/cli.log.1 [2019-03-29 08:54:07.634136] I [cli.c:773:main] 0-cli: Started running gluster with version 4.1.7 [2019-03-29 08:54:07.678012] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-03-29 08:54:07.678105] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now [2019-03-29 08:54:07.678268] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0 [2019-03-29 08:54:07.721606] I [cli-rpc-ops.c:1169:gf_cli_create_volume_cbk] 0-cli: Received resp to create volume [2019-03-29 08:54:07.721773] I [input.c:31:cli_batch] 0-: Exiting with: 0 [2019-03-29 08:54:07.817416] I [cli.c:773:main] 0-cli: Started running gluster with version 4.1.7 [2019-03-29 08:54:07.861767] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-03-29 08:54:07.861943] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now [2019-03-29 08:54:07.862016] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0 [2019-03-29 08:54:08.009116] I [cli-rpc-ops.c:1472:gf_cli_start_volume_cbk] 0-cli: Received resp to start volume [2019-03-29 08:54:08.009314] I [input.c:31:cli_batch] 0-: Exiting with: 0 [2019-03-29 14:18:51.209759] I [cli.c:773:main] 0-cli: Started running gluster with version 4.1.7 [2019-03-29 14:18:51.256846] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-03-29 14:18:51.256985] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now [2019-03-29 14:18:51.257093] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0 [2019-03-29 14:18:51.259408] I [cli-rpc-ops.c:875:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0 [2019-03-29 14:18:51.259587] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0 [2019-03-29 14:18:51.260102] I [cli-rpc-ops.c:875:gf_cli_get_volume_cbk] 0-cli: Received resp to get 
vol: 0 [2019-03-29 14:18:51.260143] I [input.c:31:cli_batch] 0-: Exiting with: 0 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 06:49:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:49:42 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 --- Comment #14 from Kotresh HR --- Workaround: The issue affects only single distribute volumes i.e 1*2 and 1*3 volumes. It doesn't affect n*2 or n*3 volumes where n>1. So one way to fix is to convert single distribute to two distribute volume or upgrade to later versions if it can't be waited until next 4.1.x release. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 2 06:55:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:55:56 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22476 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 2 06:55:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:55:56 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #15 from Worker Ant --- REVIEW: https://review.gluster.org/22476 (cluster/dht: Fix rename journal in changelog) posted (#1) for review on release-4.1 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 2 06:57:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 06:57:53 +0000 Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling upgrade. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010 --- Comment #8 from Nithya Balachandran --- (In reply to Sanju from comment #7) > I've tested rolling upgrade from 3.12 to 6, but haven't seen any issue. The > cluster is in a healthy state and all peers are in connected state. Based on > my experience and comment 6, I'm closing this as not a bug. Please, feel > free to re-open the bug if you face it. > > Thanks, > Sanju What about the upgrades from the other versions? This BZ refers to upgrades to release 6 from 3.12, 4 and 5. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
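One way to apply the workaround from comment 14 of bug 1660225 above, i.e. growing a single-distribute 1x3 volume into a 2x3 volume so the rename changelog issue no longer applies, is an add-brick followed by a rebalance. The volume name and brick paths below are hypothetical; they are not taken from the report.

    gluster volume add-brick VOLNAME replica 3 host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2
    gluster volume rebalance VOLNAME start
    gluster volume rebalance VOLNAME status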
From bugzilla at redhat.com Tue Apr 2 08:15:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 08:15:51 +0000 Subject: [Bugs] [Bug 1694976] New: On Fedora 29 GlusterFS 4.1 repo has bad/missing rpm signs Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694976 Bug ID: 1694976 Summary: On Fedora 29 GlusterFS 4.1 repo has bad/missing rpm signs Product: GlusterFS Version: 4.1 OS: Linux Status: NEW Component: unclassified Severity: high Assignee: bugs at gluster.org Reporter: bence at noc.elte.hu CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: On fedora 29 upgrade glusterfs from 4.1.7 to 4.1.8 failed becase fo missing/bad rpm signiatures. Version-Release number of selected component (if applicable): GlusterFS 4.1.8 on Fedora 29 How reproducible: Installed glusterfs 4.1.7 earlier from https://download.gluster.org/pub/gluster/glusterfs/4.1/ repo. Now upgrade to 4.1.8 failes Steps to Reproduce: 1. Install Fedora 29 Workstation 2. Disable glusterfs* packages from base/updates 3. Setup repo glusterfs-41-fedora as follows: cat /etc/yum.repos.d/glusterfs-41-fedora.repo [glusterfs-fedora] name=GlusterFS is a clustered file-system capable of scaling to several petabytes. baseurl=http://download.gluster.org/pub/gluster/glusterfs/4.1/LATEST/Fedora/fedora-$releasever/$basearch/ enabled=1 skip_if_unavailable=1 gpgcheck=1 gpgkey=https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub 4. install glusterfs client packages 4.1.7 version 5. upgrade to 4.1.8 released on 2019-03-28 11:21 Actual results: # dnf update Last metadata expiration check: 0:00:23 ago on Tue 02 Apr 2019 09:55:04 AM CEST. Dependencies resolved. ================================================================================ Package Arch Version Repository Size ================================================================================ Upgrading glusterfs x86_64 4.1.8-1.fc29 glusterfs-fedora 618 k glusterfs-api x86_64 4.1.8-1.fc29 glusterfs-fedora 82 k glusterfs-cli x86_64 4.1.8-1.fc29 glusterfs-fedora 189 k glusterfs-client-xlators x86_64 4.1.8-1.fc29 glusterfs-fedora 942 k glusterfs-fuse x86_64 4.1.8-1.fc29 glusterfs-fedora 126 k glusterfs-libs x86_64 4.1.8-1.fc29 glusterfs-fedora 379 k Transaction Summary ================================================================================ Upgrade 6 Packages Total download size: 2.3 M Is this ok [y/N]: y Downloading Packages: (1/6): glusterfs-api-4.1.8-1.fc29.x86_64.rpm 64 kB/s | 82 kB 00:01 (2/6): glusterfs-cli-4.1.8-1.fc29.x86_64.rpm 145 kB/s | 189 kB 00:01 (3/6): glusterfs-4.1.8-1.fc29.x86_64.rpm 375 kB/s | 618 kB 00:01 (4/6): glusterfs-fuse-4.1.8-1.fc29.x86_64.rpm 290 kB/s | 126 kB 00:00 (5/6): glusterfs-client-xlators-4.1.8-1.fc29.x8 1.7 MB/s | 942 kB 00:00 (6/6): glusterfs-libs-4.1.8-1.fc29.x86_64.rpm 1.3 MB/s | 379 kB 00:00 -------------------------------------------------------------------------------- Total 1.2 MB/s | 2.3 MB 00:01 warning: /var/cache/dnf/glusterfs-fedora-80772cffdd565d3f/packages/glusterfs-4.1.8-1.fc29.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID c2f8238c: NOKEY GlusterFS is a clustered file-system capable of 2.9 kB/s | 1.7 kB 00:00 Importing GPG key 0x78FA6D97: Userid : "Gluster Packager " Fingerprint: EED3 351A FD72 E543 7C05 0F03 88F6 CDEE 78FA 6D97 From : http://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub Is this ok [y/N]: y Key imported successfully Import of key(s) didn't help, wrong key(s)? 
Public key for glusterfs-4.1.8-1.fc29.x86_64.rpm is not installed. Failing package is: glusterfs-4.1.8-1.fc29.x86_64 GPG Keys are configured as: https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub Public key for glusterfs-api-4.1.8-1.fc29.x86_64.rpm is not installed. Failing package is: glusterfs-api-4.1.8-1.fc29.x86_64 GPG Keys are configured as: https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub Public key for glusterfs-cli-4.1.8-1.fc29.x86_64.rpm is not installed. Failing package is: glusterfs-cli-4.1.8-1.fc29.x86_64 GPG Keys are configured as: https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub Public key for glusterfs-client-xlators-4.1.8-1.fc29.x86_64.rpm is not installed. Failing package is: glusterfs-client-xlators-4.1.8-1.fc29.x86_64 GPG Keys are configured as: https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub Public key for glusterfs-fuse-4.1.8-1.fc29.x86_64.rpm is not installed. Failing package is: glusterfs-fuse-4.1.8-1.fc29.x86_64 GPG Keys are configured as: https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub Public key for glusterfs-libs-4.1.8-1.fc29.x86_64.rpm is not installed. Failing package is: glusterfs-libs-4.1.8-1.fc29.x86_64 GPG Keys are configured as: https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub The downloaded packages were saved in cache until the next successful transaction. You can remove cached packages by executing 'dnf clean packages'. Error: GPG check FAILED Expected results: Upgrades to 4.1.8 Additional info: Centos 7 from GlusterFS 4.1 repo upgardes as expected as of 01/04/2019. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 10:17:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 10:17:39 +0000 Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling upgrade. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010 --- Comment #9 from Sanju --- (In reply to Nithya Balachandran from comment #8) > What about the upgrades from the other versions? This BZ refers to upgrades > to release 6 from 3.12, 4 and 5. I did test upgrade to release 6 from 4 and 5. Haven't seen any issue. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 10:57:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 10:57:31 +0000 Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694925 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-02 10:57:31 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY API) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
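For the signature mismatch in bug 1694976 above (the 4.1.8 packages report key ID c2f8238c while the published rsa.pub imports as 0x78FA6D97), the check can be repeated outside dnf against the cached package. The cache path below is taken from the dnf output and may differ on other machines; this is only a diagnostic sketch.

    rpm --import https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub
    rpm -K /var/cache/dnf/glusterfs-fedora-*/packages/glusterfs-4.1.8-1.fc29.x86_64.rpm
    rpm -qpi /var/cache/dnf/glusterfs-fedora-*/packages/glusterfs-4.1.8-1.fc29.x86_64.rpm | grep Signature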
From bugzilla at redhat.com Tue Apr 2 12:19:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 12:19:19 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22469 (cluster/afr: Send inodelk/entrylk with non-zero lk-owner) merged (#3) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 12:42:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 12:42:46 +0000 Subject: [Bugs] [Bug 1482909] RFE : Enable glusterfs md cache for nfs-ganesha In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1482909 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1695072 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695072 [Bug 1695072] Doc changes for [RFE]nfs-ganesha: optimize FSAL_GLUSTER upcall mechanism -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 2 13:21:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 13:21:23 +0000 Subject: [Bugs] [Bug 1695099] New: The number of glusterfs processes keeps increasing, using all available resources Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695099 Bug ID: 1695099 Summary: The number of glusterfs processes keeps increasing, using all available resources Product: GlusterFS Version: 5 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Severity: high Assignee: bugs at gluster.org Reporter: christian.ihle at drift.oslo.kommune.no CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: During normal operations, CPU and memory usage gradually increase to 100%, being used by a large number of glusterfs processes. The result is slowness and resource starvation. Issue startet happening with GlusterFS 5.2, but did not improve with 5.5. Did not see this issue in 3.12. Version-Release number of selected component (if applicable): GlusterFS 5.2 and 5.5 Heketi 8.0.0 CentOS 7.6 How reproducible: Users of the cluster hit this issue pretty often by creating and deleting volumes quickly, from Kubernetes (using Heketi to control GlusterFS). Sometimes we hit 100% resource usage several times a day. Steps to Reproduce: 1. Create volume 2. Delete volume 3. Repeat quickly Actual results: CPU usage and memory usage increase, and the number of glusterfs processes increases. I have to login to each node in the cluster and kill old processes to make nodes responsive again, otherwise the nodes eventually freeze from resource starvation. Expected results: CPU and memory usage should only spike shortly, and not continue to increase, and there should be only one glusterfs process. 
Additional info: I found some issues that look similar: * https://github.com/gluster/glusterfs/issues/625 * https://github.com/heketi/heketi/issues/1439 Log output from a time where resource usage increased (/var/log/glusterfs/glusterd.log): [2019-04-01 12:07:23.377715] W [MSGID: 101095] [xlator.c:180:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/5.5/xlator/nfs/server.so: cannot open shared object file: Ingen slik fil eller filkatalog [2019-04-01 12:07:23.684561] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6f9a) [0x7fd04cc46f9a] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/create/post/S10selinux-label-brick.sh --volname=vol_45653f46dbc8953f876a009b4ea8dd26 [2019-04-01 12:07:26.931683] I [rpc-clnt.c:1000:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600 [2019-04-01 12:07:26.932340] I [rpc-clnt.c:1000:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600 [2019-04-01 12:07:26.932667] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped [2019-04-01 12:07:26.932707] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: nfs service is stopped [2019-04-01 12:07:26.932731] I [MSGID: 106599] [glusterd-nfs-svc.c:81:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed [2019-04-01 12:07:26.963055] I [MSGID: 106568] [glusterd-proc-mgmt.c:92:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 16020 [2019-04-01 12:07:27.963708] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is stopped [2019-04-01 12:07:27.963951] I [MSGID: 106567] [glusterd-svc-mgmt.c:220:glusterd_svc_start] 0-management: Starting glustershd service [2019-04-01 12:07:28.985311] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped [2019-04-01 12:07:28.985478] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is stopped [2019-04-01 12:07:28.989024] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped [2019-04-01 12:07:28.989098] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is stopped [2019-04-01 12:07:29.299841] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6f9a) [0x7fd04cc46f9a] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_45653f46dbc8953f876a009b4ea8dd26 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd [2019-04-01 12:07:29.338437] E [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6f9a) [0x7fd04cc46f9a] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe69c3) [0x7fd04cc469c3] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_45653f46dbc8953f876a009b4ea8dd26 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd [2019-04-01 12:07:52.658922] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x3b2dd) [0x7fd04cb9b2dd] 
-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh --volname=vol_c5112b1e28a7bbc96640a8572009c6f0 --last=no [2019-04-01 12:07:52.679220] E [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x3b2dd) [0x7fd04cb9b2dd] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe69c3) [0x7fd04cc469c3] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh --volname=vol_c5112b1e28a7bbc96640a8572009c6f0 --last=no [2019-04-01 12:07:52.681081] I [MSGID: 106542] [glusterd-utils.c:8440:glusterd_brick_signal] 0-glusterd: sending signal 15 to brick with pid 27595 [2019-04-01 12:07:53.732699] I [MSGID: 106599] [glusterd-nfs-svc.c:81:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed [2019-04-01 12:07:53.791560] I [MSGID: 106568] [glusterd-proc-mgmt.c:92:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 18583 [2019-04-01 12:07:53.791857] I [MSGID: 106143] [glusterd-pmap.c:389:pmap_registry_remove] 0-pmap: removing brick /var/lib/heketi/mounts/vg_799fbf11286fbf497605bbe58c3e9dfa/brick_08bfe132dad6099ab387555298466ca3/brick on port 49162 [2019-04-01 12:07:53.822032] I [MSGID: 106006] [glusterd-svc-mgmt.c:356:glusterd_svc_common_rpc_notify] 0-management: glustershd has disconnected from glusterd. [2019-04-01 12:07:54.792497] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is stopped [2019-04-01 12:07:54.792736] I [MSGID: 106567] [glusterd-svc-mgmt.c:220:glusterd_svc_start] 0-management: Starting glustershd service [2019-04-01 12:07:55.812655] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped [2019-04-01 12:07:55.812837] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is stopped [2019-04-01 12:07:55.816580] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped [2019-04-01 12:07:55.816672] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is stopped [2019-04-01 12:07:59.829927] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x3b2dd) [0x7fd04cb9b2dd] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/delete/pre/S10selinux-del-fcontext.sh --volname=vol_c5112b1e28a7bbc96640a8572009c6f0 [2019-04-01 12:07:59.951300] I [MSGID: 106495] [glusterd-handler.c:3118:__glusterd_handle_getwd] 0-glusterd: Received getwd req [2019-04-01 12:07:59.967584] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6f9a) [0x7fd04cc46f9a] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post --volname=vol_c5112b1e28a7bbc96640a8572009c6f0 [2019-04-01 12:07:53.732626] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped [2019-04-01 12:07:53.732677] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: nfs service is stopped Examples 
of errors about deleted volumes from /var/log/glusterfs/glustershd.log - we get gigabytes of these every day: [2019-04-02 09:57:08.997572] E [MSGID: 108006] [afr-common.c:5314:__afr_handle_child_down_event] 10-vol_3be8a34875cc37098593d4bc8740477b-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up. [2019-04-02 09:57:09.033441] E [MSGID: 108006] [afr-common.c:5314:__afr_handle_child_down_event] 26-vol_2399e6ef0347ac569a0b1211f1fd109d-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up. [2019-04-02 09:57:09.036003] E [MSGID: 108006] [afr-common.c:5314:__afr_handle_child_down_event] 40-vol_fafddd8a937a550fbefc6c54830ce44f-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up. [2019-04-02 09:57:09.077109] E [MSGID: 108006] [afr-common.c:5314:__afr_handle_child_down_event] 2-vol_bca47201841f5b50d341eb2bedf5cd46-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up. [2019-04-02 09:57:09.103495] E [MSGID: 108006] [afr-common.c:5314:__afr_handle_child_down_event] 24-vol_fafddd8a937a550fbefc6c54830ce44f-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up. [2019-04-02 09:57:09.455818] E [MSGID: 108006] [afr-common.c:5314:__afr_handle_child_down_event] 30-vol_fafddd8a937a550fbefc6c54830ce44f-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up. [2019-04-02 09:57:09.511070] E [MSGID: 108006] [afr-common.c:5314:__afr_handle_child_down_event] 14-vol_cf3700764dfdce40d60b89fde7e1a643-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up. [2019-04-02 09:57:09.490714] E [MSGID: 108006] [afr-common.c:5314:__afr_handle_child_down_event] 0-vol_c5112b1e28a7bbc96640a8572009c6f0-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up. Example of concurrent glusterfs processes on a node: root 4559 16.8 6.5 14882048 1060288 ? Ssl Apr01 206:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 6507 14.7 6.1 14250324 998280 ? Ssl Apr01 178:33 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 6743 0.0 1.2 4780344 201708 ? Ssl Apr01 0:35 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 7660 17.0 6.3 14859244 1027432 ? Ssl Apr01 206:32 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 7789 0.1 1.5 5390364 250200 ? 
Ssl Apr01 1:08 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 9259 16.4 6.3 14841432 1029512 ? Ssl Apr01 198:12 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 12394 14.0 5.6 13549044 918424 ? Ssl Apr01 167:46 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 14980 9.2 4.7 11657716 778876 ? Ssl Apr01 110:10 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 16032 8.2 4.4 11040436 716020 ? Ssl Apr01 97:39 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 23961 6.3 3.7 9807736 610408 ? Ssl Apr01 62:03 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 25560 2.8 3.0 8474704 503488 ? Ssl Apr01 27:33 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 26293 3.2 1.2 4812208 200896 ? Ssl 09:26 0:35 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 28205 1.3 1.8 5992016 300012 ? Ssl Apr01 13:31 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 29186 1.4 2.1 6669800 352440 ? Ssl Apr01 13:59 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 30485 0.9 0.6 3527080 101552 ? 
Ssl 09:35 0:05 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 31171 1.0 0.6 3562360 104908 ? Ssl 09:35 0:05 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd root 32086 0.6 0.3 2925412 54852 ? Ssl 09:35 0:03 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option *replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name glustershd -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 20:14:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 20:14:09 +0000 Subject: [Bugs] [Bug 1695327] New: regression test fails with brick mux enabled. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695327 Bug ID: 1695327 Summary: regression test fails with brick mux enabled. Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: rabhat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: The test "tests/bitrot/bug-1373520.t" fails with the following error when it is run with brick-multiplexing enabled. [root at workstation glusterfs]# prove -rfv tests/bitrot/bug-1373520.t tests/bitrot/bug-1373520.t .. 1..31 ok 1, LINENUM:8 ok 2, LINENUM:9 ok 3, LINENUM:12 ok 4, LINENUM:13 ok 5, LINENUM:14 ok 6, LINENUM:15 ok 7, LINENUM:16 volume set: failed: Volume patchy is not of replicate type ok 8, LINENUM:23 ok 9, LINENUM:24 ok 10, LINENUM:25 ok 11, LINENUM:28 ok 12, LINENUM:29 ok 13, LINENUM:32 ok 14, LINENUM:33 ok 15, LINENUM:36 ok 16, LINENUM:38 ok 17, LINENUM:41 getfattr: Removing leading '/' from absolute path names ok 18, LINENUM:47 ok 19, LINENUM:48 ok 20, LINENUM:49 ok 21, LINENUM:50 ok 22, LINENUM:52 ok 23, LINENUM:53 ok 24, LINENUM:54 ok 25, LINENUM:55 ok 26, LINENUM:58 ok 27, LINENUM:61 ok 28, LINENUM:67 stat: cannot stat '/d/backends/patchy5/FILE1': No such file or directory stat: cannot stat '/d/backends/patchy5/FILE1': No such file or directory not ok 29 Got "0" instead of "512", LINENUM:70 FAILED COMMAND: 512 path_size /d/backends/patchy5/FILE1 ok 30, LINENUM:71 not ok 31 Got "0" instead of "512", LINENUM:72 FAILED COMMAND: 512 path_size /d/backends/patchy5/HL_FILE1 Failed 2/31 subtests Test Summary Report ------------------- tests/bitrot/bug-1373520.t (Wstat: 0 Tests: 31 Failed: 2) Failed tests: 29, 31 Files=1, Tests=31, 218 wallclock secs ( 0.03 usr 0.01 sys + 2.50 cusr 3.52 csys = 6.06 CPU) Result: FAIL Version-Release number of selected component (if applicable): How reproducible: Run the above testcase with brick multiplexing enabled. Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
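To reproduce bug 1695327 above, brick multiplexing has to be turned on before running the test; it is a cluster-wide setting (cluster.brick-multiplex), which is an assumption here since the report only says "with brick multiplexing enabled" without giving the command. A minimal sequence on a running test cluster:

    gluster volume set all cluster.brick-multiplex on
    prove -rfv tests/bitrot/bug-1373520.t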
From bugzilla at redhat.com Tue Apr 2 20:26:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 20:26:28 +0000 Subject: [Bugs] [Bug 1695327] regression test fails with brick mux enabled. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695327 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22481 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 20:26:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 20:26:29 +0000 Subject: [Bugs] [Bug 1695327] regression test fails with brick mux enabled. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695327 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22481 (tests/bitrot: enable self-heal daemon before accessing the files) posted (#1) for review on master by Raghavendra Bhat -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 02:37:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:37:15 +0000 Subject: [Bugs] [Bug 1695390] New: GF_LOG_OCCASSIONALLY API doesn't log at first instance Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695390 Bug ID: 1695390 Summary: GF_LOG_OCCASSIONALLY API doesn't log at first instance Product: GlusterFS Version: 6 Status: NEW Component: logging Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org Depends On: 1694925 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1694925 +++ Description of problem: GF_LOG_OCCASSIONALLY doesn't log on the first instance rather at every 42nd iterations which isn't effective as in some cases we might not have the code flow hitting the same log for as many as 42 times and we'd end up suppressing the log. Version-Release number of selected component (if applicable): Mainline How reproducible: Always --- Additional comment from Worker Ant on 2019-04-02 05:35:15 UTC --- REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY API) posted (#1) for review on master by Atin Mukherjee --- Additional comment from Worker Ant on 2019-04-02 10:57:31 UTC --- REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY API) merged (#2) on master by Atin Mukherjee Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694925 [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first instance -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 3 02:37:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:37:15 +0000 Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694925 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1695390 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695390 [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 02:38:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:38:53 +0000 Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694925 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1695391 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695391 [Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first instance -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 02:38:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:38:53 +0000 Subject: [Bugs] [Bug 1695391] New: GF_LOG_OCCASSIONALLY API doesn't log at first instance Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695391 Bug ID: 1695391 Summary: GF_LOG_OCCASSIONALLY API doesn't log at first instance Product: GlusterFS Version: 5 Status: NEW Component: logging Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org Depends On: 1694925 Blocks: 1695390 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1694925 +++ Description of problem: GF_LOG_OCCASSIONALLY doesn't log on the first instance rather at every 42nd iterations which isn't effective as in some cases we might not have the code flow hitting the same log for as many as 42 times and we'd end up suppressing the log. Version-Release number of selected component (if applicable): Mainline How reproducible: Always --- Additional comment from Worker Ant on 2019-04-02 05:35:15 UTC --- REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY API) posted (#1) for review on master by Atin Mukherjee --- Additional comment from Worker Ant on 2019-04-02 10:57:31 UTC --- REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY API) merged (#2) on master by Atin Mukherjee Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694925 [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first instance https://bugzilla.redhat.com/show_bug.cgi?id=1695390 [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 3 02:38:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:38:53 +0000 Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695390 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1695391 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695391 [Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first instance -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 02:39:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:39:22 +0000 Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695390 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22482 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 02:39:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:39:23 +0000 Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695390 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22482 (logging: Fix GF_LOG_OCCASSIONALLY API) posted (#1) for review on release-6 by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 02:42:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:42:28 +0000 Subject: [Bugs] [Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695391 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22483 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 02:42:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:42:29 +0000 Subject: [Bugs] [Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695391 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22483 (logging: Fix GF_LOG_OCCASSIONALLY API) posted (#1) for review on release-5 by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 3 03:02:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 03:02:09 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22484 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 03:02:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 03:02:10 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #607 from Worker Ant --- REVIEW: https://review.gluster.org/22484 (glusterd: remove redundant glusterd_check_volume_exists () calls) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 03:55:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 03:55:41 +0000 Subject: [Bugs] [Bug 1695399] New: With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695399 Bug ID: 1695399 Summary: With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty" Product: GlusterFS Version: 5 Status: NEW Component: distribute Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1672851 I am copying this bug because: Description of problem: If parallel-readdir is enabled on a volume, rm -rf fails with "Directory not empty" if contains stale linkto files. Version-Release number of selected component (if applicable): How reproducible: Consistently Steps to Reproduce: 1. Create a 3 brick distribute volume 2. Enable parallel-readdir and readdir-ahead on the volume 3. Fuse mount the volume and mkdir dir0 4. Create some files inside dir0 and rename them so linkto files are created on the bricks 5. Check the bricks to see which files have linkto files. Delete the data files directly on the bricks, leaving the linkto files behind. These are now stale linkto files. 6. Remount the volume 7. rm -rf dir0 Actual results: [root at rhgs313-6 fuse1]# rm -rf dir0/ rm: cannot remove ?dir0/?: Directory not empty Expected results: dir0 should be deleted without errors Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 03:57:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 03:57:00 +0000 Subject: [Bugs] [Bug 1695399] With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695399 --- Comment #1 from Nithya Balachandran --- RCA: rm -rf works by first listing and unlinking all entries in and then calling an rmdir . As DHT readdirp does not return linkto files in the listing, they are not unlinked as part of the rm -rf itself. 
dht_rmdir handles this by performing a readdirp internally on and deleting all stale linkto files before proceeding with the actual rmdir operation. When parallel-readdir is enabled, the rda xlator is loaded below dht in the graph and proactively lists and caches entries when an opendir is performed. Entries are returned from this cache for any subsequent readdirp calls on the directory that was opened. DHT uses the presence of the trusted.glusterfs.dht.linkto xattr to determine whether a file is a linkto file. As this call to opendir does not set trusted.glusterfs.dht.linkto in the list of requested xattrs for the opendir call, the cached entries do not contain this xattr value. As none of the entries returned will have the xattr, DHT believes they are all data files and fails the rmdir with ENOTEMPTY. Turning off parallel-readdir allows the rm -rf to succeed. Upstream master: https://review.gluster.org/22160 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:07:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:07:32 +0000 Subject: [Bugs] [Bug 1695399] With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695399 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22485 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:07:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:07:33 +0000 Subject: [Bugs] [Bug 1695399] With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695399 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22485 (cluster/dht: Request linkto xattrs in dht_rmdir opendir) posted (#1) for review on release-5 by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:07:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:07:45 +0000 Subject: [Bugs] [Bug 1695399] With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695399 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
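Following up on the RCA in comment #1 of bug 1695399 above: DHT identifies a linkto file only by its trusted.glusterfs.dht.linkto xattr, so a stale linkto file can be spotted directly on a brick, and the RCA notes that turning off parallel-readdir lets the rm -rf succeed. A small sketch, with brick path and volume name as placeholders:

    # on a brick: a linkto file carries the DHT linkto xattr
    getfattr -n trusted.glusterfs.dht.linkto -e text /d/backends/brick0/dir0/FILE

    # interim workaround described in the RCA: take the rda cache out of the readdirp path
    gluster volume set VOLNAME performance.parallel-readdir off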
From bugzilla at redhat.com Wed Apr 3 04:10:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:10:06 +0000 Subject: [Bugs] [Bug 1695403] New: rm -rf fails with "Directory not empty" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 Bug ID: 1695403 Summary: rm -rf fails with "Directory not empty" Product: GlusterFS Version: 5 Status: NEW Component: distribute Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Depends On: 1676400 Blocks: 1458215, 1661258, 1677260, 1686272 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1676400 +++ Description of problem: When 2 clients run rm -rf concurrently, the operation sometimes fails with " Directory not empty" ls on the directory from the gluster mount point does not show any entries however there are directories on some of the bricks. Version-Release number of selected component (if applicable): How reproducible: Rare.This is a race condition. Steps to Reproduce: Steps: 1. Create 3x (2+1) arbiter volume and fuse mount it. Make sure lookup-optimize is enabled. 2. mkdir -p dir0/dir1/dir2. 3. Unmount and remount the volume to ensure a fresh lookup is sent. GDB into the fuse process and set a breakpoint at dht_lookup. 4. from the client mount: rm -rf mra_sources 5. When gdb breaks at dht_lookup for dir0/dir1/dir2, set a breakpoint at dht_lookup_cbk. Allow the process to continue until it hits dht_lookup_cbk. dht_lookup_cbk will return with op_ret = 0 . 6. Delete dir0/dir1/dir2 from every brick on the non-hashed subvols. 7. Set a breakpoint in dht_selfheal_dir_mkdir and allow gdb to continue. 8. When the process breaks at dht_selfheal_dir_mkdir, delete the directory from the hashed subvolume bricks. 9. In dht_selfheal_dir_mkdir_lookup_cbk, set a breakpoint at line : if (local->selfheal.hole_cnt == layout->cnt) { When gdb breaks at this point, set local->selfheal.hole_cnt to a value different from that of layout->cnt. Allow gdb to proceed. DHT will create the directories only on the non-hashed subvolumes as the layout has not been updated to indicate that the dir no longer exists on the hashed subvolume. This directory will no longer be visible on the mount point causing the rm -rf to fail. Actual results: root at server fuse1]# rm -rf mra_sources rm: cannot remove ?dir0/dir1?: Directory not empty Expected results: rm -rf should succeed. Additional info: As lookup-optimize is enabled, subsequent lookups cannot heal the directory. The same steps with lookup-optimize disabled will work as a subsequent lookup will lookup everywhere even if the entry does not exist on the hashed subvol. --- Additional comment from Nithya Balachandran on 2019-02-12 08:08:31 UTC --- RCA for the invisible directory left behind with concurrent rm -rf : -------------------------------------------------------------------- dht_selfheal_dir_mkdir_lookup_cbk (...) { ... 1381 this_call_cnt = dht_frame_return (frame); 1382 1383 LOCK (&frame->lock); 1384 { 1385 if ((op_ret < 0) && 1386 (op_errno == ENOENT || op_errno == ESTALE)) { 1387 local->selfheal.hole_cnt = !local->selfheal.hole_cnt ? 
1 1388 : local->selfheal.hole_cnt + 1; 1389 } 1390 1391 if (!op_ret) { 1392 dht_iatt_merge (this, &local->stbuf, stbuf, prev); 1393 } 1394 check_mds = dht_dict_get_array (xattr, conf->mds_xattr_key, 1395 mds_xattr_val, 1, &errst); 1396 if (dict_get (xattr, conf->mds_xattr_key) && check_mds && !errst) { 1397 dict_unref (local->xattr); 1398 local->xattr = dict_ref (xattr); 1399 } 1400 1401 } 1402 UNLOCK (&frame->lock); 1403 1404 if (is_last_call (this_call_cnt)) { 1405 if (local->selfheal.hole_cnt == layout->cnt) { 1406 gf_msg_debug (this->name, op_errno, 1407 "Lookup failed, an rmdir could have " 1408 "deleted this entry %s", loc->name); 1409 local->op_errno = op_errno; 1410 goto err; 1411 } else { 1412 for (i = 0; i < layout->cnt; i++) { 1413 if (layout->list[i].err == ENOENT || 1414 layout->list[i].err == ESTALE || 1415 local->selfheal.force_mkdir) 1416 missing_dirs++; 1417 } There are 2 problems here: 1. The layout is not updated with the new subvol status on error. In this case, the initial lookup found a directory on the hashed subvol so only 2 entries in the layout indicate missing directories. However, by the time the selfheal code is executed, the racing rmdir has deleted the directory from all the subvols. At this point, the directory does not exist on any subvol and dht_selfheal_dir_mkdir_lookup_cbk gets an error from all 3 subvols, but this new status is not updated in the layout which still has only 2 missing dirs marked. 2. this_call_cnt = dht_frame_return (frame); is called before processing the frame. So with a call cnt of 3, it is possible that the second response has reached 1404 before the third one has started processing the return values. At this point, local->selfheal.hole_cnt != layout->cnt so control goes to line 1412. At line 1412, since we are still using the old layout, only the directories on the non-hashed subvols are considered when incrementing missing_dirs and for the healing. The combination of these two causes the selfheal to start healing the directories on the non-hashed subvols. It succeeds in creating the dirs on the non-hashed subvols. However, to set the layout, dht takes an inodelk on the hashed subvol which fails because the directory does on exist there. We therefore end up with directories on the non-hashed subvols with no layouts set. --- Additional comment from Worker Ant on 2019-02-12 08:34:01 UTC --- REVIEW: https://review.gluster.org/22195 (cluster/dht: Fix lookup selfheal and rmdir race) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2019-02-13 18:20:26 UTC --- REVIEW: https://review.gluster.org/22195 (cluster/dht: Fix lookup selfheal and rmdir race) merged (#3) on master by Raghavendra G Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1458215 [Bug 1458215] Slave reports ENOTEMPTY when rmdir is executed on master https://bugzilla.redhat.com/show_bug.cgi?id=1676400 [Bug 1676400] rm -rf fails with "Directory not empty" https://bugzilla.redhat.com/show_bug.cgi?id=1677260 [Bug 1677260] rm -rf fails with "Directory not empty" https://bugzilla.redhat.com/show_bug.cgi?id=1686272 [Bug 1686272] fuse mount logs inundated with [dict.c:471:dict_get] (-->/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0x6228d) [0x7f9029d8628d] -->/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x202f7) [0x7f9029aa12f7] -->/lib64/libglusterfs.so.0( -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
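The "Additional info" of bug 1695403 above points out that the leftover, invisible directory can only be healed by a later lookup when lookup-optimize is disabled. A minimal sketch of checking and toggling that setting, assuming the usual CLI option name cluster.lookup-optimize; the volume name is a placeholder:

    # check the current value
    gluster volume get VOLNAME cluster.lookup-optimize

    # precondition used in the reproduction steps
    gluster volume set VOLNAME cluster.lookup-optimize on

    # with it off, a subsequent lookup queries all subvols and can heal the directory
    gluster volume set VOLNAME cluster.lookup-optimize off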
From bugzilla at redhat.com Wed Apr 3 04:10:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:10:06 +0000 Subject: [Bugs] [Bug 1676400] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676400 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1695403 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 [Bug 1695403] rm -rf fails with "Directory not empty" -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 04:10:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:10:06 +0000 Subject: [Bugs] [Bug 1677260] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1677260 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1695403 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 [Bug 1695403] rm -rf fails with "Directory not empty" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:10:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:10:42 +0000 Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:13:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:13:29 +0000 Subject: [Bugs] [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-03 04:13:29 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22395 (transport/socket: log shutdown msg occasionally) merged (#5) on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:13:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:13:30 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1691616, which changed state. Bug 1691616 Summary: client log flooding with intentional socket shutdown message when a brick is down https://bugzilla.redhat.com/show_bug.cgi?id=1691616 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 3 04:14:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:14:42 +0000 Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22486 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 04:14:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:14:43 +0000 Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22486 (cluster/dht: Fix lookup selfheal and rmdir race) posted (#1) for review on release-5 by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 04:16:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:16:17 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 --- Comment #16 from asender at testlabs.com.au --- (In reply to Kotresh HR from comment #13) > This issue is fixed in upstream and 5.x and 6.x series > > Patch: https://review.gluster.org/#/c/glusterfs/+/20093/ We are having the issue in replicate mode (using replica 2). Adrian Sender -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 04:28:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:28:18 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #9 from Worker Ant --- REVIEW: https://review.gluster.org/22455 (posix-acl: remove default functions, and use library fn instead) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:29:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:29:15 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 21960 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 3 04:31:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:31:19 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-03 04:31:19 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22403 (client-rpc: Fix the payload being sent on the wire) merged (#3) on release-6 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:31:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:31:19 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Bug 1692093 depends on bug 1692101, which changed state. Bug 1692101 Summary: Network throughput usage increased x5 https://bugzilla.redhat.com/show_bug.cgi?id=1692101 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:31:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:31:42 +0000 Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-03 04:31:42 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22459 (gfapi: Unblock epoll thread for upcall processing) merged (#2) on release-6 by Amar Tumballi -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:32:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:32:03 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-03 04:32:03 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22447 (geo-rep: Fix syncing multiple rename of symlink) merged (#3) on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 04:37:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:37:12 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #608 from Worker Ant --- REVIEW: https://review.gluster.org/22387 (changelog: remove unused code.) 
merged (#6) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:40:34 +0000 Subject: [Bugs] [Bug 1579615] [geo-rep]: [Errno 39] Directory not empty In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1579615 Bug 1579615 depends on bug 1575553, which changed state. Bug 1575553 Summary: [geo-rep]: [Errno 39] Directory not empty https://bugzilla.redhat.com/show_bug.cgi?id=1575553 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |INSUFFICIENT_DATA -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:37:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:37:12 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #609 from Worker Ant --- REVIEW: https://review.gluster.org/22439 (rpclib: slow floating point math and libm) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:54:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:54:35 +0000 Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695390 --- Comment #2 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22482 (logging: Fix GF_LOG_OCCASSIONALLY API) posted (#2) for review on release-6 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:54:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:54:36 +0000 Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695390 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22482 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 04:54:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:54:38 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22482 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 3 04:54:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:54:39 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22482 (logging: Fix GF_LOG_OCCASSIONALLY API) posted (#2) for review on release-6 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 04:56:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:56:39 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22487 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 04:56:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 04:56:40 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22487 (transport/socket: log shutdown msg occasionally) posted (#1) for review on release-6 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 05:00:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 05:00:09 +0000 Subject: [Bugs] [Bug 1695416] New: client log flooding with intentional socket shutdown message when a brick is down Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695416 Bug ID: 1695416 Summary: client log flooding with intentional socket shutdown message when a brick is down Product: GlusterFS Version: 5 Status: NEW Component: core Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, mchangir at redhat.com, pasik at iki.fi Depends On: 1679904, 1691616 Blocks: 1691620, 1672818 (glusterfs-6.0) Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1691616 +++ +++ This bug was initially created as a clone of Bug #1679904 +++ Description of problem: client log flooding with intentional socket shutdown message when a brick is down [2019-02-22 08:24:42.472457] I [socket.c:811:__socket_shutdown] 0-test-vol-client-0: intentional socket shutdown(5) Version-Release number of selected component (if applicable): glusterfs-6 How reproducible: Always Steps to Reproduce: 1. 1 X 3 volume created and started over a 3 node cluster 2. mount a fuse client 3. kill a brick 4. Observe that fuse client log is flooded with the intentional socket shutdown message after every 3 seconds. 
Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-03-22 05:14:20 UTC --- REVIEW: https://review.gluster.org/22395 (transport/socket: move shutdown msg to DEBUG loglevel) posted (#1) for review on master by Raghavendra G --- Additional comment from Worker Ant on 2019-04-03 04:13:29 UTC --- REVIEW: https://review.gluster.org/22395 (transport/socket: log shutdown msg occasionally) merged (#5) on master by Raghavendra G Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1679904 [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down https://bugzilla.redhat.com/show_bug.cgi?id=1691616 [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 05:00:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 05:00:09 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1695416 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695416 [Bug 1695416] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 05:00:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 05:00:09 +0000 Subject: [Bugs] [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1695416 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695416 [Bug 1695416] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 05:00:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 05:00:09 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1695416 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695416 [Bug 1695416] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
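A condensed sketch of the reproduction steps from bug 1695416 above (1 x 3 volume, fuse mount, one brick killed). The host names, brick paths, pgrep pattern and client log file name are assumptions, not taken from the report:

    # assumes three peer-probed nodes h1, h2, h3
    gluster volume create test-vol replica 3 h1:/bricks/b1 h2:/bricks/b2 h3:/bricks/b3
    gluster volume start test-vol
    mount -t glusterfs h1:/test-vol /mnt/test-vol

    # on h1: kill the glusterfsd process serving brick b1
    kill -9 "$(pgrep -f 'glusterfsd.*bricks-b1')"

    # the fuse client log (named after the mount point) then repeats the message every ~3 seconds
    grep -c 'intentional socket shutdown' /var/log/glusterfs/mnt-test-vol.log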
From bugzilla at redhat.com Wed Apr 3 05:56:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 05:56:54 +0000 Subject: [Bugs] [Bug 1695436] New: geo-rep session creation fails with IPV6 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695436 Bug ID: 1695436 Summary: geo-rep session creation fails with IPV6 Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Severity: high Priority: high Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: amukherj at redhat.com, avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sasundar at redhat.com, storage-qa-internal at redhat.com Depends On: 1688833 Blocks: 1688231, 1688239 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1688833 +++ +++ This bug was initially created as a clone of Bug #1688231 +++ Description of problem: ----------------------- This issue is seen with the RHHI-V usecase. VM images are stored in the gluster volumes and geo-replicated to the secondary site, for DR use case. When IPv6 is used, the additional mount option is required --xlator-option=transport.address-family=inet6". But when geo-rep check for slave space with gverify.sh, these mount options are not considered and it fails to mount either master or slave volume Version-Release number of selected component (if applicable): -------------------------------------------------------------- RHGS 3.4.4 ( glusterfs-3.12.2-47 ) How reproducible: ----------------- Always Steps to Reproduce: ------------------- 1. Create geo-rep session from the master to slave Actual results: -------------- Creation of geo-rep session fails at gverify.sh Expected results: ----------------- Creation of geo-rep session should be successful Additional info: --- Additional comment from SATHEESARAN on 2019-03-13 11:49:02 UTC --- [root@ ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 2620:52:0:4624:5054:ff:fee9:57f8 master.lab.eng.blr.redhat.com 2620:52:0:4624:5054:ff:fe6d:d816 slave.lab.eng.blr.redhat.com [root@ ~]# gluster volume info Volume Name: master Type: Distribute Volume ID: 9cf0224f-d827-4028-8a45-37f7bfaf1c78 Status: Started Snapshot Count: 0 Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: master.lab.eng.blr.redhat.com:/gluster/brick1/master Options Reconfigured: performance.client-io-threads: on server.event-threads: 4 client.event-threads: 4 user.cifs: off features.shard: on network.remote-dio: enable performance.low-prio-threads: 32 performance.io-cache: off performance.read-ahead: off performance.quick-read: off transport.address-family: inet6 nfs.disable: on [root at localhost ~]# gluster volume geo-replication master slave.lab.eng.blr.redhat.com::slave create push-pem Unable to mount and fetch slave volume details. 
Please check the log: /var/log/glusterfs/geo-replication/gverify-slavemnt.log geo-replication command failed Snip from gverify-slavemnt.log [2019-03-13 11:46:28.746494] I [MSGID: 100030] [glusterfsd.c:2646:main] 0-glusterfs: Started running glusterfs version 3.12.2 (args: glusterfs --xlator-option=*dht.lookup-unhashed=off --volfile-server slave.lab.eng.blr.redhat.com --volfile-id slave -l /var/log/glusterfs/geo-replication/gverify-slavemnt.log /tmp/gverify.sh.y1TCoY) [2019-03-13 11:46:28.750595] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction [2019-03-13 11:46:28.753702] E [MSGID: 101075] [common-utils.c:482:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known) [2019-03-13 11:46:28.753725] E [name.c:267:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host slave.lab.eng.blr.redhat.com [2019-03-13 11:46:28.753953] I [glusterfsd-mgmt.c:2337:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: slave.lab.eng.blr.redhat.com [2019-03-13 11:46:28.753980] I [glusterfsd-mgmt.c:2358:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers [2019-03-13 11:46:28.753998] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0 [2019-03-13 11:46:28.754073] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-03-13 11:46:28.754154] W [glusterfsd.c:1462:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_notify+0xab) [0x7fc39d379bab] -->glusterfs(+0x11fcd) [0x56427db95fcd] -->glusterfs(cleanup_and_exit+0x6b) [0x56427db8eb2b] ) 0-: received signum (1), shutting down [2019-03-13 11:46:28.754197] I [fuse-bridge.c:6611:fini] 0-fuse: Unmounting '/tmp/gverify.sh.y1TCoY'. [2019-03-13 11:46:28.760213] I [fuse-bridge.c:6616:fini] 0-fuse: Closing fuse connection to '/tmp/gverify.sh.y1TCoY'. --- Additional comment from Worker Ant on 2019-03-14 14:51:56 UTC --- REVIEW: https://review.gluster.org/22363 (WIP geo-rep: IPv6 support) posted (#1) for review on master by Aravinda VK --- Additional comment from Worker Ant on 2019-03-15 14:59:56 UTC --- REVIEW: https://review.gluster.org/22363 (geo-rep: IPv6 support) merged (#3) on master by Aravinda VK Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1688231 [Bug 1688231] geo-rep session creation fails with IPV6 https://bugzilla.redhat.com/show_bug.cgi?id=1688239 [Bug 1688239] geo-rep session creation fails with IPV6 https://bugzilla.redhat.com/show_bug.cgi?id=1688833 [Bug 1688833] geo-rep session creation fails with IPV6 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 05:56:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 05:56:54 +0000 Subject: [Bugs] [Bug 1688833] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688833 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1695436 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695436 [Bug 1695436] geo-rep session creation fails with IPV6 -- You are receiving this mail because: You are on the CC list for the bug. 
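The description of bug 1695436 above states that the slave volume only mounts when transport.address-family=inet6 is passed, which gverify.sh does not do. A manual check of that claim, reusing the host and volume names quoted in the report; the mount point and log path are arbitrary:

    mkdir -p /mnt/slave-check
    glusterfs --volfile-server slave.lab.eng.blr.redhat.com \
              --volfile-id slave \
              --xlator-option=transport.address-family=inet6 \
              -l /var/log/glusterfs/slave-inet6-check.log \
              /mnt/slave-check

If this mounts while the same command without the xlator-option fails with the DNS resolution error shown in gverify-slavemnt.log, the missing option in gverify.sh is confirmed as the cause.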
From bugzilla at redhat.com Wed Apr 3 06:05:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 06:05:08 +0000 Subject: [Bugs] [Bug 1695436] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695436 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |avishwan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 06:30:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 06:30:07 +0000 Subject: [Bugs] [Bug 1695436] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695436 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22488 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 06:30:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 06:30:08 +0000 Subject: [Bugs] [Bug 1695436] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695436 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22488 (geo-rep: IPv6 support) posted (#1) for review on release-6 by Aravinda VK -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 06:30:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 06:30:59 +0000 Subject: [Bugs] [Bug 1695445] New: ssh-port config set is failing Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695445 Bug ID: 1695445 Summary: ssh-port config set is failing Product: GlusterFS Version: 6 Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: bugs at gluster.org Depends On: 1692666 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1692666 +++ Description of problem: If non-standard ssh-port is used, Geo-rep can be configured to use that ssh port by configuring as below ``` gluster volume geo-replication :: config ssh-port 2222 ``` But this command is failing even if a valid value is passed. ``` $ gluster v geo gv1 centos.sonne::gv2 config ssh-port 2222 geo-replication config-set failed for gv1 centos.sonne::gv2 geo-replication command failed ``` --- Additional comment from Worker Ant on 2019-03-26 08:00:05 UTC --- REVIEW: https://review.gluster.org/22418 (geo-rep: fix integer config validation) posted (#1) for review on master by Aravinda VK --- Additional comment from Worker Ant on 2019-03-27 14:35:10 UTC --- REVIEW: https://review.gluster.org/22418 (geo-rep: fix integer config validation) merged (#2) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692666 [Bug 1692666] ssh-port config set is failing -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
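For bug 1695445 above, the failing command itself is the whole configuration step; once the integer validation fix lands, the expected workflow looks like the sketch below. It assumes sshd on the slave nodes already listens on port 2222 and reuses the volume and host names from the report:

    # set the non-standard ssh port for the session
    gluster volume geo-replication gv1 centos.sonne::gv2 config ssh-port 2222

    # reading the option back (no value given) should print 2222
    gluster volume geo-replication gv1 centos.sonne::gv2 config ssh-port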
From bugzilla at redhat.com Wed Apr 3 06:30:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 06:30:59 +0000 Subject: [Bugs] [Bug 1692666] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692666 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1695445 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695445 [Bug 1695445] ssh-port config set is failing -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 06:31:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 06:31:13 +0000 Subject: [Bugs] [Bug 1695445] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695445 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |avishwan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 06:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 06:33:17 +0000 Subject: [Bugs] [Bug 1695445] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695445 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22489 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 06:33:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 06:33:18 +0000 Subject: [Bugs] [Bug 1695445] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695445 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22489 (geo-rep: fix integer config validation) posted (#1) for review on release-6 by Aravinda VK -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 07:24:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 07:24:59 +0000 Subject: [Bugs] [Bug 1695099] The number of glusterfs processes keeps increasing, using all available resources In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695099 --- Comment #1 from Christian Ihle --- Example of how to reliably reproduce the issue from Kubernetes. 1. kubectl apply -f pvc.yaml 2. kubectl delete -f pvc.yaml There will almost always be a few more glusterfs-processes running after doing this. 
pvc.yaml: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 namespace: default spec: accessModes: - ReadWriteMany resources: requests: storage: 10Gi storageClassName: glusterfs-replicated-2 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc2 namespace: default spec: accessModes: - ReadWriteMany resources: requests: storage: 10Gi storageClassName: glusterfs-replicated-2 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc3 namespace: default spec: accessModes: - ReadWriteMany resources: requests: storage: 10Gi storageClassName: glusterfs-replicated-2 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 08:09:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 08:09:22 +0000 Subject: [Bugs] [Bug 1695480] New: Global Thread Pool Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695480 Bug ID: 1695480 Summary: Global Thread Pool Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: jahernan at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: The Global Thread Pool provides lower contention and increased performance in some cases, but it has been observed that sometimes there's a huge increment in the number of requests going to the disks in parallel which seems to be causing a performance degradation. Actually, it seems that sending the same amount of requests but from fewer threads is giving higher performance. The current implementation already does some dynamic adjustment of the number of active threads based on the current number of requests, but it doesn't consider the load on the back-end file systems. This means that as long as more requests come, the number of threads is scaled accordingly, which could have a negative impact if the back-end is already saturated. The way to control that in current version is to manually adjust the maximum number of threads that can be used, which effectively limits the load on back-end file systems even if more requests are coming, but this is only useful for volumes whose workload is homogeneous and constant. To make it more versatile, the maximum number of threads need to be automatically self-adjusted to adapt dynamically to the current load so that it can be useful in a general case. Version-Release number of selected component (if applicable): mainline How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 08:15:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 08:15:37 +0000 Subject: [Bugs] [Bug 1695484] New: smoke fails with "Build root is locked by another process" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695484 Bug ID: 1695484 Summary: smoke fails with "Build root is locked by another process" Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Please check https://build.gluster.org/job/devrpm-fedora/15405/console for more details. Smoke is failing with the reason mentioned in the subject. 
Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 08:35:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 08:35:11 +0000 Subject: [Bugs] [Bug 1695484] smoke fails with "Build root is locked by another process" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695484 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com --- Comment #1 from Deepshikha khandelwal --- It happens mainly because your previously running build was aborted by a new patchset and hence no cleanup. Re-triggering might help. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 08:39:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 08:39:23 +0000 Subject: [Bugs] [Bug 1695099] The number of glusterfs processes keeps increasing, using all available resources In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695099 --- Comment #2 from Christian Ihle --- I have been experimenting with setting "max_inflight_operations" to 1 in Heketi, as mentioned in https://github.com/heketi/heketi/issues/1439 Example of how to configure this: https://github.com/heketi/heketi/blob/8417f25f474b0b16e1936a66f9b63bcedfba6e4c/tests/functional/TestSmokeTest/config/heketi.json I am not able to reproduce the issue anymore when the value is set to 1. The number of glusterfs-processes varies between 0 and 2 during volume changes, but always settles on 1 single process afterwards. This seems to be an easy workaround, but hopefully the bug will be fixed so I can revert back to concurrent Heketi again. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 08:41:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 08:41:52 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #10 from Worker Ant --- REVIEW: https://review.gluster.org/22443 (sdfs: enable pass-through) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 09:50:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 09:50:43 +0000 Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 nravinas at redhat.com changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Group| |redhat CC| |nravinas at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. 
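Relating to comments #1 and #2 of bug 1695099 above: the leak is easiest to see by repeating the PVC create/delete cycle and counting glusterfs processes on a gluster node after each pass. A small sketch; the loop count and sleep interval are arbitrary:

    for i in 1 2 3; do
        kubectl apply -f pvc.yaml
        kubectl delete -f pvc.yaml
        sleep 60
        # run the count on a gluster node; with the bug present it keeps growing
        echo "glusterfs processes after pass $i: $(pgrep -cx glusterfs)"
    done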
From bugzilla at redhat.com Wed Apr 3 10:04:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 10:04:23 +0000 Subject: [Bugs] [Bug 1692957] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-04-03 10:04:23 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 10:04:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 10:04:24 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1692957, which changed state. Bug 1692957 Summary: build: link libgfrpc with MATH_LIB (libm, -lm) https://bugzilla.redhat.com/show_bug.cgi?id=1692957 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NOTABUG -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 10:04:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 10:04:24 +0000 Subject: [Bugs] [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 Bug 1692959 depends on bug 1692957, which changed state. Bug 1692957 Summary: build: link libgfrpc with MATH_LIB (libm, -lm) https://bugzilla.redhat.com/show_bug.cgi?id=1692957 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NOTABUG -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 10:29:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 10:29:53 +0000 Subject: [Bugs] [Bug 1695484] smoke fails with "Build root is locked by another process" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695484 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #2 from M. Scherer --- Mhh, then shouldn't we clean up when there is something that do stop the build ? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 10:30:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 10:30:46 +0000 Subject: [Bugs] [Bug 1692957] rpclib: slow floating point math and libm In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|NOTABUG |--- Assignee|bugs at gluster.org |kkeithle at redhat.com Summary|build: link libgfrpc with |rpclib: slow floating point |MATH_LIB (libm, -lm) |math and libm Keywords| |Reopened -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 3 10:30:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 10:30:46 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1692957, which changed state. Bug 1692957 Summary: rpclib: slow floating point math and libm https://bugzilla.redhat.com/show_bug.cgi?id=1692957 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|NOTABUG |--- -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 10:30:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 10:30:46 +0000 Subject: [Bugs] [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 Bug 1692959 depends on bug 1692957, which changed state. Bug 1692957 Summary: rpclib: slow floating point math and libm https://bugzilla.redhat.com/show_bug.cgi?id=1692957 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|NOTABUG |--- -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 11:09:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 11:09:00 +0000 Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 nravinas at redhat.com changed: What |Removed |Added ---------------------------------------------------------------------------- Group|redhat | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 11:23:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 11:23:06 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22491 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 11:23:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 11:23:07 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #11 from Worker Ant --- REVIEW: https://review.gluster.org/22491 (tests: make sure to traverse all of meta dir) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 3 11:38:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 11:38:07 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22492 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 11:38:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 11:38:08 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #610 from Worker Ant --- REVIEW: https://review.gluster.org/22492 (tests: shard read test correction) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 12:25:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 12:25:57 +0000 Subject: [Bugs] [Bug 1644322] flooding log with "glusterfs-fuse: read from /dev/fuse returned -1 (Operation not permitted)" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644322 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22494 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 12:25:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 12:25:58 +0000 Subject: [Bugs] [Bug 1644322] flooding log with "glusterfs-fuse: read from /dev/fuse returned -1 (Operation not permitted)" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644322 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22494 (fuse: rate limit reading from fuse device upon receiving EPERM) posted (#1) for review on master by Csaba Henk -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 14:19:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 14:19:24 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Bug 1694002 depends on bug 1693648, which changed state. Bug 1693648 Summary: Geo-re: Geo replication failing in "cannot allocate memory" https://bugzilla.redhat.com/show_bug.cgi?id=1693648 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 3 15:09:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 15:09:52 +0000 Subject: [Bugs] [Bug 1695484] smoke fails with "Build root is locked by another process" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695484 --- Comment #3 from M. 
Scherer --- So indeed, https://build.gluster.org/job/devrpm-fedora/15404/ aborted the patch test, then https://build.gluster.org/job/devrpm-fedora/15405/ failed. but the next run worked. Maybe the problem is that it take more than 30 seconds to clean the build or something similar. Maybe we need to add some more time, but I can't seems to find a log to evaluate how long it does take when things are cancelled. Let's keep stuff opened if the issue arise again to collect the log, and see if there is a pattern. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 21:55:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 21:55:30 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 --- Comment #17 from perplexed767 --- (In reply to Kotresh HR from comment #14) > Workaround: > The issue affects only single distribute volumes i.e 1*2 and 1*3 volumes. > It doesn't affect n*2 or n*3 volumes where n>1. So one way to fix is to > convert > single distribute to two distribute volume or upgrade to later versions > if it can't be waited until next 4.1.x release. greate thanks, is it planned to be backported to for 4.x as my os (sles 12.2) does not currenty support 5.x gluster) I would have to upgrade the os to sles 12.3 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 04:28:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 04:28:46 +0000 Subject: [Bugs] [Bug 1696046] New: Log level changes do not take effect until the process is restarted Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696046 Bug ID: 1696046 Summary: Log level changes do not take effect until the process is restarted Product: GlusterFS Version: mainline Status: NEW Component: core Severity: high Priority: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, nbalacha at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, vbellur at redhat.com Depends On: 1695081 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695081 [Bug 1695081] Log level changes do not take effect until the process is restarted -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 04:29:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 04:29:02 +0000 Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696046 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 4 04:43:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 04:43:23 +0000 Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696046 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22495 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 04:43:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 04:43:24 +0000 Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696046 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22495 (core: Log level changes do not effect on running client process) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 06:10:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 06:10:30 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 --- Comment #18 from Kotresh HR --- I have backported the patch https://review.gluster.org/#/c/glusterfs/+/22476/. It's not merged yet. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 06:44:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 06:44:07 +0000 Subject: [Bugs] [Bug 1696075] New: Client lookup is unable to heal missing directory GFID entry Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696075 Bug ID: 1696075 Summary: Client lookup is unable to heal missing directory GFID entry Product: GlusterFS Version: 6 Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: anepatel at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: When dir gfid entry is missing on few backend bricks for a directory, client heal is unable to re-create the gfid entries after doing stat from client. The automated test-case passes on downstream 3.4.4 but is failing on upstream gluster 6. Version-Release number of selected component (if applicable): Latest gluster 6 How reproducible: Always, Steps to Reproduce: 1. Create a 2X3 dist-replicated volume, and fuse mount it 2. Create a empty directory from mount point 3. Verify the gfid entry is present on all backend bricks for this dir 4. Delete gfid entry for 5 out of 6 backend bricks, brick{1..6} 5. Now trigger heal from mount pt. #ls -l #find . | xargs stat 6. Check backend bricks, the gfid entry should be healed for all the bricks. Actual results: At step 6, gfid entry is not created after client lookup. Expected results: Client lookup should trigger heal and gfid should be healed Additional info: There is also a latest fix per BZ#1661258, in which the dht delegates task to AFR when there is a missing gfid for all bricks in subvol, as per my understanding. 
The test-case is automated and can be found at https://review.gluster.org/c/glusto-tests/+/22480/ The test passes Downstream but fails upstream, the glusto logs for the failure can be found at https://ci.centos.org/job/gluster_glusto-patch-check/1277/artifact/glustomain.log -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 06:44:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 06:44:29 +0000 Subject: [Bugs] [Bug 1696075] Client lookup is unable to heal missing directory GFID entry In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696075 Anees Patel changed: What |Removed |Added ---------------------------------------------------------------------------- QA Contact| |anepatel at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 06:45:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 06:45:59 +0000 Subject: [Bugs] [Bug 1696077] New: Add pause and resume test case for geo-rep Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696077 Bug ID: 1696077 Summary: Add pause and resume test case for geo-rep Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: sacharya at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: There is no pause and resume test case for geo-rep Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 07:17:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 07:17:44 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22496 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 07:17:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 07:17:45 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #611 from Worker Ant --- REVIEW: https://review.gluster.org/22496 (cluster/afr: Invalidate inode on change of split-brain-choice) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
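To make steps 4 and 5 of Bug 1696075 above concrete: on a brick, a directory's GFID is stored in the trusted.gfid extended attribute and mirrored as a symlink under the brick's .glusterfs directory. A rough sketch of removing that entry on a brick and then triggering a lookup-driven heal from the client; the brick path, directory name and mount point are placeholders, not values from the report:

    # on 5 of the 6 brick servers (leave one brick untouched)
    setfattr -x trusted.gfid /bricks/brick1/dir1
    rm -f /bricks/brick1/.glusterfs/<aa>/<bb>/<full-gfid>   # symlink entry for the directory

    # from the fuse mount, trigger lookups as in step 5
    ls -l /mnt/testvol
    find /mnt/testvol | xargs stat > /dev/null

After step 6 the expectation is that trusted.gfid and the .glusterfs entry reappear on all six bricks.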
From bugzilla at redhat.com Thu Apr 4 08:28:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:28:38 +0000 Subject: [Bugs] [Bug 1696136] New: gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696136 Bug ID: 1696136 Summary: gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: sharding Keywords: Triaged Severity: urgent Priority: urgent Assignee: bugs at gluster.org Reporter: kdhananj at redhat.com QA Contact: bugs at gluster.org CC: amukherj at redhat.com, bkunal at redhat.com, bugs at gluster.org, pasik at iki.fi, rhs-bugs at redhat.com, sabose at redhat.com, sankarshan at redhat.com, sasundar at redhat.com, storage-qa-internal at redhat.com, ykaul at redhat.com Depends On: 1694595 Blocks: 1694604 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1694595 +++ Description of problem: ------------------------ When deleting the 2TB image file , gluster fuse mount process has crashed Version-Release number of selected component (if applicable): ------------------------------------------------------------- glusterfs-3.12.2-47 How reproducible: ----------------- 1/1 Steps to Reproduce: ------------------- 1. Create a image file of 2T from oVirt Manager UI 2. Delete the same image file after its created successfully Actual results: --------------- Fuse mount crashed Expected results: ----------------- All should work fine and no fuse mount crashes --- Additional comment from SATHEESARAN on 2019-04-01 08:33:14 UTC --- frame : type(0) op(0) frame : type(0) op(0) patchset: git://git.gluster.org/glusterfs.git signal received: 11 time of crash: 2019-04-01 07:57:53 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 3.12.2 /lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x9d)[0x7fc72c186b9d] /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7fc72c191114] /lib64/libc.so.6(+0x36280)[0x7fc72a7c2280] /usr/lib64/glusterfs/3.12.2/xlator/features/shard.so(+0x9627)[0x7fc71f8ba627] /usr/lib64/glusterfs/3.12.2/xlator/features/shard.so(+0x9ef1)[0x7fc71f8baef1] /usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x3ae9c)[0x7fc71fb15e9c] /usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0x9e8c)[0x7fc71fd88e8c] /usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0xb79b)[0x7fc71fd8a79b] /usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0xc226)[0x7fc71fd8b226] /usr/lib64/glusterfs/3.12.2/xlator/protocol/client.so(+0x17cbc)[0x7fc72413fcbc] /lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7fc72bf2ca00] /lib64/libgfrpc.so.0(rpc_clnt_notify+0x26b)[0x7fc72bf2cd6b] /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fc72bf28ae3] /usr/lib64/glusterfs/3.12.2/rpc-transport/socket.so(+0x7586)[0x7fc727043586] /usr/lib64/glusterfs/3.12.2/rpc-transport/socket.so(+0x9bca)[0x7fc727045bca] /lib64/libglusterfs.so.0(+0x8a870)[0x7fc72c1e5870] /lib64/libpthread.so.0(+0x7dd5)[0x7fc72afc2dd5] /lib64/libc.so.6(clone+0x6d)[0x7fc72a889ead] --- Additional comment from SATHEESARAN on 2019-04-01 08:37:56 UTC --- 1. RHHI-V Information ---------------------- RHV 4.3.3 RHGS 3.4.4 2. 
Cluster Information ----------------------- [root at rhsqa-grafton11 ~]# gluster pe s Number of Peers: 2 Hostname: rhsqa-grafton10.lab.eng.blr.redhat.com Uuid: 46807597-245c-4596-9be3-f7f127aa4aa2 State: Peer in Cluster (Connected) Other names: 10.70.45.32 Hostname: rhsqa-grafton12.lab.eng.blr.redhat.com Uuid: 8a3bc1a5-07c1-4e1c-aa37-75ab15f29877 State: Peer in Cluster (Connected) Other names: 10.70.45.34 3. Volume information ----------------------- Affected volume: data [root at rhsqa-grafton11 ~]# gluster volume info data Volume Name: data Type: Replicate Volume ID: 9d5a9d10-f192-49ed-a6f0-c912224869e8 Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: rhsqa-grafton10.lab.eng.blr.redhat.com:/gluster_bricks/data/data Brick2: rhsqa-grafton11.lab.eng.blr.redhat.com:/gluster_bricks/data/data Brick3: rhsqa-grafton12.lab.eng.blr.redhat.com:/gluster_bricks/data/data (arbiter) Options Reconfigured: cluster.granular-entry-heal: enable performance.strict-o-direct: on network.ping-timeout: 30 storage.owner-gid: 36 storage.owner-uid: 36 server.event-threads: 4 client.event-threads: 4 cluster.choose-local: off user.cifs: off features.shard: on cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 8 cluster.locking-scheme: granular cluster.data-self-heal-algorithm: full cluster.server-quorum-type: server cluster.quorum-type: auto cluster.eager-lock: enable network.remote-dio: off performance.low-prio-threads: 32 performance.io-cache: off performance.read-ahead: off performance.quick-read: off transport.address-family: inet nfs.disable: on performance.client-io-threads: on [root at rhsqa-grafton11 ~]# gluster volume status data Status of volume: data Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick rhsqa-grafton10.lab.eng.blr.redhat.co m:/gluster_bricks/data/data 49154 0 Y 23403 Brick rhsqa-grafton11.lab.eng.blr.redhat.co m:/gluster_bricks/data/data 49154 0 Y 23285 Brick rhsqa-grafton12.lab.eng.blr.redhat.co m:/gluster_bricks/data/data 49154 0 Y 23296 Self-heal Daemon on localhost N/A N/A Y 16195 Self-heal Daemon on rhsqa-grafton12.lab.eng .blr.redhat.com N/A N/A Y 52917 Self-heal Daemon on rhsqa-grafton10.lab.eng .blr.redhat.com N/A N/A Y 43829 Task Status of Volume data ------------------------------------------------------------------------------ There are no active volume tasks Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694595 [Bug 1694595] gluster fuse mount crashed, when deleting 2T image file from RHV Manager UI https://bugzilla.redhat.com/show_bug.cgi?id=1694604 [Bug 1694604] gluster fuse mount crashed, when deleting 2T image file from RHV Manager UI -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 08:29:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:29:38 +0000 Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696136 Krutika Dhananjay changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |kdhananj at redhat.com -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 08:35:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:35:59 +0000 Subject: [Bugs] [Bug 1696077] Add pause and resume test case for geo-rep In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696077 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22498 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 08:36:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:36:00 +0000 Subject: [Bugs] [Bug 1696077] Add pause and resume test case for geo-rep In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696077 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22498 (tests/geo-rep: Add pause and resume test case for geo-rep) posted (#1) for review on master by Shwetha K Acharya -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 08:42:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:42:09 +0000 Subject: [Bugs] [Bug 1696147] New: Multiple shd processes are running on brick_mux environmet Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Bug ID: 1696147 Summary: Multiple shd processes are running on brick_mux environmet Product: GlusterFS Version: 5 Hardware: x86_64 Status: NEW Component: glusterd Severity: high Priority: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, pasik at iki.fi Depends On: 1683880 Blocks: 1672818 (glusterfs-6.0), 1684404 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1683880 +++ Description of problem: Multiple shd processes are running while created 100 volumes in brick_mux environment Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Create a 1x3 volume 2. Enable brick_mux 3.Run below command n1= n2= n3= for i in {1..10};do for h in {1..20};do gluster v create vol-$i-$h rep 3 $n1:/home/dist/brick$h/vol-$i-$h $n2:/home/dist/brick$h/vol-$i-$h $n3:/home/dist/brick$h/vol-$i-$h force gluster v start vol-$i-$h sleep 1 done done for k in $(gluster v list|grep -v heketi);do gluster v stop $k --mode=script;sleep 2;gluster v delete $k --mode=script;sleep 2;done Actual results: Multiple shd processes are running and consuming system resources Expected results: Only one shd process should be run Additional info: --- Additional comment from Mohit Agrawal on 2019-03-01 08:23:03 UTC --- Upstream patch is posted to resolve the same https://review.gluster.org/#/c/glusterfs/+/22290/ --- Additional comment from Atin Mukherjee on 2019-03-06 15:30:41 UTC --- (In reply to Mohit Agrawal from comment #1) > Upstream patch is posted to resolve the same > https://review.gluster.org/#/c/glusterfs/+/22290/ this is an upstream bug only :-) Once the mainline patch is merged and we backport it to release-6 branch, the bug status will be corrected. 
--- Additional comment from Worker Ant on 2019-03-12 11:21:18 UTC --- REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) posted (#2) for review on release-6 by MOHIT AGRAWAL --- Additional comment from Worker Ant on 2019-03-12 20:53:28 UTC --- REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) merged (#3) on release-6 by Shyamsundar Ranganathan --- Additional comment from Shyamsundar on 2019-03-25 16:33:26 UTC --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1683880 [Bug 1683880] Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1684404 [Bug 1684404] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 08:42:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:42:09 +0000 Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683880 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1696147 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 [Bug 1696147] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 08:42:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:42:09 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1696147 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 [Bug 1696147] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
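For readability, the reproduction loop quoted in Bug 1696147 above, reflowed into runnable shell; n1, n2 and n3 are the three peer hostnames and are left unset exactly as in the original report:

    n1=    # first peer
    n2=    # second peer
    n3=    # third peer
    for i in {1..10}; do
      for h in {1..20}; do
        gluster v create vol-$i-$h rep 3 \
            $n1:/home/dist/brick$h/vol-$i-$h \
            $n2:/home/dist/brick$h/vol-$i-$h \
            $n3:/home/dist/brick$h/vol-$i-$h force
        gluster v start vol-$i-$h
        sleep 1
      done
    done

    for k in $(gluster v list | grep -v heketi); do
      gluster v stop $k --mode=script; sleep 2
      gluster v delete $k --mode=script; sleep 2
    done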
From bugzilla at redhat.com Thu Apr 4 08:42:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:42:09 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1696147 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 [Bug 1696147] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 08:42:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:42:27 +0000 Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 08:44:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:44:39 +0000 Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22499 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 08:44:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:44:40 +0000 Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22499 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) posted (#1) for review on release-5 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 08:48:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 08:48:42 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 joao.bauto at neuro.fchampalimaud.org changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joao.bauto at neuro.fchampalim | |aud.org --- Comment #9 from joao.bauto at neuro.fchampalimaud.org --- So I think I'm hitting this bug also. I have an 8 brick distributed volume where Windows and Linux clients mount the volume via samba and headless compute servers using gluster native fuse. With parallel-readdir on, if a Windows client creates a new folder, the folder is indeed created but invisible to the Windows client. 
Accessing the same samba share in a Linux client, the folder is again visible and with normal behaviour. The same folder is also visible when mounting via gluster native fuse. The Windows client can list existing directories and rename them while, for files, everything seems to be working fine. Gluster servers: CentOS 7.5 with Gluster 5.3 and Samba 4.8.3-4.el7.0.1 from @fasttrack Clients tested: Windows 10, Ubuntu 18.10, CentOS 7.5 Volume Name: tank Type: Distribute Volume ID: 9582685f-07fa-41fd-b9fc-ebab3a6989cf Status: Started Snapshot Count: 0 Number of Bricks: 8 Transport-type: tcp Bricks: Brick1: swp-gluster-01:/tank/volume1/brick Brick2: swp-gluster-02:/tank/volume1/brick Brick3: swp-gluster-03:/tank/volume1/brick Brick4: swp-gluster-04:/tank/volume1/brick Brick5: swp-gluster-01:/tank/volume2/brick Brick6: swp-gluster-02:/tank/volume2/brick Brick7: swp-gluster-03:/tank/volume2/brick Brick8: swp-gluster-04:/tank/volume2/brick Options Reconfigured: performance.parallel-readdir: on performance.readdir-ahead: on performance.cache-invalidation: on performance.md-cache-timeout: 600 storage.batch-fsync-delay-usec: 0 performance.write-behind-window-size: 32MB performance.stat-prefetch: on performance.read-ahead: on performance.read-ahead-page-count: 16 performance.rda-request-size: 131072 performance.quick-read: on performance.open-behind: on performance.nl-cache-timeout: 600 performance.nl-cache: on performance.io-thread-count: 64 performance.io-cache: off performance.flush-behind: on performance.client-io-threads: off performance.write-behind: off performance.cache-samba-metadata: on network.inode-lru-limit: 0 features.cache-invalidation-timeout: 600 features.cache-invalidation: on cluster.readdir-optimize: on cluster.lookup-optimize: on client.event-threads: 4 server.event-threads: 16 features.quota-deem-statfs: on nfs.disable: on features.quota: on features.inode-quota: on cluster.enable-shared-storage: disable Cheers -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 11:20:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 11:20:31 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-04 11:20:31 --- Comment #19 from Worker Ant --- REVIEW: https://review.gluster.org/22476 (cluster/dht: Fix rename journal in changelog) merged (#1) on release-4.1 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 16:28:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 16:28:37 +0000 Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696136 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22507 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. 
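The report in Bug 1670382 above ties the invisible-directory behaviour to performance.parallel-readdir being enabled on the volume. A quick way to check that correlation on the affected volume (volume name taken from the report; this is a diagnostic step, not a statement about the root cause):

    gluster volume set tank performance.parallel-readdir off
    # recreate a directory from a Windows/SMB client and check whether it is listed,
    # then re-enable the option if the volume depends on it:
    gluster volume set tank performance.parallel-readdir on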
From bugzilla at redhat.com Thu Apr 4 16:28:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 16:28:38 +0000 Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696136 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22507 (features/shard: Fix crash during background shard deletion in a specific case) posted (#1) for review on master by Krutika Dhananjay -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 19:44:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 19:44:23 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-04 19:44:23 --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/21585 (libglusterfs: define macros needed for cloudsync) merged (#9) on master by Vijay Bellur -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 4 21:06:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 21:06:32 +0000 Subject: [Bugs] [Bug 1689799] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22372 (cluster/ec: Fix handling of heal info cases without locks) merged (#5) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 2 20:26:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 02 Apr 2019 20:26:29 +0000 Subject: [Bugs] [Bug 1695327] regression test fails with brick mux enabled. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695327 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-04 21:10:59 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22481 (tests/bitrot: enable self-heal daemon before accessing the files) merged (#2) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 4 22:05:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 22:05:58 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22509 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 4 22:05:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 04 Apr 2019 22:05:59 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #612 from Worker Ant --- REVIEW: https://review.gluster.org/22509 (ec: increase line coverage of ec) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 03:46:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:46:28 +0000 Subject: [Bugs] [Bug 1696512] New: glusterfs build is failing on rhel-6 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696512 Bug ID: 1696512 Summary: glusterfs build is failing on rhel-6 Product: GlusterFS Version: mainline Status: NEW Component: build Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: glusterfs build is failing on RHEL 6. Version-Release number of selected component (if applicable): How reproducible: Run make for glusterfs on RHEL-6 make us throwing below error .libs/glusterd_la-glusterd-utils.o: In function `glusterd_get_volopt_content': /root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:13333: undefined reference to `dlclose' .libs/glusterd_la-glusterd-utils.o: In function `glusterd_get_value_for_vme_entry': /root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:12890: undefined reference to `dlclose' .libs/glusterd_la-glusterd-volgen.o: In function `_gd_get_option_type': /root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-volgen.c:6902: undefined reference to `dlclose' .libs/glusterd_la-glusterd-quota.o: In function `_glusterd_validate_quota_opts': /root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-quota.c:1947: undefined reference to `dlclose' collect2: ld returned 1 exit status Steps to Reproduce: 1. 2. 3. Actual results: glusterfs build is failing Expected results: the build should not fail Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 03:46:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:46:45 +0000 Subject: [Bugs] [Bug 1696512] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696512 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 03:52:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:52:09 +0000 Subject: [Bugs] [Bug 1696512] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696512 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22510 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Apr 5 03:52:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:52:10 +0000 Subject: [Bugs] [Bug 1696512] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696512 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22510 (build: glusterfs build is failing on RHEL-6) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 03:56:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:56:38 +0000 Subject: [Bugs] [Bug 1696513] New: Multiple shd processes are running on brick_mux environmet Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696513 Bug ID: 1696513 Summary: Multiple shd processes are running on brick_mux environmet Product: GlusterFS Version: 4.1 Hardware: x86_64 Status: NEW Component: glusterd Severity: high Priority: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, pasik at iki.fi Depends On: 1683880 Blocks: 1696147, 1672818 (glusterfs-6.0), 1684404 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1683880 +++ Description of problem: Multiple shd processes are running while created 100 volumes in brick_mux environment Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Create a 1x3 volume 2. Enable brick_mux 3.Run below command n1= n2= n3= for i in {1..10};do for h in {1..20};do gluster v create vol-$i-$h rep 3 $n1:/home/dist/brick$h/vol-$i-$h $n2:/home/dist/brick$h/vol-$i-$h $n3:/home/dist/brick$h/vol-$i-$h force gluster v start vol-$i-$h sleep 1 done done for k in $(gluster v list|grep -v heketi);do gluster v stop $k --mode=script;sleep 2;gluster v delete $k --mode=script;sleep 2;done Actual results: Multiple shd processes are running and consuming system resources Expected results: Only one shd process should be run Additional info: --- Additional comment from Mohit Agrawal on 2019-03-01 08:23:03 UTC --- Upstream patch is posted to resolve the same https://review.gluster.org/#/c/glusterfs/+/22290/ --- Additional comment from Atin Mukherjee on 2019-03-06 15:30:41 UTC --- (In reply to Mohit Agrawal from comment #1) > Upstream patch is posted to resolve the same > https://review.gluster.org/#/c/glusterfs/+/22290/ this is an upstream bug only :-) Once the mainline patch is merged and we backport it to release-6 branch, the bug status will be corrected. --- Additional comment from Worker Ant on 2019-03-12 11:21:18 UTC --- REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) posted (#2) for review on release-6 by MOHIT AGRAWAL --- Additional comment from Worker Ant on 2019-03-12 20:53:28 UTC --- REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) merged (#3) on release-6 by Shyamsundar Ranganathan --- Additional comment from Shyamsundar on 2019-03-25 16:33:26 UTC --- This bug is getting closed because a release has been made available that should address the reported issue. 
In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1683880 [Bug 1683880] Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1684404 [Bug 1684404] Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1696147 [Bug 1696147] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 03:56:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:56:38 +0000 Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683880 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1696513 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696513 [Bug 1696513] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 03:56:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:56:38 +0000 Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1696513 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696513 [Bug 1696513] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 03:56:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:56:38 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1696513 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696513 [Bug 1696513] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 5 03:56:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:56:38 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1696513 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696513 [Bug 1696513] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 03:56:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 03:56:57 +0000 Subject: [Bugs] [Bug 1696513] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696513 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 04:21:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 04:21:25 +0000 Subject: [Bugs] [Bug 1696513] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696513 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22511 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 04:21:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 04:21:26 +0000 Subject: [Bugs] [Bug 1696513] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696513 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22511 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) posted (#2) for review on release-4.1 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 04:51:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 04:51:44 +0000 Subject: [Bugs] [Bug 1696518] New: builder203 does not have a valid hostname set Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696518 Bug ID: 1696518 Summary: builder203 does not have a valid hostname set Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: dkhandel at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: After reinstallation builder203 on AWS does not have a valid hostname set and hence it's network service might behave weird. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 5 05:55:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 05:55:21 +0000 Subject: [Bugs] [Bug 1696518] builder203 does not have a valid hostname set In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696518 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #1 from M. Scherer --- Can you be a bit more specific on: - what network do behave weirdly ? I also did set the hostname (using hostnamectl), so maybe this requires a reboot, and/or a different hostname. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 06:05:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 06:05:06 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22512 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 06:05:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 06:05:07 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #613 from Worker Ant --- REVIEW: https://review.gluster.org/22512 ([WIP]glusterd-volgen.c: skip fetching skip-CLIOT in a loop.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 07:02:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 07:02:14 +0000 Subject: [Bugs] [Bug 1696518] builder203 does not have a valid hostname set In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696518 --- Comment #2 from M. Scherer --- So, answering to myself, rpc.statd didn't start after reboot, and the hostname was ip-172-31-38-158.us-east-2.compute.internal. After "hostnamectl set-hostname builder203.int.aws.gluster.org", that's better; Guess we need to automate that (as I used builder203.aws.gluster.org and this was wrong). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 08:35:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 08:35:03 +0000 Subject: [Bugs] [Bug 1696599] New: Fops hang when inodelk fails on the first fop Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696599 Bug ID: 1696599 Summary: Fops hang when inodelk fails on the first fop Product: GlusterFS Version: mainline Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Steps: glusterd gluster peer probe localhost.localdomain peer probe: success. 
Probe on localhost not needed gluster --mode=script --wignore volume create r3 replica 3 localhost.localdomain:/home/gfs/r3_0 localhost.localdomain:/home/gfs/r3_1 localhost.localdomain:/home/gfs/r3_2 volume create: r3: success: please start the volume to access data gluster --mode=script volume start r3 volume start: r3: success mkdir: cannot create directory '/mnt/r3': File exists mount -t glusterfs localhost.localdomain:/r3 /mnt/r3 First terminal: # cd /mnt/r3 # touch abc Attach the mount process in gdb and put a break point on function afr_lock() From second terminal: # exec 200>abc # echo abc >&200 # When the break point is hit, on third terminal execute "gluster volume stop r3" # quit gdb # execute "gluster volume start r3 force" # On the first terminal execute "exec abc >&200" again and this command hangs. Version-Release number of selected component (if applicable): How reproducible: Always Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 08:35:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 08:35:20 +0000 Subject: [Bugs] [Bug 1696599] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696599 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1688395 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 08:37:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 08:37:53 +0000 Subject: [Bugs] [Bug 1696599] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696599 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22515 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 08:37:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 08:37:54 +0000 Subject: [Bugs] [Bug 1696599] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696599 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
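A minimal sketch of the gdb step in the bug 1696599 reproducer above, assuming the FUSE client for /mnt/r3 is the only glusterfs process whose command line mentions r3 (the pgrep pattern is an assumption, not part of the original report):

    # attach to the FUSE mount process for /mnt/r3
    gdb -p "$(pgrep -f 'glusterfs.*r3' | head -n 1)"
    # then, at the (gdb) prompt:
    #   break afr_lock
    #   continue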
From bugzilla at redhat.com Fri Apr 5 09:06:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 09:06:48 +0000 Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696136 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22517 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 09:06:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 09:06:49 +0000 Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696136 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22517 (features/shard: Fix extra unref when inode object is lru'd out and added back) posted (#1) for review on master by Krutika Dhananjay -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 09:09:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 09:09:04 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 anuradha.stalur at gmail.com changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 09:17:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 09:17:45 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Sandro Bonazzola changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1677319 | |(Gluster_5_Affecting_oVirt_ | |4.3) Dependent Products| |Red Hat Enterprise | |Virtualization Manager Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1677319 [Bug 1677319] [Tracker] Gluster 5 issues affecting oVirt 4.3 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 09:49:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 09:49:30 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #614 from Worker Ant --- REVIEW: https://review.gluster.org/22496 (cluster/afr: Invalidate inode on change of split-brain-choice) merged (#3) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 5 10:28:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 10:28:37 +0000 Subject: [Bugs] [Bug 1696633] New: GlusterFs v4.1.5 Tests from /tests/bugs/ module failing on Intel Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696633 Bug ID: 1696633 Summary: GlusterFs v4.1.5 Tests from /tests/bugs/ module failing on Intel Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: tests Severity: high Assignee: bugs at gluster.org Reporter: chandranaik2 at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Some of the tests from /tests/bugs module are failing on x86 on "SUSE Linux Enterprise Server 12 SP3" in GlusterFs v4.1.5 Failing Tests from /tests/bugs/ module are as mentioned below: glusterfs-server/bug-887145.t nfs/bug-974972.t rpc/bug-847624.t rpc/bug-954057.t shard/bug-1251824.t shard/bug-1468483.t shard/zero-flag.t How reproducible: Run the tests with ./run-tests.sh or run individual tests with ./run-tests.sh prove -vf Steps to Reproduce: 1. Build GlusterFs v4.1.5 2. Run the tests as below ./run-tests.sh prove -vf Actual results: Tests fail Expected results: Tests should pass Additional info: Failure Details: glusterfs-server/bug-887145.t - Subtest 21-24, fails with touch: cannot touch '/mnt/glusterfs/0/dir/file': Permission denied. Whereas subtest 26 fails with error : rmdir: failed to remove '/mnt/nfs/0/dir/*': No such file or directory nfs/bug-974972.t Subtest 14 fails with rm: cannot remove '/var/run/gluster/': Is a directory rpc/bug-847624.t Subtest 9 which does "dbench -t 10 10" fails. rpc/bug-954057.t Subtest 16 fails to create the directory '/mnt/glusterfs/0/nobody/other': Permission denied shard/bug-1251824.t, shard/bug-1468483.t Subtest 14-26, 40-42 fails for user 'test_user:test_user' in the test. shard/zero-flag.t Sub tests fail as below: TEST 17 (line 40): 2097152 echo not ok 17 Got "" instead of "2097152", LINENUM:40 Please let us know if these are known failures on Intel. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 13:39:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:39:17 +0000 Subject: [Bugs] [Bug 1670303] api: bad GFAPI_4.1.6 block In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670303 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-4.1.8 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.8, please open a new bug report. glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug.
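For bug 1696633 above, a minimal sketch of running the regression tests from the root of a built glusterfs source tree; the single test path shown is just one of the failing tests listed in the report, and run-tests.sh options may differ between branches:

    # full regression suite
    ./run-tests.sh
    # one test via the TAP harness, as in the report
    prove -vf tests/bugs/shard/zero-flag.t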
From bugzilla at redhat.com Fri Apr 5 13:39:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:39:17 +0000 Subject: [Bugs] [Bug 1672249] quorum count value not updated in nfs-server vol file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672249 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-4.1.8 Resolution|--- |CURRENTRELEASE Last Closed|2019-02-18 14:41:34 |2019-04-05 13:39:17 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.8, please open a new bug report. glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 13:39:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:39:17 +0000 Subject: [Bugs] [Bug 1673265] Fix timeouts so the tests pass on AWS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673265 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-4.1.8 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.8, please open a new bug report. glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 13:39:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:39:17 +0000 Subject: [Bugs] [Bug 1687746] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687746 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-4.1.8 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #2 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.8, please open a new bug report. glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 13:39:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:39:17 +0000 Subject: [Bugs] [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-4.1.8 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.8, please open a new bug report. glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 13:39:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:39:17 +0000 Subject: [Bugs] [Bug 1693057] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693057 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-4.1.8 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.8, please open a new bug report. glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 5 13:39:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:39:17 +0000 Subject: [Bugs] [Bug 1693201] core: move "dict is NULL" logs to DEBUG log level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693201 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-4.1.8 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.8, please open a new bug report. glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 13:41:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:41:00 +0000 Subject: [Bugs] [Bug 1667099] GlusterFS 4.1.8 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667099 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.8, please open a new bug report. glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 13:42:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:42:43 +0000 Subject: [Bugs] [Bug 1693300] GlusterFS 5.6 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Deadline| |2019-05-10 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 13:43:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:43:33 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Deadline| |2019-04-10 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 5 13:50:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 05 Apr 2019 13:50:20 +0000 Subject: [Bugs] [Bug 1696721] New: geo-replication failing after upgrade from 5.5 to 6.0 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696721 Bug ID: 1696721 Summary: geo-replication failing after upgrade from 5.5 to 6.0 Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Severity: high Assignee: bugs at gluster.org Reporter: chad.cropper at genusplc.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: After upgrading Gluster from 5.5 to 6.0, geo-replication stays in initialized status permanently. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. Stop all gluster vols/services 2. Upgrade RPMs form 5.5. to 6.0 3. Start all gluster vols/services 4. 
geo-repl will not start Actual results: Not moving past initializing state Expected results: geo-replication moves to active/changelog crawl Additional info: Last part of log [2019-04-05 13:38:07.052740] I [socket.c:811:__socket_shutdown] 0-geovol-client-1: intentional socket shutdown(13) [2019-04-05 13:38:07.053331] W [dict.c:986:str_to_data] (-->/usr/lib64/glusterfs/6.0/xlator/protocol/client.so(+0x40f3a) [0x7f42fb706f3a] -->/lib64/libglusterfs.so.0(dict_set_str+0x16) [0x7f4309ed8bb6] -->/lib64/libglusterfs.so.0(str_to_data+0x71) [0x7f4309ed54d1] ) 0-dict: value is NULL [Invalid argument] [2019-04-05 13:38:07.053385] I [MSGID: 114006] [client-handshake.c:1238:client_setvolume] 0-geovol-client-0: failed to set process-name in handshake msg [2019-04-05 13:38:07.053463] W [dict.c:986:str_to_data] (-->/usr/lib64/glusterfs/6.0/xlator/protocol/client.so(+0x40f3a) [0x7f42fb706f3a] -->/lib64/libglusterfs.so.0(dict_set_str+0x16) [0x7f4309ed8bb6] -->/lib64/libglusterfs.so.0(str_to_data+0x71) [0x7f4309ed54d1] ) 0-dict: value is NULL [Invalid argument] [2019-04-05 13:38:07.053493] I [MSGID: 114006] [client-handshake.c:1238:client_setvolume] 0-geovol-client-1: failed to set process-name in handshake msg [2019-04-05 13:38:07.054023] I [MSGID: 114046] [client-handshake.c:1107:client_setvolume_cbk] 0-geovol-client-0: Connected to geovol-client-0, attached to remote volume '/glusterfs/geovol_b1/brick'. [2019-04-05 13:38:07.054314] I [MSGID: 114046] [client-handshake.c:1107:client_setvolume_cbk] 0-geovol-client-1: Connected to geovol-client-1, attached to remote volume '/glusterfs/geovol_b1/brick'. [2019-04-05 13:38:07.056436] I [fuse-bridge.c:5142:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22 [2019-04-05 13:38:07.056467] I [fuse-bridge.c:5753:fuse_graph_sync] 0-fuse: switched to graph 0 [2019-04-05 13:39:32.048849] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 0-geovol-client-1: server 192.168.xxx.xxx:49153 has not responded in the last 42 seconds, disconnecting. [2019-04-05 13:39:32.048952] I [socket.c:811:__socket_shutdown] 0-geovol-client-1: intentional socket shutdown(10) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 6 01:41:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 06 Apr 2019 01:41:34 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 --- Comment #13 from Worker Ant --- REVIEW: https://review.gluster.org/22407 (cluster/dht: refactor dht lookup functions) merged (#10) on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Apr 6 06:27:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 06 Apr 2019 06:27:03 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ravishankar at redhat.com Depends On| |1693155, 1693992 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. 
https://bugzilla.redhat.com/show_bug.cgi?id=1693992 [Bug 1693992] Thin-arbiter minor fixes -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 6 06:27:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 06 Apr 2019 06:27:03 +0000 Subject: [Bugs] [Bug 1693992] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693992 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692394 (glusterfs-6.1) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Apr 6 06:27:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 06 Apr 2019 06:27:03 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692394 (glusterfs-6.1) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 6 07:22:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 06 Apr 2019 07:22:25 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |pgurusid at redhat.com Depends On| |1692101 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 [Bug 1692101] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 6 07:22:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 06 Apr 2019 07:22:25 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692394 (glusterfs-6.1) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 7 05:18:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 07 Apr 2019 05:18:24 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |moagrawa at redhat.com Depends On| |1696046 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696046 [Bug 1696046] Log level changes do not take effect until the process is restarted -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 7 05:18:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 07 Apr 2019 05:18:24 +0000 Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696046 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692394 (glusterfs-6.1) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Apr 7 05:23:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 07 Apr 2019 05:23:32 +0000 Subject: [Bugs] [Bug 1694820] Issue in heavy rename workload In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22519 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Apr 7 05:23:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 07 Apr 2019 05:23:33 +0000 Subject: [Bugs] [Bug 1694820] Issue in heavy rename workload In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22519 (geo-rep: Fix rename with existing destination with same gfid) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 02:42:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 02:42:22 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1672818 (glusterfs-6.0) |1692394 (glusterfs-6.1) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 02:42:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 02:42:22 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1679904 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 8 02:42:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 02:42:22 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1679904 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 05:14:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 05:14:37 +0000 Subject: [Bugs] [Bug 1696075] Client lookup is unable to heal missing directory GFID entry In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696075 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Status|NEW |ASSIGNED CC| |ravishankar at redhat.com Flags| |needinfo?(anepatel at redhat.c | |om) --- Comment #1 from Ravishankar N --- Hi Anees, are you sure this test is passing in downstream? I tried the "Steps to Reproduce" in RHGS-3.4.0 (glusterfs v3.12.2-18.2) and observed the same behaviour of gfids not healing. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 07:56:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 07:56:40 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1602 from Worker Ant --- REVIEW: https://review.gluster.org/22452 (GlusterD: Resolves the issue of referencing memory after it has been freed) merged (#12) on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 09:53:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 09:53:48 +0000 Subject: [Bugs] [Bug 1696075] Client lookup is unable to heal missing directory GFID entry In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696075 Anees Patel changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(anepatel at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 10:04:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 10:04:15 +0000 Subject: [Bugs] [Bug 1697293] New: DHT: print hash and layout values in hexadecimal format in the logs Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697293 Bug ID: 1697293 Summary: DHT: print hash and layout values in hexadecimal format in the logs Product: GlusterFS Version: mainline Status: NEW Component: distribute Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: DHT currently prints the values of hashes and the layout ranges in decimal format in the logs. 
It is easier to compare them to the on disk layouts if they are printed in hexadecimal format. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 10:29:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 10:29:41 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22525 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 10:29:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 10:29:42 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1603 from Worker Ant --- REVIEW: https://review.gluster.org/22525 (GlusterD: Avoiding explicit null pointer dereference.) posted (#1) for review on master by Rishubh Jain -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 10:54:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 10:54:48 +0000 Subject: [Bugs] [Bug 1697293] DHT: print hash and layout values in hexadecimal format in the logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697293 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Assignee|bugs at gluster.org |nbalacha at redhat.com Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 10:55:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 10:55:50 +0000 Subject: [Bugs] [Bug 1697316] New: Getting SEEK-2 and SEEK7 errors with [Invalid argument] in the bricks' logs Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697316 Bug ID: 1697316 Summary: Getting SEEK-2 and SEEK7 errors with [Invalid argument] in the bricks' logs Product: GlusterFS Version: mainline Status: ASSIGNED Component: core Assignee: ndevos at redhat.com Reporter: ndevos at redhat.com CC: bugs at gluster.org Blocks: 1696903 Target Milestone: --- Classification: Community Description of problem: seeing thousands of SEEK* errors in two bricks' logs on one particular node, gluster02.example.com. The following shows the start of the errors: [server-rpc-fops.c:2091:server_seek_cbk] 0-vol01-server: 4947: SEEK-2 (53920aee-062c-4598-aa50-2b4d7821b204), client: worker.example.com-7808-2019/02/08-18:04:57:903430-vol01-client-0-0-0, error-xlator: vol01-posix [Invalid argument] Version-Release number of selected component (if applicable): glusterfs server 3.12.2 on el6 How reproducible: Run an application on an el7 gluster client, against a glusterfs server on el6. The application should call lseek() with whence=SEEK_HOLE/SEEK_DATA. 
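A minimal sketch of the kind of client call described above, assuming a file on the FUSE mount (the path is a placeholder); SEEK_DATA and SEEK_HOLE require _GNU_SOURCE with glibc, and each call is forwarded to the brick as a SEEK fop:

    /* seek_probe.c: issue SEEK_DATA/SEEK_HOLE against a file on the mount */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/vol01/somefile", O_RDONLY); /* placeholder path */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        off_t data = lseek(fd, 0, SEEK_DATA); /* offset of first data region */
        off_t hole = lseek(fd, 0, SEEK_HOLE); /* offset of first hole */
        printf("data at %lld, hole at %lld\n", (long long)data, (long long)hole);
        close(fd);
        return 0;
    }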
Actual results: glusterfs server on el6 fills the brick logs, disk can run out of space quickly Expected results: no logging if this is correct behaviour, limit logging in case it is an error. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 10:59:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 10:59:53 +0000 Subject: [Bugs] [Bug 1697316] Getting SEEK-2 and SEEK7 errors with [Invalid argument] in the bricks' logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697316 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22526 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 10:59:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 10:59:54 +0000 Subject: [Bugs] [Bug 1697316] Getting SEEK-2 and SEEK7 errors with [Invalid argument] in the bricks' logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697316 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22526 (core: only log seek errors if SEEK_HOLE/SEEK_DATA is available) posted (#1) for review on master by Niels de Vos -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 13:08:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 13:08:23 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #615 from Worker Ant --- REVIEW: https://review.gluster.org/22484 (glusterd: remove redundant glusterd_check_volume_exists () calls) merged (#9) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 13:27:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 13:27:06 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22528 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 13:27:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 13:27:08 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #616 from Worker Ant --- REVIEW: https://review.gluster.org/22528 (glusterd: remove glusterd_check_volume_exists() call) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
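Returning to bug 1697293 above (printing DHT hash and layout values in hexadecimal), a minimal illustration of the formatting difference, using plain printf rather than gluster's logging macros and made-up values:

    /* layout_fmt.c: decimal vs. hexadecimal rendering of a layout range */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t start = 0, stop = 1431655765; /* example range, 0x55555555 */
        printf("decimal: start=%" PRIu32 " stop=%" PRIu32 "\n", start, stop);
        printf("hex:     start=0x%08" PRIx32 " stop=0x%08" PRIx32 "\n", start, stop);
        return 0;
    }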
From bugzilla at redhat.com Mon Apr 8 13:47:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 13:47:42 +0000 Subject: [Bugs] [Bug 1694976] On Fedora 29 GlusterFS 4.1 repo has bad/missing rpm signs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694976 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |kkeithle at redhat.com Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-08 13:47:42 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 13:56:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 13:56:28 +0000 Subject: [Bugs] [Bug 1697486] New: bug-1650403.t && bug-858215.t are throwing error "No such file" at the time of access glustershd pidfile Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697486 Bug ID: 1697486 Summary: bug-1650403.t && bug-858215.t are throwing error "No such file" at the time of access glustershd pidfile Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: bug-1650403.t && bug-858215.t are throwing error "No such file" at the time of access glustershd pidfile Version-Release number of selected component (if applicable): How reproducible: Run test cases Steps to Reproduce: 1. 2. 3. Actual results: .t is throwing an error "No such file" Expected results: .t should not throw any error. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 13:56:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 13:56:44 +0000 Subject: [Bugs] [Bug 1697486] bug-1650403.t && bug-858215.t are throwing error "No such file" at the time of access glustershd pidfile In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697486 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 13:59:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 13:59:40 +0000 Subject: [Bugs] [Bug 1697486] bug-1650403.t && bug-858215.t are throwing error "No such file" at the time of access glustershd pidfile In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697486 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22529 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 8 13:59:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 13:59:41 +0000 Subject: [Bugs] [Bug 1697486] bug-1650403.t && bug-858215.t are throwing error "No such file" at the time of access glustershd pidfile In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697486 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22529 (test: Change glustershd_pid update in .t file) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 14:01:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:01:26 +0000 Subject: [Bugs] [Bug 1694563] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694563 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-08 14:01:26 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22461 (gfapi: Unblock epoll thread for upcall processing) merged (#2) on release-4.1 by Shyamsundar Ranganathan -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 14:01:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:01:27 +0000 Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 Bug 1694562 depends on bug 1694563, which changed state. Bug 1694563 Summary: gfapi: do not block epoll thread for upcall notifications https://bugzilla.redhat.com/show_bug.cgi?id=1694563 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 14:01:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:01:27 +0000 Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Bug 1694561 depends on bug 1694563, which changed state. Bug 1694563 Summary: gfapi: do not block epoll thread for upcall notifications https://bugzilla.redhat.com/show_bug.cgi?id=1694563 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 8 14:02:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:02:56 +0000 Subject: [Bugs] [Bug 1696513] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696513 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-08 14:02:56 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22511 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) merged (#3) on release-4.1 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 14:02:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:02:56 +0000 Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Bug 1696147 depends on bug 1696513, which changed state. Bug 1696513 Summary: Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1696513 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 14:02:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:02:56 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1696513, which changed state. Bug 1696513 Summary: Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1696513 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 14:02:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:02:57 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Bug 1684404 depends on bug 1696513, which changed state. Bug 1696513 Summary: Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1696513 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 8 14:05:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:05:22 +0000 Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-08 14:05:22 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22460 (gfapi: Unblock epoll thread for upcall processing) merged (#1) on release-5 by soumya k -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 14:05:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:05:22 +0000 Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Bug 1694561 depends on bug 1694562, which changed state. Bug 1694562 Summary: gfapi: do not block epoll thread for upcall notifications https://bugzilla.redhat.com/show_bug.cgi?id=1694562 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 14:06:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:06:54 +0000 Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-08 14:06:54 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22486 (cluster/dht: Fix lookup selfheal and rmdir race) merged (#2) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 14:06:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:06:57 +0000 Subject: [Bugs] [Bug 1677260] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1677260 Bug 1677260 depends on bug 1695403, which changed state. Bug 1695403 Summary: rm -rf fails with "Directory not empty" https://bugzilla.redhat.com/show_bug.cgi?id=1695403 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 8 14:15:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:15:01 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-08 14:15:01 --- Comment #27 from Worker Ant --- REVIEW: https://review.gluster.org/22404 (client-rpc: Fix the payload being sent on the wire) merged (#4) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 14:15:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:15:02 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Bug 1692093 depends on bug 1673058, which changed state. Bug 1673058 Summary: Network throughput usage increased x5 https://bugzilla.redhat.com/show_bug.cgi?id=1673058 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 14:15:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:15:04 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Bug 1692101 depends on bug 1673058, which changed state. Bug 1673058 Summary: Network throughput usage increased x5 https://bugzilla.redhat.com/show_bug.cgi?id=1673058 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 3 02:42:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 03 Apr 2019 02:42:29 +0000 Subject: [Bugs] [Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695391 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-08 14:16:07 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22483 (logging: Fix GF_LOG_OCCASSIONALLY API) merged (#2) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 14:16:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:16:07 +0000 Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695390 Bug 1695390 depends on bug 1695391, which changed state. 
Bug 1695391 Summary: GF_LOG_OCCASSIONALLY API doesn't log at first instance https://bugzilla.redhat.com/show_bug.cgi?id=1695391 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 14:16:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:16:50 +0000 Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-08 14:16:50 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22499 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) merged (#2) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 14:16:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:16:51 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1696147, which changed state. Bug 1696147 Summary: Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1696147 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 14:16:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 14:16:51 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Bug 1684404 depends on bug 1696147, which changed state. Bug 1696147 Summary: Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1696147 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 15:00:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 15:00:33 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |spalai at redhat.com Depends On| |1642168 --- Comment #1 from Susant Kumar Palai --- Would like to add the commvault patches for cloudsync and the commvault plugin. https://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:ref-1642168. Added as dependent. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 [Bug 1642168] changes to cloudsync xlator -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 15:00:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 15:00:33 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692394 (glusterfs-6.1) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 8 15:04:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 15:04:45 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 --- Comment #2 from Shyamsundar --- (In reply to Susant Kumar Palai from comment #1) > Would like to add the commvault patches for cloudsync and the commvault > plugin. > https://review.gluster.org/#/q/status:open+project:glusterfs+branch: > master+topic:ref-1642168. > > Added as dependent. These are fat patches, ideally considered enhancements or features. These are usually not a target of a minor release, which tends to help with stabilizing the release contents, than adding to it. Posting this here, as I do not see why these patches would get merged into 6.1 release. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 8 16:28:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 08 Apr 2019 16:28:08 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #617 from Worker Ant --- REVIEW: https://review.gluster.org/22512 (glusterd-volgen.c: skip fetching skip-CLIOT in a loop.) merged (#6) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 05:27:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 05:27:00 +0000 Subject: [Bugs] [Bug 1697764] New: [cluster/ec] : Fix handling of heal info cases without locks Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697764 Bug ID: 1697764 Summary: [cluster/ec] : Fix handling of heal info cases without locks Product: GlusterFS Version: 6 Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: aspandey at redhat.com CC: bugs at gluster.org Depends On: 1689799 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1689799 +++ Description of problem: When we use heal info command it takes lot of time as some cases it takes lock on entries to find out if the entry actualy needs heal or not. There are some cases where we can avoid these locks and can conclude if the entry needs heal or not. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. 
Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-03-18 08:27:59 UTC --- REVIEW: https://review.gluster.org/22372 (cluster/ec: Fix handling of heal info cases without locks) posted (#1) for review on master by Ashish Pandey --- Additional comment from Worker Ant on 2019-04-04 21:06:32 UTC --- REVIEW: https://review.gluster.org/22372 (cluster/ec: Fix handling of heal info cases without locks) merged (#5) on master by Xavi Hernandez Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 [Bug 1689799] [cluster/ec] : Fix handling of heal info cases without locks -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 05:27:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 05:27:00 +0000 Subject: [Bugs] [Bug 1689799] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1697764 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1697764 [Bug 1697764] [cluster/ec] : Fix handling of heal info cases without locks -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 05:27:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 05:27:40 +0000 Subject: [Bugs] [Bug 1689799] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22532 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 05:27:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 05:27:41 +0000 Subject: [Bugs] [Bug 1689799] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22532 (cluster/ec: Fix handling of heal info cases without locks) posted (#1) for review on release-6 by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 05:29:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 05:29:25 +0000 Subject: [Bugs] [Bug 1689799] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 --- Comment #4 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22532 (cluster/ec: Fix handling of heal info cases without locks) posted (#2) for review on release-6 by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
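The heal-info change tracked above (bug 1689799 / 1697764) is about deciding whether an entry needs heal without always taking entry locks. As a hedged illustration of the brick-side state involved (not the exact logic of the patch): disperse bricks carry dirty/version xattrs per file, and a read-only look at them is often enough to tell whether an entry is pending heal. Volume name, brick path and file name below are placeholders.

$ sudo gluster volume heal ecvol info
$ sudo getfattr -d -m . -e hex /bricks/ecvol/brick1/path/to/file
# trusted.ec.dirty   non-zero usually means writes landed while some brick was unavailable
# trusted.ec.version differing across the bricks of the subvolume also points to pending heal

Reading the xattrs this way takes no cluster-wide locks, which is the spirit of the optimization: only ambiguous entries should need the heavier, locked inspection.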
From bugzilla at redhat.com Tue Apr 9 05:29:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 05:29:26 +0000 Subject: [Bugs] [Bug 1689799] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22532 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 05:29:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 05:29:28 +0000 Subject: [Bugs] [Bug 1697764] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697764 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22532 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 07:18:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 07:18:53 +0000 Subject: [Bugs] [Bug 1697812] New: mention a pointer to all the mailing lists available under glusterfs project(https://www.gluster.org/community/) Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697812 Bug ID: 1697812 Summary: mention a pointer to all the mailing lists available under glusterfs project(https://www.gluster.org/community/) Product: GlusterFS Version: 6 Status: NEW Component: website Severity: medium Assignee: bugs at gluster.org Reporter: nchilaka at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: ======================== currently under mailing lists in https://www.gluster.org/community/ only gluster-devel and gluster-users are mentioned. However they are more mailing lists available. For eg; I was stuggling to find automated-testing mailing list as it was not mentioned. Expected: Put a reference/pointer saying that more mailing lists can be subscribed to from here -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 08:26:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 08:26:22 +0000 Subject: [Bugs] [Bug 1697756] Glusterd do not response any request through its 24007 port In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697756 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Target Release|RHGS 3.4.z Batch Update 4 |--- Version|unspecified |mainline Component|glusterd |core CC| |bugs at gluster.org, | |rgowdapp at redhat.com Assignee|amukherj at redhat.com |bugs at gluster.org QA Contact|bmekala at redhat.com | Product|Red Hat Gluster Storage |GlusterFS Flags|rhgs-3.5.0? pm_ack? |needinfo?(rgowdapp at redhat.c |devel_ack? qa_ack? |om) --- Comment #2 from Atin Mukherjee --- Mohit or Raghavendra G will be looking into this. I believe this is the same issue which has been highlighted in the user ML yesterday. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
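For the symptom in bug 1697756 above (glusterd not answering anything on port 24007), a quick first check is whether the daemon is still accepting connections at all or is wedged after accepting them. The hostname below is a placeholder; --remote-host is a standard gluster CLI option.

$ sudo ss -ltnp 'sport = :24007'
$ gluster --remote-host=server1 volume list
$ sudo systemctl status glusterd

If the socket is listening but the remote query hangs, the daemon itself is stuck rather than unreachable.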
From bugzilla at redhat.com Tue Apr 9 08:28:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 08:28:17 +0000 Subject: [Bugs] [Bug 1697866] New: Provide a way to detach a failed node Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697866 Bug ID: 1697866 Summary: Provide a way to detach a failed node Product: GlusterFS Version: mainline Status: NEW Component: glusterd Severity: low Priority: low Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: bmekala at redhat.com, bugs at gluster.org, rhs-bugs at redhat.com, rtalur at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1696334 Target Milestone: --- Classification: Community Description of problem: When a gluster peer node has failed due to hardware issues, it should be possible to detach it. Currently, the peer detach command fails because the peer hosts one or more bricks. If delete of the volume that has that brick is attempted then volume delete fails with "Not all peers are up" error. One way out is to use a replace-brick command and move the brick to some other node. However, it might not be possible to replace-brick sometimes. A trick that worked for us was to use remove-brick to convert the replica 3 volume to replica 2 and then peer detach the node. May be the peer detach command can show the trick in output. Something on the lines: "This peer has one or more bricks. If the peer is lost and is not recoverable then you should use either replace-brick or remove-brick procedure to remove all bricks from the peer and attempt the peer detach again" Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696334 [Bug 1696334] Provide a way to detach a failed node -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 08:28:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 08:28:37 +0000 Subject: [Bugs] [Bug 1697866] Provide a way to detach a failed node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697866 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |srakonde at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 08:39:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 08:39:47 +0000 Subject: [Bugs] [Bug 1697866] Provide a way to detach a failed node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697866 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22534 -- You are receiving this mail because: You are on the CC list for the bug. 
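A minimal sketch of the workaround described in bug 1697866 above, for a peer that is lost and not recoverable: reduce the replica count so the dead peer no longer hosts a brick, then detach it. Volume name, hostnames and brick path are hypothetical, and dropping from replica 3 to replica 2 reduces redundancy, so this is only for a node that is truly gone.

$ sudo gluster volume remove-brick myvol replica 2 node3:/bricks/myvol/brick force
$ sudo gluster peer detach node3
# if the peer still lingers in 'gluster peer status', 'gluster peer detach node3 force' is the last resort

As the report notes, replace-brick onto a healthy node is the alternative when the brick's data placement should be preserved.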
From bugzilla at redhat.com Tue Apr 9 08:39:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 08:39:48 +0000 Subject: [Bugs] [Bug 1697866] Provide a way to detach a failed node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697866 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22534 (glusterd: provide a way to detach failed node) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 9 09:00:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 09:00:15 +0000 Subject: [Bugs] [Bug 1697756] Glusterd do not response any request through its 24007 port In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697756 --- Comment #3 from Hunang Shujun --- I have tested my patch and it seems after thousands of restart, no such error happen again. I want to commit my correction -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 09:00:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 09:00:49 +0000 Subject: [Bugs] [Bug 1697890] New: centos-regression is not giving its vote Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697890 Bug ID: 1697890 Summary: centos-regression is not giving its vote Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: on completion, centos-regression job is not giving its vote at the patches in gerrit. Such patches are: https://review.gluster.org/#/c/glusterfs/+/22528/ https://review.gluster.org/#/c/glusterfs/+/22530/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 09:41:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 09:41:45 +0000 Subject: [Bugs] [Bug 1697756] Glusterd do not response any request through its 24007 port In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697756 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22535 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 09:41:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 09:41:46 +0000 Subject: [Bugs] [Bug 1697756] Glusterd do not response any request through its 24007 port In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697756 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22535 (fix glusterd stuck during restart) posted (#1) for review on master by None -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 9 09:42:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 09:42:08 +0000 Subject: [Bugs] [Bug 1697907] New: ctime feature breaks old client to connect to new server Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697907 Bug ID: 1697907 Summary: ctime feature breaks old client to connect to new server Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Considering ctime is a client side feature, we can't blindly load ctime xlator into the client graph if it's explicitly turned off, that'd result into backward compatibility issue where an old client can't mount a volume configured on a server which is having ctime feature. Since ctime feature is enabled by default, any old client would still fail to connect to a new server until and unless this feature is turned off explicitly. Any client side feature when marked as enabled by default means there's a need to either turn of the feature if old client is to be made work with new servers or client needs to be upgraded to the latest version of server. Version-Release number of selected component (if applicable): Server >= release-6, client <= release-5 How reproducible: Always Steps to Reproduce: 1. Create a volume on glusterfs-5 or higher 2. Mount the volume from a client which is running any version lower than glusterfs-5 3. Mount doesn't go through Actual results: Mount fails Expected results: Mount shouldn't fail Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 09:44:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 09:44:01 +0000 Subject: [Bugs] [Bug 1697907] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697907 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1697820 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1697820 [Bug 1697820] rhgs 3.5 server not compatible with 3.4 client -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 09:48:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 09:48:08 +0000 Subject: [Bugs] [Bug 1697907] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697907 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22536 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
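The fix posted above makes glusterd load the ctime bits into the client graph only when the feature is not turned off; per the description of bug 1697907, a cluster that still has to serve pre-glusterfs-5 clients therefore needs the feature disabled explicitly. A hedged sketch, assuming the option key is features.ctime (volume, server and mount point are placeholders):

# on the glusterfs-6 servers
$ sudo gluster volume set myvol features.ctime off

# from the old (glusterfs-5 or earlier) client, the mount should then go through
$ sudo mount -t glusterfs server1:/myvol /mnt/myvol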
From bugzilla at redhat.com Tue Apr 9 09:48:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 09:48:09 +0000 Subject: [Bugs] [Bug 1697907] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697907 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22536 (glusterd: load ctime in the client graph only if it's not turned off) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 10:15:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 10:15:18 +0000 Subject: [Bugs] [Bug 1697923] New: CI: collect core file in a job artifacts Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697923 Bug ID: 1697923 Summary: CI: collect core file in a job artifacts Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Severity: high Assignee: bugs at gluster.org Reporter: ykaul at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: When CI fails with a coredump, it'll be possibly useful to save that core dump for a more thorough investigation. Example - https://build.gluster.org/job/centos7-regression/5473/ There is just /var/log/glusterfs files there. Would it be possible to run perhaps abrt-action-analyze-backtrace to get a better result? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 10:26:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 10:26:51 +0000 Subject: [Bugs] [Bug 1697930] New: Thin-Arbiter SHD minor fixes Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697930 Bug ID: 1697930 Summary: Thin-Arbiter SHD minor fixes Product: GlusterFS Version: mainline Status: ASSIGNED Component: replicate Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Address post-merge review comments for commit 5784a00f997212d34bd52b2303e20c097240d91c -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 9 10:42:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 10:42:23 +0000 Subject: [Bugs] [Bug 1697930] Thin-Arbiter SHD minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697930 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22537 -- You are receiving this mail because: You are on the CC list for the bug. 
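For the cores discussed in bug 1697923 above, the usual offline workflow once an archived core is in hand is to open it in gdb against the binaries the regression run built and dump all thread backtraces; the abrt tooling mentioned there essentially automates analysis of such a backtrace. The file names below are placeholders for whatever the job archive actually contains.

$ tar xzf centos7-regression-NNNN.tgz
$ file build/install/cores/core.XXXX        # shows which binary dumped the core
$ gdb build/install/sbin/glusterfsd build/install/cores/core.XXXX \
      -ex 'set pagination off' -ex 'thread apply all bt full' -ex quit > backtrace.txt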
From bugzilla at redhat.com Tue Apr 9 10:42:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 10:42:24 +0000 Subject: [Bugs] [Bug 1697930] Thin-Arbiter SHD minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697930 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22537 (cluster/afr: Thin-arbiter SHD fixes) posted (#1) for review on master by Karthik U S -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 9 10:43:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 10:43:38 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 Netbulae changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(info at netbulae.com | |) | --- Comment #28 from Netbulae --- (In reply to Atin Mukherjee from comment #27) I tried the sequence you posted but currently we have *.16 and ssd5 as the master storage domain as I removed ssd9 Also upgraded to ovirt node 4.3.3 rc2 because of several fixes. But I don't get the 'failed to fetch volume file' this way, it appears the backup volfile option is not working: >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:10:09,186::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.16 ssd5 --xml (cwd None) >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:10:09,297::commands::219::root::(execCmd) FAILED: = ''; = 1 >MainProcess|jsonrpc/6::ERROR::2019-04-09 12:10:09,297::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper > res = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529, in volumeInfo > xmltree = _execGlusterXml(command) > File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131, in _execGlusterXml > return _getTree(rc, out, err) > File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112, in _getTree > raise ge.GlusterCmdExecFailedException(rc, out, err) >GlusterCmdExecFailedException: Command execution failed: rc=1 out='Connection failed. Please check if gluster daemon is operational.\n' err='' >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:10:09,298::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call mount with (, u'*.*.*.16:ssd5', u'/rhev/data-center/mnt/glusterSD/*.*.*.16:ssd5') {'vfstype': u'glusterfs', 'mntOpts': u'backup-volfile-servers=1*.*.*.15:*.*.*.14', 'cgroup': 'vdsm-glusterfs'} When I leave the glusterfs process running, I also don't get it anymore in 4.3.3, but I get "'Error : Request timed out\n' err=''" and "MountError: (32, ';umount: /rhev/data-center/mnt/glusterSD/*.*.*.15:ssd4: mountpoint not found\n')" >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:25:54,918::commands::219::root::(execCmd) FAILED: = 'Running scope as unit run-10178.scope.\nMount failed. 
Please check the log file for more details.\n'; = 1 >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:25:54,918::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 765 seconds (max pending: 2) >MainProcess|jsonrpc/6::ERROR::2019-04-09 12:25:54,918::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in mount >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper > res = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 143, in mount > cgroup=cgroup) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 277, in _mount > _runcmd(cmd) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd > raise MountError(rc, b";".join((out, err))) >MountError: (1, ';Running scope as unit run-10178.scope.\nMount failed. Please check the log file for more details.\n') >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:49,720::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {} >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:49,720::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 294 seconds (max pending: 2) >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:49,802::routes::110::root::(get_gateway) The gateway *.*.*.3 is duplicated for the device ovirtmgmt >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:49,807::routes::110::root::(get_gateway) The gateway *.*.*.3 is duplicated for the device ovirtmgmt >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:49,809::cmdutils::133::root::(exec_cmd) /sbin/tc qdisc show (cwd None) >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:49,860::cmdutils::141::root::(exec_cmd) SUCCESS: = ''; = 0 >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:49,866::cmdutils::133::root::(exec_cmd) /sbin/tc class show dev bond0 classid 1389:1388 (cwd None) >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:49,919::cmdutils::141::root::(exec_cmd) SUCCESS: = ''; = 0 >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:50,005::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:50,006::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None) >MainProcess|jsonrpc/1::DEBUG::2019-04-09 12:30:50,038::cmdutils::141::root::(exec_cmd) SUCCESS: = ''; = 0 > >MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,472::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/umount -f -l /rhev/data-center/mnt/ovirt.netbulae.mgmt:_var_lib_exports_iso (cwd None) >MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,502::commands::219::root::(execCmd) SUCCESS: = ''; = 0 >MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,503::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return umount with None >MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,529::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call umount with (, u'/rhev/data-center/mnt/*.*.*.15:_data_ovirt') >{'force': True, 'lazy': True, 'freeloop': False} >MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,529::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/umount -f -l /rhev/data-center/mnt/*.*.*.15:_data_ovirt (cwd None) >MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,595::commands::219::root::(execCmd) SUCCESS: = ''; = 0 
>MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,596::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return umount with None >MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,738::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with (,) {} >MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,738::commands::198::storage.HBA::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/libexec/vdsm/fc-scan (cwd None) >MainProcess|jsonrpc/2::DEBUG::2019-04-09 12:37:02,804::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return hbaRescan with None >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,841::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call umount with (, u'/rhev/data-center/mnt/glusterSD/*.*.*.15:ssd4') {'force': True, 'lazy': True, 'freeloop': False} >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,841::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/*.*.*.15:ssd4 (cwd None) >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,849::commands::219::root::(execCmd) FAILED: = 'umount: /rhev/data-center/mnt/glusterSD/*.*.*.15:ssd4: mountpoint not found\n'; = 32 >MainProcess|jsonrpc/6::ERROR::2019-04-09 12:37:02,850::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in umount >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper > res = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 147, in umount > mount._umount(fs_file, force=force, lazy=lazy, freeloop=freeloop) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 296, in _umount > _runcmd(cmd) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd > raise MountError(rc, b";".join((out, err))) >MountError: (32, ';umount: /rhev/data-center/mnt/glusterSD/*.*.*.15:ssd4: mountpoint not found\n') >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,850::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call umount with (, u'/rhev/data-center/mnt/glusterSD/*.*.*.16:ssd5') {'force': True, 'lazy': True, 'freeloop': False} >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,850::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/*.*.*.16:ssd5 (cwd None) >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,875::commands::219::root::(execCmd) SUCCESS: = ''; = 0 >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,876::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return umount with None >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,889::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call umount with (, u'/rhev/data-center/mnt/glusterSD/*.*.*.14:ssd6') {'force': True, 'lazy': True, 'freeloop': False} >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,889::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/*.*.*.14:ssd6 (cwd None) >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,895::commands::219::root::(execCmd) FAILED: = 'umount: /rhev/data-center/mnt/glusterSD/*.*.*.14:ssd6: mountpoint not found\n'; = 32 >MainProcess|jsonrpc/6::ERROR::2019-04-09 12:37:02,895::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in umount >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper > res = func(*args, **kwargs) 
> File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 147, in umount > mount._umount(fs_file, force=force, lazy=lazy, freeloop=freeloop) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 296, in _umount > _runcmd(cmd) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd > raise MountError(rc, b";".join((out, err))) >MountError: (32, ';umount: /rhev/data-center/mnt/glusterSD/*.*.*.14:ssd6: mountpoint not found\n') >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,895::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call umount with (, u'/rhev/data-center/mnt/glusterSD/*.*.*.14:_hdd2') {'force': True, 'lazy': True, 'freeloop': False} >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,896::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/*.*.*.14:_hdd2 (cwd None) >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,902::commands::219::root::(execCmd) FAILED: = 'umount: /rhev/data-center/mnt/glusterSD/*.*.*.14:_hdd2: mountpoint not found\n'; = 32 >MainProcess|jsonrpc/6::ERROR::2019-04-09 12:37:02,902::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in umount >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper > res = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 147, in umount > mount._umount(fs_file, force=force, lazy=lazy, freeloop=freeloop) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 296, in _umount > _runcmd(cmd) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd > raise MountError(rc, b";".join((out, err))) >MountError: (32, ';umount: /rhev/data-center/mnt/glusterSD/*.*.*.14:_hdd2: mountpoint not found\n') >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,903::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call umount with (, u'/rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd3') {'force': True, 'lazy': True, 'freeloop': False} >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,903::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd3 (cwd None) >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,910::commands::219::root::(execCmd) FAILED: = 'umount: /rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd3: mountpoint not found\n'; = 32 >MainProcess|jsonrpc/6::ERROR::2019-04-09 12:37:02,910::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in umount >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper > res = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 147, in umount > mount._umount(fs_file, force=force, lazy=lazy, freeloop=freeloop) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 296, in _umount > _runcmd(cmd) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd > raise MountError(rc, b";".join((out, err))) >MountError: (32, ';umount: /rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd3: mountpoint not found\n') >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,911::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call umount with (, u'/rhev/data-center/mnt/glusterSD/*.*.*.14:_sdd8') {'force': True, 'lazy': True, 'freeloop': False} >MainProcess|jsonrpc/6::DEBUG::2019-04-09 
12:37:02,911::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/*.*.*.14:_sdd8 (cwd None) >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:02,917::commands::219::root::(execCmd) FAILED: = 'umount: /rhev/data-center/mnt/glusterSD/*.*.*.14:_sdd8: mountpoint not found\n'; = 32 >MainProcess|jsonrpc/6::ERROR::2019-04-09 12:37:02,917::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in umount >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper > res = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 147, in umount > mount._umount(fs_file, force=force, lazy=lazy, freeloop=freeloop) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 296, in _umount > _runcmd(cmd) > File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd > raise MountError(rc, b";".join((out, err))) >MountError: (32, ';umount: /rhev/data-center/mnt/glusterSD/*.*.*.14:_sdd8: mountpoint not found\n') >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:03,042::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with (,) {} >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:03,042::commands::198::storage.HBA::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/libexec/vdsm/fc-scan (cwd None) >MainProcess|jsonrpc/6::DEBUG::2019-04-09 12:37:03,107::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return hbaRescan with None -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 10:45:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 10:45:21 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 --- Comment #29 from Netbulae --- I cannot find anything related in glusterd log gluster v get all cluster.op-version Option Value ------ ----- cluster.op-version 31202 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 11:07:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 11:07:12 +0000 Subject: [Bugs] [Bug 1690952] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690952 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22386 (cluster-syncop: avoid duplicate unlock of inodelk/entrylk) merged (#3) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 9 11:07:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 11:07:35 +0000 Subject: [Bugs] [Bug 1694612] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694612 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-09 11:07:35 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22466 (glusterd: fix txn-id mem leak) merged (#2) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 11:07:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 11:07:36 +0000 Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694610 Bug 1694610 depends on bug 1694612, which changed state. Bug 1694612 Summary: glusterd leaking memory when issued gluster vol status all tasks continuosly https://bugzilla.redhat.com/show_bug.cgi?id=1694612 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 11:27:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 11:27:47 +0000 Subject: [Bugs] [Bug 1697971] New: Segfault in FUSE process, potential use after free Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 Bug ID: 1697971 Summary: Segfault in FUSE process, potential use after free Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: fuse Severity: urgent Assignee: bugs at gluster.org Reporter: manschwetus at cs-software-gmbh.de CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1553829 --> https://bugzilla.redhat.com/attachment.cgi?id=1553829&action=edit backtrace and threadlist Description of problem: We face regular segfaults of FUSE process, when running gitlab docker with data placed on 1x3 glusterfs, all nodes running the container are gluster servers. Version-Release number of selected component (if applicable): We use latest packages available for ubuntu lts # cat /etc/apt/sources.list.d/gluster-ubuntu-glusterfs-6-bionic.list deb http://ppa.launchpad.net/gluster/glusterfs-6/ubuntu bionic main How reproducible: It happens regularly, at least daily, but we aren't aware of a specific action/activity. Steps to Reproduce: 1. setup docker swarm backed with glusterfs and run gitlab on it 2. ??? 3. ??? Actual results: Regular crashes of FUSE, cutting of persistent storage used for dockerized gitlab Expected results: Stable availability of glusterfs Additional info: we preserve the produced corefiles, I'll attach bt and thread list of te one today. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 9 11:31:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 11:31:55 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 --- Comment #1 from manschwetus at cs-software-gmbh.de --- vlume infos: $ sudo gluster volume info Volume Name: swarm-vols Type: Replicate Volume ID: a103c1da-d651-4d65-8f86-a8731e2a670c Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 192.168.1.81:/gluster/data Brick2: 192.168.1.86:/gluster/data Brick3: 192.168.1.85:/gluster/data Options Reconfigured: performance.write-behind: off performance.cache-max-file-size: 1GB performance.cache-size: 2GB nfs.disable: on transport.address-family: inet auth.allow: 127.0.0.1 $ sudo gluster volume status Status of volume: swarm-vols Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 192.168.1.81:/gluster/data 49153 0 Y 188805 Brick 192.168.1.86:/gluster/data 49153 0 Y 5543 Brick 192.168.1.85:/gluster/data 49152 0 Y 2069 Self-heal Daemon on localhost N/A N/A Y 2080 Self-heal Daemon on 192.168.1.81 N/A N/A Y 144760 Self-heal Daemon on 192.168.1.86 N/A N/A Y 63715 Task Status of Volume swarm-vols ------------------------------------------------------------------------------ There are no active volume tasks $ mount | grep swarm localhost:/swarm-vols on /swarm/volumes type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 11:32:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 11:32:41 +0000 Subject: [Bugs] [Bug 1693300] GlusterFS 5.6 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22538 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 11:32:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 11:32:42 +0000 Subject: [Bugs] [Bug 1693300] GlusterFS 5.6 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22538 (doc: Added release 5.6 notes) posted (#1) for review on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 9 11:53:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 11:53:52 +0000 Subject: [Bugs] [Bug 1693300] GlusterFS 5.6 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-09 11:53:52 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22538 (doc: Added release 5.6 notes) merged (#1) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 11:59:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 11:59:31 +0000 Subject: [Bugs] [Bug 1697986] New: GlusterFS 5.7 tracker Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697986 Bug ID: 1697986 Summary: GlusterFS 5.7 tracker Product: GlusterFS Version: 5 Status: NEW Component: core Keywords: Tracking, Triaged Assignee: bugs at gluster.org Reporter: srangana at redhat.com CC: bugs at gluster.org Target Milestone: --- Deadline: 2019-06-10 Classification: Community Tracker for the release 5.7 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 12:57:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 12:57:42 +0000 Subject: [Bugs] [Bug 1695099] The number of glusterfs processes keeps increasing, using all available resources In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695099 --- Comment #3 from Christian Ihle --- Oh, just noticed I wrote CentOS 7.6 only. We use RedHat 7.6 on our main servers, but the issue is the same on both CentOS and RedHat. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 13:11:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 13:11:37 +0000 Subject: [Bugs] [Bug 1697923] CI: collect core file in a job artifacts In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697923 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com --- Comment #1 from Deepshikha khandelwal --- We do archive the core file at a centarlized log server. For the given regression build: https://logs.aws.gluster.org/centos7-regression-5473.tgz (build/install/cores/file) I need to look more abrt-action-analyze-backtrace and it can be implemented. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 13:23:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 13:23:06 +0000 Subject: [Bugs] [Bug 1697923] CI: collect core file in a job artifacts In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697923 --- Comment #2 from Yaniv Kaul --- (In reply to Deepshikha khandelwal from comment #1) > We do archive the core file at a centarlized log server. 
> > For the given regression build: > https://logs.aws.gluster.org/centos7-regression-5473.tgz > (build/install/cores/file) Yes, I just saw it now in the console of the Jenkins job. I believe the instructions there how to use gdb to look at it are outdated, btw. > > I need to look more abrt-action-analyze-backtrace and it can be implemented. I couldn't get it - but perhaps on the machine itself its doable. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 13:41:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 13:41:04 +0000 Subject: [Bugs] [Bug 1698042] New: quick-read cache invalidation feature has the same key of md-cache Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698042 Bug ID: 1698042 Summary: quick-read cache invalidation feature has the same key of md-cache Product: GlusterFS Version: mainline Status: NEW Component: quick-read Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: With group-metadata-cache group profile settings performance.cache-invalidation option when turned on enables both md-cache and quick-read xlator's cache-invalidation feature. While the intent of the group-metadata-cache is to set md-cache xlator's cache-invalidation feature, quick-read xlator also gets affected due to the same. While md-cache feature and it's profile existed since release-3.9, quick-read cache-invalidation was introduced in release-4 and due to this op-version mismatch on any cluster which is >= glusterfs-4 when this group profile is applied it breaks backward compatibility with the old clients. Version-Release number of selected component (if applicable): mainline -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 13:43:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 13:43:16 +0000 Subject: [Bugs] [Bug 1698042] quick-read cache invalidation feature has the same key of md-cache In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698042 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1697820 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1697820 [Bug 1697820] rhgs 3.5 server not compatible with 3.4 client -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 13:48:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 13:48:25 +0000 Subject: [Bugs] [Bug 1698042] quick-read cache invalidation feature has the same key of md-cache In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698042 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22539 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
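The collision described in bug 1698042 above can be observed from the CLI: both md-cache and, since release-4, quick-read react to the same performance.cache-invalidation key, so the group-metadata-cache profile enables more than it intends and breaks compatibility with old clients as described. A hedged diagnostic sketch (volume name hypothetical, group-file path as shipped by most packages):

$ sudo gluster volume get myvol all | grep cache-invalidation
$ cat /var/lib/glusterd/groups/metadata-cache

The patch referenced in the next message gives quick-read a key of its own so the two features can be toggled independently.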
From bugzilla at redhat.com Tue Apr 9 13:48:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 13:48:26 +0000 Subject: [Bugs] [Bug 1698042] quick-read cache invalidation feature has the same key of md-cache In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698042 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22539 (quick-read: rename cache-invalidation key to avoid redundant keys) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 13:52:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 13:52:55 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #57 from Amgad --- (In reply to Sanju from comment #55) > Amgad, > > Allow me some time, I will get back to you soon. > > Thanks, > Sanju Sanju / Shyam It has been two weeks now. What's the update on this. We're blocked and stuck not able to deploy 5.x because of the online rollback Appreciate your timely update! Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 9 13:57:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 13:57:07 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #58 from Amgad --- is it fixed in 5.6? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 9 14:01:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 14:01:00 +0000 Subject: [Bugs] [Bug 1693300] GlusterFS 5.6 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amgad.saleh at nokia.com Depends On| |1687051 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 9 14:01:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 14:01:00 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693300 (glusterfs-5.6) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 [Bug 1693300] GlusterFS 5.6 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 9 14:02:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 14:02:13 +0000 Subject: [Bugs] [Bug 1697986] GlusterFS 5.7 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697986 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amgad.saleh at nokia.com Depends On| |1687051 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 14:02:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 14:02:13 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1697986 (glusterfs-5.7) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1697986 [Bug 1697986] GlusterFS 5.7 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 9 14:55:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 14:55:05 +0000 Subject: [Bugs] [Bug 1698078] New: ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698078 Bug ID: 1698078 Summary: ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" Product: GlusterFS Version: mainline Status: NEW Component: ctime Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: On latest master where ctime feature is enabled by default, creation of tar file throws warning that 'file changed as we read it' Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Create a replica 1*3 gluster volume and mount it. 2. Untar a file onto gluster mount #tar xvf ~/linux-5.0.6.tar.xz -C /gluster-mnt/ 3. 
Create tar file from untarred files #mkdir /gluster-mnt/test-untar/ #cd /gluster-mnt #tar -cvf ./test-untar/linux.tar ./linux-5.0.6 Actual results: Creation of tar file from gluster mount throws warning 'file changed as we read it' Expected results: Creation of tar file from gluster mount should not throw any warning. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 14:55:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 14:55:19 +0000 Subject: [Bugs] [Bug 1698078] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698078 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 14:59:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 14:59:37 +0000 Subject: [Bugs] [Bug 1698078] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698078 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22540 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 9 14:59:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 14:59:38 +0000 Subject: [Bugs] [Bug 1698078] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698078 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22540 (posix/ctime: Fix stat(time attributes) inconsistency during readdirp) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 9 17:59:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 17:59:22 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22541 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 9 17:59:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 09 Apr 2019 17:59:23 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #618 from Worker Ant --- REVIEW: https://review.gluster.org/22541 ([WIP]glusterd-volgen.c: skip fetching some vol settings in a bricks loop.) 
posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 03:27:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 03:27:13 +0000 Subject: [Bugs] [Bug 1697907] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697907 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-10 03:27:13 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22536 (glusterd: load ctime in the client graph only if it's not turned off) merged (#1) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 03:27:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 03:27:41 +0000 Subject: [Bugs] [Bug 1696512] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696512 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-10 03:27:41 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22510 (build: glusterfs build is failing on RHEL-6) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 10 04:31:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 04:31:12 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #12 from Worker Ant --- REVIEW: https://review.gluster.org/22442 (tests: add a tests for trace xlator) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 04:32:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 04:32:39 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #619 from Worker Ant --- REVIEW: https://review.gluster.org/22509 (ec: increase line coverage of ec) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 04:42:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 04:42:42 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #13 from Worker Ant --- REVIEW: https://review.gluster.org/22444 (protocol: add an option to force using old-protocol) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 10 06:21:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 06:21:52 +0000 Subject: [Bugs] [Bug 1695099] The number of glusterfs processes keeps increasing, using all available resources In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695099 --- Comment #4 from Christian Ihle --- I see from the release notes of 5.6 that this issue is resolved: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Looks like it may be the same as this. I will test 5.6 once it's out. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 08:06:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 08:06:11 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 --- Comment #30 from Netbulae --- The volume info is in comment 4 I get the same with the brick running I see now: >MainProcess|jsonrpc/0::DEBUG::2019-04-10 09:51:27,437::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'ssd5', u'*.*.*.16') {} >MainProcess|jsonrpc/0::DEBUG::2019-04-10 09:51:27,437::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.16 ssd5 --xml (cwd None) >MainProcess|jsonrpc/0::DEBUG::2019-04-10 09:53:27,553::commands::219::root::(execCmd) FAILED: = ''; = 1 >MainProcess|jsonrpc/0::DEBUG::2019-04-10 09:53:27,553::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 123 seconds (max pending: 1) >MainProcess|jsonrpc/0::ERROR::2019-04-10 09:53:27,553::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper > res = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529, in volumeInfo > xmltree = _execGlusterXml(command) > File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131, in _execGlusterXml > return _getTree(rc, out, err) > File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112, in _getTree > raise ge.GlusterCmdExecFailedException(rc, out, err) >GlusterCmdExecFailedException: Command execution failed: rc=1 out='Error : Request timed out\n' err='' >MainProcess|jsonrpc/0::DEBUG::2019-04-10 09:53:27,555::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call mount with (, u'*.*.*.16:ssd5', u'/rhev/data-center/mnt/glusterSD/*.*.*.16:ssd5') {'vfstype': u'glusterfs', 'mntOpts': u'backup-volfile-servers=*.*.*.15:*.*.*.14', 'cgroup': 'vdsm-glusterfs'} >MainProcess|jsonrpc/0::DEBUG::2019-04-10 09:53:27,555::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.15:*.*.*.14 *.*.*.16:ssd5 /rhev/data-center/mnt/glusterSD/*.*.*.16:ssd5 (cwd None) >MainProcess|jsonrpc/1::DEBUG::2019-04-10 09:54:27,765::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with (,) {} But the port is reachable from the node: >Brick *.*.*.16:/data/ssd5/brick1 49156 0 Y 15429 >[root at node9 ~]# telnet *.*.*.16 49156 >Trying *.*.*.16... >Connected to *.*.*.16. >Escape character is '^]'. 
And in /var/log/messages: >Apr 10 09:56:27 node9 vdsm[56358]: WARN Worker blocked: u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'fbedfc27-e064-48d5-b332-daeeccdd1cf4'} at 0x7f77c4589210> timeout=60, duration=300.00 at 0x7f77c4589810> >task#=674 at 0x7f77e406f790>, traceback:#012File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012 self.__bootstrap_inner()#012File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012 >self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in run#012 self.__target(*self.__args, **self.__kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run#012 ret = func(*args, >**kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012 self._execute_task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012 task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__#012 self._callable()#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__#012 self._handler(self._ctx, self._req)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest#012 response = self._handle_request(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request#012 res = >method(**params)#012File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012 result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1095, in connectStorageServer#012 >connectionParams)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper#012 result = ctask.prepare(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper#012 return m(self, *a, **kw)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare#012 result = self._run(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run#012 return fn(*args, **kargs)#012File: "", line 2, in connectStorageServer#012File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method#012 ret = func(*args, **kwargs)#012File: >"/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411, in connectStorageServer#012 conObj.connect()#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect#012 >self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount#012 cgroup=cgroup)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__#012 return callMethod()#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in #012 **kwargs)#012File: "", line 2, in mount#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod#012 kind, result = conn.recv() >Apr 10 09:56:27 node9 vdsm[56358]: WARN Worker blocked: u'active', u'1ed0a635-67ee-4255-aad9-b70822350706': u'active', u'95b4e5d2-2974-4d5f-91e4-351f75a15435': u'active', u'84f2ff4a-2ec5-42dc-807d-bd12745f387d': u'active'}, u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'scsiKey': >u'00000001-0001-0001-0001-000000000043', u'masterSdUUID': u'09959920-a31b-42c2-a547-e50b73602c96', u'hostID': 
12}, 'jsonrpc': '2.0', 'method': u'StoragePool.connect', 'id': u'95545e2f-6b72-4107-bc19-8de3e549aba0'} at 0x7f77c458d310> >timeout=60, duration=120.01 at 0x7f77c458d0d0> task#=677 at 0x7f77e406f990>, traceback:#012File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012 self.__bootstrap_inner()#012File: "/usr/lib64/python2.7/threading.py", >line 812, in __bootstrap_inner#012 self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in run#012 self.__target(*self.__args, **self.__kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line >195, in >run#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012 self._execute_task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012 >task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__#012 self._callable()#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__#012 self._handler(self._ctx, >self._req)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest#012 response = self._handle_request(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request#012 res = method(**params)#012File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012 result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1091, in >connect#012 self._UUID, hostID, masterSdUUID, masterVersion, domainDict)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper#012 result = ctask.prepare(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper#012 return m(self, *a, **kw)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare#012 result = self._run(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run#012 return fn(*args, **kargs)#012File: "", line 2, in connectStoragePool#012File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", >line 50, in method#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1034, in connectStoragePool#012 spUUID, hostID, msdUUID, masterVersion, domainsMap)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1096, in _connectStoragePool#012 res = pool.connect(hostID, msdUUID, masterVersion)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 700, in connect#012 >self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1274, in __rebuild#012 self.setMasterDomain(msdUUID, masterVersion)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1491, in setMasterDomain#012 domain = sdCache.produce(msdUUID)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in produce#012 domain.getRealDomain()#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in getRealDomain#012 return self._cache._realProduce(self._sdUUID)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in _realProduce#012 domain = >self._findDomain(sdUUID)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in _findDomain#012 return findMethod(sdUUID)#012File: 
"/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 169, in >_findUnfetchedDomain#012 return mod.findDomain(sdUUID)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 146, in findDomain#012 return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 130, in findDomainPath#012 for tmpSdUUID, domainPath in fileSD.scanDomains("*"):#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 893, in scanDomains#012 for >res in misc.itmap(collectMetaFiles, mntList, oop.HELPERS_PER_DOMAIN):#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/misc.py", line 538, in itmap#012 yield respQueue.get()#012File: "/usr/lib64/python2.7/Queue.py", line 168, in >get#012 self.not_empty.wait()#012File: "/usr/lib/python2.7/site-packages/pthreading.py", line 127, in wait#012 return self.__cond.wait()#012File: "/usr/lib/python2.7/site-packages/pthread.py", line 131, in wait#012 return >_libpthread.pthread_cond_wait(self._cond, m.mutex()) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 08:12:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 08:12:46 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-04-04 19:44:23 |2019-04-10 08:12:46 --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/21681 (mgmt/glusterd: Make changes related to cloudsync xlator) merged (#13) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 10 08:12:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 08:12:47 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1642168, which changed state. Bug 1642168 Summary: changes to cloudsync xlator https://bugzilla.redhat.com/show_bug.cgi?id=1642168 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 09:03:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 09:03:14 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22542 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 10 09:03:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 09:03:15 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 --- Comment #14 from Worker Ant --- REVIEW: https://review.gluster.org/22542 (cluster/dht: Refactor dht lookup functions) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 10 09:11:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 09:11:57 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 --- Comment #9 from Worker Ant --- REVIEW: https://review.gluster.org/21694 (storage/posix: changes with respect to cloudsync) merged (#13) on master by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 10 09:52:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 09:52:37 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1642168 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 [Bug 1642168] changes to cloudsync xlator -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 09:52:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 09:52:37 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1692394 (glusterfs-6.1) | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 10 11:15:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 11:15:15 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22508 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 11:15:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 11:15:16 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #620 from Worker Ant --- REVIEW: https://review.gluster.org/22508 (tests: correctly check open fd's when gfid is missing) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 10 11:50:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 11:50:35 +0000 Subject: [Bugs] [Bug 1698449] New: thin-arbiter lock release fixes Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698449 Bug ID: 1698449 Summary: thin-arbiter lock release fixes Product: GlusterFS Version: mainline Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Addresses post-merge review comments for https://review.gluster.org/#/c/glusterfs/+/20095/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 11:50:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 11:50:52 +0000 Subject: [Bugs] [Bug 1698449] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698449 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Status|NEW |ASSIGNED Assignee|bugs at gluster.org |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 11:53:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 11:53:44 +0000 Subject: [Bugs] [Bug 1698449] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698449 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22543 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 10 11:53:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 11:53:45 +0000 Subject: [Bugs] [Bug 1698449] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698449 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22543 (afr: thin-arbiter lock release fixes) posted (#1) for review on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 10 12:05:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 12:05:40 +0000 Subject: [Bugs] [Bug 1698131] multiple glusterfsd processes being launched for the same brick, causing transport endpoint not connected In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698131 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |bugs at gluster.org Component|glusterd |glusterd Version|rhgs-3.5 |6 Product|Red Hat Gluster Storage |GlusterFS -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 10 12:13:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 12:13:22 +0000 Subject: [Bugs] [Bug 1698131] multiple glusterfsd processes being launched for the same brick, causing transport endpoint not connected In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698131 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Blocks| |1692394 (glusterfs-6.1) Flags| |needinfo?(budic at onholygroun | |d.com) --- Comment #2 from Atin Mukherjee --- Two requests I have from you: 1. Could you pass back the output of 'gluster peer status' and 'gluster volume status' 2. Could you share the tar of /var/log/glusterfs/*.log ? Please note that we did fix a similar problem in glusterfs-6.0 with the following commit, but if you're still able to reproduce it we need to investigate. On a test setup, running a volume start and ps aux | grep glusterfsd only shows me the required brick processes though, but the details asked for might give us more insights. commit 36c75523c1f0545f32db4b807623a8f94df98ca7 Author: Mohit Agrawal Date: Fri Mar 1 13:41:24 2019 +0530 glusterfsd: Multiple shd processes are spawned on brick_mux environment Problem: Multiple shd processes are spawned while starting volumes in the loop on brick_mux environment.glusterd spawn a process based on a pidfile and shd daemon is taking some time to update pid in pidfile due to that glusterd is not able to get shd pid Solution: Commit cd249f4cb783f8d79e79468c455732669e835a4f changed the code to update pidfile in parent for any gluster daemon after getting the status of forking child in parent.To resolve the same correct the condition update pidfile in parent only for glusterd and for rest of the daemon pidfile is updated in child > Change-Id: Ifd14797fa949562594a285ec82d58384ad717e81 > fixes: bz#1684404 > (Cherry pick from commit 66986594a9023c49e61b32769b7e6b260b600626) > (Reviewed on upstream link https://review.gluster.org/#/c/glusterfs/+/22290/) Change-Id: I9a68064d2da1acd0ec54b4071a9995ece0c3320c fixes: bz#1683880 Signed-off-by: Mohit Agrawal Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 10 12:13:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 12:13:22 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1698131 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698131 [Bug 1698131] multiple glusterfsd processes being launched for the same brick, causing transport endpoint not connected -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
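[Editor's illustration] As an aside on the race described in comment #2 of bug 1698131 above (glusterd reads a pidfile that the spawned daemon has not yet written), the bash sketch below reproduces that timing problem in isolation. The file name /tmp/demo-shd.pid and the 2-second delay are invented for the demo; this is not glusterd code and says nothing about how the real fix is implemented beyond what the quoted commit message states:

    #!/bin/bash
    rm -f /tmp/demo-shd.pid

    # The child plays the part of a daemon that only writes its own pidfile
    # after it has finished initialising (here: after sleeping 2 seconds).
    ( sleep 2; echo "$BASHPID" > /tmp/demo-shd.pid ) &

    # The parent, like glusterd polling for the daemon's pid, reads the file
    # immediately and finds nothing because the child has not written it yet.
    cat /tmp/demo-shd.pid 2>/dev/null || echo "pidfile not populated yet"

    # Only once the child has actually written the file does the read succeed;
    # the quoted commit resolves the real race by being explicit about which
    # process (parent for glusterd, child for the other daemons) writes it.
    wait
    cat /tmp/demo-shd.pid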
From bugzilla at redhat.com Wed Apr 10 12:38:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 12:38:15 +0000 Subject: [Bugs] [Bug 1698471] New: ctime feature breaks old client to connect to new server Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Bug ID: 1698471 Summary: ctime feature breaks old client to connect to new server Product: GlusterFS Version: 6 Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org Depends On: 1697907 Blocks: 1697820 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1697907 +++ Description of problem: Considering ctime is a client side feature, we can't blindly load ctime xlator into the client graph if it's explicitly turned off, that'd result into backward compatibility issue where an old client can't mount a volume configured on a server which is having ctime feature. Since ctime feature is enabled by default, any old client would still fail to connect to a new server until and unless this feature is turned off explicitly. Any client side feature when marked as enabled by default means there's a need to either turn of the feature if old client is to be made work with new servers or client needs to be upgraded to the latest version of server. Version-Release number of selected component (if applicable): Server >= release-6, client <= release-5 How reproducible: Always Steps to Reproduce: 1. Create a volume on glusterfs-5 or higher 2. Mount the volume from a client which is running any version lower than glusterfs-5 3. Mount doesn't go through Actual results: Mount fails Expected results: Mount shouldn't fail Additional info: --- Additional comment from Worker Ant on 2019-04-09 09:48:09 UTC --- REVIEW: https://review.gluster.org/22536 (glusterd: load ctime in the client graph only if it's not turned off) posted (#1) for review on master by Atin Mukherjee --- Additional comment from Worker Ant on 2019-04-10 03:27:13 UTC --- REVIEW: https://review.gluster.org/22536 (glusterd: load ctime in the client graph only if it's not turned off) merged (#1) on master by Atin Mukherjee Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1697820 [Bug 1697820] rhgs 3.5 server not compatible with 3.4 client https://bugzilla.redhat.com/show_bug.cgi?id=1697907 [Bug 1697907] ctime feature breaks old client to connect to new server -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 12:38:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 12:38:15 +0000 Subject: [Bugs] [Bug 1697907] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697907 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1698471 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 [Bug 1698471] ctime feature breaks old client to connect to new server -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
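[Editor's illustration] For the ctime compatibility problem tracked in bugs 1698471/1697907 above, the report itself notes that old clients can only connect once the feature is turned off explicitly, and the patch makes that switch actually keep ctime out of the client graph. A minimal sketch of doing so, where the volume name testvol is hypothetical and the option key features.ctime is my assumption about the ctime on/off switch rather than something stated in the report (verify it with "gluster volume get testvol all" on your build):

    # Check the current state of the ctime feature on the volume
    # (option key assumed to be features.ctime):
    gluster volume get testvol features.ctime

    # Turn the feature off explicitly; with the fix above applied, the ctime
    # xlator is then no longer loaded into the client graph and older clients
    # can mount the volume again.
    gluster volume set testvol features.ctime off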
From bugzilla at redhat.com Wed Apr 10 12:40:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 12:40:33 +0000 Subject: [Bugs] [Bug 1698471] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22544 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 12:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 12:40:34 +0000 Subject: [Bugs] [Bug 1698471] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22544 (glusterd: load ctime in the client graph only if it's not turned off) posted (#1) for review on release-6 by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 15:05:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 15:05:24 +0000 Subject: [Bugs] [Bug 1697890] centos-regression is not giving its vote In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697890 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com Flags| |needinfo?(srakonde at redhat.c | |om) --- Comment #1 from M. Scherer --- Mhh, I see that it did vote, can you explain a bit more the issue you have seen in details (as it could have been just a temporary problem) ? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 15:07:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 15:07:39 +0000 Subject: [Bugs] [Bug 1697812] mention a pointer to all the mailing lists available under glusterfs project(https://www.gluster.org/community/) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697812 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #1 from M. Scherer --- I do agree, I am however unsure on who is formally in charge of the website :/ I guess I do have the access to do the change, so if no one volunteer first, I will just go ahead and see. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 15:16:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 15:16:38 +0000 Subject: [Bugs] [Bug 1697890] centos-regression is not giving its vote In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697890 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(srakonde at redhat.c | |om) | --- Comment #2 from Sanju --- There was an issue, Deepshikha has fixed it. 
Now, you can close this BZ as it won't exist anymore. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 15:48:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 15:48:48 +0000 Subject: [Bugs] [Bug 1698566] New: shd crashed while executing ./tests/bugs/core/bug-1432542-mpx-restart-crash.t in CI Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698566 Bug ID: 1698566 Summary: shd crashed while executing ./tests/bugs/core/bug-1432542-mpx-restart-crash.t in CI Product: GlusterFS Version: mainline Status: NEW Component: selfheal Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: core has generated by ./tests/bugs/core/bug-1432542-mpx-restart-crash.t in https://build.gluster.org/job/centos7-regression/5516/consoleFull Version-Release number of selected component (if applicable): mainline Additional info: >From the core: 18:27:05 Core was generated by `/build/install/sbin/glusterfs -s localhost --volfile-id shd/patchy-vol02 -p /va'. 18:27:05 Program terminated with signal 11, Segmentation fault. 18:27:05 #0 0x00007f657903542f in gf_mem_set_acct_info (xl=0x7f65342cb5f0, alloc_ptr=0x7f656b2ec7c8, size=31, type=24, typestr=0x7f65790e3960 "gf_common_mt_asprintf") at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/mem-pool.c:54 18:27:05 54 GF_ASSERT(type <= xl->mem_acct->num_types); this means the memory has corrupted. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 10 18:02:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 10 Apr 2019 18:02:03 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 anuradha.stalur at gmail.com changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 02:43:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 02:43:38 +0000 Subject: [Bugs] [Bug 1698694] New: regression job isn't voting back to gerrit Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698694 Bug ID: 1698694 Summary: regression job isn't voting back to gerrit Product: GlusterFS Version: 6 Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Please see https://review.gluster.org/#/c/glusterfs/+/22544/ & https://build.gluster.org/job/centos7-regression/5518/ which should have voted back and that didn't happen. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 11 03:41:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 03:41:52 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-04-10 08:12:46 |2019-04-11 03:41:52 --- Comment #10 from Worker Ant --- REVIEW: https://review.gluster.org/21757 (features/cloudsync : Added some new functions) merged (#13) on master by Anuradha Talur -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 03:48:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 03:48:25 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-04-01 03:45:02 |2019-04-11 03:48:25 --- Comment #16 from Worker Ant --- REVIEW: https://review.gluster.org/22468 (client/fini: return fini after rpc cleanup) merged (#6) on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 04:15:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 04:15:30 +0000 Subject: [Bugs] [Bug 1698694] regression job isn't voting back to gerrit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698694 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com --- Comment #1 from Deepshikha khandelwal --- I thought I solved this issue. But looking more into the problem it seems the issue is not with Gerrit trigger plugin of Jenkins but with the builder. As I see all the patches which had due vote built on the same 203builder. Looking at what recent changes introduced this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 04:37:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 04:37:45 +0000 Subject: [Bugs] [Bug 1697316] Getting SEEK-2 and SEEK7 errors with [Invalid argument] in the bricks' logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697316 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-11 04:37:45 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22526 (core: only log seek errors if SEEK_HOLE/SEEK_DATA is available) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Apr 11 05:14:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 05:14:27 +0000 Subject: [Bugs] [Bug 1698716] New: Regression job did not vote for https://review.gluster.org/#/c/glusterfs/+/22366/ Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698716 Bug ID: 1698716 Summary: Regression job did not vote for https://review.gluster.org/#/c/glusterfs/+/22366/ Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: A full regression run has passed on https://review.gluster.org/#/c/glusterfs/+/22366/ but the Centos regression vote has not been updated. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 06:19:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 06:19:35 +0000 Subject: [Bugs] [Bug 1698728] New: FUSE mount seems to be hung and not accessible Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698728 Bug ID: 1698728 Summary: FUSE mount seems to be hung and not accessible Product: Red Hat Gluster Storage Status: NEW Component: fuse Severity: high Assignee: csaba at redhat.com Reporter: saraut at redhat.com QA Contact: rhinduja at redhat.com CC: bugs at gluster.org, nbalacha at redhat.com, pasik at iki.fi, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, tdesala at redhat.com Depends On: 1659334 Blocks: 1662838 Target Milestone: --- Classification: Red Hat Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 [Bug 1659334] FUSE mount seems to be hung and not accessible https://bugzilla.redhat.com/show_bug.cgi?id=1662838 [Bug 1662838] FUSE mount seems to be hung and not accessible -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 06:19:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 06:19:35 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 Sayalee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1698728 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698728 [Bug 1698728] FUSE mount seems to be hung and not accessible -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 11 06:19:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 06:19:35 +0000 Subject: [Bugs] [Bug 1662838] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662838 Sayalee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1698728 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698728 [Bug 1698728] FUSE mount seems to be hung and not accessible -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 06:45:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 06:45:45 +0000 Subject: [Bugs] [Bug 1628194] tests/dht: Additional tests for dht operations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628194 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22545 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 06:45:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 06:45:46 +0000 Subject: [Bugs] [Bug 1628194] tests/dht: Additional tests for dht operations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628194 --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22545 (tests/dht: Test that lookups are sent post brick up) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 06:52:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 06:52:29 +0000 Subject: [Bugs] [Bug 1698728] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698728 Sachin P Mali changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Regression CC| |smali at redhat.com Version|unspecified |rhgs-3.5 QA Contact|rhinduja at redhat.com |saraut at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 07:38:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 07:38:40 +0000 Subject: [Bugs] [Bug 1593199] Stack overflow in readdirp with parallel-readdir enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593199 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-04-11 07:38:40 --- Comment #3 from Nithya Balachandran --- *** This bug has been marked as a duplicate of bug 1593548 *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 07:38:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 07:38:40 +0000 Subject: [Bugs] [Bug 1593548] Stack overflow in readdirp with parallel-readdir enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593548 --- Comment #6 from Nithya Balachandran --- *** Bug 1593199 has been marked as a duplicate of this bug. 
*** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 07:38:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 07:38:42 +0000 Subject: [Bugs] [Bug 1593548] Stack overflow in readdirp with parallel-readdir enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593548 Bug 1593548 depends on bug 1593199, which changed state. Bug 1593199 Summary: Stack overflow in readdirp with parallel-readdir enabled https://bugzilla.redhat.com/show_bug.cgi?id=1593199 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 07:40:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 07:40:24 +0000 Subject: [Bugs] [Bug 1414217] RFE: Add a server side stub to filter unhashed directories in readdirp In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1414217 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-04-11 07:40:24 --- Comment #4 from Nithya Balachandran --- I'm closing this BZ with WontFix. Please reopen the BZ if you feel this should be fixed. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 07:45:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 07:45:14 +0000 Subject: [Bugs] [Bug 1689173] slow 'ls' (crawl/readdir) performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1546649 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1546649 [Bug 1546649] DHT: Readdir of directory which contain directory entries is slow -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 07:45:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 07:45:14 +0000 Subject: [Bugs] [Bug 1546649] DHT: Readdir of directory which contain directory entries is slow In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546649 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1689173 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 [Bug 1689173] slow 'ls' (crawl/readdir) performance -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 08:55:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 08:55:44 +0000 Subject: [Bugs] [Bug 1698716] Regression job did not vote for https://review.gluster.org/#/c/glusterfs/+/22366/ In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698716 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com --- Comment #1 from Atin Mukherjee --- I don't think full regression votes back into the gerrit. 
However even a normal regression run isn't doing either and a bug https://bugzilla.redhat.com/show_bug.cgi?id=1698694 is filed for the same. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 11:45:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 11:45:43 +0000 Subject: [Bugs] [Bug 1698861] New: Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698861 Bug ID: 1698861 Summary: Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. Product: GlusterFS Version: mainline Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Running the following .t results in both olddir and newdir visible from the mount point and listing them shows no files. Steps to Reproduce: #!/bin/bash . $(dirname $0)/../../include.rc . $(dirname $0)/../../volume.rc . $(dirname $0)/../../common-utils.rc cleanup TEST glusterd TEST pidof glusterd TEST $CLI volume create $V0 disperse 6 disperse-data 4 $H0:$B0/$V0-{1..24} force TEST $CLI volume start $V0 TEST glusterfs -s $H0 --volfile-id $V0 $M0 ls $M0/ mkdir $M0/olddir mkdir $M0/olddir/subdir touch $M0/olddir/file-{1..10} ls -lR TEST kill_brick $V0 $H0 $B0/$V0-1 TEST kill_brick $V0 $H0 $B0/$V0-2 TEST kill_brick $V0 $H0 $B0/$V0-7 TEST kill_brick $V0 $H0 $B0/$V0-8 TEST mv $M0/olddir $M0/newdir # Start all bricks TEST $CLI volume start $V0 force $CLI volume status # It takes a while for the client to reconnect to the brick sleep 5 ls -l $M0 # Cleanup #cleanup Version-Release number of selected component (if applicable): How reproducible: Consistently Actual results: [root at rhgs313-6 tests]# ls -lR /mnt/glusterfs/0/ /mnt/glusterfs/0/: total 8 drwxr-xr-x. 2 root root 4096 Apr 11 17:12 newdir drwxr-xr-x. 2 root root 4096 Apr 11 17:12 olddir /mnt/glusterfs/0/newdir: total 0 /mnt/glusterfs/0/olddir: total 0 [root at rhgs313-6 tests]# Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 12:50:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 12:50:42 +0000 Subject: [Bugs] [Bug 1683526] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683526 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22547 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 13:42:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 13:42:54 +0000 Subject: [Bugs] [Bug 1698716] Regression job did not vote for https://review.gluster.org/#/c/glusterfs/+/22366/ In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698716 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #2 from M. Scherer --- Yep, they don't (afaik). 
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 13:46:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 13:46:26 +0000 Subject: [Bugs] [Bug 1698694] regression job isn't voting back to gerrit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698694 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #2 from M. Scherer --- Mhh, I see: 17:51:40 Host key verification failed. Could be something related to reinstallation of the builder last time. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 13:51:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 13:51:28 +0000 Subject: [Bugs] [Bug 1698694] regression job isn't voting back to gerrit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698694 --- Comment #3 from M. Scherer --- Ok so I suspect the problem is the following. We reinstalled the builder, so the home was erased. The script do run ssh review, but it was blocked on the key verification step, because no one was here to type "yes", since this was the first attempt. I did it (su - jenkins, ssh review.gluster.org), and I suspect this should be better. If the symptom no longer appear, then my hypothesis was good. I think the fix would be to change the ssh command to accept the key on first use, i will provide a patch. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 14:37:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 14:37:59 +0000 Subject: [Bugs] [Bug 1698694] regression job isn't voting back to gerrit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698694 --- Comment #4 from M. Scherer --- https://review.gluster.org/#/c/build-jobs/+/22548 for my proposed fix -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 15:07:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 15:07:55 +0000 Subject: [Bugs] [Bug 1699025] New: Brick is not able to detach successfully in brick_mux environment Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699025 Bug ID: 1699025 Summary: Brick is not able to detach successfully in brick_mux environment Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: urgent Priority: urgent Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1698919 Blocks: 1699023 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698919 [Bug 1698919] Brick is not able to detach successfully in brick_mux environment https://bugzilla.redhat.com/show_bug.cgi?id=1699023 [Bug 1699023] Brick is not able to detach successfully in brick_mux environment -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
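A note on comment #3 above (bug 1698694): the failure is the standard OpenSSH first-connection host-key prompt, so the usual remedies are either to accept the key automatically on first use or to pre-seed known_hosts for the jenkins user. The sketch below is illustrative only -- the actual build-jobs change is whatever landed in the review linked in comment #4, and the exact host/port the job connects to is not shown here.

    # run as the jenkins user on the builder
    # option 1: add this option to the job's existing ssh invocation so the
    # key is trusted on first connection (OpenSSH 7.6 or newer):
    #   ssh -o StrictHostKeyChecking=accept-new review.gluster.org ...
    # option 2: pre-seed known_hosts instead (add -p <port> if the job
    # connects to a non-default ssh port)
    ssh-keyscan review.gluster.org >> ~/.ssh/known_hosts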
From bugzilla at redhat.com Thu Apr 11 15:07:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 15:07:55 +0000 Subject: [Bugs] [Bug 1699023] New: Brick is not able to detach successfully in brick_mux environment Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699023 Bug ID: 1699023 Summary: Brick is not able to detach successfully in brick_mux environment Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: urgent Priority: urgent Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1698919, 1699025 Target Milestone: --- Classification: Community Depends On: 1699025 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698919 [Bug 1698919] Brick is not able to detach successfully in brick_mux environment https://bugzilla.redhat.com/show_bug.cgi?id=1699025 [Bug 1699025] Brick is not able to detach successfully in brick_mux environment -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 15:08:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 15:08:11 +0000 Subject: [Bugs] [Bug 1699025] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699025 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 11 15:16:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 15:16:12 +0000 Subject: [Bugs] [Bug 1697866] Provide a way to detach a failed node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697866 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-11 15:16:12 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22534 (glusterd: provide a way to detach failed node) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 15:20:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 15:20:53 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #621 from Worker Ant --- REVIEW: https://review.gluster.org/22528 (glusterd: remove glusterd_check_volume_exists() call) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
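For reference on bug 1697866 above ("Provide a way to detach a failed node"): from the CLI side, removing a peer that can no longer be reached is normally done with a forced peer detach, roughly as below. The hostname is a placeholder, and whether the merged glusterd patch changes this exact command or only the server-side handling is not stated in the notification.

    # hypothetical hostname; run from a healthy peer in the cluster
    gluster peer detach failed-node.example.com force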
From bugzilla at redhat.com Thu Apr 11 15:29:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 15:29:19 +0000 Subject: [Bugs] [Bug 1699025] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699025 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22549 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 11 15:29:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 11 Apr 2019 15:29:20 +0000 Subject: [Bugs] [Bug 1699025] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699025 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22549 (core: Brick is not able to detach successfully in brick_mux environment) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 02:11:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:11:52 +0000 Subject: [Bugs] [Bug 1699176] New: rebalance start command doesn't throw up error message if the command fails Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699176 Bug ID: 1699176 Summary: rebalance start command doesn't throw up error message if the command fails Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, pasik at iki.fi, srakonde at redhat.com Depends On: 1683526 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1683526 +++ Description of problem: When a rebalance start command fails, it doesn't throw up the error message back to CLI. Version-Release number of selected component (if applicable): release-6 How reproducible: Always Steps to Reproduce: 1. Create 1 X 1 volume, trigger rebalance start. Command fails as glusterd.log complains about following [2019-02-27 06:29:15.448303] E [MSGID: 106218] [glusterd-rebalance.c:462:glusterd_rebalance_cmd_validate] 0-glusterd: Volume test-vol5 is not a distribute type or contains only 1 brick But CLI doesn't throw up any error messages. Actual results: CLI doesn't throw up an error message. Expected results: CLI should throw up an error message. Additional info: --- Additional comment from Worker Ant on 2019-04-11 18:20:42 IST --- REVIEW: https://review.gluster.org/22547 (glusterd: display an error when rebalance start is failed) posted (#1) for review on master by Sanju Rakonde Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1683526 [Bug 1683526] rebalance start command doesn't throw up error message if the command fails -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
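To make bug 1699176 above concrete: the reproducer from the original report boils down to running rebalance on a volume with a single brick. glusterd rejects it ("not a distribute type or contains only 1 brick"), but before the fix posted in review 22547 the CLI printed no error at all. A minimal sketch, with an illustrative brick path:

    gluster volume create test-vol5 $(hostname):/bricks/test-vol5/brick force
    gluster volume start test-vol5
    gluster volume rebalance test-vol5 start
    echo "rebalance start exit status: $?"   # check both the printed output and the
                                             # exit status; the real reason is in glusterd.log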
From bugzilla at redhat.com Fri Apr 12 02:11:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:11:52 +0000 Subject: [Bugs] [Bug 1683526] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683526 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699176 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699176 [Bug 1699176] rebalance start command doesn't throw up error message if the command fails -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 02:14:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:14:57 +0000 Subject: [Bugs] [Bug 1683526] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683526 --- Comment #2 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22547 (glusterd: display an error when rebalance start is failed) posted (#2) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 02:14:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:14:58 +0000 Subject: [Bugs] [Bug 1683526] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683526 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22547 | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 02:15:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:15:00 +0000 Subject: [Bugs] [Bug 1699176] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699176 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22547 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 02:15:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:15:01 +0000 Subject: [Bugs] [Bug 1699176] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699176 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22547 (glusterd: display an error when rebalance start is failed) posted (#2) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 12 02:15:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:15:43 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #14 from Worker Ant --- REVIEW: https://review.gluster.org/22491 (tests: make sure to traverse all of meta dir) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 02:18:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:18:46 +0000 Subject: [Bugs] [Bug 1683526] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683526 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-04-12 02:18:46 --- Comment #3 from Sanju --- A patch has posted for review at upstream master to fix this issue. As we cannot fix this BZ in 4.1 closing this BZ as won't fix. link to the patch: https://review.gluster.org/#/c/glusterfs/+/22547/ Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 02:18:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:18:47 +0000 Subject: [Bugs] [Bug 1699176] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699176 Bug 1699176 depends on bug 1683526, which changed state. Bug 1683526 Summary: rebalance start command doesn't throw up error message if the command fails https://bugzilla.redhat.com/show_bug.cgi?id=1683526 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |WONTFIX -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 02:28:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:28:09 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22550 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 02:28:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:28:10 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #15 from Worker Ant --- REVIEW: https://review.gluster.org/22550 (tests: write a tests for testing strings in volfile) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 12 02:47:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:47:53 +0000 Subject: [Bugs] [Bug 1699025] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699025 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #0 is|1 |0 private| | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 02:50:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:50:37 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22551 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 02:50:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 02:50:38 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #16 from Worker Ant --- REVIEW: https://review.gluster.org/22551 (tests: add tests for monitoring) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 03:48:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 03:48:36 +0000 Subject: [Bugs] [Bug 1698728] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698728 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(saraut at redhat.com | |) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 03:51:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 03:51:03 +0000 Subject: [Bugs] [Bug 1699176] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699176 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-12 03:51:03 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22547 (glusterd: display an error when rebalance start is failed) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 12 03:54:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 03:54:52 +0000 Subject: [Bugs] [Bug 1697486] bug-1650403.t && bug-858215.t are throwing error "No such file" at the time of access glustershd pidfile In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697486 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-12 03:54:52 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22529 (test: Change glustershd_pid update in .t file) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 04:01:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 04:01:06 +0000 Subject: [Bugs] [Bug 1699189] New: fix truncate lock to cover the write in tuncate clean Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699189 Bug ID: 1699189 Summary: fix truncate lock to cover the write in tuncate clean Product: GlusterFS Version: mainline Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: kinglongmee at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: ec_truncate_clean does writing under the lock granted for truncate, but the lock is calculated by ec_adjust_offset_up, so that, the write in ec_truncate_clean is out of lock. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 04:03:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 04:03:23 +0000 Subject: [Bugs] [Bug 1699189] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699189 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22552 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 04:03:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 04:03:24 +0000 Subject: [Bugs] [Bug 1699189] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699189 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22552 (ec: fix truncate lock to cover the write in tuncate clean) posted (#1) for review on master by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 04:58:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 04:58:44 +0000 Subject: [Bugs] [Bug 1699198] New: Glusterfs create a flock lock by anonymous fd, but can't release it forever. 
Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699198 Bug ID: 1699198 Summary: Glusterfs create a flock lock by anonymous fd, but can't release it forever. Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: protocol Keywords: Triaged Severity: high Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org, pkarampu at redhat.com, sasundar at redhat.com, skoduri at redhat.com, xiaoping.wu at nokia.com Depends On: 1390914 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1390914 +++ Description of problem: - when tcp connection is connected between server and client, but AFR didn't get CHILD_UP event. if application open a file for flock operation. AFR would send the open fop to the CHILD client. - But the flock fop could be sent to server with a anonymous fd(-2), server could flock successfully. - When application release the flock, client can't send the release request to server, so server can't release the flock forever. Version-Release number of selected component (if applicable): 3.6.9 How reproducible: - In my env, there are a replicate volume, one brick locate in sn-0 VM, another brick locate in sn-1 VM. - glusterfs client locate in lmn-0 VM, sn-0, sn-1. - This issue isn't easy to reproduce. Steps to Reproduce: 1. when restart sn-0/sn-1 at the same time. 2. glusterfs server on sn-0/sn-1 startup. 3. application on lmn-0 VM will open a file to do flock. Actual results: 1. AFR get flock from sn-0 successfully by actual fd. 2 AFR get flock form sn-1 successfully by anonymous fd(-2). 3. when application release flock, the flock on sn-1 can't be release because release request didn't sent to server. Additional info: --- Additional comment from xiaopwu on 2016-11-03 02:25 UTC --- sn-0_mnt-bricks-services-brick.1062.dump.1477882903 is dumped from glusterfs server on sn-0. sn-1mnt-bricks-services-brick.1066.dump.1477882904 is dumped from glusterfs server on sn-1. --- Additional comment from xiaopwu on 2016-11-03 03:10:10 UTC --- Attachments analyse as below: 1. below logs are copied from sn-1mnt-bricks-services-brick.1066.dump.1477882904. the granted flock didn't released on sn-1, but the flock was release on sn-0. [xlator.features.locks.services-locks.inode] path=/lightcm/locks/nodes.all mandatory=0 posixlk-count=2 posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 17537, owner=18a27aa2d1d2944f, client=0x19e6b60, connection-id=(null), granted at 2016-10-31 02:58:07 posixlk.posixlk[1](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 17559, owner=18764b24e15b0520, client=0x19e6b60, connection-id=(null), blocked at 2016-10-31 02:58:07 2. This flock comes from lmn VM. below logs are copied from lmn_mnt-services.log. // application opened "nodes.all" file, the OPEN request was sent to sn-0, but didn't send to sn-1 because 0-services-client-1 didn't UP. [2016-10-31 02:57:50.816359] T [rpc-clnt.c:1381:rpc_clnt_record] 0-services-client-0: Auth Info: pid: 17536, uid: 0, gid: 0, owner: 0000000000000000 [2016-10-31 02:57:50.816371] T [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 92, payload: 24, rpc hdr: 68 [2016-10-31 02:57:50.816390] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0xc7b7aa Program: GlusterFS 3.3, ProgVers: 330, Proc: 11) to rpc-transport (services-client-0) //application flock the file. the FLOCK request was sent to sn-0. and got lock from sn-0 successfully. 
[2016-10-31 02:57:50.817424] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0xc7b7ab Program: GlusterFS 3.3, ProgVers: 330, Proc: 26) to rpc-transport (services-client-0) [2016-10-31 02:57:51.277349] T [rpc-clnt.c:660:rpc_clnt_reply_init] 0-services-client-0: received rpc message (RPC XID: 0xc7b7ab Program: GlusterFS 3.3, ProgVers: 330, Proc: 26) from rpc-transport (services-client-0) //AFR xlator send FLOCK fop to sn-1 although 0-services-client-1 still didn't UP. Because AFR didn't open "nodes.all" on SN-1, client xlator sent the fop to server with a anonymous fd(GF_ANON_FD_NO -2), and got lock from server successfully too. this lock didn't released. [2016-10-31 02:57:51.277397] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0xbe5479 Program: GlusterFS 3.3, ProgVers: 330, Proc: 26) to rpc-transport (services-client-1) [2016-10-31 02:57:51.324258] T [rpc-clnt.c:660:rpc_clnt_reply_init] 0-services-client-1: received rpc message (RPC XID: 0xbe5479 Program: GlusterFS 3.3, ProgVers: 330, Proc: 26) from rpc-transport (services-client-1) //application released flock, RELEASE fop was sent to sn-0 and lock is released on sn-0. But the RELEASE fop wasn't sent to sn-1. [2016-10-31 02:57:51.404366] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0xc7b7ec Program: GlusterFS 3.3, ProgVers: 330, Proc: 41) to rpc-transport (services-client-0) [2016-10-31 02:57:51.406273] T [rpc-clnt.c:660:rpc_clnt_reply_init] 0-services-client-0: received rpc message (RPC XID: 0xc7b7ec Program: GlusterFS 3.3, ProgVers: 330, Proc: 41) from rpc-transport (services-client-0) //notice 0-services-client-0 UP time, it was before "nodes.all" open. [2016-10-31 02:57:49.802345] I [client-handshake.c:1052:client_post_handshake] 0-services-client-0: 22 fds open - Delaying child_up until they are re-opened [2016-10-31 02:57:49.983331] I [client-handshake.c:674:client_child_up_reopen_done] 0-services-client-0: last fd open'd/lock-self-heal'd - notifying CHILD-UP //notice 0-services-client-1 UP time, it was after flock release. [2016-10-31 02:57:51.244367] I [client-handshake.c:1052:client_post_handshake] 0-services-client-1: 21 fds open - Delaying child_up until they are re-opened [2016-10-31 02:57:51.731500] I [client-handshake.c:674:client_child_up_reopen_done] 0-services-client-1: last fd open'd/lock-self-heal'd - notifying CHILD-UP 3. code //FLOCK fop was sent to server with a anonymous fd, if flock file didn't opened. client3_3_lk (call_frame_t *frame, xlator_t *this, void *data) { } // RELEASE fop was sent to server, if the file didn't opened. client3_3_release (call_frame_t *frame, xlator_t *this, void *data) { } 4. If any way to fix the issue? --- Additional comment from xiaopwu on 2016-11-08 08:33:05 UTC --- We added a patch for this issue, please check if it is ok. --- a/old/client-rpc-fops.c +++ b/new/client-rpc-fops.c @@ -5260,6 +5260,14 @@ client3_3_lk (call_frame_t *frame, xlator_t *this, CLIENT_GET_REMOTE_FD (this, args->fd, DEFAULT_REMOTE_FD, remote_fd, op_errno, unwind); + if(remote_fd < 0) + { + gf_log (this->name, GF_LOG_INFO, "Didn't open remote fd(%ld), but return EBADFD, AFR shall ignore such error. 
pid: %u ", remote_fd, frame->root->pid); + op_errno = EBADFD; + CLIENT_STACK_UNWIND (lk, frame, -1, op_errno, NULL, NULL); + return 0; + } + ret = client_cmd_to_gf_cmd (args->cmd, &gf_cmd); if (ret) { op_errno = EINVAL; --- Additional comment from Soumya Koduri on 2016-11-08 12:18:15 UTC --- CCin Pranith (AFR and Posix-locks code maintainer) --- Additional comment from Worker Ant on 2016-11-08 12:49:01 UTC --- REVIEW: http://review.gluster.org/15804 (protocol/client: Do not fallback to anon-fd if fd is not open) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu at redhat.com) --- Additional comment from Pranith Kumar K on 2016-11-08 12:53:27 UTC --- (In reply to xiaopwu from comment #3) > We added a patch for this issue, please check if it is ok. > > --- a/old/client-rpc-fops.c > +++ b/new/client-rpc-fops.c > @@ -5260,6 +5260,14 @@ client3_3_lk (call_frame_t *frame, xlator_t *this, > CLIENT_GET_REMOTE_FD (this, args->fd, DEFAULT_REMOTE_FD, > remote_fd, op_errno, unwind); > > + if(remote_fd < 0) > + { > + gf_log (this->name, GF_LOG_INFO, "Didn't open remote fd(%ld), > but return EBADFD, AFR shall ignore such error. pid: %u ", remote_fd, > frame->root->pid); > + op_errno = EBADFD; > + CLIENT_STACK_UNWIND (lk, frame, -1, op_errno, NULL, NULL); > + return 0; > + } > + > ret = client_cmd_to_gf_cmd (args->cmd, &gf_cmd); > if (ret) { > op_errno = EINVAL; Awesome debugging!! I think the fix can be generic, I posted the fix to the bug I introduced a while back! Thanks for raising this bug!. I will port this patch to lower branches once this one passes regressions. Pranith --- Additional comment from Pranith Kumar K on 2016-11-09 11:11:44 UTC --- Regressions are failing on the change I submitted, will look into it a bit more and update --- Additional comment from Worker Ant on 2017-09-04 10:57:10 UTC --- REVIEW: https://review.gluster.org/15804 (protocol/client: Do not fallback to anon-fd if fd is not open) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu at redhat.com) --- Additional comment from Worker Ant on 2019-03-28 12:28:26 UTC --- REVIEW: https://review.gluster.org/15804 (protocol/client: Do not fallback to anon-fd if fd is not open) posted (#3) for review on master by Pranith Kumar Karampuri --- Additional comment from Worker Ant on 2019-03-31 02:57:49 UTC --- REVIEW: https://review.gluster.org/15804 (protocol/client: Do not fallback to anon-fd if fd is not open) merged (#8) on master by Raghavendra G Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1390914 [Bug 1390914] Glusterfs create a flock lock by anonymous fd, but can't release it forever. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 04:58:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 04:58:44 +0000 Subject: [Bugs] [Bug 1390914] Glusterfs create a flock lock by anonymous fd, but can't release it forever. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1390914 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699198 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699198 [Bug 1699198] Glusterfs create a flock lock by anonymous fd, but can't release it forever. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Apr 12 05:00:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 05:00:41 +0000 Subject: [Bugs] [Bug 1699198] Glusterfs create a flock lock by anonymous fd, but can't release it forever. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699198 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22553 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 05:00:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 05:00:42 +0000 Subject: [Bugs] [Bug 1699198] Glusterfs create a flock lock by anonymous fd, but can't release it forever. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699198 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22553 (protocol/client: Do not fallback to anon-fd if fd is not open) posted (#1) for review on release-6 by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 05:07:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 05:07:50 +0000 Subject: [Bugs] [Bug 1697930] Thin-Arbiter SHD minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697930 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-12 05:07:50 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22537 (cluster/afr: Thin-arbiter SHD fixes) merged (#4) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 05:48:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 05:48:29 +0000 Subject: [Bugs] [Bug 1698728] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698728 Rahul Hinduja changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |rhinduja at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 06:53:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 06:53:56 +0000 Subject: [Bugs] [Bug 1628194] tests/dht: Additional tests for dht operations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628194 --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22545 (tests/dht: Test that lookups are sent post brick up) merged (#2) on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Apr 12 11:38:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 11:38:25 +0000 Subject: [Bugs] [Bug 1699189] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699189 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22552 (ec: fix truncate lock to cover the write in tuncate clean) merged (#2) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 11:43:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 11:43:48 +0000 Subject: [Bugs] [Bug 1699309] New: Gluster snapshot fails with systemd autmounted bricks Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699309 Bug ID: 1699309 Summary: Gluster snapshot fails with systemd autmounted bricks Product: GlusterFS Version: 5 Hardware: x86_64 OS: Linux Status: NEW Component: snapshot Assignee: bugs at gluster.org Reporter: hunter86_bg at yahoo.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Gluster v5.5 (oVirt 4.3.2) fails to create a snapshot when the gluster bricks have an ".automount" unit. Version-Release number of selected component (if applicable): glusterfs-5.5-1.el7.x86_64 glusterfs-api-5.5-1.el7.x86_64 glusterfs-api-devel-5.5-1.el7.x86_64 glusterfs-cli-5.5-1.el7.x86_64 glusterfs-client-xlators-5.5-1.el7.x86_64 glusterfs-coreutils-0.2.0-1.el7.x86_64 glusterfs-devel-5.5-1.el7.x86_64 glusterfs-events-5.5-1.el7.x86_64 glusterfs-extra-xlators-5.5-1.el7.x86_64 glusterfs-fuse-5.5-1.el7.x86_64 glusterfs-geo-replication-5.5-1.el7.x86_64 glusterfs-libs-5.5-1.el7.x86_64 glusterfs-rdma-5.5-1.el7.x86_64 glusterfs-resource-agents-5.5-1.el7.noarch glusterfs-server-5.5-1.el7.x86_64 libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.6.x86_64 nfs-ganesha-gluster-2.7.2-1.el7.x86_64 python2-gluster-5.5-1.el7.x86_64 vdsm-gluster-4.30.11-1.el7.x86_64 How reproducible: Always. Steps to Reproduce: 1.Create brick mount & automount units Ex: [root at ovirt1 system]# systemctl cat gluster_bricks-isos.mount # /etc/systemd/system/gluster_bricks-isos.mount [Unit] Description=Mount glusterfs brick - ISOS Requires = vdo.service After = vdo.service Before = glusterd.service Conflicts = umount.target [Mount] What=/dev/mapper/gluster_vg_md0-gluster_lv_isos Where=/gluster_bricks/isos Type=xfs Options=inode64,noatime,nodiratime [Install] WantedBy=glusterd.service [root at ovirt1 system]# systemctl cat gluster_bricks-isos.automount # /etc/systemd/system/gluster_bricks-isos.automount [Unit] Description=automount for gluster brick ISOS [Automount] Where=/gluster_bricks/isos [Install] WantedBy=multi-user.target 2.Create a gluster volume on the bricks. 
Ex: Volume Name: isos Type: Replicate Volume ID: 9b92b5bd-79f5-427b-bd8d-af28b038ed2a Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1:/gluster_bricks/isos/isos Brick2: ovirt2:/gluster_bricks/isos/isos Brick3: ovirt3.localdomain:/gluster_bricks/isos/isos (arbiter) Options Reconfigured: cluster.granular-entry-heal: enable performance.strict-o-direct: on network.ping-timeout: 30 storage.owner-gid: 36 storage.owner-uid: 36 user.cifs: off features.shard: on cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 8 cluster.locking-scheme: granular cluster.data-self-heal-algorithm: full cluster.server-quorum-type: server cluster.quorum-type: auto cluster.eager-lock: enable network.remote-dio: off performance.low-prio-threads: 32 performance.io-cache: off performance.read-ahead: off performance.quick-read: off transport.address-family: inet nfs.disable: on performance.client-io-threads: off cluster.enable-shared-storage: enable 3.Create snapshot: gluster snapshot create isos-snap-2019-04-11 isos description TEST Actual results: Error in logs and console: [2019-04-12 07:56:54.526508] E [MSGID: 106077] [glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get pool name for device systemd-1 [2019-04-12 07:56:54.527509] E [MSGID: 106121] [glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: Failed to pre validate [2019-04-12 07:56:54.527525] E [MSGID: 106024] [glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of isos are thinly provisioned LV. [2019-04-12 07:56:54.527539] W [MSGID: 106029] [glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot create pre-validation failed [2019-04-12 07:56:54.527552] W [MSGID: 106121] [glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot Prevalidate Failed [2019-04-12 07:56:54.527568] E [MSGID: 106121] [glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Snapshot on local node [2019-04-12 07:56:54.527583] E [MSGID: 106121] [glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre Validation Failed Expected results: Gluster to exclude entries of type "autofs" in /proc/mounts and create the snapshot. Additional info: Disabling the automount units and restarting the mount units' fixes the issue. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 12:12:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 12:12:47 +0000 Subject: [Bugs] [Bug 1699319] New: Thin-Arbiter SHD minor fixes Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699319 Bug ID: 1699319 Summary: Thin-Arbiter SHD minor fixes Product: GlusterFS Version: 6 Status: ASSIGNED Component: replicate Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com CC: bugs at gluster.org Depends On: 1697930 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1697930 +++ Description of problem: Address post-merge review comments for commit 5784a00f997212d34bd52b2303e20c097240d91c Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1697930 [Bug 1697930] Thin-Arbiter SHD minor fixes -- You are receiving this mail because: You are on the CC list for the bug. 
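Regarding bug 1699309 above (snapshot failing with automounted bricks): while the automount unit is active, /proc/mounts typically carries a "systemd-1 ... autofs" entry for the brick path, and that is the "device" the snapshot pre-validation then fails to resolve to a thin pool. The reporter's workaround, expressed as commands against the units quoted in the report:

    # see what the snapshot code sees for the brick path
    grep '/gluster_bricks/isos' /proc/mounts

    # workaround from the report: drop the automount and mount the brick directly
    systemctl disable --now gluster_bricks-isos.automount
    systemctl restart gluster_bricks-isos.mount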
From bugzilla at redhat.com Fri Apr 12 12:12:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 12:12:47 +0000 Subject: [Bugs] [Bug 1697930] Thin-Arbiter SHD minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697930 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699319 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699319 [Bug 1699319] Thin-Arbiter SHD minor fixes -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 12:12:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 12:12:47 +0000 Subject: [Bugs] [Bug 1699319] Thin-Arbiter SHD minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699319 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692394 (glusterfs-6.1) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 12:17:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 12:17:47 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1699319 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699319 [Bug 1699319] Thin-Arbiter SHD minor fixes -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 12:27:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 12:27:05 +0000 Subject: [Bugs] [Bug 1699319] Thin-Arbiter SHD minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699319 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22555 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 13:09:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 13:09:03 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #622 from Worker Ant --- REVIEW: https://review.gluster.org/22281 (Replace memdup() with gf_memdup()) merged (#3) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 12 13:14:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 13:14:19 +0000 Subject: [Bugs] [Bug 1699339] New: With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Bug ID: 1699339 Summary: With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up Product: GlusterFS Version: mainline Status: NEW Component: glusterd Keywords: ZStream Severity: low Priority: low Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, moagrawa at redhat.com, nberry at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1652461 Blocks: 1652465 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1652461 [Bug 1652461] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up https://bugzilla.redhat.com/show_bug.cgi?id=1652465 [Bug 1652465] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 13:14:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 13:14:35 +0000 Subject: [Bugs] [Bug 1699339] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 13:21:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 13:21:22 +0000 Subject: [Bugs] [Bug 1699339] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22556 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 13:21:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 13:21:23 +0000 Subject: [Bugs] [Bug 1699339] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22556 (glusterd[WIP]: Optimize glusterd handshaking code path) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Apr 12 13:41:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 13:41:59 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 --- Comment #2 from manschwetus at cs-software-gmbh.de --- Today I encountered another segfault, same stack. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 15:06:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 15:06:45 +0000 Subject: [Bugs] [Bug 1699394] New: [geo-rep]: Geo-rep goes FAULTY with OSError Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699394 Bug ID: 1699394 Summary: [geo-rep]: Geo-rep goes FAULTY with OSError Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Keywords: Regression Severity: urgent Assignee: bugs at gluster.org Reporter: sunkumar at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rallan at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, smali at redhat.com, storage-qa-internal at redhat.com Depends On: 1699271 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1699271 +++ Description of problem: ======================= Geo-replication goes faulty Traceback (most recent call last): File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in worker res = getattr(self.obj, rmeth)(*in_data[2:]) File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 37, in init return Changes.cl_init() File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 21, in __getattr__ from libgfchangelog import Changes as LChanges File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 18, in class Changes(object): File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 20, in Changes use_errno=True) File "/usr/lib64/python2.7/ctypes/__init__.py", line 360, in __init__ self._handle = _dlopen(self._name, mode) OSError: libgfchangelog.so: cannot open shared object file: No such file or directory Version-Release number of selected component (if applicable): ============================================================= mainline How reproducible: ================ Always Actual results: =============== Geo-rep status is FAULTY Expected results: ================= Geo-rep status should be ACTIVE/PASSIVE Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699271 [Bug 1699271] [geo-rep]: Geo-rep FAULTY in RHGS 3.5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 15:07:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 15:07:25 +0000 Subject: [Bugs] [Bug 1699394] [geo-rep]: Geo-rep goes FAULTY with OSError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699394 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 12 15:09:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 15:09:46 +0000 Subject: [Bugs] [Bug 1699394] [geo-rep]: Geo-rep goes FAULTY with OSError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699394 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |sunkumar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 15:10:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 15:10:02 +0000 Subject: [Bugs] [Bug 1699394] [geo-rep]: Geo-rep goes FAULTY with OSError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699394 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22557 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 15:10:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 15:10:03 +0000 Subject: [Bugs] [Bug 1699394] [geo-rep]: Geo-rep goes FAULTY with OSError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699394 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22557 (libgfchangelog : use find_library to locate shared library) posted (#1) for review on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 21:41:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 21:41:51 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 anuradha.stalur at gmail.com changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 12 23:55:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 23:55:41 +0000 Subject: [Bugs] [Bug 1699499] New: fix truncate lock to cover the write in tuncate clean Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699499 Bug ID: 1699499 Summary: fix truncate lock to cover the write in tuncate clean Product: GlusterFS Version: 6 Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: kinglongmee at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1699189 I am copying this bug because: Description of problem: ec_truncate_clean does writing under the lock granted for truncate, but the lock is calculated by ec_adjust_offset_up, so that, the write in ec_truncate_clean is out of lock. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 12 23:56:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 23:56:36 +0000 Subject: [Bugs] [Bug 1699500] New: fix truncate lock to cover the write in tuncate clean Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699500 Bug ID: 1699500 Summary: fix truncate lock to cover the write in tuncate clean Product: GlusterFS Version: 5 Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: kinglongmee at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1699189 I am copying this bug because: Description of problem: ec_truncate_clean does writing under the lock granted for truncate, but the lock is calculated by ec_adjust_offset_up, so that, the write in ec_truncate_clean is out of lock. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 23:59:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 23:59:33 +0000 Subject: [Bugs] [Bug 1699499] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699499 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22559 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 12 23:59:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 12 Apr 2019 23:59:34 +0000 Subject: [Bugs] [Bug 1699499] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699499 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22559 (ec: fix truncate lock to cover the write in tuncate clean) posted (#1) for review on release-6 by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 13 00:04:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 13 Apr 2019 00:04:55 +0000 Subject: [Bugs] [Bug 1699500] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699500 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22560 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat Apr 13 00:04:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 13 Apr 2019 00:04:56 +0000 Subject: [Bugs] [Bug 1699500] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699500 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22560 (ec: fix truncate lock to cover the write in tuncate clean) posted (#1) for review on release-5 by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 13 01:58:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 13 Apr 2019 01:58:38 +0000 Subject: [Bugs] [Bug 1698131] multiple glusterfsd processes being launched for the same brick, causing transport endpoint not connected In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698131 Darrell changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(budic at onholygroun | |d.com) | --- Comment #3 from Darrell --- While things were in the state I described above, peer status was normal, as it is now: [root at boneyard telsin]# gluster peer status Number of Peers: 2 Hostname: ossuary-san Uuid: 0ecbf953-681b-448f-9746-d1c1fe7a0978 State: Peer in Cluster (Connected) Other names: 10.50.3.12 Hostname: necropolis-san Uuid: 5d082bda-bb00-48d4-9f51-ea0995066c6f State: Peer in Cluster (Connected) Other names: 10.50.3.10 There's a 'gluster vol status gvOvirt' from the time there were multiple fsd processes running in the original ticket. At the moment, everything is normal, so I can't get you another while unusual things are happening. At the moment, it looks like: [root at boneyard telsin]# gluster vol status Status of volume: gv0 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick necropolis-san:/v0/bricks/gv0 49154 0 Y 10425 Brick boneyard-san:/v0/bricks/gv0 49152 0 Y 8504 Brick ossuary-san:/v0/bricks/gv0 49152 0 Y 13563 Self-heal Daemon on localhost N/A N/A Y 22864 Self-heal Daemon on ossuary-san N/A N/A Y 5815 Self-heal Daemon on necropolis-san N/A N/A Y 13859 Task Status of Volume gv0 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: gvOvirt Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick boneyard-san:/v0/gbOvirt/b0 49153 0 Y 9108 Brick necropolis-san:/v0/gbOvirt/b0 49155 0 Y 10510 Brick ossuary-san:/v0/gbOvirt/b0 49153 0 Y 13577 Self-heal Daemon on localhost N/A N/A Y 22864 Self-heal Daemon on ossuary-san N/A N/A Y 5815 Self-heal Daemon on necropolis-san N/A N/A Y 13859 Task Status of Volume gvOvirt ------------------------------------------------------------------------------ There are no active volume tasks Also of note, it appears to have corrupted my Ovirt Hosted Engine VM. Full logs are attached, hope it helps! Sorry about some of the large files, for some reason this system wasn't rotating them properly until I did some cleanup. 
I can take this cluster to 6.1 as soon as it appears in testing, or leave it a bit longer and try restarting some volumes or rebooting to see if I can recreate if it would help? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Apr 13 02:03:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 13 Apr 2019 02:03:10 +0000 Subject: [Bugs] [Bug 1698131] multiple glusterfsd processes being launched for the same brick, causing transport endpoint not connected In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698131 --- Comment #4 from Darrell --- Logs were to big to attach, find them here: https://tower.ohgnetworks.com/index.php/s/UCj5amzjQdQsE5C -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Apr 13 08:25:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 13 Apr 2019 08:25:56 +0000 Subject: [Bugs] [Bug 1699339] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Bug 1699339 depends on bug 1652461, which changed state. Bug 1652461 Summary: With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up https://bugzilla.redhat.com/show_bug.cgi?id=1652461 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|CANTFIX |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Apr 14 08:53:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 14 Apr 2019 08:53:25 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #623 from Worker Ant --- REVIEW: https://review.gluster.org/22541 (glusterd-volgen.c: skip fetching some vol settings in a bricks loop.) merged (#6) on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 14 13:58:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 14 Apr 2019 13:58:14 +0000 Subject: [Bugs] [Bug 1699025] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699025 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-14 13:58:14 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22549 (core: Brick is not able to detach successfully in brick_mux environment) merged (#4) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Apr 14 13:58:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 14 Apr 2019 13:58:14 +0000 Subject: [Bugs] [Bug 1699023] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699023 Bug 1699023 depends on bug 1699025, which changed state. 
Bug 1699025 Summary: Brick is not able to detach successfully in brick_mux environment https://bugzilla.redhat.com/show_bug.cgi?id=1699025 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 02:19:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 02:19:31 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22456 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 02:19:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 02:19:32 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #17 from Worker Ant --- REVIEW: https://review.gluster.org/22456 (marker-quota: remove dead code) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 02:24:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 02:24:17 +0000 Subject: [Bugs] [Bug 1698078] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698078 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-15 02:24:17 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22540 (posix/ctime: Fix stat(time attributes) inconsistency during readdirp) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 02:26:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 02:26:00 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #624 from Worker Ant --- REVIEW: https://review.gluster.org/22415 (graph.c: remove extra gettimeofday() - reuse the graph dob.) merged (#11) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 15 03:24:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 03:24:48 +0000 Subject: [Bugs] [Bug 1699703] New: ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699703 Bug ID: 1699703 Summary: ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" Product: GlusterFS Version: 6 Status: NEW Component: ctime Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Depends On: 1698078 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1698078 +++ Description of problem: On latest master where ctime feature is enabled by default, creation of tar file throws warning that 'file changed as we read it' Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Create a replica 1*3 gluster volume and mount it. 2. Untar a file onto gluster mount #tar xvf ~/linux-5.0.6.tar.xz -C /gluster-mnt/ 3. Create tar file from untarred files #mkdir /gluster-mnt/test-untar/ #cd /gluster-mnt #tar -cvf ./test-untar/linux.tar ./linux-5.0.6 Actual results: Creation of tar file from gluster mount throws warning 'file changed as we read it' Expected results: Creation of tar file from gluster mount should not throw any warning. Additional info: Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698078 [Bug 1698078] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 03:24:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 03:24:48 +0000 Subject: [Bugs] [Bug 1698078] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698078 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699703 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699703 [Bug 1699703] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 03:25:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 03:25:19 +0000 Subject: [Bugs] [Bug 1699703] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699703 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
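Some context on why the untar-then-tar sequence above trips the warning: tar stats each file before archiving it and compares that with another stat when it finishes reading; if the time attributes served during readdirp differ from the ones served later, the comparison fails and tar reports the file as changed. A simplified stand-in for that check (plain Python, not tar's implementation):

    import os

    def looks_changed_while_reading(path):
        before = os.stat(path)
        with open(path, "rb") as f:
            f.read()                     # stand-in for archiving the data
            after = os.fstat(f.fileno())
        # tar's warning fires when what it sees afterwards no longer matches
        # what it started with (ctime and size being the interesting fields)
        return (before.st_ctime_ns, before.st_size) != (after.st_ctime_ns, after.st_size)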
From bugzilla at redhat.com Mon Apr 15 03:28:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 03:28:50 +0000 Subject: [Bugs] [Bug 1699703] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699703 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22561 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 03:28:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 03:28:51 +0000 Subject: [Bugs] [Bug 1699703] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699703 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22561 (posix/ctime: Fix stat(time attributes) inconsistency during readdirp) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 03:55:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 03:55:07 +0000 Subject: [Bugs] [Bug 1699709] New: ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699709 Bug ID: 1699709 Summary: ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: posix Assignee: rabhat at redhat.com Reporter: khiremat at redhat.com QA Contact: rhinduja at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1698078 Blocks: 1699703 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1698078 +++ Description of problem: On latest master where ctime feature is enabled by default, creation of tar file throws warning that 'file changed as we read it' Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Create a replica 1*3 gluster volume and mount it. 2. Untar a file onto gluster mount #tar xvf ~/linux-5.0.6.tar.xz -C /gluster-mnt/ 3. Create tar file from untarred files #mkdir /gluster-mnt/test-untar/ #cd /gluster-mnt #tar -cvf ./test-untar/linux.tar ./linux-5.0.6 Actual results: Creation of tar file from gluster mount throws warning 'file changed as we read it' Expected results: Creation of tar file from gluster mount should not throw any warning. Additional info: Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698078 [Bug 1698078] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" https://bugzilla.redhat.com/show_bug.cgi?id=1699703 [Bug 1699703] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 15 03:55:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 03:55:07 +0000 Subject: [Bugs] [Bug 1698078] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698078 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699709 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699709 [Bug 1699709] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 03:55:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 03:55:07 +0000 Subject: [Bugs] [Bug 1699703] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699703 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1699709 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699709 [Bug 1699709] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 03:56:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 03:56:27 +0000 Subject: [Bugs] [Bug 1699709] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699709 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|rabhat at redhat.com |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 04:11:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:11:40 +0000 Subject: [Bugs] [Bug 1699712] New: regression job is voting Success even in case of failure Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699712 Bug ID: 1699712 Summary: regression job is voting Success even in case of failure Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Severity: urgent Priority: urgent Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Check : https://build.gluster.org/job/centos7-regression/5596/consoleFull ---- 09:15:07 1 test(s) failed 09:15:07 ./tests/basic/uss.t 09:15:07 09:15:07 0 test(s) generated core 09:15:07 09:15:07 09:15:07 2 test(s) needed retry 09:15:07 ./tests/basic/quick-read-with-upcall.t 09:15:07 ./tests/basic/uss.t 09:15:07 09:15:07 Result is 124 09:15:07 09:15:07 tar: Removing leading `/' from member names 09:15:10 kernel.core_pattern = /%e-%p.core 09:15:10 + RET=0 09:15:10 + '[' 0 = 0 ']' 09:15:10 + V=+1 09:15:10 + VERDICT=SUCCESS ---- Version-Release number of selected component (if applicable): latest master -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
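The console excerpt shows the mismatch directly: the test stage ends with "Result is 124" and a failed uss.t, yet the wrapper continues with RET=0 and VERDICT=SUCCESS, so the inner status is never propagated. The job itself is a Jenkins shell step; the snippet below is only a generic illustration of the missing propagation (the script name is made up):

    import subprocess
    import sys

    result = subprocess.run(["./run-tests.sh"])    # hypothetical test wrapper
    print("Result is", result.returncode)
    # vote SUCCESS only when the test run itself returned 0
    sys.exit(result.returncode)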
From bugzilla at redhat.com Mon Apr 15 04:27:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:27:38 +0000 Subject: [Bugs] [Bug 1699713] New: glusterfs build is failing on rhel-6 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699713 Bug ID: 1699713 Summary: glusterfs build is failing on rhel-6 Product: GlusterFS Version: 6 Status: NEW Component: build Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Depends On: 1696512 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1696512 +++ Description of problem: glusterfs build is failing on RHEL 6. Version-Release number of selected component (if applicable): How reproducible: Run make for glusterfs on RHEL-6 make us throwing below error .libs/glusterd_la-glusterd-utils.o: In function `glusterd_get_volopt_content': /root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:13333: undefined reference to `dlclose' .libs/glusterd_la-glusterd-utils.o: In function `glusterd_get_value_for_vme_entry': /root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:12890: undefined reference to `dlclose' .libs/glusterd_la-glusterd-volgen.o: In function `_gd_get_option_type': /root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-volgen.c:6902: undefined reference to `dlclose' .libs/glusterd_la-glusterd-quota.o: In function `_glusterd_validate_quota_opts': /root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-quota.c:1947: undefined reference to `dlclose' collect2: ld returned 1 exit status Steps to Reproduce: 1. 2. 3. Actual results: glusterfs build is failing Expected results: the build should not fail Additional info: --- Additional comment from Worker Ant on 2019-04-05 03:52:10 UTC --- REVIEW: https://review.gluster.org/22510 (build: glusterfs build is failing on RHEL-6) posted (#1) for review on master by MOHIT AGRAWAL --- Additional comment from Worker Ant on 2019-04-10 03:27:41 UTC --- REVIEW: https://review.gluster.org/22510 (build: glusterfs build is failing on RHEL-6) merged (#3) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696512 [Bug 1696512] glusterfs build is failing on rhel-6 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 04:27:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:27:38 +0000 Subject: [Bugs] [Bug 1696512] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696512 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699713 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699713 [Bug 1699713] glusterfs build is failing on rhel-6 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 15 04:27:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:27:51 +0000 Subject: [Bugs] [Bug 1699713] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699713 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 04:29:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:29:36 +0000 Subject: [Bugs] [Bug 1699713] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699713 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22562 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 04:29:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:29:37 +0000 Subject: [Bugs] [Bug 1699713] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699713 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22562 (build: glusterfs build is failing on RHEL-6) posted (#1) for review on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 04:30:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:30:51 +0000 Subject: [Bugs] [Bug 1699714] New: Brick is not able to detach successfully in brick_mux environment Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699714 Bug ID: 1699714 Summary: Brick is not able to detach successfully in brick_mux environment Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: urgent Priority: urgent Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1698919 Blocks: 1699023, 1699025 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1698919 +++ Description of problem: Brick is not detached successfully while brick_mux is enabled. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Setup 3 node cluster environment 2. Enable brick_mux 3. Run below loop to setup 50 volumes for i in {1..50}; do gluster v create testvol$i replica 3 :/home/testvol/b$i :/home/testvol/b$i :/home/testvol/b$i force; gluster v start testvol$i;done 4. Run below loop to stop volume for i in {2..50}; do gluster v stop testvol$i --mode=script; sleep 1; ;done 5. After run above loop check brick in running process ls -lrth /proc/`pgrep glusterfsd`/fd | grep b | grep -v .glusterfs the command is showing multiple bricks are still part of the running process. Actual results: Bricks are not detached successfully. Expected results: Bricks should be detached successfully. 
Additional info: --- Additional comment from RHEL Product and Program Management on 2019-04-11 12:42:35 UTC --- This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs?3.5.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698919 [Bug 1698919] Brick is not able to detach successfully in brick_mux environment https://bugzilla.redhat.com/show_bug.cgi?id=1699023 [Bug 1699023] Brick is not able to detach successfully in brick_mux environment https://bugzilla.redhat.com/show_bug.cgi?id=1699025 [Bug 1699025] Brick is not able to detach successfully in brick_mux environment -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 04:30:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:30:51 +0000 Subject: [Bugs] [Bug 1699023] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699023 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1699714 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699714 [Bug 1699714] Brick is not able to detach successfully in brick_mux environment -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 04:30:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:30:51 +0000 Subject: [Bugs] [Bug 1699025] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699025 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1699714 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699714 [Bug 1699714] Brick is not able to detach successfully in brick_mux environment -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 04:30:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:30:51 +0000 Subject: [Bugs] [Bug 1699714] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699714 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
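Step 5 of the reproducer for bug 1699714 checks by hand which brick fds a multiplexed glusterfsd still holds. The same check scripted (an illustrative helper, not part of gluster; the '/testvol/' filter matches the brick paths used in the reproducer):

    import os
    import subprocess

    def bricks_still_held():
        held = set()
        for pid in subprocess.check_output(["pgrep", "glusterfsd"]).split():
            fd_dir = "/proc/%s/fd" % pid.decode()
            for fd in os.listdir(fd_dir):
                try:
                    target = os.readlink(os.path.join(fd_dir, fd))
                except OSError:        # fd vanished between listdir and readlink
                    continue
                if "/testvol/" in target and ".glusterfs" not in target:
                    held.add(target)
        return sorted(held)

    # after stopping testvol2..testvol50 this list should shrink accordingly
    print("\n".join(bricks_still_held()))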
From bugzilla at redhat.com Mon Apr 15 04:31:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:31:17 +0000 Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696046 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-15 04:31:17 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22495 (core: Log level changes do not effect on running client process) merged (#10) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 04:31:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:31:18 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1696046, which changed state. Bug 1696046 Summary: Log level changes do not take effect until the process is restarted https://bugzilla.redhat.com/show_bug.cgi?id=1696046 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 04:33:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:33:20 +0000 Subject: [Bugs] [Bug 1699714] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699714 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22563 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 04:33:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:33:21 +0000 Subject: [Bugs] [Bug 1699714] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699714 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22563 (core: Brick is not able to detach successfully in brick_mux environment) posted (#1) for review on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 15 04:36:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:36:05 +0000 Subject: [Bugs] [Bug 1699715] New: Log level changes do not take effect until the process is restarted Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699715 Bug ID: 1699715 Summary: Log level changes do not take effect until the process is restarted Product: GlusterFS Version: 6 Status: NEW Component: core Severity: high Priority: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, nbalacha at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, vbellur at redhat.com Depends On: 1695081 Blocks: 1696046 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1695081 [Bug 1695081] Log level changes do not take effect until the process is restarted https://bugzilla.redhat.com/show_bug.cgi?id=1696046 [Bug 1696046] Log level changes do not take effect until the process is restarted -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 04:36:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 04:36:05 +0000 Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696046 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1699715 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699715 [Bug 1699715] Log level changes do not take effect until the process is restarted -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 05:08:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 05:08:53 +0000 Subject: [Bugs] [Bug 1699715] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699715 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22564 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 05:08:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 05:08:54 +0000 Subject: [Bugs] [Bug 1699715] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699715 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22564 (core: Log level changes do not effect on running client process) posted (#2) for review on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
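For readers unfamiliar with the log-level bug being cloned here: the expectation is that a log-level change reaches an already running client process without a restart. The sketch below is a generic illustration of that behaviour in plain Python logging, not gluster's volume-reconfigure path:

    import logging
    import signal

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("demo")

    def raise_verbosity(signum, frame):
        # stand-in for the reconfigure event that should reach a live client
        log.setLevel(logging.DEBUG)
        log.debug("debug logging enabled without restarting the process")

    signal.signal(signal.SIGHUP, raise_verbosity)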
From bugzilla at redhat.com Mon Apr 15 05:09:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 05:09:33 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1471742 Depends On|1471742 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1471742 [Bug 1471742] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 05:12:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 05:12:09 +0000 Subject: [Bugs] [Bug 1699715] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699715 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22564 (core: Log level changes do not effect on running client process) posted (#4) for review on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 05:16:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 05:16:15 +0000 Subject: [Bugs] [Bug 1699712] regression job is voting Success even in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699712 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com --- Comment #1 from Deepshikha khandelwal --- Thank you Amar for pointing this out. It turned out to behave like this because of the changes in config we made last Friday evening. Now that it is fixed, I re-triggered on three of the impacted patches. Sorry for this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 05:27:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 05:27:02 +0000 Subject: [Bugs] [Bug 1699339] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1652465 |1652461 Depends On|1652461 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1652461 [Bug 1652461] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up https://bugzilla.redhat.com/show_bug.cgi?id=1652465 [Bug 1652465] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 15 05:30:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 05:30:59 +0000 Subject: [Bugs] [Bug 1694637] Geo-rep: Rename to an existing file name destroys its content on slave In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694637 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED CC| |sunkumar at redhat.com QA Contact| |sunkumar at redhat.com --- Comment #1 from Sunny Kumar --- We are working on this issue and a bug is already in place for mainline which can be tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 05:32:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 05:32:23 +0000 Subject: [Bugs] [Bug 1694637] Geo-rep: Rename to an existing file name destroys its content on slave In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694637 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |sunkumar at redhat.com QA Contact|sunkumar at redhat.com | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 05:47:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 05:47:21 +0000 Subject: [Bugs] [Bug 1696633] GlusterFs v4.1.5 Tests from /tests/bugs/ module failing on Intel In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696633 --- Comment #1 from Cnaik --- Hi Team, Is there any update on these tests failing on Intel? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:03:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:03:00 +0000 Subject: [Bugs] [Bug 1696599] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696599 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-15 06:03:00 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) merged (#6) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:06:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:06:43 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22514 -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 15 06:06:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:06:44 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1604 from Worker Ant --- REVIEW: https://review.gluster.org/22514 (shd/mux: Fix coverity issues introduced by shd mux patch) merged (#7) on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:10:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:10:56 +0000 Subject: [Bugs] [Bug 1699731] New: Fops hang when inodelk fails on the first fop Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699731 Bug ID: 1699731 Summary: Fops hang when inodelk fails on the first fop Product: GlusterFS Version: 6 Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org Depends On: 1696599 Blocks: 1688395 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1696599 +++ Description of problem: Steps: glusterd gluster peer probe localhost.localdomain peer probe: success. Probe on localhost not needed gluster --mode=script --wignore volume create r3 replica 3 localhost.localdomain:/home/gfs/r3_0 localhost.localdomain:/home/gfs/r3_1 localhost.localdomain:/home/gfs/r3_2 volume create: r3: success: please start the volume to access data gluster --mode=script volume start r3 volume start: r3: success mkdir: cannot create directory ?/mnt/r3?: File exists mount -t glusterfs localhost.localdomain:/r3 /mnt/r3 First terminal: # cd /mnt/r3 # touch abc Attach the mount process in gdb and put a break point on function afr_lock() >From second terminal: # exec 200>abc # echo abc >&200 # When the break point is hit, on third terminal execute "gluster volume stop r3" # quit gdb # execute "gluster volume start r3 force" # On the first terminal execute "exec abc >&200" again and this command hangs. Version-Release number of selected component (if applicable): How reproducible: Always Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-04-05 08:37:54 UTC --- REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) posted (#1) for review on master by Pranith Kumar Karampuri --- Additional comment from Worker Ant on 2019-04-15 06:03:00 UTC --- REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) merged (#6) on master by Pranith Kumar Karampuri Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696599 [Bug 1696599] Fops hang when inodelk fails on the first fop -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
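The patch that closed the original bug is titled "Remove local from owners_list on failure of lock-acquisition", which suggests the hang mechanism: a request that failed to take the inodelk stays registered as an owner, so the fops issued after the force start queue behind it indefinitely. A language-neutral sketch of that pattern (not AFR code; the names are made up):

    class LockDomain:
        def __init__(self):
            self.owners = []       # requests believed to hold or be taking the lock
            self.waiting = []      # fops parked until the owners drain

        def acquire(self, local, lock_granted):
            self.owners.append(local)
            if not lock_granted:
                # without this removal the failed request remains an "owner"
                # forever and everything in self.waiting hangs behind it
                self.owners.remove(local)
                return False
            return True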
From bugzilla at redhat.com Mon Apr 15 06:10:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:10:56 +0000 Subject: [Bugs] [Bug 1696599] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696599 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699731 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699731 [Bug 1699731] Fops hang when inodelk fails on the first fop -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:20:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:20:05 +0000 Subject: [Bugs] [Bug 1699731] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699731 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22565 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:21:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:21:24 +0000 Subject: [Bugs] [Bug 1699736] New: Fops hang when inodelk fails on the first fop Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699736 Bug ID: 1699736 Summary: Fops hang when inodelk fails on the first fop Product: GlusterFS Version: 5 Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org Depends On: 1699731, 1696599 Blocks: 1688395 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1699731 +++ +++ This bug was initially created as a clone of Bug #1696599 +++ Description of problem: Steps: glusterd gluster peer probe localhost.localdomain peer probe: success. Probe on localhost not needed gluster --mode=script --wignore volume create r3 replica 3 localhost.localdomain:/home/gfs/r3_0 localhost.localdomain:/home/gfs/r3_1 localhost.localdomain:/home/gfs/r3_2 volume create: r3: success: please start the volume to access data gluster --mode=script volume start r3 volume start: r3: success mkdir: cannot create directory ?/mnt/r3?: File exists mount -t glusterfs localhost.localdomain:/r3 /mnt/r3 First terminal: # cd /mnt/r3 # touch abc Attach the mount process in gdb and put a break point on function afr_lock() >From second terminal: # exec 200>abc # echo abc >&200 # When the break point is hit, on third terminal execute "gluster volume stop r3" # quit gdb # execute "gluster volume start r3 force" # On the first terminal execute "exec abc >&200" again and this command hangs. 
Version-Release number of selected component (if applicable): How reproducible: Always Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-04-05 08:37:54 UTC --- REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) posted (#1) for review on master by Pranith Kumar Karampuri --- Additional comment from Worker Ant on 2019-04-15 06:03:00 UTC --- REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) merged (#6) on master by Pranith Kumar Karampuri --- Additional comment from Worker Ant on 2019-04-15 06:20:05 UTC --- REVIEW: https://review.gluster.org/22565 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) posted (#1) for review on release-6 by Pranith Kumar Karampuri Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696599 [Bug 1696599] Fops hang when inodelk fails on the first fop https://bugzilla.redhat.com/show_bug.cgi?id=1699731 [Bug 1699731] Fops hang when inodelk fails on the first fop -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:21:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:21:24 +0000 Subject: [Bugs] [Bug 1699731] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699731 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699736 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699736 [Bug 1699736] Fops hang when inodelk fails on the first fop -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:21:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:21:24 +0000 Subject: [Bugs] [Bug 1696599] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696599 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699736 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699736 [Bug 1699736] Fops hang when inodelk fails on the first fop -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:31:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:31:32 +0000 Subject: [Bugs] [Bug 1691617] clang-scan tests are failing nightly. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691617 --- Comment #2 from Amar Tumballi --- I guess it is fine to depend on f29 or f30. I know there are some warnings which we need to fix. But those are good to fix anyways. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:33:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:33:13 +0000 Subject: [Bugs] [Bug 1691617] clang-scan tests are failing nightly. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691617 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME Last Closed| |2019-04-15 06:33:13 --- Comment #3 from Amar Tumballi --- Jobs are now running! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:43:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:43:25 +0000 Subject: [Bugs] [Bug 1699736] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699736 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22567 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:43:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:43:26 +0000 Subject: [Bugs] [Bug 1699736] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699736 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22567 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) posted (#1) for review on release-5 by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 06:48:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 06:48:14 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 --- Comment #3 from manschwetus at cs-software-gmbh.de --- I just checked another corefile from a second system, same stack, that issue gets quite annoying. Could some pls look into it. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 08:47:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 08:47:52 +0000 Subject: [Bugs] [Bug 1696721] geo-replication failing after upgrade from 5.5 to 6.0 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696721 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sunkumar at redhat.com --- Comment #1 from Sunny Kumar --- Hi, Can you please check all brick status, it looks like brick/s is/are not up. Please do a force gluster vol start it should work. If it does not please share all logs form master and slave volumes. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
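The suggestion above can be scripted as a quick check before restarting geo-replication; the volume name and the crude 'Online' column parsing below are assumptions, only the two gluster CLI calls themselves are standard:

    import subprocess

    volume = "mastervol"   # hypothetical master volume name

    status = subprocess.run(["gluster", "volume", "status", volume],
                            capture_output=True, text=True, check=True).stdout
    print(status)

    # a brick line whose Online column shows 'N' means that brick is down
    offline = [l for l in status.splitlines() if l.startswith("Brick") and " N " in l]
    if offline:
        subprocess.run(["gluster", "volume", "start", volume, "force"], check=True)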
From bugzilla at redhat.com Mon Apr 15 08:49:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 08:49:01 +0000 Subject: [Bugs] [Bug 1696721] geo-replication failing after upgrade from 5.5 to 6.0 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696721 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |sunkumar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 11:28:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 11:28:25 +0000 Subject: [Bugs] [Bug 1686461] Quotad.log filled with 0-dict is not sent on wire [Invalid argument] messages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686461 Frank R?hlemann changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ruehlemann at itsc.uni-luebeck | |.de --- Comment #1 from Frank R?hlemann --- We experience the same problem: less /var/log/quotad.log ? [2019-04-14 04:26:40.249481] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument] [2019-04-14 04:26:40.249513] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument] The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 80088 times between [2019-04-14 04:26:40.249481] and [2019-04-14 04:28:40.120911] The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 80088 times between [2019-04-14 04:26:40.249513] and [2019-04-14 04:28:40.120912] [2019-04-14 04:28:40.122513] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument] [2019-04-14 04:28:40.122545] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument] The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 37540 times between [2019-04-14 04:28:40.122513] and [2019-04-14 04:30:39.718206] The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 37540 times between [2019-04-14 04:28:40.122545] and [2019-04-14 04:30:39.718207] [2019-04-14 04:30:42.000322] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument] [2019-04-14 04:30:42.000350] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument] The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 49409 times between [2019-04-14 04:30:42.000322] and [2019-04-14 04:32:40.046852] The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 49409 times between [2019-04-14 04:30:42.000350] and [2019-04-14 04:32:40.046853] [2019-04-14 04:32:40.345351] W [MSGID: 101016] 
[glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument] [2019-04-14 04:32:40.345382] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument] The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 29429 times between [2019-04-14 04:32:40.345351] and [2019-04-14 04:34:40.098453] The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 29429 times between [2019-04-14 04:32:40.345382] and [2019-04-14 04:34:40.098454] ? And more in every second. We updated from Gluster 3.12.14 to 4.1.7 at March 21st and that's the first appearance of these log mesages: zless quotad.4.log ? [2019-03-21 15:28:45.195091] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument] [2019-03-21 15:28:45.195100] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument] [2019-03-21 15:28:45.195103] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument] [2019-03-21 15:28:45.195155] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument] [2019-03-21 15:28:45.195268] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument] [2019-03-21 15:28:45.195305] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument] ? Now some typical information about our system. 
# cat /etc/os-release PRETTY_NAME="Debian GNU/Linux 9 (stretch)" NAME="Debian GNU/Linux" VERSION_ID="9" VERSION="9 (stretch)" ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" # gluster --version glusterfs 4.1.7 # gluster volume info $VOLUME Volume Name: $VOLUME Type: Distributed-Disperse Volume ID: 0a64f278-f432-4793-8188-346557a4e146 Status: Started Snapshot Count: 0 Number of Bricks: 15 x (4 + 2) = 90 Transport-type: tcp Bricks: Brick1: gluster01.:/srv/glusterfs/bricks/$VOLUME0100/data Brick2: gluster01.:/srv/glusterfs/bricks/$VOLUME0101/data Brick3: gluster02.:/srv/glusterfs/bricks/$VOLUME0200/data Brick4: gluster02.:/srv/glusterfs/bricks/$VOLUME0201/data Brick5: gluster05.:/srv/glusterfs/bricks/$VOLUME0500/data Brick6: gluster05.:/srv/glusterfs/bricks/$VOLUME0501/data Brick7: gluster01.:/srv/glusterfs/bricks/$VOLUME0102/data Brick8: gluster01.:/srv/glusterfs/bricks/$VOLUME0103/data Brick9: gluster02.:/srv/glusterfs/bricks/$VOLUME0202_new/data Brick10: gluster02.:/srv/glusterfs/bricks/$VOLUME0203/data Brick11: gluster05.:/srv/glusterfs/bricks/$VOLUME0502/data Brick12: gluster05.:/srv/glusterfs/bricks/$VOLUME0503/data Brick13: gluster01.:/srv/glusterfs/bricks/$VOLUME0104/data Brick14: gluster01.:/srv/glusterfs/bricks/$VOLUME0105/data Brick15: gluster02.:/srv/glusterfs/bricks/$VOLUME0204/data Brick16: gluster02.:/srv/glusterfs/bricks/$VOLUME0205/data Brick17: gluster05.:/srv/glusterfs/bricks/$VOLUME0504/data Brick18: gluster05.:/srv/glusterfs/bricks/$VOLUME0505/data Brick19: gluster01.:/srv/glusterfs/bricks/$VOLUME0106/data Brick20: gluster01.:/srv/glusterfs/bricks/$VOLUME0107/data Brick21: gluster02.:/srv/glusterfs/bricks/$VOLUME0206/data Brick22: gluster02.:/srv/glusterfs/bricks/$VOLUME0207/data Brick23: gluster05.:/srv/glusterfs/bricks/$VOLUME0506/data Brick24: gluster05.:/srv/glusterfs/bricks/$VOLUME0507/data Brick25: gluster01.:/srv/glusterfs/bricks/$VOLUME0108/data Brick26: gluster01.:/srv/glusterfs/bricks/$VOLUME0109/data Brick27: gluster02.:/srv/glusterfs/bricks/$VOLUME0208/data Brick28: gluster02.:/srv/glusterfs/bricks/$VOLUME0209/data Brick29: gluster05.:/srv/glusterfs/bricks/$VOLUME0508/data Brick30: gluster05.:/srv/glusterfs/bricks/$VOLUME0509/data Brick31: gluster01.:/srv/glusterfs/bricks/$VOLUME0110/data Brick32: gluster01.:/srv/glusterfs/bricks/$VOLUME0111/data Brick33: gluster02.:/srv/glusterfs/bricks/$VOLUME0210/data Brick34: gluster02.:/srv/glusterfs/bricks/$VOLUME0211/data Brick35: gluster05.:/srv/glusterfs/bricks/$VOLUME0510/data Brick36: gluster05.:/srv/glusterfs/bricks/$VOLUME0511/data Brick37: gluster06.:/srv/glusterfs/bricks/$VOLUME0600/data Brick38: gluster06.:/srv/glusterfs/bricks/$VOLUME0601/data Brick39: gluster07.:/srv/glusterfs/bricks/$VOLUME0700/data Brick40: gluster07.:/srv/glusterfs/bricks/$VOLUME0701/data Brick41: gluster08.:/srv/glusterfs/bricks/$VOLUME0800/data Brick42: gluster08.:/srv/glusterfs/bricks/$VOLUME0801/data Brick43: gluster06.:/srv/glusterfs/bricks/$VOLUME0602/data Brick44: gluster06.:/srv/glusterfs/bricks/$VOLUME0603/data Brick45: gluster07.:/srv/glusterfs/bricks/$VOLUME0702/data Brick46: gluster07.:/srv/glusterfs/bricks/$VOLUME0703/data Brick47: gluster08.:/srv/glusterfs/bricks/$VOLUME0802/data Brick48: gluster08.:/srv/glusterfs/bricks/$VOLUME0803/data Brick49: gluster06.:/srv/glusterfs/bricks/$VOLUME0604/data Brick50: gluster06.:/srv/glusterfs/bricks/$VOLUME0605/data Brick51: gluster07.:/srv/glusterfs/bricks/$VOLUME0704/data Brick52: 
gluster07.:/srv/glusterfs/bricks/$VOLUME0705/data Brick53: gluster08.:/srv/glusterfs/bricks/$VOLUME0804/data Brick54: gluster08.:/srv/glusterfs/bricks/$VOLUME0805/data Brick55: gluster06.:/srv/glusterfs/bricks/$VOLUME0606/data Brick56: gluster06.:/srv/glusterfs/bricks/$VOLUME0607/data Brick57: gluster07.:/srv/glusterfs/bricks/$VOLUME0706/data Brick58: gluster07.:/srv/glusterfs/bricks/$VOLUME0707/data Brick59: gluster08.:/srv/glusterfs/bricks/$VOLUME0806/data Brick60: gluster08.:/srv/glusterfs/bricks/$VOLUME0807/data Brick61: gluster06.:/srv/glusterfs/bricks/$VOLUME0608/data Brick62: gluster06.:/srv/glusterfs/bricks/$VOLUME0609/data Brick63: gluster07.:/srv/glusterfs/bricks/$VOLUME0708/data Brick64: gluster07.:/srv/glusterfs/bricks/$VOLUME0709/data Brick65: gluster08.:/srv/glusterfs/bricks/$VOLUME0808/data Brick66: gluster08.:/srv/glusterfs/bricks/$VOLUME0809/data Brick67: gluster06.:/srv/glusterfs/bricks/$VOLUME0610/data Brick68: gluster06.:/srv/glusterfs/bricks/$VOLUME0611/data Brick69: gluster07.:/srv/glusterfs/bricks/$VOLUME0710/data Brick70: gluster07.:/srv/glusterfs/bricks/$VOLUME0711/data Brick71: gluster08.:/srv/glusterfs/bricks/$VOLUME0810/data Brick72: gluster08.:/srv/glusterfs/bricks/$VOLUME0811/data Brick73: gluster01.:/srv/glusterfs/bricks/$VOLUME0112/data Brick74: gluster01.:/srv/glusterfs/bricks/$VOLUME0113/data Brick75: gluster02.:/srv/glusterfs/bricks/$VOLUME0212/data Brick76: gluster02.:/srv/glusterfs/bricks/$VOLUME0213/data Brick77: gluster05.:/srv/glusterfs/bricks/$VOLUME0512/data Brick78: gluster05.:/srv/glusterfs/bricks/$VOLUME0513/data Brick79: gluster01.:/srv/glusterfs/bricks/$VOLUME0114/data Brick80: gluster01.:/srv/glusterfs/bricks/$VOLUME0115/data Brick81: gluster02.:/srv/glusterfs/bricks/$VOLUME0214/data Brick82: gluster02.:/srv/glusterfs/bricks/$VOLUME0215/data Brick83: gluster05.:/srv/glusterfs/bricks/$VOLUME0514/data Brick84: gluster05.:/srv/glusterfs/bricks/$VOLUME0515/data Brick85: gluster01.:/srv/glusterfs/bricks/$VOLUME0116/data Brick86: gluster01.:/srv/glusterfs/bricks/$VOLUME0117/data Brick87: gluster02.:/srv/glusterfs/bricks/$VOLUME0216/data Brick88: gluster02.:/srv/glusterfs/bricks/$VOLUME0217/data Brick89: gluster05.:/srv/glusterfs/bricks/$VOLUME0516/data Brick90: gluster05.:/srv/glusterfs/bricks/$VOLUME0517/data Options Reconfigured: performance.md-cache-timeout: 5 performance.rda-cache-limit: 100MB performance.parallel-readdir: off performance.cache-refresh-timeout: 1 features.quota-deem-statfs: on features.inode-quota: on features.quota: on auth.allow: $IPSPACE transport.address-family: inet nfs.disable: on -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 11:31:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 11:31:23 +0000 Subject: [Bugs] [Bug 1691357] core archive link from regression jobs throw not found error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691357 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com --- Comment #2 from Deepshikha khandelwal --- It is fixed now. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
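Going back to bug 1686461 (the quotad.log flooding reported a little earlier in this digest), a quick way to quantify the flooding is to count the offending messages per dict key, for example:

  grep -o "key '[^']*' is not sent on wire" /var/log/quotad.log | sort | uniq -c

Adjust the path if quotad logs to the more common /var/log/glusterfs/quotad.log on your systems. This only measures the symptom; it does not change the logging behaviour.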
From bugzilla at redhat.com Mon Apr 15 11:31:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 11:31:44 +0000 Subject: [Bugs] [Bug 1691357] core archive link from regression jobs throw not found error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691357 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-04-15 11:31:44 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 11:33:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 11:33:35 +0000 Subject: [Bugs] [Bug 1693295] rpc.statd not started on builder204.aws.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693295 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |dkhandel at redhat.com Resolution|--- |DUPLICATE Last Closed| |2019-04-15 11:33:35 --- Comment #2 from Deepshikha khandelwal --- *** This bug has been marked as a duplicate of bug 1691789 *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 11:33:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 11:33:35 +0000 Subject: [Bugs] [Bug 1691789] rpc-statd service stops on AWS builders In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691789 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |nbalacha at redhat.com --- Comment #1 from Deepshikha khandelwal --- *** Bug 1693295 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 12:09:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 12:09:03 +0000 Subject: [Bugs] [Bug 1699866] New: I/O error on writes to a disperse volume when replace-brick is executed Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699866 Bug ID: 1699866 Summary: I/O error on writes to a disperse volume when replace-brick is executed Product: GlusterFS Version: mainline Status: NEW Component: disperse Severity: high Assignee: bugs at gluster.org Reporter: jahernan at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: An I/O error happens when files are being created and written to a disperse volume when a replace-brick is executed. Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Create a disperse volume 2. Kill one brick 3. Open fd on a subdirectory 4. Do a replace brick of the killed brick 5. Write on the previous file Actual results: The write fails with I/O error Expected results: The write should succeed Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
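A rough shell rendering of the reproduction steps for bug 1699866 above; the host names, brick paths, volume layout (2+1 disperse) and mount point are illustrative placeholders only:

  # 1. create and mount a disperse volume
  gluster volume create disptest disperse 3 redundancy 1 host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3
  gluster volume start disptest
  mount -t glusterfs host1:/disptest /mnt/disptest

  # 2. kill one brick process (find its PID via volume status)
  gluster volume status disptest
  kill -9 <pid-of-host3-brick>

  # 3. open an fd on a file inside a subdirectory and keep it open
  mkdir -p /mnt/disptest/dir
  exec 3>/mnt/disptest/dir/file

  # 4. replace the killed brick
  gluster volume replace-brick disptest host3:/bricks/b3 host3:/bricks/b3_new commit force

  # 5. write through the still-open fd; this is where the I/O error appears
  echo data >&3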
From bugzilla at redhat.com Mon Apr 15 12:22:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 12:22:58 +0000 Subject: [Bugs] [Bug 1699866] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699866 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |jahernan at redhat.com --- Comment #1 from Xavi Hernandez --- The problem happens because a reopen is attempted on all available bricks and any error it finds is propagated to the main fop. Basically, when a write fop is sent and ec discovers that there's a brick that has come up again but doesn't have the fd open, it tries to open it. It could happen that the file was created when the brick was down and self-heal has not yet recovered it. In this case the open will fail with ENOENT. This should be ok, since the other bricks are perfectly fine to successfully process the write with enough quorum, but this error is not ignored and it's propagated to the main fop, causing it to fail even before attempting the write. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 12:24:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 12:24:51 +0000 Subject: [Bugs] [Bug 1699866] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699866 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22558 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 12:24:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 12:24:52 +0000 Subject: [Bugs] [Bug 1699866] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699866 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) posted (#2) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 12:31:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 12:31:35 +0000 Subject: [Bugs] [Bug 1699866] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699866 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1699917 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699917 [Bug 1699917] I/O error on writes to a disperse volume when replace-brick is executed -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 15 12:31:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 12:31:35 +0000 Subject: [Bugs] [Bug 1699917] New: I/O error on writes to a disperse volume when replace-brick is executed Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699917 Bug ID: 1699917 Summary: I/O error on writes to a disperse volume when replace-brick is executed Product: GlusterFS Version: 6 Status: NEW Component: disperse Severity: high Assignee: bugs at gluster.org Reporter: jahernan at redhat.com CC: bugs at gluster.org Depends On: 1699866 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1699866 +++ Description of problem: An I/O error happens when files are being created and written to a disperse volume when a replace-brick is executed. Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Create a disperse volume 2. Kill one brick 3. Open fd on a subdirectory 4. Do a replace brick of the killed brick 5. Write on the previous file Actual results: The write fails with I/O error Expected results: The write should succeed Additional info: --- Additional comment from Xavi Hernandez on 2019-04-15 14:22:58 CEST --- The problem happens because a reopen is attempted on all available bricks and any error it finds is propagated to the main fop. Basically, when a write fop is sent and ec discovers that there's a brick that has come up again but doesn't have the fd open, it tries to open it. It could happen that the file was created when the brick was down and self-heal has not yet recovered it. In this case the open will fail with ENOENT. This should be ok, since the other bricks are perfectly fine to successfully process the write with enough quorum, but this error is not ignored and it's propagated to the main fop, causing it to fail even before attempting the write. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699866 [Bug 1699866] I/O error on writes to a disperse volume when replace-brick is executed -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 12:32:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 12:32:09 +0000 Subject: [Bugs] [Bug 1699917] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699917 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |jahernan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 12:35:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 12:35:47 +0000 Subject: [Bugs] [Bug 1699917] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699917 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692394 (glusterfs-6.1) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 15 12:35:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 12:35:47 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1699917 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699917 [Bug 1699917] I/O error on writes to a disperse volume when replace-brick is executed -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 13:56:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 13:56:16 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22434 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 14:28:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 14:28:42 +0000 Subject: [Bugs] [Bug 1699394] [geo-rep]: Geo-rep goes FAULTY with OSError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699394 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-15 14:28:42 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22557 (libgfchangelog : use find_library to locate shared library) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 15 14:29:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 14:29:06 +0000 Subject: [Bugs] [Bug 1697764] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697764 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22532 (cluster/ec: Fix handling of heal info cases without locks) merged (#2) on release-6 by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 18:19:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 18:19:40 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22554 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 15 18:19:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 18:19:41 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #625 from Worker Ant --- REVIEW: https://review.gluster.org/22554 (core: handle memory accounting correctly) posted (#2) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 15 18:47:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 18:47:34 +0000 Subject: [Bugs] [Bug 1700078] New: disablle + reenable of bitrot leads to files marked as bad Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700078 Bug ID: 1700078 Summary: disablle + reenable of bitrot leads to files marked as bad Product: GlusterFS Version: mainline Status: NEW Component: bitrot Assignee: bugs at gluster.org Reporter: rabhat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Docs Contact: bugs at gluster.org Description of problem: Disabling and reenabling the bit-rot feature on a gluster volume can lead to a situation where some files are marked as bad (even though they are not corrupted). Consider a gluster volume with the bit-rot feature enabled, containing files that have already been signed with their checksum. Now, disable the feature. The files still continue to carry the version and signature extended attributes. At this stage, if some files are modified and the bit-rot feature is later reenabled, the files which were modified while the feature was off will be marked as bad by the scrubber after the feature is reenabled. This happens for the following reason. Modifying the file(s) while the feature was off did not result in the checksum being recalculated and saved as part of the signature xattr. The bit-rot daemon, whenever it is spawned (either on a restart or on a regular start because the feature was enabled), does a one-shot crawl of the entire volume, where it skips calculating the checksum of a file (and saving that checksum as part of the signature) if the file already carries those xattrs, assuming their value is correct. So when the scrubber does its job, it finds that the on-disk checksum and the calculated checksum differ, and it marks such a file as bad. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. Create a gluster volume, start it and mount it 2. Enable the bit-rot feature 3. Create a file with some data 4. Wait till the file is properly signed (it takes 2 minutes for the signature to be saved as an xattr) 5. Disable bit-rot 6. Modify the contents of the file. 7. Reenable the bit-rot feature 8. Start on-demand scrubbing. Actual results: File is marked as bad even though no corruption has happened due to bit-rot. Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug.
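The reproduction steps for bug 1700078 above translate to roughly the following commands; the volume name, brick paths and mount point are placeholders, and the 130-second sleep simply allows for the ~2-minute signing delay mentioned in the report:

  gluster volume create brvol replica 3 host1:/bricks/brvol host2:/bricks/brvol host3:/bricks/brvol
  gluster volume start brvol
  mount -t glusterfs host1:/brvol /mnt/brvol

  gluster volume bitrot brvol enable
  echo "some data" > /mnt/brvol/file1
  sleep 130

  # the version/signature xattrs can be inspected on any brick backend
  getfattr -d -m . -e hex /bricks/brvol/file1

  gluster volume bitrot brvol disable
  echo "more data" >> /mnt/brvol/file1        # modified while bit-rot is off
  gluster volume bitrot brvol enable
  gluster volume bitrot brvol scrub ondemand
  gluster volume bitrot brvol scrub status    # file1 is reported as bad despite intact data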
From bugzilla at redhat.com Mon Apr 15 18:50:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 18:50:01 +0000 Subject: [Bugs] [Bug 1700078] disablle + reenable of bitrot leads to files marked as bad In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700078 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22572 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Mon Apr 15 18:50:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 18:50:02 +0000 Subject: [Bugs] [Bug 1700078] disablle + reenable of bitrot leads to files marked as bad In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700078 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22572 (features/bit-rot-stub: clean the mutex after cancelling the signer thread) posted (#1) for review on master by Raghavendra Bhat -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Mon Apr 15 19:02:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 15 Apr 2019 19:02:48 +0000 Subject: [Bugs] [Bug 1699339] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-15 19:02:48 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22556 (glusterd: Optimize glusterd handshaking code path) merged (#10) on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 03:24:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 03:24:50 +0000 Subject: [Bugs] [Bug 1699339] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22573 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 16 03:24:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 03:24:51 +0000 Subject: [Bugs] [Bug 1699339] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22573 (glusterd: fix op-version of glusterd.vol_count_per_thread) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 05:19:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 05:19:09 +0000 Subject: [Bugs] [Bug 1696721] geo-replication failing after upgrade from 5.5 to 6.0 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696721 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(chad.cropper at genu | |splc.com) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 06:41:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 06:41:24 +0000 Subject: [Bugs] [Bug 1663519] Memory leak when smb.conf has "store dos attributes = yes" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663519 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ryan at magenta.tv) --- Comment #4 from Anoop C S --- (In reply to ryan from comment #0) > Created attachment 1518442 [details] > Python 3 script to replicate issue > > --------------------------------------------------------------------------- > Description of problem: > If glusterfs VFS is used with Samba, and the global option "store dos > attributes = yes" is set, the SMBD rss memory usage balloons. > > If a FUSE mount is used with Samba, and the global option "store dos > attributes = yes" is set, the Gluster FUSE mount process rss memory usage > balloons. How did you manage to find out its because of "store dos attributes" parameter that RSS memory is shooting up to GBs? Following is the GlusterFS volume configuration on which I tried running the attached script to reproduce the issue which I couldn't as RSS value went till ~110 MB only. 
Volume Name: vol Type: Distributed-Replicate Status: Started Snapshot Count: 0 Number of Bricks: 2 x 2 = 4 Transport-type: tcp Options Reconfigured: performance.readdir-ahead: on performance.parallel-readdir: on performance.nl-cache-timeout: 600 performance.nl-cache: on performance.cache-samba-metadata: on network.inode-lru-limit: 200000 performance.md-cache-timeout: 600 performance.cache-invalidation: on features.cache-invalidation-timeout: 600 features.cache-invalidation: on user.smb: enable diagnostics.brick-log-level: INFO performance.stat-prefetch: on transport.address-family: inet nfs.disable: on user.cifs: enable cluster.enable-shared-storage: disable smb.conf global parameters -------------------------- # Global parameters [global] clustering = Yes dns proxy = No kernel change notify = No log file = /usr/local/var/log/samba/log.%m security = USER server string = Samba Server fruit:aapl = yes idmap config * : backend = tdb include = /usr/local/etc/samba/smb-ext.conf kernel share modes = No posix locking = No Versions used ------------- Fairly recent mainline source for Samba and GlusterFS I could see couple of more volume set options from bug description which leaves us with more configurations to be tried on unless we are sure about "store dos attribute" parameter causing the high memory consumption. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 07:08:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 07:08:54 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 07:08:54 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/21058 (features/locks: error-out {inode,entry}lk fops with all-zero lk-owner) merged (#5) on master by Krutika Dhananjay -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 09:26:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 09:26:32 +0000 Subject: [Bugs] [Bug 1700295] New: The data couldn't be flushed immediately even with O_SYNC in glfs_create or with glfs_fsync/glfs_fdatasync after glfs_write. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700295 Bug ID: 1700295 Summary: The data couldn't be flushed immediately even with O_SYNC in glfs_create or with glfs_fsync/glfs_fdatasync after glfs_write. Product: GlusterFS Version: 6 Status: NEW Component: core Assignee: bugs at gluster.org Reporter: xiubli at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: In gluster-block project we had hit a case where we will sometimes get old block metadata with blockGetMetaInfo(), which will glfs_read from the block file, after updating the block metadata with GB_METAUPDATE_OR_GOTO(), which will glfs_write the block file. In GB_METAUPDATE_OR_GOTO, it basically open the metafile with the O_SYNC, write the new details and close it in-place, which should flush the data to metafile. But looks like in glusterfs-api-devel-6.0-0.4.rc1.fc29.x86_64 it will not be flushed in time. 
Version-Release number of selected component (if applicable): How reproducible: On RHEL 7, using the gluster-block/tests/basic.t script with glusterfs-api-devel-6.0-0.4.rc1.fc29.x86_64, it is very easy to reproduce; in my setups it happens almost 40% of the time. Steps to Reproduce: 1. git clone https://github.com/gluster/gluster-block.git 2. build it from source and install it 3. install the tcmu-runner package or use the upstream code 4. $ ./tests/basic.t Actual results: Delete will sometimes fail Expected results: Delete should succeed Additional info: For more detail, please see https://github.com/gluster/gluster-block/issues/204 https://github.com/gluster/gluster-block/pull/209 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 09:31:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 09:31:08 +0000 Subject: [Bugs] [Bug 1699866] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699866 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22574 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 09:31:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 09:31:09 +0000 Subject: [Bugs] [Bug 1699866] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699866 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22574 (tests: Heal should fail when read/write fails) merged (#2) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 10:49:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:49:24 +0000 Subject: [Bugs] [Bug 1692957] rpclib: slow floating point math and libm In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22493 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 10:49:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:49:25 +0000 Subject: [Bugs] [Bug 1692957] rpclib: slow floating point math and libm In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-04-03 10:04:23 |2019-04-16 10:49:25 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22493 (rpclib: slow floating point math and libm) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug.
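For bug 1700295 above, the listed reproduction steps look roughly like this in shell form; the build commands assume the usual autotools flow of the gluster-block repository (check its README for the authoritative steps), and a running glusterfs 6 cluster with a test volume is a prerequisite:

  git clone https://github.com/gluster/gluster-block.git
  cd gluster-block
  ./autogen.sh && ./configure && make -j && make install   # assumed standard build; see the project README
  yum install tcmu-runner      # or build tcmu-runner from its upstream sources
  ./tests/basic.t              # the delete step fails intermittently when stale metadata is read back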
From bugzilla at redhat.com Tue Apr 16 10:49:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:49:26 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1692957, which changed state. Bug 1692957 Summary: rpclib: slow floating point math and libm https://bugzilla.redhat.com/show_bug.cgi?id=1692957 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:49:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:49:26 +0000 Subject: [Bugs] [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 Bug 1692959 depends on bug 1692957, which changed state. Bug 1692957 Summary: rpclib: slow floating point math and libm https://bugzilla.redhat.com/show_bug.cgi?id=1692957 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:49:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:49:48 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22425 (afr: add client-pid to all gf_event() calls) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:50:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:50:12 +0000 Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694610 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 10:50:12 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22467 (glusterd: fix txn-id mem leak) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:50:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:50:36 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22482 (logging: Fix GF_LOG_OCCASSIONALLY API) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 16 10:50:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:50:59 +0000 Subject: [Bugs] [Bug 1698471] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 10:50:59 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22544 (glusterd: load ctime in the client graph only if it's not turned off) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:51:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:51:28 +0000 Subject: [Bugs] [Bug 1693223] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693223 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22429 (cluster/ec: Don't enqueue an entry if it is already healing) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:52:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:52:22 +0000 Subject: [Bugs] [Bug 1693992] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693992 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 10:52:22 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22446 (afr: thin-arbiter read txn fixes) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 10:52:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:52:22 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1693992, which changed state. Bug 1693992 Summary: Thin-arbiter minor fixes https://bugzilla.redhat.com/show_bug.cgi?id=1693992 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:53:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:53:02 +0000 Subject: [Bugs] [Bug 1699198] Glusterfs create a flock lock by anonymous fd, but can't release it forever. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699198 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 10:53:02 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22553 (protocol/client: Do not fallback to anon-fd if fd is not open) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:53:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:53:45 +0000 Subject: [Bugs] [Bug 1699319] Thin-Arbiter SHD minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699319 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 10:53:45 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22555 (cluster/afr: Thin-arbiter SHD fixes) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 10:53:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:53:46 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1699319, which changed state. Bug 1699319 Summary: Thin-Arbiter SHD minor fixes https://bugzilla.redhat.com/show_bug.cgi?id=1699319 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:56:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:56:16 +0000 Subject: [Bugs] [Bug 1699713] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699713 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 10:56:16 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22562 (build: glusterfs build is failing on RHEL-6) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 16 10:56:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:56:40 +0000 Subject: [Bugs] [Bug 1699714] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699714 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 10:56:40 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22563 (core: Brick is not able to detach successfully in brick_mux environment) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 10:56:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:56:40 +0000 Subject: [Bugs] [Bug 1699023] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699023 Bug 1699023 depends on bug 1699714, which changed state. Bug 1699714 Summary: Brick is not able to detach successfully in brick_mux environment https://bugzilla.redhat.com/show_bug.cgi?id=1699714 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 10:56:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:56:41 +0000 Subject: [Bugs] [Bug 1699025] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699025 Bug 1699025 depends on bug 1699714, which changed state. Bug 1699714 Summary: Brick is not able to detach successfully in brick_mux environment https://bugzilla.redhat.com/show_bug.cgi?id=1699714 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 10:57:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:57:38 +0000 Subject: [Bugs] [Bug 1699499] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699499 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22559 (ec: fix truncate lock to cover the write in tuncate clean) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 16 10:58:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 10:58:01 +0000 Subject: [Bugs] [Bug 1699703] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699703 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 10:58:01 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22561 (posix/ctime: Fix stat(time attributes) inconsistency during readdirp) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 11:00:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 11:00:10 +0000 Subject: [Bugs] [Bug 1699715] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699715 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 11:00:10 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22564 (core: Log level changes do not effect on running client process) merged (#5) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 11:00:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 11:00:11 +0000 Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696046 Bug 1696046 depends on bug 1699715, which changed state. Bug 1699715 Summary: Log level changes do not take effect until the process is restarted https://bugzilla.redhat.com/show_bug.cgi?id=1699715 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 11:29:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 11:29:33 +0000 Subject: [Bugs] [Bug 1699731] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699731 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 11:29:33 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22565 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 16 11:29:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 11:29:34 +0000 Subject: [Bugs] [Bug 1699736] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699736 Bug 1699736 depends on bug 1699731, which changed state. Bug 1699731 Summary: Fops hang when inodelk fails on the first fop https://bugzilla.redhat.com/show_bug.cgi?id=1699731 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 11:54:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 11:54:10 +0000 Subject: [Bugs] [Bug 1697907] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697907 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22578 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 11:54:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 11:54:12 +0000 Subject: [Bugs] [Bug 1697907] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697907 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22578 (glusterd: fix loading ctime in client graph logic) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 11:54:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 11:54:13 +0000 Subject: [Bugs] [Bug 1698471] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Bug 1698471 depends on bug 1697907, which changed state. Bug 1697907 Summary: ctime feature breaks old client to connect to new server https://bugzilla.redhat.com/show_bug.cgi?id=1697907 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 12:08:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 12:08:25 +0000 Subject: [Bugs] [Bug 1698471] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22579 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 16 12:08:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 12:08:26 +0000 Subject: [Bugs] [Bug 1698471] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22579 (glusterd: fix loading ctime in client graph logic) posted (#1) for review on release-6 by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 12:40:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 12:40:16 +0000 Subject: [Bugs] [Bug 1699339] With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699339 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-04-15 19:02:48 |2019-04-16 12:40:16 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22573 (glusterd: fix op-version of glusterd.vol_count_per_thread) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 13:39:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 13:39:30 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-16 13:39:30 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22487 (transport/socket: log shutdown msg occasionally) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 13:39:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 13:39:30 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1679904, which changed state. Bug 1679904 Summary: client log flooding with intentional socket shutdown message when a brick is down https://bugzilla.redhat.com/show_bug.cgi?id=1679904 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 16 13:39:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 13:39:31 +0000 Subject: [Bugs] [Bug 1695416] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695416 Bug 1695416 depends on bug 1679904, which changed state. Bug 1679904 Summary: client log flooding with intentional socket shutdown message when a brick is down https://bugzilla.redhat.com/show_bug.cgi?id=1679904 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 13:39:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 13:39:31 +0000 Subject: [Bugs] [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 Bug 1691616 depends on bug 1679904, which changed state. Bug 1679904 Summary: client log flooding with intentional socket shutdown message when a brick is down https://bugzilla.redhat.com/show_bug.cgi?id=1679904 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 15:47:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 15:47:01 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #59 from Amgad --- Sanju / Shyam It has been three weeks now. What's the update on this. We're blocked and stuck not able to deploy 5.x because of the online rollback Appreciate your timely update! Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 16 16:09:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 16:09:23 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22580 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 16:09:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 16:09:24 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #626 from Worker Ant --- REVIEW: https://review.gluster.org/22580 (tests: Add changelog snapshot testcase) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 16 17:07:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 17:07:07 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22581 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 17:07:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 17:07:08 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22581 (Revert \"features/locks: error-out {inode,entry}lk fops with all-zero lk-owner\") posted (#2) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 19:02:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 19:02:10 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22582 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 16 19:02:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 16 Apr 2019 19:02:11 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22582 (features/sdfs: Assign unique lk-owner for entrylk fop) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 03:28:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 03:28:01 +0000 Subject: [Bugs] [Bug 1700078] disablle + reenable of bitrot leads to files marked as bad In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700078 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22572 (features/bit-rot-stub: clean the mutex after cancelling the signer thread) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. 
From bugzilla at redhat.com Wed Apr 17 03:40:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 03:40:59 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22583 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 03:41:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 03:41:00 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #627 from Worker Ant --- REVIEW: https://review.gluster.org/22583 (build-aux/pkg-version: provide option for depth=1) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 05:44:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 05:44:44 +0000 Subject: [Bugs] [Bug 1700656] New: Glusterd did not start by default after node reboot Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700656 Bug ID: 1700656 Summary: Glusterd did not start by default after node reboot Product: GlusterFS Version: mainline Status: NEW Component: glusterd Keywords: Regression Severity: high Priority: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bmekala at redhat.com, bugs at gluster.org, moagrawa at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, saraut at redhat.com, sasundar at redhat.com, storage-qa-internal at redhat.com, ubansal at redhat.com, vbellur at redhat.com Depends On: 1699835 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1699835 [Bug 1699835] Glusterd did not start by default after node reboot -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 05:44:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 05:44:59 +0000 Subject: [Bugs] [Bug 1700656] Glusterd did not start by default after node reboot In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700656 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 05:50:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 05:50:49 +0000 Subject: [Bugs] [Bug 1700656] Glusterd did not start by default after node reboot In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700656 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22584 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 17 05:50:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 05:50:50 +0000 Subject: [Bugs] [Bug 1700656] Glusterd did not start by default after node reboot In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700656 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22584 (spec: Glusterd did not start by default after node reboot) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 05:56:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 05:56:24 +0000 Subject: [Bugs] [Bug 1696518] builder203 does not have a valid hostname set In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696518 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-04-17 05:56:24 --- Comment #3 from Deepshikha khandelwal --- Misc did set it up. Closing this one. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 05:57:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 05:57:06 +0000 Subject: [Bugs] [Bug 1697890] centos-regression is not giving its vote In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697890 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |dkhandel at redhat.com Resolution|--- |NOTABUG Last Closed| |2019-04-17 05:57:06 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 06:45:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 06:45:24 +0000 Subject: [Bugs] [Bug 1700656] Glusterd did not start by default after node reboot In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700656 --- Comment #3 from SATHEESARAN --- Description of problem: Did reboot on a server node, but glusterd process was not started automatically after node reboot. When checked the glusterd status it was shown as inactive(dead), but on doing glusterd start, the process was active again. Version-Release number of selected component (if applicable): 6.0-1 How reproducible: Always Steps to Reproduce: 1. Do reboot on a server node. 2. After the node is back up again, check the glusterd status using "systemctl status glusterd" 3. To start the glusterd process, do "systemctl start glusterd". Actual results: * When a server node is rebooted, and the node is back up online again, the glusterd service on the node does not start automatically. * Glusterd has to started using "systemctl start glusterd". Expected results: On rebooting a node, when the node comes back up online, glusterd should be active(running). -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 17 07:19:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 07:19:12 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22585 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 07:19:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 07:19:14 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1605 from Worker Ant --- REVIEW: https://review.gluster.org/22585 (features/locks: fix coverity issues) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 07:44:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 07:44:53 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(nbalacha at redhat.c | |om) --- Comment #4 from Yaniv Kaul --- What's the status of this BZ? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 07:53:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 07:53:00 +0000 Subject: [Bugs] [Bug 1700695] New: smoke is failing for build https://review.gluster.org/#/c/glusterfs/+/22584/ Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700695 Bug ID: 1700695 Summary: smoke is failing for build https://review.gluster.org/#/c/glusterfs/+/22584/ Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Smoke is failing for build https://review.gluster.org/#/c/glusterfs/+/22584/ Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 08:18:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 08:18:12 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22581 (Revert \"features/locks: error-out {inode,entry}lk fops with all-zero lk-owner\") merged (#3) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 17 08:27:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 08:27:48 +0000 Subject: [Bugs] [Bug 1700695] smoke is failing for build https://review.gluster.org/#/c/glusterfs/+/22584/ In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700695 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |dkhandel at redhat.com Resolution|--- |NOTABUG Last Closed| |2019-04-17 08:27:48 --- Comment #1 from Deepshikha khandelwal --- Correct set of the firewall was not applied after reboot. Misc fixed it. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 08:38:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 08:38:25 +0000 Subject: [Bugs] [Bug 1697923] CI: collect core file in a job artifacts In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697923 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-04-17 08:38:25 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:01:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:01:36 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22586 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:01:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:01:37 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/22586 (cluster/afr: Set lk-owner before inodelk/entrylk/lk) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:26:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:26:23 +0000 Subject: [Bugs] [Bug 1174016] network.compression fails simple '--ioengine=sync' fio test In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1174016 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ndevos at redhat.com | |) --- Comment #1 from Yaniv Kaul --- Same fate as https://bugzilla.redhat.com/show_bug.cgi?id=1073763 ? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 17 09:28:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:28:36 +0000 Subject: [Bugs] [Bug 1200264] Upcall: Support to handle upcall notifications asynchronously In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1200264 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(skoduri at redhat.co | |m) --- Comment #1 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 09:29:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:29:22 +0000 Subject: [Bugs] [Bug 1215022] Populate message IDs with recommended action. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1215022 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(hchiramm at redhat.c | |om) --- Comment #1 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:29:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:29:39 +0000 Subject: [Bugs] [Bug 1148262] [gluster-nagios] Nagios plugins for volume services should work with SELinux enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1148262 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(sabose at redhat.com | |) --- Comment #1 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:35:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:35:01 +0000 Subject: [Bugs] [Bug 1336514] shard: compiler warning format string In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1336514 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME Last Closed| |2019-04-17 09:35:01 --- Comment #1 from Yaniv Kaul --- [ykaul at ykaul shard]$ make Making all in src CC shard.lo CCLD shard.la make[1]: Nothing to be done for 'all-am'. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 17 09:37:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:37:08 +0000 Subject: [Bugs] [Bug 1220031] glusterfs-cli should depend on the glusterfs package In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1220031 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-17 09:37:08 --- Comment #1 from Yaniv Kaul --- vagrant at node-1 ~]$ sudo yum remove glusterfs-cli Loaded plugins: fastestmirror Resolving Dependencies --> Running transaction check ---> Package glusterfs-cli.x86_64 0:7dev-0.153.gite5ff6cc.el7 will be erased --> Processing Dependency: glusterfs-cli(x86-64) = 7dev-0.153.gite5ff6cc.el7 for package: glusterfs-server-7dev-0.153.gite5ff6cc.el7.x86_64 --> Running transaction check ---> Package glusterfs-server.x86_64 0:7dev-0.153.gite5ff6cc.el7 will be erased --> Finished Dependency Resolution Dependencies Resolved ============================================================================================================================================================================================================================================== Package Arch Version Repository Size ============================================================================================================================================================================================================================================== Removing: glusterfs-cli x86_64 7dev-0.153.gite5ff6cc.el7 @gluster-nightly-master 491 k Removing for dependencies: glusterfs-server x86_64 7dev-0.153.gite5ff6cc.el7 @gluster-nightly-master 6.1 M Transaction Summary ============================================================================================================================================================================================================================================== Remove 1 Package (+1 Dependent package) -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:37:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:37:09 +0000 Subject: [Bugs] [Bug 1700695] smoke is failing for build https://review.gluster.org/#/c/glusterfs/+/22584/ In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700695 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com Resolution|NOTABUG |CURRENTRELEASE --- Comment #2 from M. Scherer --- yeah, freebsd was still using chrono, the test firewall (I did set it up for the migration to nftables). For some reason, that firewall (that I almost decomissioned yesterday) do not seems to route packet anymore. The rules are in place, the config is the same as the 2 prod firewall. We need also some better monitoring for that. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:39:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:39:43 +0000 Subject: [Bugs] [Bug 1215022] Populate message IDs with recommended action. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1215022 Humble Chirammal changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(hchiramm at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:45:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:45:07 +0000 Subject: [Bugs] [Bug 1073763] network.compression fails simple '--ioengine=sync' fio test In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1073763 Bug 1073763 depends on bug 1174016, which changed state. Bug 1174016 Summary: network.compression fails simple '--ioengine=sync' fio test https://bugzilla.redhat.com/show_bug.cgi?id=1174016 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:45:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:45:07 +0000 Subject: [Bugs] [Bug 1174016] network.compression fails simple '--ioengine=sync' fio test In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1174016 Niels de Vos changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED Flags|needinfo?(ndevos at redhat.com | |) | Last Closed| |2019-04-17 09:45:07 --- Comment #2 from Niels de Vos --- This problem may still exist, if it is indeed caused by the network.compression (cdc) xlator. This feature is rarely used, and not much tested. There is currently no intention to improve the compression functionality. Of course we'll happily accept patches, but there is no plan to look into this bug any time soon. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:54:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:54:28 +0000 Subject: [Bugs] [Bug 1336506] JBR: compiler warning fmt string In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1336506 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-04-17 09:54:28 --- Comment #2 from Yaniv Kaul --- commit 8293d21280fd6ddfc9bb54068cf87794fc6be207 Author: Amar Tumballi Date: Thu Dec 6 12:29:25 2018 +0530 all: remove code which is not being considered in build These xlators are now removed from build as per discussion/announcement done at https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html * move rot-13 to playground, as it is used only as demo purpose, and is documented in many places. * Removed code of below xlators: - cluster/stripe - cluster/tier - features/changetimerecorder - features/glupy - performance/symlink-cache - encryption/crypt - storage/bd - experimental/posix2 - experimental/dht2 - experimental/fdl - experimental/jbr <------- -- You are receiving this mail because: You are on the CC list for the bug. 
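[Editorial note on bug 1174016 above: a small sketch of how the rarely used network.compression (cdc) xlator that Niels refers to is typically toggled for testing. VOLNAME is a placeholder; this is only an illustration, not a reproduction recipe from the bug.]

```
# Enable the cdc (compression) xlator on a test volume, then confirm the setting
gluster volume set VOLNAME network.compression on
gluster volume get VOLNAME network.compression
```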
From bugzilla at redhat.com Wed Apr 17 09:55:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:55:34 +0000 Subject: [Bugs] [Bug 1191072] ipv6 enabled on the peer, but dns resolution fails with ipv6 and gluster does not fall back to ipv4 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1191072 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ndevos at redhat.com | |) --- Comment #2 from Yaniv Kaul --- Same fate as https://bugzilla.redhat.com/show_bug.cgi?id=1190551 ? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 09:56:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:56:09 +0000 Subject: [Bugs] [Bug 1510685] Python modules not found when multiple versions of Python installed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1510685 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |avishwan at redhat.com Flags| |needinfo?(avishwan at redhat.c | |om) --- Comment #3 from Yaniv Kaul --- Where is it upstream? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 09:56:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:56:50 +0000 Subject: [Bugs] [Bug 1336511] trace: compiler warning format string In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1336511 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-17 09:56:50 --- Comment #1 from Yaniv Kaul --- [ykaul at ykaul src]$ make CC trace.lo CCLD trace.la -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 09:59:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:59:15 +0000 Subject: [Bugs] [Bug 1336513] changelog: compiler warning format string In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1336513 --- Comment #2 from Yaniv Kaul --- [ykaul at ykaul src]$ make CC libgfchangelog_la-gf-changelog.lo CC libgfchangelog_la-gf-changelog-journal-handler.lo CC libgfchangelog_la-gf-changelog-helpers.lo CC libgfchangelog_la-gf-changelog-api.lo CC libgfchangelog_la-gf-history-changelog.lo CC libgfchangelog_la-gf-changelog-rpc.lo CC libgfchangelog_la-gf-changelog-reborp.lo gf-changelog-reborp.c:396:35: warning: initialization of 'int (*)(rpcsvc_request_t *)' {aka 'int (*)(struct rpcsvc_request *)'} from 'int' makes pointer from integer without a cast [-Wint-conversion] 396 | CHANGELOG_REV_PROC_EVENT, | ^~~~~~~~~~~~~~~~~~~~~~~~ gf-changelog-reborp.c:396:35: note: (near initialization for 'gf_changelog_reborp_actors[1].actor') gf-changelog-reborp.c:397:35: warning: initialization of 'int (*)(int, ssize_t *, char *, char *)' {aka 'int (*)(int, long int *, char *, char *)'} from incompatible pointer type 'int (*)(rpcsvc_request_t *)'
{aka 'int (*)(struct rpcsvc_request *)'} [-Wincompatible-pointer-types] 397 | gf_changelog_reborp_handle_event, NULL, 0, | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ gf-changelog-reborp.c:397:35: note: (near initialization for 'gf_changelog_reborp_actors[1].vector_sizer') gf-changelog-reborp.c:397:69: warning: initialization of 'int' from 'void *' makes integer from pointer without a cast [-Wint-conversion] 397 | gf_changelog_reborp_handle_event, NULL, 0, | ^~~~ gf-changelog-reborp.c:397:69: note: (near initialization for 'gf_changelog_reborp_actors[1].procnum') CC libgfchangelog_la-changelog-rpc-common.lo CCLD libgfchangelog.la (As of e5ff6cc397e7a23dff4024efb6806cb004a89ee6 ) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 09:59:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 09:59:52 +0000 Subject: [Bugs] [Bug 1215022] Populate message IDs with recommended action. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1215022 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(hchiramm at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 10:00:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 10:00:18 +0000 Subject: [Bugs] [Bug 1512093] Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1512093 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |srangana at redhat.com Flags| |needinfo?(srangana at redhat.c | |om) --- Comment #6 from Yaniv Kaul --- (In reply to Shyamsundar from comment #5) > Release 3.12 has been EOLd and this bug was still found to be in the NEW > state, hence moving the version to mainline, to triage the same and take > appropriate actions. Status? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 10:03:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 10:03:28 +0000 Subject: [Bugs] [Bug 1508025] symbol-check.sh is not failing for legitimate reasons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1508025 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(srangana at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 10:06:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 10:06:06 +0000 Subject: [Bugs] [Bug 1428098] Change SSL key path to /var/lib/glusterd/ssl/* In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428098 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-17 10:06:06 --- Comment #1 from Yaniv Kaul --- https://review.gluster.org/#/c/glusterfs/+/16378/ was merged in 2017. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 17 10:12:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 10:12:48 +0000 Subject: [Bugs] [Bug 1512093] Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1512093 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |khiremat at redhat.com Flags|needinfo?(srangana at redhat.c |needinfo?(khiremat at redhat.c |om) |om) --- Comment #7 from Shyamsundar --- (In reply to Yaniv Kaul from comment #6) > (In reply to Shyamsundar from comment #5) > > Release 3.12 has been EOLd and this bug was still found to be in the NEW > > state, hence moving the version to mainline, to triage the same and take > > appropriate actions. > > Status? Will need to check with the assignee or component maintainer, which is Kotresh in both cases. @Kotresh request an update here? Thanks. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 10:14:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 10:14:24 +0000 Subject: [Bugs] [Bug 1510685] Python modules not found when multiple versions of Python installed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1510685 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(avishwan at redhat.c | |om) | --- Comment #4 from Aravinda VK --- (In reply to Yaniv Kaul from comment #3) > Where is it upstream? Product and version of bug is changed to upstream > Product: Red Hat Gluster Storage -> GlusterFS > Version: unspecified -> mainline site-lib directory to install python library is fetched using below command in glusterfs spec file(https://github.com/gluster/glusterfs/blob/master/glusterfs.spec.in#L165). ``` %{!?python2_sitelib: %global python2_sitelib %(python2 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")} ``` If rpms are built with Python 2.6(May be default in build machine) then after gluster rpms install it will install to the same directory even though User installed the newer version of Python. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 10:22:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 10:22:30 +0000 Subject: [Bugs] [Bug 1428099] Fix merge error which broke all the things In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428099 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-17 10:22:30 --- Comment #1 from Yaniv Kaul --- Merged early 2017, closing. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 17 10:26:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 10:26:37 +0000 Subject: [Bugs] [Bug 1510685] Python modules not found when multiple versions of Python installed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1510685 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(avishwan at redhat.c | |om) --- Comment #5 from Yaniv Kaul --- Luckily I believe we've all moved to Python 2.7? Is that something we still need to handle (ignoring Python 2 / Python 3 for this discussion) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 10:28:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 10:28:58 +0000 Subject: [Bugs] [Bug 1428102] libglusterfs: Change gmtime_r -> localtime_r for better log readability In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428102 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-04-17 10:28:58 --- Comment #3 from Yaniv Kaul --- (In reply to Joe Julian from comment #2) > I disagree. When we used to do localtime collating logs between clients and > servers was often impossible to get a user to do. The change to always using > gmtime everywhere has made debugging much easier. Closing, as the patch was also not accepted. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 10:34:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 10:34:13 +0000 Subject: [Bugs] [Bug 1510685] Python modules not found when multiple versions of Python installed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1510685 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(avishwan at redhat.c | |om) | --- Comment #6 from Aravinda VK --- (In reply to Yaniv Kaul from comment #5) > Luckily I believe we've all moved to Python 2.7? Is that something we still > need to handle (ignoring Python 2 / Python 3 for this discussion) Yes, Python 2.7 is available in CentOS 7. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 11:33:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 11:33:55 +0000 Subject: [Bugs] [Bug 1510685] Python modules not found when multiple versions of Python installed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1510685 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-17 11:33:55 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 17 11:42:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 11:42:21 +0000 Subject: [Bugs] [Bug 1350238] Vagrant environment for tests should configure DNS for VMs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1350238 --- Comment #3 from Yaniv Kaul --- I think Vagrant on libvirt should provide you all you need. If not, I assume using Ansible could solve this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 11:45:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 11:45:25 +0000 Subject: [Bugs] [Bug 1428049] dict_t: make dict_t a real dictionary In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428049 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-17 11:45:25 --- Comment #2 from Yaniv Kaul --- Merged long time ago. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 11:46:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 11:46:12 +0000 Subject: [Bugs] [Bug 1428062] core: Disable the memory pooler in Gluster via a build flag In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428062 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-17 11:46:12 --- Comment #1 from Yaniv Kaul --- Merged late 2016. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 11:47:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 11:47:15 +0000 Subject: [Bugs] [Bug 1379544] glusterd creates all rpc_clnts with the same name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1379544 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(kaushal at redhat.co | |m) --- Comment #2 from Yaniv Kaul --- Current status? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 11:49:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 11:49:44 +0000 Subject: [Bugs] [Bug 1370921] Improve robustness by checking result of pthread_mutex_lock() In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1370921 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Improvement, StudentProject -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 17 11:53:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 11:53:36 +0000 Subject: [Bugs] [Bug 1700656] Glusterd did not start by default after node reboot In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700656 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-04-17 11:53:36 --- Comment #4 from Mohit Agrawal --- This is not a bug, it is expected behaviour. Services are not allowed to get automatically enabled through RPM scriptlets. Distributions that want to enable glusterd by default should provide a systemd preset as explained in https://www.freedesktop.org/wiki/Software/systemd/Preset/ . so closing the same. Thanks, Mohit Agrawal -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 11:57:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 11:57:21 +0000 Subject: [Bugs] [Bug 1524048] gluster volume set is very slow In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1524048 --- Comment #3 from Yaniv Kaul --- I've reported upstream similar issue, happening because of gf_store_save_value() that each open, write, flush and close the file... -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:11:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:11:48 +0000 Subject: [Bugs] [Bug 1615604] tests: Spurious failure in test tests/bugs/bug-1110262.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1615604 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(srangana at redhat.c | |om) --- Comment #1 from Yaniv Kaul --- Still relevant? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:11:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:11:57 +0000 Subject: [Bugs] [Bug 1618915] Spurious failure in tests/basic/ec/ec-1468261.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1618915 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(pkarampu at redhat.c | |om) --- Comment #1 from Yaniv Kaul --- Still relevant? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:14:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:14:12 +0000 Subject: [Bugs] [Bug 1403156] Memory leak on graph switch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1403156 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(sabose at redhat.com | |) --- Comment #4 from Yaniv Kaul --- I believe we've done some good work overall on memory leaks, with Coverity and ASAN, clang-scan and reduction of GCC warnings. While I don't know if we fixed this specific issue, we need more focused bug. Agreed? 
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:15:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:15:21 +0000 Subject: [Bugs] [Bug 1441697] [HC] - Can't acquire SPM due to Sanlock Exception (Sanlock lockspace add failure 'Input/output error'): Cannot acquire host id In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1441697 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME Last Closed|2017-11-07 10:41:56 |2019-04-17 12:15:21 --- Comment #21 from Yaniv Kaul --- Please re-open if still relevant. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:16:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:16:31 +0000 Subject: [Bugs] [Bug 1488863] Application VMs goes in to non-responding state In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1488863 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED Last Closed| |2019-04-17 12:16:31 --- Comment #26 from Yaniv Kaul --- We are not pushing libgfapi for oVirt at the moment. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:16:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:16:33 +0000 Subject: [Bugs] [Bug 1515149] Application VMs goes in to non-responding state In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1515149 Bug 1515149 depends on bug 1488863, which changed state. Bug 1488863 Summary: Application VMs goes in to non-responding state https://bugzilla.redhat.com/show_bug.cgi?id=1488863 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 12:18:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:18:56 +0000 Subject: [Bugs] [Bug 1428056] performance/io-threads: Reduce the number of timing calls in iot_worker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428056 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-17 12:18:56 --- Comment #1 from Yaniv Kaul --- Merged end of 2016. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
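[Editorial note on the systemd preset mechanism cited when closing bug 1700656 above: a minimal sketch of what a distribution-shipped preset could look like. The file name below is an assumption following the systemd.preset(5) convention, not something taken from the bug.]

```
# Hypothetical preset dropped in by a distribution package
echo 'enable glusterd.service' > /usr/lib/systemd/system-preset/90-glusterd.preset

# systemd applies presets on first install; it can also be applied explicitly
systemctl preset glusterd.service
systemctl is-enabled glusterd      # should now report "enabled"
```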
From bugzilla at redhat.com Wed Apr 17 12:25:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:25:00 +0000 Subject: [Bugs] [Bug 1449971] use extra spinlock and hashtable to relieve lock contention in rpc_clnt->conn->saved_frams In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1449971 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-04-17 12:25:00 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 12:25:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:25:18 +0000 Subject: [Bugs] [Bug 1531407] dict data type mismatches in the logs (for "link-count") In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1531407 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|dict data type mismatches |dict data type mismatches |in the logs |in the logs (for | |"link-count") -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:26:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:26:51 +0000 Subject: [Bugs] [Bug 1428052] performance/io-threads: Eliminate spinlock contention via fops-per-thread-ratio In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428052 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(vbellur at redhat.co | |m) --- Comment #2 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 12:27:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:27:47 +0000 Subject: [Bugs] [Bug 1659374] posix_janitor_thread_proc has bug that can't go into the janitor_walker if change the system time forward and change back In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659374 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-04-17 12:27:47 --- Comment #1 from Yaniv Kaul --- *** This bug has been marked as a duplicate of bug 1659378 *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:27:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:27:47 +0000 Subject: [Bugs] [Bug 1659378] posix_janitor_thread_proc has bug that can't go into the janitor_walker if change the system time forward and change back In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659378 --- Comment #1 from Yaniv Kaul --- *** Bug 1659374 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 17 12:27:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:27:58 +0000 Subject: [Bugs] [Bug 1659371] posix_janitor_thread_proc has bug that can't go into the janitor_walker if change the system time forward and change back In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659371 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-04-17 12:27:58 --- Comment #1 from Yaniv Kaul --- *** This bug has been marked as a duplicate of bug 1659378 *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:27:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:27:58 +0000 Subject: [Bugs] [Bug 1659378] posix_janitor_thread_proc has bug that can't go into the janitor_walker if change the system time forward and change back In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659378 --- Comment #2 from Yaniv Kaul --- *** Bug 1659371 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 12:31:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 12:31:29 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(nbalacha at redhat.c | |om) | --- Comment #5 from Nithya Balachandran --- (In reply to Yaniv Kaul from comment #4) > What's the status of this BZ? Susant (spalai) is looking into the issue downstream. The fix will be posted upstream. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 13:22:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 13:22:11 +0000 Subject: [Bugs] [Bug 1200264] Upcall: Support to handle upcall notifications asynchronously In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1200264 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(skoduri at redhat.co | |m) | --- Comment #2 from Soumya Koduri --- Still a todo..No one has worked on it yet. Also since upcall notifications are now consumed by multiple cache layers (like md-cache, nl-cache etc) and not just nfs-ganesha, we need to check with those xlator maintainers if making them asynchronous doesn't pose any issue wrt cache-coherency expected. -- You are receiving this mail because: You are on the CC list for the bug. 
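A note on Soumya Koduri's comment on bug 1200264 above: the cache-coherency question around asynchronous upcall handling is essentially an ordering question. Delivered inline, notifications reach consumers such as md-cache or nl-cache in the order the server emitted them; once they are only queued and drained by worker threads, two invalidations for the same inode can be observed in a different order unless per-inode ordering is enforced. The sketch below is purely illustrative — the names (upcall_t, deliver_sync, deliver_async, consumer_invalidate) are hypothetical and this is not GlusterFS code.

```c
/* Illustrative sketch only (hypothetical names, not GlusterFS code):
 * synchronous vs. asynchronous delivery of invalidation events. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct upcall {
    char gfid[64];                 /* inode identity */
    int  seq;                      /* order in which the change happened */
    struct upcall *next;
} upcall_t;

/* A consumer (think md-cache) drops its cached metadata for the inode. */
static void consumer_invalidate(const upcall_t *u)
{
    printf("invalidate %s (seq %d)\n", u->gfid, u->seq);
}

/* 1) Synchronous: called from the notify path; ordering is preserved but the
 *    epoll/notify thread is blocked for as long as the consumers run. */
static void deliver_sync(const upcall_t *u)
{
    consumer_invalidate(u);
}

/* 2) Asynchronous: the notify path only pushes onto a list (a LIFO here for
 *    brevity) and a worker processes it later. The notify thread never
 *    blocks, but events for one gfid may now be handled out of order. */
static upcall_t *pending;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static void deliver_async(upcall_t *u)
{
    pthread_mutex_lock(&qlock);
    u->next = pending;
    pending = u;
    pthread_mutex_unlock(&qlock);
}

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&qlock);
    while (pending) {
        upcall_t *u = pending;
        pending = u->next;
        pthread_mutex_unlock(&qlock);
        consumer_invalidate(u);    /* runs outside the notify thread */
        free(u);
        pthread_mutex_lock(&qlock);
    }
    pthread_mutex_unlock(&qlock);
    return NULL;
}

int main(void)
{
    upcall_t e1 = { "gfid-1", 1, NULL };
    deliver_sync(&e1);

    upcall_t *e2 = calloc(1, sizeof(*e2));
    strcpy(e2->gfid, "gfid-1");
    e2->seq = 2;
    deliver_async(e2);

    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return 0;
}
```

Whether such reordering actually breaks a given cache layer depends on how that layer consumes the events, which is presumably why the comment defers the decision to the respective xlator maintainers.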
From bugzilla at redhat.com Wed Apr 17 13:44:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 13:44:27 +0000 Subject: [Bugs] [Bug 1493656] Storage hiccup (inaccessible a short while) when a single brick go down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1493656 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ko_co_ten_1992 at ya | |hoo.com) --- Comment #15 from Yaniv Kaul --- Did you perform the tuning and did it help? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 13:45:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 13:45:07 +0000 Subject: [Bugs] [Bug 1200264] Upcall: Support to handle upcall notifications asynchronously In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1200264 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(skoduri at redhat.co | |m) --- Comment #3 from Yaniv Kaul --- (In reply to Soumya Koduri from comment #2) > Still a todo..No one has worked on it yet. > > Also since upcall notifications are now consumed by multiple cache layers > (like md-cache, nl-cache etc) and not just nfs-ganesha, we need to check > with those xlator maintainers if making them asynchronous doesn't pose any > issue wrt cache-coherency expected. Since this was filed ~4 years ago - do we expect anyone to work on this? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 13:48:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 13:48:22 +0000 Subject: [Bugs] [Bug 1690950] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690950 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22385 (cluster-syncop: avoid duplicate unlock of inodelk/entrylk) merged (#5) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 13:51:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 13:51:52 +0000 Subject: [Bugs] [Bug 1615604] tests: Spurious failure in test tests/bugs/bug-1110262.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1615604 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |INSUFFICIENT_DATA Flags|needinfo?(srangana at redhat.c | |om) | Last Closed| |2019-04-17 13:51:52 --- Comment #2 from Shyamsundar --- Checked fstat for the last 2 months, the 2 tests mentioned here do not fail. Closing this issue. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 17 13:54:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 13:54:45 +0000 Subject: [Bugs] [Bug 1508025] symbol-check.sh is not failing for legitimate reasons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1508025 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(srangana at redhat.c | |om) | --- Comment #3 from Shyamsundar --- The problem needs to be solved, as otherwise a future symbol leak is not preventable. If required we may need an additional job that does not enable-debug (or add a task to an existing job) and checks for symbols. Is there further information required to resolve the problem? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 13:59:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 13:59:15 +0000 Subject: [Bugs] [Bug 1695436] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695436 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-17 13:59:15 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22488 (geo-rep: IPv6 support) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 13:59:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 13:59:38 +0000 Subject: [Bugs] [Bug 1695445] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695445 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-17 13:59:38 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22489 (geo-rep: fix integer config validation) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 14:39:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 14:39:06 +0000 Subject: [Bugs] [Bug 1508025] symbol-check.sh is not failing for legitimate reasons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1508025 --- Comment #4 from Yaniv Kaul --- (In reply to Shyamsundar from comment #3) > The problem needs to be solved, as otherwise a future symbol leak is not > preventable. > > If required we may need an additional job that does not enable-debug (or add > a task to an existing job) and checks for symbols. > > Is there further information required to resolve the problem? Commitment to solve it - it was entered 1.5 years ago, and no one worked on it. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 17 14:53:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 14:53:50 +0000 Subject: [Bugs] [Bug 1508025] symbol-check.sh is not failing for legitimate reasons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1508025 --- Comment #5 from Shyamsundar --- (In reply to Yaniv Kaul from comment #4) > (In reply to Shyamsundar from comment #3) > > The problem needs to be solved, as otherwise a future symbol leak is not > > preventable. > > > > If required we may need an additional job that does not enable-debug (or add > > a task to an existing job) and checks for symbols. > > > > Is there further information required to resolve the problem? > > Commitment to solve it - it was entered 1.5 years ago, and no one worked on > it. Are you looking at commitment from me to resolve this? Asking to understand as it was marked NEEDINFO against me. If so do let me know, I can do what is required and see how best to provide the details to the infra team to add to the smoke jobs. (although I have to add, with the information provided I would assume we know what needs to be done) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 15:02:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:02:00 +0000 Subject: [Bugs] [Bug 1508025] symbol-check.sh is not failing for legitimate reasons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1508025 --- Comment #6 from M. Scherer --- yeah, it seems to have been forgotten with all more urgent fire, sorry. However, I miss lots of context on this and can't find the symbol-check.sh script anywhere. I guess our best bet would be to split the symbol check in a separate jobs, so we can do the debug build and do the test, rather than bundle that with the regular regression test. This would permit faster feedback on that matter. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 15:21:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:21:51 +0000 Subject: [Bugs] [Bug 1697907] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697907 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-04-10 03:27:13 |2019-04-17 15:21:51 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22578 (glusterd: fix loading ctime in client graph logic) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 15:21:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:21:52 +0000 Subject: [Bugs] [Bug 1698471] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Bug 1698471 depends on bug 1697907, which changed state. 
Bug 1697907 Summary: ctime feature breaks old client to connect to new server https://bugzilla.redhat.com/show_bug.cgi?id=1697907 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 15:22:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:22:13 +0000 Subject: [Bugs] [Bug 1698471] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-04-16 10:50:59 |2019-04-17 15:22:13 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22579 (glusterd: fix loading ctime in client graph logic) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 15:36:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:36:37 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22590 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 15:36:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:36:38 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22590 (doc: Added release notes for 6.1) posted (#1) for review on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 15:40:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:40:00 +0000 Subject: [Bugs] [Bug 1200264] Upcall: Support to handle upcall notifications asynchronously In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1200264 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |pgurusid at redhat.com Resolution|--- |WONTFIX Flags|needinfo?(skoduri at redhat.co | |m) | Last Closed| |2019-04-17 15:40:00 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 15:40:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:40:01 +0000 Subject: [Bugs] [Bug 1200262] Upcall framework support along with cache_invalidation usecase handled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1200262 Bug 1200262 depends on bug 1200264, which changed state. 
Bug 1200264 Summary: Upcall: Support to handle upcall notifications asynchronously https://bugzilla.redhat.com/show_bug.cgi?id=1200264 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 15:44:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:44:33 +0000 Subject: [Bugs] [Bug 1508025] symbol-check.sh is not failing for legitimate reasons In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1508025 --- Comment #7 from Niels de Vos --- The script is part of the glusterfs.git repository: https://github.com/gluster/glusterfs/blob/master/tests/basic/symbol-check.sh -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 17 15:52:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:52:27 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |jahernan at redhat.com --- Comment #6 from Xavi Hernandez --- The hang is caused by a log message sent from inside a locked region in a memory allocation function, which also makes use of dynamic memory, causing a deadlock (Susant has already posted a patch [1] to avoid the deadlock). However, the log message should never be triggered because it means that something is not working fine. I found another case where this also happens [2]. Debugging it, I found an issue in memory accounting management. I fixed it in another patch [3]. [1] https://review.gluster.org/c/glusterfs/+/22589 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1663375 [3] https://review.gluster.org/c/glusterfs/+/22554 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 15:59:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 15:59:33 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 --- Comment #7 from Xavi Hernandez --- To be clear, the referenced bug is not related to this issue, but I found it when I started debugging the original problem. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 16:03:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 16:03:18 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #628 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22554 (core: handle memory accounting correctly) posted (#4) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
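Xavi Hernandez's analysis of bug 1659334 (comment #6) above describes a self-deadlock: a message is logged while the memory-accounting lock is held, and the logger itself allocates memory, so the same thread tries to take the same non-recursive lock again. The fragment below is a minimal illustration of that pattern only — the names (acct_malloc, log_warning, acct_lock) are hypothetical and this is not the actual GlusterFS memory-accounting code.

```c
/* Illustrative only (hypothetical names, not GlusterFS code): an allocator
 * that logs while holding its own lock, where logging also allocates. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t acct_lock = PTHREAD_MUTEX_INITIALIZER; /* non-recursive */

static void *acct_malloc(size_t size);

/* Hypothetical logger: builds its message on the heap via the same allocator. */
static void log_warning(const char *msg)
{
    char *copy = acct_malloc(strlen(msg) + 1);  /* re-enters acct_malloc() */
    if (copy) {
        strcpy(copy, msg);
        fprintf(stderr, "WARNING: %s\n", copy);
        free(copy);
    }
}

static void *acct_malloc(size_t size)
{
    pthread_mutex_lock(&acct_lock);             /* first entry: lock acquired */
    void *ptr = malloc(size);
    if (size > 1024 * 1024)                     /* the "should never happen" path */
        log_warning("unusually large allocation");
        /* log_warning() -> acct_malloc() -> pthread_mutex_lock() on a mutex
         * this thread already holds: deadlock on typical implementations. */
    pthread_mutex_unlock(&acct_lock);
    return ptr;
}

int main(void)
{
    free(acct_malloc(64));                      /* fine */
    free(acct_malloc(2 * 1024 * 1024));         /* takes the logging path and hangs */
    return 0;
}
```

The usual remedies are to emit the message only after dropping the lock, or to have the logger bypass the instrumented allocator; which approach the posted patch [1] takes is not stated in the comment itself.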
From bugzilla at redhat.com Wed Apr 17 16:03:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 16:03:20 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22554 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 16:03:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 16:03:21 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22554 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 16:03:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 16:03:22 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/22554 (core: handle memory accounting correctly) posted (#4) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 16:05:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 16:05:12 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 --- Comment #9 from Xavi Hernandez --- I've referenced this bug in my patch. I think Susant's patch should also be added. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 17:32:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 17:32:42 +0000 Subject: [Bugs] [Bug 1428052] performance/io-threads: Eliminate spinlock contention via fops-per-thread-ratio In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428052 Vijay Bellur changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(vbellur at redhat.co | |m) | --- Comment #3 from Vijay Bellur --- The patch did not progress as we failed to get an update from Facebook about the nature of their performance testing. In our performance tests, we were not able to observe significant gains. I will revisit this patch to see if we can get any gains in performance now. Thanks! -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 17 18:10:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 18:10:35 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-17 18:10:35 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22590 (doc: Added release notes for 6.1) merged (#1) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 17 18:33:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 18:33:20 +0000 Subject: [Bugs] [Bug 1700078] disablle + reenable of bitrot leads to files marked as bad In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700078 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22360 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Wed Apr 17 18:33:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 17 Apr 2019 18:33:21 +0000 Subject: [Bugs] [Bug 1700078] disablle + reenable of bitrot leads to files marked as bad In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700078 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22360 (features/bit-rot: Unconditionally sign the files during oneshot crawl) posted (#2) for review on master by Raghavendra Bhat -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Thu Apr 18 02:12:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 02:12:51 +0000 Subject: [Bugs] [Bug 1437780] don't send lookup in fuse_getattr() In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1437780 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|POST |CLOSED Resolution|--- |CURRENTRELEASE Last Closed|2018-10-23 15:06:00 |2019-04-18 02:12:51 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Apr 18 06:26:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 06:26:58 +0000 Subject: [Bugs] [Bug 1701039] gluster replica 3 arbiter Unfortunately data not distributed equally In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701039 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Version|unspecified |6 Component|replicate |distribute CC| |bugs at gluster.org Assignee|ksubrahm at redhat.com |bugs at gluster.org QA Contact|nchilaka at redhat.com | Product|Red Hat Gluster Storage |GlusterFS --- Comment #2 from Karthik U S --- Since you are using glusterfs 6.0, I am changing the product & version accordingly and since it is a distribution issue changing the component as well and assigning it to the right person. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 06:27:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 06:27:46 +0000 Subject: [Bugs] [Bug 1701039] gluster replica 3 arbiter Unfortunately data not distributed equally In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701039 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ksubrahm at redhat.com Assignee|bugs at gluster.org |spalai at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 09:14:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:14:10 +0000 Subject: [Bugs] [Bug 1410100] Package arequal-checksum for broader community use In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1410100 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |StudentProject -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 09:14:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:14:33 +0000 Subject: [Bugs] [Bug 1428083] Repair cluster prove tests for FB environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428083 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(vbellur at redhat.co | |m) --- Comment #1 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 09:17:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:17:10 +0000 Subject: [Bugs] [Bug 1428097] Repair more cluster tests in FB IPv6 environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428097 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(vbellur at redhat.co | |m) --- Comment #1 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 18 09:42:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:42:26 +0000 Subject: [Bugs] [Bug 1664398] ./tests/00-geo-rep/00-georep-verify-setup.t does not work with ./run-tests-in-vagrant.sh In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664398 --- Comment #1 from Yaniv Kaul --- I can't even get vagrant running... [ykaul at ykaul vagrant-template-fedora]$ vagrant up Bringing machine 'vagrant-testVM' up with 'libvirt' provider... ==> vagrant-testVM: Box 'gluster-dev-fedora' could not be found. Attempting to find and install... vagrant-testVM: Box Provider: libvirt vagrant-testVM: Box Version: >= 0 ==> vagrant-testVM: Loading metadata for box 'http://download.gluster.org/pub/gluster/glusterfs/vagrant/gluster-dev-fedora/boxes/gluster-dev-fedora.json' vagrant-testVM: URL: http://download.gluster.org/pub/gluster/glusterfs/vagrant/gluster-dev-fedora/boxes/gluster-dev-fedora.json ==> vagrant-testVM: Adding box 'gluster-dev-fedora' (v0.3.0) for provider: libvirt vagrant-testVM: Downloading: http://download.gluster.org/pub/gluster/glusterfs/vagrant/gluster-dev-fedora/boxes/gluster-dev-fedora_0.3.box ==> vagrant-testVM: Box download is resuming from prior download progress An error occurred while downloading the remote file. The error message, if any, is reproduced below. Please fix this error and try again. transfer closed with 1039582069 bytes remaining to read -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 09:45:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:45:43 +0000 Subject: [Bugs] [Bug 1683317] ./tests/bugs/glusterfs/bug-866459.t failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683317 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(dalefu at gmail.com) Severity|unspecified |medium --- Comment #1 from Yaniv Kaul --- AIO is not available on s/390? So it's not about this test, it's about Gluster not working when AIO is not available? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 09:50:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:50:26 +0000 Subject: [Bugs] [Bug 1338991] RIO: Tracker bug In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1338991 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(srangana at redhat.c | |om) --- Comment #13 from Yaniv Kaul --- Can we close this BZ? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 18 09:51:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:51:13 +0000 Subject: [Bugs] [Bug 1696633] GlusterFs v4.1.5 Tests from /tests/bugs/ module failing on Intel In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696633 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(chandranaik2 at gmai | |l.com) --- Comment #2 from Yaniv Kaul --- 4.1 is quite an old release, can you try on newer releases? latest 5.x or 6? 
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 09:52:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:52:50 +0000 Subject: [Bugs] [Bug 1364877] Source install: configure script should check for python-devel package and rpcbind In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1364877 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |StudentProject --- Comment #2 from Yaniv Kaul --- Kaleb, any intention to handle this, delegate to an intern, or shall we close-wontfix this? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 18 09:56:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:56:03 +0000 Subject: [Bugs] [Bug 1635784] brick process segfault In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635784 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(hgichon at gmail.com | |) --- Comment #6 from Yaniv Kaul --- Does it still happen on newer releases? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 09:57:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 09:57:04 +0000 Subject: [Bugs] [Bug 1419415] Missing Compound FOP in the generator list of FOPs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1419415 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(srangana at redhat.c | |om) --- Comment #3 from Yaniv Kaul --- Still relevant? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 18 10:02:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 10:02:21 +0000 Subject: [Bugs] [Bug 1670155] Tiered volume files disappear when a hot brick is failed/restored until the tier detached. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670155 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-04-18 10:02:21 --- Comment #2 from Yaniv Kaul --- (In reply to hari gowtham from comment #1) > Patch https://review.gluster.org/#/c/glusterfs/+/21331/ removes tier > functionality from GlusterFS. Therefore, CLOSE-WONTFIX this one. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 10:03:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 10:03:40 +0000 Subject: [Bugs] [Bug 1367770] Introduce graceful mode in stop-all-gluster-processes.sh In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1367770 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix, StudentProject -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Apr 18 10:45:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 10:45:57 +0000 Subject: [Bugs] [Bug 1338991] RIO: Tracker bug In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1338991 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |WONTFIX Flags|needinfo?(srangana at redhat.c | |om) | Last Closed| |2019-04-18 10:45:57 --- Comment #14 from Shyamsundar --- This feature is not being worked on, and hence the tracker is being closed. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 18 10:52:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 10:52:03 +0000 Subject: [Bugs] [Bug 1419415] Missing Compound FOP in the generator list of FOPs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1419415 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |EOL Flags|needinfo?(srangana at redhat.c | |om) | Last Closed| |2019-04-18 10:52:03 --- Comment #4 from Shyamsundar --- The initial intention was to be able to reduce code across xlators that have repeated per-FOP constructs. With the initial target as io-stats in this patch ( https://review.gluster.org/c/glusterfs/+/16586 ). The issue of not being able to add compound FOP to this list of auto generated code was why this bug was open. >From release-6 onwards, compound FOPs are removed from the code base as there are no users. Thus, this bug need not be fixed. The above context, instead of closing just the bug, was to elaborate what was intended and the amount of code-reduction that it would have provided to Gluster, as that is not yet done. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 18 11:10:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 11:10:09 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.6 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #28 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.6, please open a new bug report. glusterfs-5.6 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000123.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Apr 18 11:10:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 11:10:09 +0000 Subject: [Bugs] [Bug 1690952] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690952 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-5.6 Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-18 11:10:09 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.6, please open a new bug report. glusterfs-5.6 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000123.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 11:10:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 11:10:09 +0000 Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694562 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.6 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.6, please open a new bug report. glusterfs-5.6 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000123.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 11:10:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 11:10:09 +0000 Subject: [Bugs] [Bug 1694612] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694612 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.6 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.6, please open a new bug report. glusterfs-5.6 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000123.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 11:10:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 11:10:09 +0000 Subject: [Bugs] [Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first instance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695391 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.6 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.6, please open a new bug report. glusterfs-5.6 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000123.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 11:10:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 11:10:09 +0000 Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695403 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.6 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.6, please open a new bug report. glusterfs-5.6 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000123.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 18 11:10:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 11:10:09 +0000 Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.6 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.6, please open a new bug report. 
glusterfs-5.6 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000123.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 18 11:12:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 11:12:12 +0000 Subject: [Bugs] [Bug 1693300] GlusterFS 5.6 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |5.6 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.6, please open a new bug report. glusterfs-5.6 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000123.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 11:14:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 11:14:27 +0000 Subject: [Bugs] [Bug 1701203] New: GlusterFS 6.2 tracker Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701203 Bug ID: 1701203 Summary: GlusterFS 6.2 tracker Product: GlusterFS Version: 6 Status: NEW Component: core Keywords: Tracking, Triaged Assignee: bugs at gluster.org Reporter: srangana at redhat.com CC: bugs at gluster.org Target Milestone: --- Deadline: 2019-05-10 Classification: Community Tracker for the release 6.2 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 12:50:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 12:50:29 +0000 Subject: [Bugs] [Bug 1542072] Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1542072 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-18 12:50:29 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/19502 (extras/hooks: syntactical errors in SELinux hooks, scipt logic improved) merged (#13) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 18 16:56:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 16:56:39 +0000 Subject: [Bugs] [Bug 1701337] New: issues with 'building' glusterfs packages if we do 'git clone --depth 1' Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701337 Bug ID: 1701337 Summary: issues with 'building' glusterfs packages if we do 'git clone --depth 1' Product: GlusterFS Version: mainline Status: ASSIGNED Component: build Severity: urgent Priority: urgent Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Right now, a full clone of glusterfs repo needs 148MB of size, but a clone with '--depth 1' option needs just 4MB. It makes sense to use this option in many of the smoke tests and CI/CD environments, so that we save time, bandwidth, and storage. But, right now, we can't 'build' any RPMs with this option. Version-Release number of selected component (if applicable): mainline (april-18th-2019) How reproducible: Steps to Reproduce: 1. git clone --depth 1 https://github.com/gluster/glusterfs.git glusterfs-depth-1 2. cd glusterfs-depth-1; ./autogen.sh && ./configure && make -C extras/LinuxRPM/ glusterrpms 3. You see the error like below ``` rpmbuild --define '_topdir /home/atumball/work/gluster/glusterfs-container-tests/extras/LinuxRPM/rpmbuild' -bs rpmbuild/SPECS/glusterfs.spec error: line 231: Empty tag: Version: make: *** [Makefile:573: srcrpm] Error 1 ``` Expected results: build should pass. Additional info: The issue is with executing ./build-aux/pkg-version script. [atumball at localhost glusterfs-depth-1]$ ./build-aux/pkg-version --full fatal: No names found, cannot describe anything. v- [atumball at localhost glusterfs-depth-1]$ ./build-aux/pkg-version --version 2>/dev/null -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 17:17:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 17:17:09 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #629 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22583 (build-aux/pkg-version: provide option for depth=1) posted (#2) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 18 17:17:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 17:17:12 +0000 Subject: [Bugs] [Bug 1701337] issues with 'building' glusterfs packages if we do 'git clone --depth 1' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701337 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22583 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 18 17:17:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 18 Apr 2019 17:17:13 +0000 Subject: [Bugs] [Bug 1701337] issues with 'building' glusterfs packages if we do 'git clone --depth 1' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701337 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22583 (build-aux/pkg-version: provide option for depth=1) posted (#2) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 19 02:53:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 19 Apr 2019 02:53:09 +0000 Subject: [Bugs] [Bug 1336513] changelog: compiler warning format string In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1336513 kaixiangtech changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |xiang.gao at kaixiangtech.com Flags|needinfo?(kkeithle at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 19 03:49:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 19 Apr 2019 03:49:45 +0000 Subject: [Bugs] [Bug 1419950] To generate the FOPs in io-stats xlator using a code-gen framework In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1419950 gaoxyt changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |gaoxyt at 163.com Flags| |needinfo?(menaka.m at outlook. | |com) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 19 05:54:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 19 Apr 2019 05:54:37 +0000 Subject: [Bugs] [Bug 1701457] New: ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701457 Bug ID: 1701457 Summary: ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open Product: GlusterFS Version: mainline Status: NEW Component: ctime Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: With patch https://review.gluster.org/#/c/glusterfs/+/22540, the following log is printed many times during open https://github.com/gluster/glusterfs/blob/1ad201a9fd6748d7ef49fb073fcfe8c6858d557d/xlators/storage/posix/src/posix-metadata.c#L625 Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. 2. 3. Actual results: Logs are flooded with above msg with open Expected results: Logs should not be flooded unless there is real issue. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Apr 19 05:54:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 19 Apr 2019 05:54:50 +0000 Subject: [Bugs] [Bug 1701457] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701457 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 19 06:10:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 19 Apr 2019 06:10:23 +0000 Subject: [Bugs] [Bug 1701457] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701457 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22591 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 19 06:10:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 19 Apr 2019 06:10:24 +0000 Subject: [Bugs] [Bug 1701457] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701457 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22591 (ctime: Fix log repeated logging during open) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 19 06:34:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 19 Apr 2019 06:34:13 +0000 Subject: [Bugs] [Bug 1649252] duplicate performance.cache-size with different values In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1649252 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com, | |bugs at gluster.org Component|io-cache |io-cache Version|unspecified |mainline Product|Red Hat Gluster Storage |GlusterFS --- Comment #3 from Atin Mukherjee --- This is a RHBZ reported by an upstream user and product was selected wrong. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 19 16:09:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 19 Apr 2019 16:09:40 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1606 from Worker Ant --- REVIEW: https://review.gluster.org/22585 (features/locks: fix coverity issues) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Sat Apr 20 11:30:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 20 Apr 2019 11:30:57 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22592 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 20 11:30:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 20 Apr 2019 11:30:57 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #630 from Worker Ant --- REVIEW: https://review.gluster.org/22592 ([DNM][RFE] Padding of structures.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 21 05:38:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 21 Apr 2019 05:38:11 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #631 from Worker Ant --- REVIEW: https://review.gluster.org/22593 ([WIP][RFC]dht-common.h: reorder variables to reduce padding.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 21 05:38:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 21 Apr 2019 05:38:10 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22593 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 21 06:32:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 21 Apr 2019 06:32:43 +0000 Subject: [Bugs] [Bug 1431711] Tests missing for recent multiplexing fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1431711 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED Last Closed|2018-10-08 10:21:13 |2019-04-21 06:32:43 --- Comment #7 from Yaniv Kaul --- (In reply to Amar Tumballi from comment #6) > I see these two tests are not merged, but are abandon'd. Will keep this as a > NEW bug, will need to address it sometime in future. I'm closing for the time being. If there's a will to resurrect the tests, please do so. -- You are receiving this mail because: You are on the CC list for the bug. 
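The two RFC patches posted against bug 1193929 above (https://review.gluster.org/22592 and https://review.gluster.org/22593) are about shrinking C structures by reordering members so that less alignment padding is inserted. The snippet below is a small self-contained illustration of that effect; the structure and field names are invented rather than taken from dht-common.h, and the exact sizes depend on the platform ABI (the values in the comments are typical for x86_64).

import ctypes

class BadOrder(ctypes.Structure):
    _fields_ = [
        ("flag1", ctypes.c_char),   # 1 byte, then 7 bytes of padding before the long
        ("count", ctypes.c_long),   # 8 bytes, 8-byte aligned
        ("flag2", ctypes.c_char),   # 1 byte, then 7 bytes of trailing padding
    ]

class GoodOrder(ctypes.Structure):
    _fields_ = [
        ("count", ctypes.c_long),   # largest member first
        ("flag1", ctypes.c_char),
        ("flag2", ctypes.c_char),   # small members packed together, 6 bytes trailing padding
    ]

print(ctypes.sizeof(BadOrder))   # typically 24
print(ctypes.sizeof(GoodOrder))  # typically 16

ctypes follows the platform's C alignment rules, so the same reordering applied to a real C structure gives the same size reduction without changing any field's meaning.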
From bugzilla at redhat.com Mon Apr 22 00:57:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 00:57:38 +0000 Subject: [Bugs] [Bug 1564372] Setup Nagios server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1564372 sankarshan changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sankarshan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 03:18:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 03:18:12 +0000 Subject: [Bugs] [Bug 1699712] regression job is voting Success even in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699712 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-22 03:18:12 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 03:18:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 03:18:14 +0000 Subject: [Bugs] [Bug 1701808] New: weird reasons for a regression failure. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701808 Bug ID: 1701808 Summary: weird reasons for a regression failure. Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: https://review.gluster.org/#/c/glusterfs/+/22551/ was marked verified +1 and regression https://build.gluster.org/job/centos7-regression/5681/ failed with below reason. I don't believe any privilege violation has been attempted here. 08:40:13 Run the regression test 08:40:13 *********************** 08:40:13 08:40:13 08:40:13 We trust you have received the usual lecture from the local System 08:40:13 Administrator. It usually boils down to these three things: 08:40:13 08:40:13 #1) Respect the privacy of others. 08:40:13 #2) Think before you type. 08:40:13 #3) With great power comes great responsibility. 08:40:13 08:40:13 sudo: no tty present and no askpass program specified 08:40:15 08:40:15 We trust you have received the usual lecture from the local System 08:40:15 Administrator. It usually boils down to these three things: 08:40:15 08:40:15 #1) Respect the privacy of others. 08:40:15 #2) Think before you type. 08:40:15 #3) With great power comes great responsibility. 08:40:15 08:40:15 sudo: no tty present and no askpass program specified 08:40:17 08:40:17 We trust you have received the usual lecture from the local System 08:40:17 Administrator. It usually boils down to these three things: 08:40:17 08:40:17 #1) Respect the privacy of others. 08:40:17 #2) Think before you type. 08:40:17 #3) With great power comes great responsibility. 
08:40:17 08:40:17 sudo: no tty present and no askpass program specified 08:40:19 + ssh -o StrictHostKeyChecking=no build at review.gluster.org gerrit review --message ''\''https://build.gluster.org/job/centos7-regression/5681/consoleFull : FAILED'\''' --project=glusterfs --label CentOS-regression=-1 ea59ec6c19a402717b9848e51bfe79eb4b9728a7 08:40:20 + exit 1 Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 03:54:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 03:54:57 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 --- Comment #10 from Worker Ant --- REVIEW: https://review.gluster.org/22554 (core: handle memory accounting correctly) merged (#5) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 04:00:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 04:00:43 +0000 Subject: [Bugs] [Bug 1701811] New: ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701811 Bug ID: 1701811 Summary: ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: core Assignee: atumball at redhat.com Reporter: amukherj at redhat.com QA Contact: rhinduja at redhat.com CC: bugs at gluster.org, khiremat at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1701457 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1701457 +++ Description of problem: With patch https://review.gluster.org/#/c/glusterfs/+/22540, the following log is printed many times during open https://github.com/gluster/glusterfs/blob/1ad201a9fd6748d7ef49fb073fcfe8c6858d557d/xlators/storage/posix/src/posix-metadata.c#L625 Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. 2. 3. Actual results: Logs are flooded with above msg with open Expected results: Logs should not be flooded unless there is real issue. Additional info: --- Additional comment from Worker Ant on 2019-04-19 06:10:24 UTC --- REVIEW: https://review.gluster.org/22591 (ctime: Fix log repeated logging during open) posted (#1) for review on master by Kotresh HR Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1701457 [Bug 1701457] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open -- You are receiving this mail because: You are on the CC list for the bug. 
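Back on the regression failure in bug 1701808 above: "sudo: no tty present and no askpass program specified" generally means sudo needed to prompt for a password but had no terminal to prompt on, for example because the build user lacks a NOPASSWD rule or hits a "Defaults requiretty" setting. Whether that is what happened on this builder cannot be confirmed from the console log alone, so the snippet below is only a sketch of how a job could detect the condition up front and fail with a clearer message; it is not what the regression job actually runs.

import subprocess
import sys

def sudo_is_noninteractive():
    # "sudo -n" refuses to prompt and fails immediately if a password would be required.
    result = subprocess.run(["sudo", "-n", "true"],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stderr.strip()

if __name__ == "__main__":
    ok, err = sudo_is_noninteractive()
    if not ok:
        sys.exit("passwordless sudo not available on this builder: %s" % err)
    print("passwordless sudo OK")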
From bugzilla at redhat.com Mon Apr 22 04:00:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 04:00:43 +0000 Subject: [Bugs] [Bug 1701457] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701457 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1701811 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1701811 [Bug 1701811] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 04:01:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 04:01:56 +0000 Subject: [Bugs] [Bug 1701811] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701811 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST CC|bugs at gluster.org | Assignee|atumball at redhat.com |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 04:36:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 04:36:09 +0000 Subject: [Bugs] [Bug 1698861] Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698861 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 05:29:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 05:29:11 +0000 Subject: [Bugs] [Bug 1701818] New: Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701818 Bug ID: 1701818 Summary: Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) Product: GlusterFS Version: 6 Hardware: All OS: Linux Status: NEW Component: scripts Severity: medium Assignee: bugs at gluster.org Reporter: anoopcs at redhat.com CC: bugs at gluster.org, mzink at redhat.com Depends On: 1542072 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1542072 +++ Description of problem: * Syntax errors similar as in Bug 1533342 - post/S10selinux-label-brick.sh * Fix globbing problem when using grep for brick info files in both scripts Version-Release number of selected component (if applicable): How reproducible: Run scripts from commandline Steps to Reproduce: 1. /var/lib/glusterd/hooks/1/delete/pre/S10selinux-del-fcontext.sh --volname vol050-vg10 2. /var/lib/glusterd/hooks/1/create/post/S10selinux-label-brick.sh --volname vol050-vg10 Actual results: 1. ValueError: File context for /rhgs/vol050-vg10/brick01/fs(/.*)? is not defined 2. 
grep: /var/lib/glusterd/vols/vol050-vg10/bricks/*: No such file or directory Expected results: # /var/lib/glusterd/hooks/1/create/post/S10selinux-label-brick.sh --volname vol050-vg10 # semanage fcontext --list | grep rhgs /rhgs(/.*)? all files system_u:object_r:glusterd_brick_t:s0 /rhgs/vol050-vg10/brick01/fs\(/.*\)? all files system_u:object_r:glusterd_brick_t:s0 # /var/lib/glusterd/hooks/1/delete/pre/S10selinux-del-fcontext.sh --volname vol050-vg10 # semanage fcontext --list | grep rhgs /rhgs(/.*)? all files system_u:object_r:glusterd_brick_t:s0 --- Additional comment from Worker Ant on 2018-02-05 19:36:36 IST --- REVIEW: https://review.gluster.org/19502 (Bug 1542072 - syntactical errors in SELinux hooks) posted (#1) for review on master by --- Additional comment from Worker Ant on 2018-08-23 15:01:40 IST --- REVIEW: https://review.gluster.org/19502 (extras/hooks: syntactical errors in SELinux hooks, scipt logic improved) posted (#6) for review on master by Anoop C S --- Additional comment from Worker Ant on 2019-04-18 18:20:29 IST --- REVIEW: https://review.gluster.org/19502 (extras/hooks: syntactical errors in SELinux hooks, scipt logic improved) merged (#13) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1542072 [Bug 1542072] Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 05:29:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 05:29:11 +0000 Subject: [Bugs] [Bug 1542072] Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1542072 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1701818 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1701818 [Bug 1701818] Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 05:30:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 05:30:41 +0000 Subject: [Bugs] [Bug 1542072] Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1542072 Anoop C S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1686800 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
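For bug 1701818 (and its parent 1542072) above, the hooks themselves are shell scripts and the merged fix is not quoted here. Based only on the "Expected results" in the report, the sketch below restates what the create/post and delete/pre hooks are meant to do for a single brick path: register a glusterd_brick_t file context for the brick and later remove it again. The brick path is the example from the report, Python is used purely for illustration, and whether the real hook also runs restorecon is an assumption.

import subprocess

BRICK_PATH = "/rhgs/vol050-vg10/brick01/fs"   # example path from the report
FCONTEXT_SPEC = BRICK_PATH + "(/.*)?"         # regex spec as shown by 'semanage fcontext --list'

def label_brick():
    # create/post: register the context for the brick path
    subprocess.run(["semanage", "fcontext", "--add", "-t", "glusterd_brick_t",
                    FCONTEXT_SPEC], check=True)
    # apply the label to any existing files (the real hook may or may not do this)
    subprocess.run(["restorecon", "-R", BRICK_PATH], check=True)

def unlabel_brick():
    # delete/pre: drop the registration again
    subprocess.run(["semanage", "fcontext", "--delete", FCONTEXT_SPEC], check=True)

if __name__ == "__main__":
    label_brick()
    unlabel_brick()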
From bugzilla at redhat.com Mon Apr 22 05:32:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 05:32:10 +0000 Subject: [Bugs] [Bug 1701818] Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22594 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 05:32:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 05:32:11 +0000 Subject: [Bugs] [Bug 1701818] Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22594 (extras/hooks: syntactical errors in SELinux hooks, scipt logic improved) posted (#1) for review on release-6 by Anoop C S -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 06:13:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 06:13:07 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #60 from Sanju --- Amgad, Sorry for the delay in response. According to https://bugzilla.redhat.com/show_bug.cgi?id=1676812 the heal command says "Launching heal operation to perform index self heal on volume has been unsuccessful: Commit failed on . Please check log file for details" when any of the bricks in the volume is down. But in the background the heal operation will continue to happen. Here, the error message is misleading. I request you to take a look at https://review.gluster.org/22209 where we tried to change this message but refrained from doing it based on the discussions over the patch. I believe in your setup also, if you check the files on the bricks, they will be healing. Also, we never tested the rollback scenarios in our testing. But everything should be fine after rollback. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 06:14:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 06:14:58 +0000 Subject: [Bugs] [Bug 1148262] [gluster-nagios] Nagios plugins for volume services should work with SELinux enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1148262 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Flags|needinfo?(sabose at redhat.com | |) | Last Closed| |2019-04-22 06:14:58 --- Comment #2 from Sahina Bose --- Closing this as the nagios plugins are not a high priority.
Monitoring of gluster deployments can be done via RHGS Web Administration console (Tendrl) or gluster-prometheus -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 06:18:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 06:18:23 +0000 Subject: [Bugs] [Bug 1403156] Memory leak on graph switch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1403156 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED Flags|needinfo?(amarts at redhat.com | |) | |needinfo?(sabose at redhat.com | |) | Last Closed| |2019-04-22 06:18:23 --- Comment #5 from Sahina Bose --- Closing this. Will open specific bugs as encountered with latest version -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 06:35:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 06:35:30 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22530 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 06:35:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 06:35:30 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1607 from Worker Ant --- REVIEW: https://review.gluster.org/22530 (GlusterD:Checking for null value to void explicit dereferencing of null pointer) merged (#4) on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 06:58:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 06:58:15 +0000 Subject: [Bugs] [Bug 1126823] Remove brick if(op_errno == ESTALE)... In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1126823 Rinku changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |rkothiya at redhat.com Assignee|bugs at gluster.org |rkothiya at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 07:12:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 07:12:46 +0000 Subject: [Bugs] [Bug 1126823] Remove brick if(op_errno == ESTALE)... In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1126823 Rinku changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 07:13:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 07:13:27 +0000 Subject: [Bugs] [Bug 1126823] Remove brick if(op_errno == ESTALE)... 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1126823 Rinku changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-04-22 07:13:27 --- Comment #1 from Rinku --- Thanks for the suggestion, the idea of removing the brick when we get ESTALE error seems logical but cannot be implemented because the ESTALE error can be thrown by any xlator and if we remove the brick like this then it can have other undesirable repercussion. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 07:17:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 07:17:01 +0000 Subject: [Bugs] [Bug 1618915] Spurious failure in tests/basic/ec/ec-1468261.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1618915 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Flags|needinfo?(pkarampu at redhat.c | |om) | Last Closed| |2019-04-22 07:17:01 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 07:17:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 07:17:37 +0000 Subject: [Bugs] [Bug 1618915] Spurious failure in tests/basic/ec/ec-1468261.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1618915 --- Comment #2 from Pranith Kumar K --- Not observed anymore. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 08:47:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 08:47:34 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 --- Comment #9 from Worker Ant --- REVIEW: https://review.gluster.org/22586 (cluster/afr: Set lk-owner before inodelk/entrylk/lk) merged (#3) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 08:49:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 08:49:35 +0000 Subject: [Bugs] [Bug 1701808] weird reasons for a regression failure. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701808 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com Assignee|bugs at gluster.org |dkhandel at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 22 12:54:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 12:54:08 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22597 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 12:54:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 12:54:09 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #18 from Worker Ant --- REVIEW: https://review.gluster.org/22597 (tests: add .t file to increase cli code coverage) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:06 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:06 +0000 Subject: [Bugs] [Bug 1690950] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690950 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-22 13:33:06 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:06 +0000 Subject: [Bugs] [Bug 1691187] fix Coverity CID 1399758 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691187 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-22 13:33:06 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:06 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:06 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:06 +0000 Subject: [Bugs] [Bug 1692957] rpclib: slow floating point math and libm In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:06 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-22 13:33:06 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:10 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1693155, which changed state. Bug 1693155 Summary: Excessive AFR messages from gluster showing in RHGSWA. https://bugzilla.redhat.com/show_bug.cgi?id=1693155 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 22 13:33:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:06 +0000 Subject: [Bugs] [Bug 1693223] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693223 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-22 13:33:06 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:13 +0000 Subject: [Bugs] [Bug 1693992] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693992 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:13 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:13 +0000 Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694561 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:13 +0000 Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694610 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:13 +0000 Subject: [Bugs] [Bug 1695436] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695436 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. 
glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:13 +0000 Subject: [Bugs] [Bug 1695445] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695445 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:13 +0000 Subject: [Bugs] [Bug 1697764] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697764 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-22 13:33:13 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:13 +0000 Subject: [Bugs] [Bug 1698471] ctime feature breaks old client to connect to new server In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698471 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. 
In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:17 +0000 Subject: [Bugs] [Bug 1699198] Glusterfs create a flock lock by anonymous fd, but can't release it forever. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699198 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:17 +0000 Subject: [Bugs] [Bug 1699319] Thin-Arbiter SHD minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699319 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 22 13:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:17 +0000 Subject: [Bugs] [Bug 1699499] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699499 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-22 13:33:17 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:17 +0000 Subject: [Bugs] [Bug 1699703] ctime: Creation of tar file on gluster mount throws warning "file changed as we read it" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699703 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:17 +0000 Subject: [Bugs] [Bug 1699713] glusterfs build is failing on rhel-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699713 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:17 +0000 Subject: [Bugs] [Bug 1699714] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699714 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:17 +0000 Subject: [Bugs] [Bug 1699715] Log level changes do not take effect until the process is restarted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699715 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:33:17 +0000 Subject: [Bugs] [Bug 1699731] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699731 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.1 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.1, please open a new bug report. glusterfs-6.1 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 13:54:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 13:54:32 +0000 Subject: [Bugs] [Bug 1701936] New: comment-on-issue smoke job is experiencing crashes in certain conditions Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701936 Bug ID: 1701936 Summary: comment-on-issue smoke job is experiencing crashes in certain conditions Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: srangana at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: The smoke job at [1] failed with the following stack trace, and hence smoke is failing for the patch under consideration. The stack in the logs looks as follows, 03:04:53 [comment-on-issue] $ /bin/sh -xe /tmp/jenkins7951448835552971908.sh 03:04:53 + echo https://review.gluster.org/22471 03:04:53 https://review.gluster.org/22471 03:04:53 + echo master 03:04:53 master 03:04:53 + /opt/qa/github/handle_github.py --repo glusterfs -c 03:04:55 Issues found in the commit message: [{u'status': u'Fixes', u'id': u'647'}] 03:04:55 Bug fix, no extra flags required 03:04:55 No issues found in the commit message 03:04:55 Old issues: [] 03:04:55 Traceback (most recent call last): 03:04:55 File "/opt/qa/github/handle_github.py", line 187, in 03:04:55 main(ARGS.repo, ARGS.dry_run, ARGS.comment_file) 03:04:55 File "/opt/qa/github/handle_github.py", line 165, in main 03:04:55 github.comment_on_issues(newissues, commit_msg) 03:04:55 File "/opt/qa/github/handle_github.py", line 47, in comment_on_issues 03:04:55 self._comment_on_issue(issue['id'], comment) 03:04:55 TypeError: string indices must be integers Request an analysis and fix of the same as required. [1] Smoke job link: https://build.gluster.org/job/comment-on-issue/13672/console -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 14:04:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:04:12 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 --- Comment #10 from Worker Ant --- REVIEW: https://review.gluster.org/22582 (features/sdfs: Assign unique lk-owner for entrylk fop) merged (#3) on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
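On bug 1701936 above: the traceback ends in "TypeError: string indices must be integers" at issue['id'], which means that by the time comment_on_issues() iterated over them, the "new issues" were plain strings rather than the {'id': ..., 'status': ...} dicts parsed from the commit message. The reconstruction below is hypothetical, since the real handle_github.py is not quoted in the report; it only demonstrates that failure shape and one defensive way around it.

def comment_on_issues(issues, comment):
    # Mirrors the failing call path: issue['id'] assumes every item is a dict.
    for issue in issues:
        print("commenting on issue #%s: %s" % (issue['id'], comment))

def comment_on_issues_safe(issues, comment):
    # Tolerates both dicts and bare id strings instead of crashing.
    for issue in issues:
        issue_id = issue['id'] if isinstance(issue, dict) else issue
        print("commenting on issue #%s: %s" % (issue_id, comment))

if __name__ == "__main__":
    good = [{'id': '647', 'status': 'Fixes'}]   # shape parsed from a "Fixes: #647" footer
    bad = ['647']                               # bare strings reproduce the crash
    comment_on_issues(good, "build link")       # works
    comment_on_issues_safe(bad, "build link")   # works
    try:
        comment_on_issues(bad, "build link")    # raises the job's TypeError
    except TypeError as exc:
        print("reproduced:", exc)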
From bugzilla at redhat.com Mon Apr 22 14:04:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:04:37 +0000 Subject: [Bugs] [Bug 1313852] Locks xl must use unique keys when filling in lock_count, dom_lock_count etc requested In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1313852 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(kdhananj at redhat.c | |om) --- Comment #1 from Yaniv Kaul --- Issue still seems to exist. Do we plan to fix it? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 14:05:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:05:04 +0000 Subject: [Bugs] [Bug 1343022] prevent conflicting meta locks to be accpeted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1343022 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(spalai at redhat.com | |) --- Comment #3 from Yaniv Kaul --- Both were not merged. Status? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 14:05:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:05:33 +0000 Subject: [Bugs] [Bug 1201239] DHT : disabling rebalance on specific files In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1201239 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(jthottan at redhat.c | |om) --- Comment #4 from Yaniv Kaul --- None were merged. Status? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 14:06:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:06:15 +0000 Subject: [Bugs] [Bug 1194546] Write behind returns success for a write irrespective of a conflicting lock held by another application In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1194546 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |rtalur at redhat.com Flags| |needinfo?(rtalur at redhat.com | |) --- Comment #6 from Yaniv Kaul --- None of the above patches were merged. What's the status? -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 22 14:09:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:09:56 +0000 Subject: [Bugs] [Bug 1353518] packaging: rpmlint warning and errors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1353518 --- Comment #1 from Yaniv Kaul --- We are in a slightly better state now (6.1): { "module" : "ManPages", "order" : 45, "results" : [ { "arch" : "armv7hl,x86_64", "code" : "ManPageMissing", "diag" : "No man page for /usr/sbin/conf.py", "subpackage" : "glusterfs-server" }, { "arch" : "armv7hl,x86_64", "code" : "ManPageMissing", "diag" : "No man page for /usr/sbin/gcron.py", "subpackage" : "glusterfs-server" }, { "arch" : "armv7hl,x86_64", "code" : "ManPageMissing", "diag" : "No man page for /usr/sbin/gf_attach", "subpackage" : "glusterfs-server" }, { "arch" : "armv7hl,x86_64", "code" : "ManPageMissing", "diag" : "No man page for /usr/sbin/glfsheal", "subpackage" : "glusterfs-server" }, { "arch" : "armv7hl,x86_64", "code" : "ManPageMissing", "diag" : "No man page for /usr/sbin/snap_scheduler.py", "subpackage" : "glusterfs-server" } ], "run_time" : 0, "status" : "completed" }, { "module" : "RpmScripts", "order" : 90, "results" : [ { "arch" : "src", "code" : "UseraddNoUid", "context" : { "excerpt" : [ "useradd -r -g gluster -d %{_rundir}/gluster -s /sbin/nologin -c "GlusterFS daemons" gluster" ], "lineno" : 948, "path" : "glusterfs.spec", "sub" : "%pre" }, "diag" : "Invocation of useradd without specifying a UID; this may be OK, because /usr/share/doc/setup/uidgid defines no UID for gluster" } ], "run_time" : 0, "status" : "completed" }, { "module" : "Setxid", "order" : 91, "results" : [ { "arch" : "armv7hl,x86_64", "code" : "UnauthorizedSetxid", "context" : { "path" : "/usr/bin/fusermount-glusterfs" }, "diag" : "File /usr/bin/fusermount-glusterfs is setuid root but is not on the setxid whitelist.", "subpackage" : "glusterfs-fuse" } ], "run_time" : 0, -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 14:11:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:11:08 +0000 Subject: [Bugs] [Bug 1297203] readdir false-failure with non-Linux In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1297203 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |pkarampu at redhat.com Flags| |needinfo?(pkarampu at redhat.c | |om) --- Comment #14 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 14:11:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:11:36 +0000 Subject: [Bugs] [Bug 1098991] Dist-geo-rep: Invalid slave url (::: three or more colons) error out with unclear error message. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1098991 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |khiremat at redhat.com Flags| |needinfo?(khiremat at redhat.c | |om) --- Comment #4 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 22 14:13:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:13:07 +0000 Subject: [Bugs] [Bug 1593337] Perform LINK fop under locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593337 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(kdhananj at redhat.c | |om) --- Comment #1 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 14:18:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:18:18 +0000 Subject: [Bugs] [Bug 1697866] Provide a way to detach a failed node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697866 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1696334 Depends On|1696334 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1696334 [Bug 1696334] Provide a way to detach a failed node -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 14:18:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:18:36 +0000 Subject: [Bugs] [Bug 1129609] Values for cache-priority should be validated In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1129609 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |DEFERRED Last Closed| |2019-04-22 14:18:36 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 14:26:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:26:48 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22598 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 14:26:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 14:26:49 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #19 from Worker Ant --- REVIEW: https://review.gluster.org/22598 (tier: remove tier code to increase code coverage in cli) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 22 15:14:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 15:14:54 +0000 Subject: [Bugs] [Bug 1343022] prevent conflicting meta locks to be accpeted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1343022 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |CURRENTRELEASE Flags|needinfo?(spalai at redhat.com | |) | Last Closed| |2019-04-22 15:14:54 --- Comment #4 from Susant Kumar Palai --- (In reply to Yaniv Kaul from comment #3) > Both were not merged. Status? Fixed by https://review.gluster.org/#/c/glusterfs/+/21603/. Closing the bug. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 15:15:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 15:15:36 +0000 Subject: [Bugs] [Bug 1593337] Perform LINK fop under locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593337 Krutika Dhananjay changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(kdhananj at redhat.c | |om) | --- Comment #2 from Krutika Dhananjay --- I don't plan to fix this anytime soon. It's a good bug to fix for beginners as it provides some amount of exposure to the way shard and locks translators work. I'll keep this and any other not-so-urgent bzs raised in future reserved for using to mentor someone. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 15:16:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 15:16:26 +0000 Subject: [Bugs] [Bug 1593337] Perform LINK fop under locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593337 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix, StudentProject -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 15:20:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 15:20:28 +0000 Subject: [Bugs] [Bug 1313852] Locks xl must use unique keys when filling in lock_count, dom_lock_count etc requested In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1313852 --- Comment #2 from Krutika Dhananjay --- (In reply to Yaniv Kaul from comment #1) > Issue still seems to exist. Do we plan to fix it? That's true. The issue does still exist. I need to check if there are any consumers today that will request multiple such counts as part of the same fop. And then again, it also needs to be fixed in a backward-compatible way since the key-requesting translators will be on the client side and their values are served by locks translator which sits on the server side. I just found this while reading code and raised it a while ago. Let me check if we need this. Keeping the needinfo on me intact until then. -Krutika -- You are receiving this mail because: You are on the CC list for the bug. 
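To make the key-collision concern in the Bug 1313852 comment above concrete: if two client-side translators request their counts under the same xdata key within one fop, the server-side locks translator can only fill a single value for that key, so one requester ends up reading a count that was meant for the other. A toy model in Python, with a plain dict standing in for the fop's xdata and invented key names:

```python
# Two requesters mark interest under the same (non-unique) key.
requested = {}
requested["lock-count"] = None          # translator A's request
requested["lock-count"] = None          # translator B silently collides with A

# The server-side locks translator fills the one shared key.
requested["lock-count"] = 3             # a single number must serve both callers

# Unique, per-requester keys keep the answers separate in the same reply.
requested_unique = {"A.lock-count": 7, "B.dom-lock-count": 2}
print(requested, requested_unique)
```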
From bugzilla at redhat.com Mon Apr 22 15:22:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 15:22:37 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22599 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 15:22:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 15:22:38 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #20 from Worker Ant --- REVIEW: https://review.gluster.org/22599 (tests: add .t files to increase cli code coverage) posted (#1) for review on master by Rishubh Jain -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 15:24:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 15:24:40 +0000 Subject: [Bugs] [Bug 1701039] gluster replica 3 arbiter Unfortunately data not distributed equally In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701039 --- Comment #3 from Susant Kumar Palai --- Will need the following initial pieces of information to analyze the issue. 1- gluster volume info 2- run the following command on the root of all the bricks on all servers. "getfattr -m . -de hex " 3- disk usage of each brick Susant -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 15:42:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 15:42:14 +0000 Subject: [Bugs] [Bug 1701983] New: Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701983 Bug ID: 1701983 Summary: Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: disperse Severity: high Assignee: aspandey at redhat.com Reporter: nbalacha at redhat.com QA Contact: nchilaka at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1698861 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1698861 +++ Description of problem: Running the following .t results in both olddir and newdir visible from the mount point and listing them shows no files. Steps to Reproduce: #!/bin/bash . $(dirname $0)/../../include.rc . $(dirname $0)/../../volume.rc . 
$(dirname $0)/../../common-utils.rc cleanup TEST glusterd TEST pidof glusterd TEST $CLI volume create $V0 disperse 6 disperse-data 4 $H0:$B0/$V0-{1..24} force TEST $CLI volume start $V0 TEST glusterfs -s $H0 --volfile-id $V0 $M0 ls $M0/ mkdir $M0/olddir mkdir $M0/olddir/subdir touch $M0/olddir/file-{1..10} ls -lR TEST kill_brick $V0 $H0 $B0/$V0-1 TEST kill_brick $V0 $H0 $B0/$V0-2 TEST kill_brick $V0 $H0 $B0/$V0-7 TEST kill_brick $V0 $H0 $B0/$V0-8 TEST mv $M0/olddir $M0/newdir # Start all bricks TEST $CLI volume start $V0 force $CLI volume status # It takes a while for the client to reconnect to the brick sleep 5 ls -l $M0 # Cleanup #cleanup Version-Release number of selected component (if applicable): How reproducible: Consistently Actual results: [root at rhgs313-6 tests]# ls -lR /mnt/glusterfs/0/ /mnt/glusterfs/0/: total 8 drwxr-xr-x. 2 root root 4096 Apr 11 17:12 newdir drwxr-xr-x. 2 root root 4096 Apr 11 17:12 olddir /mnt/glusterfs/0/newdir: total 0 /mnt/glusterfs/0/olddir: total 0 [root at rhgs313-6 tests]# Expected results: Additional info: Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698861 [Bug 1698861] Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 15:42:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 15:42:14 +0000 Subject: [Bugs] [Bug 1698861] Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698861 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1701983 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1701983 [Bug 1701983] Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 18:16:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 18:16:34 +0000 Subject: [Bugs] [Bug 1701936] comment-on-issue smoke job is experiencing crashes in certain conditions In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701936 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |dkhandel at redhat.com Resolution|--- |CURRENTRELEASE Severity|high |unspecified Last Closed| |2019-04-22 18:16:34 --- Comment #2 from Deepshikha khandelwal --- It is fixed now: https://build.gluster.org/job/comment-on-issue/13697/console -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 18:38:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 18:38:05 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22601 -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 18:38:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 18:38:05 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #632 from Worker Ant --- REVIEW: https://review.gluster.org/22601 ([WIP] options.c,h: minor changes to GF_OPTION_RECONF) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 19:45:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 19:45:11 +0000 Subject: [Bugs] [Bug 1702043] New: Newly created files are inaccessible via FUSE Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702043 Bug ID: 1702043 Summary: Newly created files are inaccessible via FUSE Product: GlusterFS Version: 6 OS: Linux Status: NEW Component: fuse Severity: high Assignee: bugs at gluster.org Reporter: bio.erikson at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Newly created files/dirs will be inaccessible to the local FUSE mount after file IO is completed. I have recently started to experience this problem after upgrading to gluster 6.0, and did not previously experience this problem. I have two nodes running glusterfs, each with a FUSE mount pointed to localhost. ``` #/etc/fstab localhost:/gv0 /data/ glusterfs lru-limit=0,defaults,_netdev,acl 0 0 ``` I have ran in to this problem with rsync, random file creation with dd, and mkdir/touch. I have noticed that files are accessible while being written too, and become inaccessible once the file IO is complete. It usually happens in 'chunks' of sequential files. After some period of time >15 min the problem resolves itself. The files on the local bricks ls just fine. The problematic files/dirs are accessible via FUSE mounts on other machines. Heal doesn't report any problems. Small file workloads seem to make the problem worse. Overwriting existing files seems to not create problematic files. *Gluster Info* Volume Name: gv0 Type: Distributed-Replicate Volume ID: ... Status: Started Snapshot Count: 0 Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: ... 
Options Reconfigured: cluster.self-heal-daemon: enable server.ssl: on client.ssl: on auth.ssl-allow: * transport.address-family: inet nfs.disable: on user.smb: disable performance.write-behind: on diagnostics.latency-measurement: off diagnostics.count-fop-hits: off cluster.lookup-optimize: on features.cache-invalidation: on features.cache-invalidation-timeout: 600 performance.nl-cache: on cluster.readdir-optimize: on storage.build-pgfid: off diagnostics.brick-log-level: ERROR diagnostics.brick-sys-log-level: ERROR diagnostics.client-log-level: ERROR *Client Log* The FUSE log is flooded with: ``` [2019-04-22 19:12:39.231654] D [MSGID: 0] [io-stats.c:2227:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x7f535ca5c728, gv0 returned -1 error: No such file or directory [No such file or directory] ``` Version-Release number of selected component (if applicable): apt list | grep gluster bareos-filedaemon-glusterfs-plugin/stable 16.2.4-3+deb9u2 amd64 bareos-storage-glusterfs/stable 16.2.4-3+deb9u2 amd64 glusterfs-client/unknown 6.1-1 amd64 [upgradable from: 6.0-1] glusterfs-common/unknown 6.1-1 amd64 [upgradable from: 6.0-1] glusterfs-dbg/unknown 6.1-1 amd64 [upgradable from: 6.0-1] glusterfs-server/unknown 6.1-1 amd64 [upgradable from: 6.0-1] tgt-glusterfs/stable 1:1.0.69-1 amd64 uwsgi-plugin-glusterfs/stable,stable 2.0.14+20161117-3+deb9u2 amd64 How reproducible: Always Steps to Reproduce: 1. Upgrade from 5.6 to either 6.0 or 6.1, with the described configuration. 2. Run a small file intensive workload. Actual results: ``` dd if=/dev/urandom bs=1024 count=10240 | split -a 4 -b 1k - file. 1024+0 records in 1024+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 18.3999 s, 57.0 kB/s ls: cannot access 'file.abbd': No such file or directory ls: cannot access 'file.aabb': No such file or directory ls: cannot access 'file.aadh': No such file or directory ls: cannot access 'file.aafq': No such file or directory ... total 845 -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaa -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaab -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaac -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaad -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaae -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaf -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaag -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaah -????????? ? ? ? ? ? file.aaai -????????? ? ? ? ? ? file.aaaj -????????? ? ? ? ? ? file.aaak -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaal -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaam -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaan -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaao -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaap -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaq -????????? ? ? ? ? ? file.aaar -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaas -rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaat -rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaau -????????? ? ? ? ? ? file.aaav -rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaaw -rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaax -rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaay -????????? ? ? ? ? ? file.aaaz -????????? ? ? ? ? ? file.aaba -????????? ? ? ? ? ? file.aabb -rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aabc ... 
# Wait 10 mins total 1024 -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaa -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaab -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaac -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaad -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaae -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaf -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaag -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaah -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaai -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaj -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaak -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaal -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaam -rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaan ... Expected results: All files to be accessible immediately. Additional info: There was nothing of interest in the other logs when changed to INFO. Seems similar to Bug 1647229 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 22 21:51:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 21:51:49 +0000 Subject: [Bugs] [Bug 1194546] Write behind returns success for a write irrespective of a conflicting lock held by another application In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1194546 Raghavendra Talur changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(rtalur at redhat.com |needinfo?(anoopcs at redhat.co |) |m) --- Comment #7 from Raghavendra Talur --- The patch posted(http://review.gluster.org/10350) for review handles the case where: If both process A and B are on the same Gluster client machine, then it ensures write-behind orders write and lock requests from both the processes in the right order. On review, Raghavendra G commented with the following example and review: A write w1 is done and is cached in write-behind. A mandatory lock is held by same thread which conflicts with w1 (Is that even a valid case? If not, probably we don't need this patch at all). This mandatory lock goes through write-behind and locks xlator grants this lock. Now write-behind flushes w1 and posix-locks fails w1 as a conflicting mandatory lock is held. But now that I think of it, it seems like an invalid (exotic at its best) use-case. Anoop/Raghavendra G, >From mandatory locking and write-behind perspective, is it still an exotic case? If so, we can close this bug. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 22 21:52:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 22 Apr 2019 21:52:49 +0000 Subject: [Bugs] [Bug 1194546] Write behind returns success for a write irrespective of a conflicting lock held by another application In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1194546 Raghavendra Talur changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rgowdapp at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 23 01:14:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 01:14:59 +0000 Subject: [Bugs] [Bug 1194546] Write behind returns success for a write irrespective of a conflicting lock held by another application In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1194546 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(anoopcs at redhat.co | |m) | |needinfo?(rgowdapp at redhat.c | |om) | --- Comment #8 from Raghavendra G --- (In reply to Raghavendra Talur from comment #7) > The patch posted(http://review.gluster.org/10350) for review handles the > case where: > If both process A and B are on the same Gluster client machine, then it > ensures write-behind orders write and lock requests from both the processes > in the right order. > > > On review, Raghavendra G commented with the following example and review: > A write w1 is done and is cached in write-behind. > A mandatory lock is held by same thread which conflicts with w1 (Is that > even a valid case? If not, probably we don't need this patch at all). This > mandatory lock goes through write-behind and locks xlator grants this lock. > Now write-behind flushes w1 and posix-locks fails w1 as a conflicting > mandatory lock is held. > But now that I think of it, it seems like an invalid (exotic at its best) > use-case. What I missed above is when write and lock requests happen from two different processes on same mount point (which the commit msg says). For that case, this patch is still required. > > > Anoop/Raghavendra G, > > From mandatory locking and write-behind perspective, is it still an exotic > case? If so, we can close this bug. No. I was wrong. This patch is required for multiple process scenario. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 02:25:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 02:25:43 +0000 Subject: [Bugs] [Bug 1701983] Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701983 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |amukherj at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 03:31:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 03:31:12 +0000 Subject: [Bugs] [Bug 1194546] Write behind returns success for a write irrespective of a conflicting lock held by another application In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1194546 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rtalur at redhat.com | |) --- Comment #9 from Raghavendra G --- I've restored the patch, but it ran into conflict. Can you refresh? -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 23 03:36:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 03:36:33 +0000 Subject: [Bugs] [Bug 1702131] New: Two files left in EC volume after a rename when glusterfsd out of service Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702131 Bug ID: 1702131 Summary: Two files left in EC volume after a rename when glusterfsd out of service Product: GlusterFS Version: mainline Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: kinglongmee at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: $subject Version-Release number of selected component (if applicable): How reproducible: 100% Steps to Reproduce: 1. create a 4x2 ec volume, 2. ganesha exports a directory, and nfs client mount it at /mnt/nfs; mkdir /mnt/nfs/a mkdir /mnt/nfs/a/dir mkdir /mnt/nfs/b killall glusterfsd mv /mnt/nfs/a/dir /mnt/nfs/b service glusterd restart ls /mnt/nfs/*/ Actual results: /mnt/nfs/a and /mnt/nfs/b contains two directory named "dir", and the heal won't even complete. Expected results: Only /mnt/nfs/b contains directory named "dir". Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 03:44:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 03:44:28 +0000 Subject: [Bugs] [Bug 1702131] The source file is left in EC volume after rename when glusterfsd out of service In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702131 Kinglong Mee changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|Two files left in EC volume |The source file is left in |after a rename when |EC volume after rename when |glusterfsd out of service |glusterfsd out of service -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 03:50:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 03:50:11 +0000 Subject: [Bugs] [Bug 1702131] The source file is left in EC volume after rename when glusterfsd out of service In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702131 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22602 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 03:50:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 03:50:12 +0000 Subject: [Bugs] [Bug 1702131] The source file is left in EC volume after rename when glusterfsd out of service In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702131 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22602 (ec-heal: check parent gfid when deleting stale name) posted (#1) for review on master by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 23 04:12:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 04:12:37 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #61 from Amgad --- Thanks Sanju: We do automate the procedure, we'll need to have a successful check. What command you recommend then to check that the heal is successful during our automated rollback? We can't just ignore the unsuccessful message because it can be real as well. Appreciate your prompt answer. Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 04:21:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 04:21:07 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #62 from Amgad --- Please go thru my data on comment - 2019-03-24 03:55:36 UTC where it shows heal is not happening till the 2nd node is rolled-back as well to 3.12.15 -- so till 2 nodes at 3.12.15,heal doesn't start -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 04:44:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 04:44:59 +0000 Subject: [Bugs] [Bug 1700295] The data couldn't be flushed immediately even with O_SYNC in glfs_create or with glfs_fsync/glfs_fdatasync after glfs_write. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700295 Prasanna Kumar Kalever changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Regression CC| |prasanna.kalever at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 04:46:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 04:46:31 +0000 Subject: [Bugs] [Bug 1700295] The data couldn't be flushed immediately even with O_SYNC in glfs_create or with glfs_fsync/glfs_fdatasync after glfs_write. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700295 Prasanna Kumar Kalever changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords|Regression | Severity|unspecified |urgent -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 04:46:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 04:46:44 +0000 Subject: [Bugs] [Bug 1700295] The data couldn't be flushed immediately even with O_SYNC in glfs_create or with glfs_fsync/glfs_fdatasync after glfs_write. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700295 Prasanna Kumar Kalever changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 23 05:09:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 05:09:41 +0000 Subject: [Bugs] [Bug 1701808] weird reasons for a regression failure. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701808 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-23 05:09:41 --- Comment #1 from Deepshikha khandelwal --- Jenkins user on regression machines had been moved from 'wheel' group to 'mock' secondary group by ansible playbook and hence lost it's sudo permissions from sudoers config file. The change in playbook has been reverted and is fixed. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 05:21:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 05:21:25 +0000 Subject: [Bugs] [Bug 1512093] Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1512093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22603 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 05:21:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 05:21:26 +0000 Subject: [Bugs] [Bug 1512093] Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1512093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/22603 (geo-rep: Fix entries and metadata counters in geo-rep status) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 05:27:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 05:27:23 +0000 Subject: [Bugs] [Bug 1512093] Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1512093 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(khiremat at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 05:56:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 05:56:29 +0000 Subject: [Bugs] [Bug 1201239] DHT : disabling rebalance on specific files In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1201239 Jiffin changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WONTFIX Flags|needinfo?(jthottan at redhat.c | |om) | Last Closed| |2019-04-23 05:56:29 --- Comment #5 from Jiffin --- (In reply to Yaniv Kaul from comment #4) > None were merged. Status? 
The bug was opened in favour of trash translator in gluster, since there is not much active development for that translator, I am closing this bug as won't fix -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 06:00:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 06:00:31 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22604 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 06:00:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 06:00:32 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 --- Comment #11 from Worker Ant --- REVIEW: https://review.gluster.org/22604 (features/locks: error-out {inode,entry}lk fops with all-zero lk-owner) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 06:23:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 06:23:06 +0000 Subject: [Bugs] [Bug 1701039] gluster replica 3 arbiter Unfortunately data not distributed equally In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701039 --- Comment #4 from Eng Khalid Jamal --- (In reply to Susant Kumar Palai from comment #3) > Will need the following initial pieces of information to analyze the issue. > 1- gluster volume info > 2- run the following command on the root of all the bricks on all servers. > "getfattr -m . -de hex " > 3- disk usage of each brick > > Susant Thank you susant i find solution for my issue, my issue is because i forget when i create my volume i did not enable sharding feature for that the data not distributed to all brick equally , now i solve my issue like below : 1- move my all vm disk to another storage domain. 2- put my disk in maintenance mode. 3- stopped my storage doamin . 4-in here i have two steps one of th is remove all storage and creating it again or i can just enable the sharding options in here i do the second choice because of my storage dos not have huge data . 5-starting my storage domain . . 6- now the data distributing to all brick correctly . thanks again -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 06:25:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 06:25:33 +0000 Subject: [Bugs] [Bug 1701039] gluster replica 3 arbiter Unfortunately data not distributed equally In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701039 Eng Khalid Jamal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-04-23 06:25:33 -- You are receiving this mail because: You are on the CC list for the bug. 
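A small automation sketch related to the resolution of Bug 1701039 above, where the fix was simply to enable sharding on the volume before repopulating it: assuming the stock gluster CLI is available, a check like the one below could confirm the option before a storage domain is filled. Sharding only applies to files written after the option is turned on, which is why the reporter moved the VM disks out and back; the parsing of the CLI output here is deliberately naive and the volume name is hypothetical.

```python
import subprocess

def ensure_sharding(volume):
    """Enable features.shard on 'volume' if it is not already on.
    Sketch only: assumes the gluster CLI is installed and in PATH."""
    out = subprocess.check_output(
        ["gluster", "volume", "get", volume, "features.shard"], text=True)
    value = out.strip().split()[-1]      # naive parse: last token is the value
    if value != "on":
        subprocess.check_call(
            ["gluster", "volume", "set", volume, "features.shard", "on"])
        return "enabled"
    return "already enabled"

# print(ensure_sharding("gv0"))          # hypothetical volume name
```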
From bugzilla at redhat.com Tue Apr 23 06:30:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 06:30:04 +0000 Subject: [Bugs] [Bug 1654753] A distributed-disperse volume crashes when a symbolic link is renamed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654753 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED CC| |spalai at redhat.com Assignee|bugs at gluster.org |spalai at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 07:10:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:10:30 +0000 Subject: [Bugs] [Bug 1701039] gluster replica 3 arbiter Unfortunately data not distributed equally In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701039 --- Comment #5 from Susant Kumar Palai --- (In reply to Eng Khalid Jamal from comment #4) > (In reply to Susant Kumar Palai from comment #3) > > Will need the following initial pieces of information to analyze the issue. > > 1- gluster volume info > > 2- run the following command on the root of all the bricks on all servers. > > "getfattr -m . -de hex " > > 3- disk usage of each brick > > > > Susant > > Thank you susant > > i find solution for my issue, my issue is because i forget when i create my > volume i did not enable sharding feature for that the data not distributed > to all brick equally , now i solve my issue like below : > > 1- move my all vm disk to another storage domain. > 2- put my disk in maintenance mode. > 3- stopped my storage doamin . > 4-in here i have two steps one of th is remove all storage and creating it > again or i can just enable the sharding options in here i do the second > choice because of my storage dos not have huge data . > 5-starting my storage domain . . > 6- now the data distributing to all brick correctly . > > thanks again Great! Thanks for the update. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 07:17:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:17:58 +0000 Subject: [Bugs] [Bug 1098991] Dist-geo-rep: Invalid slave url (::: three or more colons) error out with unclear error message. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1098991 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22605 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 07:18:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:18:00 +0000 Subject: [Bugs] [Bug 1098991] Dist-geo-rep: Invalid slave url (::: three or more colons) error out with unclear error message. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1098991 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22605 (cli: Validate invalid slave url) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. 
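For context on the "cli: Validate invalid slave url" patch posted for Bug 1098991 above: a geo-replication slave is normally given as host::volume, so a URL with three or more consecutive colons is malformed and should be rejected with a clear message rather than an obscure failure. A rough illustration of such a validation in Python; this is not the actual CLI patch, which is the Gerrit change linked above:

```python
import re

def validate_slave_url(url):
    """Reject slave URLs such as 'host:::gv0' that contain three or more
    consecutive colons; accept the usual 'host::volume' form."""
    if re.search(r":{3,}", url):
        raise ValueError("invalid slave url %r: too many ':' characters" % url)
    return url

validate_slave_url("slave.example.com::gv0")        # accepted
# validate_slave_url("slave.example.com:::gv0")     # raises ValueError
```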
From bugzilla at redhat.com Tue Apr 23 07:18:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:18:23 +0000 Subject: [Bugs] [Bug 1098991] Dist-geo-rep: Invalid slave url (::: three or more colons) error out with unclear error message. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1098991 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(khiremat at redhat.c | |om) | --- Comment #6 from Kotresh HR --- (In reply to Yaniv Kaul from comment #4) > Status? Posted the patch -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 07:18:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:18:52 +0000 Subject: [Bugs] [Bug 1297203] readdir false-failure with non-Linux In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1297203 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |INSUFFICIENT_DATA Flags|needinfo?(pkarampu at redhat.c | |om) | Last Closed| |2019-04-23 07:18:52 --- Comment #15 from Pranith Kumar K --- I do not recollect any progress on the bz from the developers. At the moment there is not enough data to confirm about the problem and the fix. Closing for now with insufficient-data. Please re-open if this issue still persists so that it can be fixed correctly. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 07:18:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:18:53 +0000 Subject: [Bugs] [Bug 1369447] readdir false-failure with non-Linux In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1369447 Bug 1369447 depends on bug 1297203, which changed state. Bug 1297203 Summary: readdir false-failure with non-Linux https://bugzilla.redhat.com/show_bug.cgi?id=1297203 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |INSUFFICIENT_DATA -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 07:18:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:18:53 +0000 Subject: [Bugs] [Bug 1369448] readdir false-failure with non-Linux In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1369448 Bug 1369448 depends on bug 1297203, which changed state. Bug 1297203 Summary: readdir false-failure with non-Linux https://bugzilla.redhat.com/show_bug.cgi?id=1297203 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |INSUFFICIENT_DATA -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 23 07:42:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:42:21 +0000 Subject: [Bugs] [Bug 1702185] New: coredump reported by test ./tests/bugs/glusterd/bug-1699339.t Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702185 Bug ID: 1702185 Summary: coredump reported by test ./tests/bugs/glusterd/bug-1699339.t Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Upstream test ./tests/bugs/glusterd/bug-1699339.t failed the regression with a coredump. backtrace of the core can be tracked from https://build.gluster.org/job/regression-test-with-multiplex/1270/display/redirect?page=changes Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 07:44:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:44:37 +0000 Subject: [Bugs] [Bug 1702185] coredump reported by test ./tests/bugs/glusterd/bug-1699339.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702185 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22606 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 07:44:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:44:39 +0000 Subject: [Bugs] [Bug 1702185] coredump reported by test ./tests/bugs/glusterd/bug-1699339.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702185 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22606 (glusterd/shd: Keep a ref on volinfo until attach rpc execute cbk) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 07:47:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 07:47:28 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ksubrahm at redhat.com Flags| |needinfo?(ksubrahm at redhat.c | |om) --- Comment #63 from Sanju --- (In reply to Amgad from comment #61) > Thanks Sanju: > > We do automate the procedure, we'll need to have a successful check. What > command you recommend then to check that the heal is successful during our > automated rollback? You can check whether "Number of entries:" are reducing in "gluster volume heal info " output. Karthik, can you please confirm the above statement? -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 23 08:03:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 08:03:21 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 --- Comment #4 from manschwetus at cs-software-gmbh.de --- Problem persists with 6.1, bt has changed a bit: Crashdump1: #0 0x00007fc3dcaa27f0 in ?? () from /lib/x86_64-linux-gnu/libuuid.so.1 #1 0x00007fc3dcaa2874 in ?? () from /lib/x86_64-linux-gnu/libuuid.so.1 #2 0x00007fc3ddb5cdcc in gf_uuid_unparse (out=0x7fc3c8005580 "c27a90a6-e68b-4b0b-af56-002ea7bf1fb4", uuid=0x8 ) at ./glusterfs/compat-uuid.h:55 #3 uuid_utoa (uuid=uuid at entry=0x8 ) at common-utils.c:2777 #4 0x00007fc3d688c529 in ioc_open_cbk (frame=0x7fc3a8b56208, cookie=, this=0x7fc3d001eb80, op_ret=0, op_errno=117, fd=0x7fc3c7678418, xdata=0x0) at io-cache.c:646 #5 0x00007fc3d6cb09b1 in ra_open_cbk (frame=0x7fc3a8b5b698, cookie=, this=, op_ret=, op_errno=, fd=0x7fc3c7678418, xdata=0x0) at read-ahead.c:99 #6 0x00007fc3d71b10b3 in afr_open_cbk (frame=0x7fc3a8b67d48, cookie=0x0, this=, op_ret=0, op_errno=0, fd=0x7fc3c7678418, xdata=0x0) at afr-open.c:97 #7 0x00007fc3d747c5f8 in client4_0_open_cbk (req=, iov=, count=, myframe=0x7fc3a8b58d18) at client-rpc-fops_v2.c:284 #8 0x00007fc3dd9013d1 in rpc_clnt_handle_reply (clnt=clnt at entry=0x7fc3d0057dd0, pollin=pollin at entry=0x7fc386e7e2b0) at rpc-clnt.c:755 #9 0x00007fc3dd901773 in rpc_clnt_notify (trans=0x7fc3d0058090, mydata=0x7fc3d0057e00, event=, data=0x7fc386e7e2b0) at rpc-clnt.c:922 #10 0x00007fc3dd8fe273 in rpc_transport_notify (this=this at entry=0x7fc3d0058090, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=) at rpc-transport.c:542 #11 0x00007fc3d8333474 in socket_event_poll_in (notify_handled=true, this=0x7fc3d0058090) at socket.c:2522 #12 socket_event_handler (fd=fd at entry=11, idx=idx at entry=2, gen=gen at entry=4, data=data at entry=0x7fc3d0058090, poll_in=, poll_out=, poll_err=, event_thread_died=0 '\000') at socket.c:2924 #13 0x00007fc3ddbb2863 in event_dispatch_epoll_handler (event=0x7fc3cfffee54, event_pool=0x5570195807b0) at event-epoll.c:648 #14 event_dispatch_epoll_worker (data=0x5570195c5a80) at event-epoll.c:761 #15 0x00007fc3dd2bc6db in start_thread (arg=0x7fc3cffff700) at pthread_create.c:463 #16 0x00007fc3dcfe588f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 Crashdump2: #0 0x00007fdd13fd97f0 in ?? () from /lib/x86_64-linux-gnu/libuuid.so.1 #1 0x00007fdd13fd9874 in ?? 
() from /lib/x86_64-linux-gnu/libuuid.so.1 #2 0x00007fdd15093dcc in gf_uuid_unparse (out=0x7fdd0805a2d0 "1f739bdc-f7c0-4133-84cc-554eb594ae81", uuid=0x8 ) at ./glusterfs/compat-uuid.h:55 #3 uuid_utoa (uuid=uuid at entry=0x8 ) at common-utils.c:2777 #4 0x00007fdd0d5c2529 in ioc_open_cbk (frame=0x7fdce44a9f88, cookie=, this=0x7fdd0801eb80, op_ret=0, op_errno=117, fd=0x7fdcf1ad9b78, xdata=0x0) at io-cache.c:646 #5 0x00007fdd0d9e69b1 in ra_open_cbk (frame=0x7fdce44d2a78, cookie=, this=, op_ret=, op_errno=, fd=0x7fdcf1ad9b78, xdata=0x0) at read-ahead.c:99 #6 0x00007fdd0dee70b3 in afr_open_cbk (frame=0x7fdce44a80a8, cookie=0x1, this=, op_ret=0, op_errno=0, fd=0x7fdcf1ad9b78, xdata=0x0) at afr-open.c:97 #7 0x00007fdd0e1b25f8 in client4_0_open_cbk (req=, iov=, count=, myframe=0x7fdce4462528) at client-rpc-fops_v2.c:284 #8 0x00007fdd14e383d1 in rpc_clnt_handle_reply (clnt=clnt at entry=0x7fdd08054490, pollin=pollin at entry=0x7fdc942904d0) at rpc-clnt.c:755 #9 0x00007fdd14e38773 in rpc_clnt_notify (trans=0x7fdd08054750, mydata=0x7fdd080544c0, event=, data=0x7fdc942904d0) at rpc-clnt.c:922 #10 0x00007fdd14e35273 in rpc_transport_notify (this=this at entry=0x7fdd08054750, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=) at rpc-transport.c:542 #11 0x00007fdd0f86a474 in socket_event_poll_in (notify_handled=true, this=0x7fdd08054750) at socket.c:2522 #12 socket_event_handler (fd=fd at entry=10, idx=idx at entry=4, gen=gen at entry=4, data=data at entry=0x7fdd08054750, poll_in=, poll_out=, poll_err=, event_thread_died=0 '\000') at socket.c:2924 #13 0x00007fdd150e9863 in event_dispatch_epoll_handler (event=0x7fdd0f3e0e54, event_pool=0x55cf9c9277b0) at event-epoll.c:648 #14 event_dispatch_epoll_worker (data=0x55cf9c96ca20) at event-epoll.c:761 #15 0x00007fdd147f36db in start_thread (arg=0x7fdd0f3e1700) at pthread_create.c:463 #16 0x00007fdd1451c88f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 Please tell me if you need further information. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 09:41:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 09:41:54 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(ksubrahm at redhat.c | |om) | --- Comment #64 from Karthik U S --- (In reply to Sanju from comment #63) > (In reply to Amgad from comment #61) > > Thanks Sanju: > > > > We do automate the procedure, we'll need to have a successful check. What > > command you recommend then to check that the heal is successful during our > > automated rollback? > > You can check whether "Number of entries:" are reducing in "gluster volume > heal info " output. > > Karthik, can you please confirm the above statement? Yes, if the heal is progressing, the number of entries should decrease in the heal info output. -- You are receiving this mail because: You are on the CC list for the bug. 
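Regarding the heal check discussed in bug 1687051 above: the manual check Sanju and Karthik describe can be scripted for an automated rollback. The sketch below only automates that check and is not an official tool; MYVOL is a hypothetical volume name and the 10-second polling interval is arbitrary.

#!/bin/bash
VOL=MYVOL   # hypothetical volume name, replace with the real one
while true; do
    # Sum every per-brick "Number of entries:" counter from heal info.
    pending=$(gluster volume heal "$VOL" info \
              | awk -F: '/^Number of entries/ {sum += $2} END {print sum + 0}')
    echo "$(date '+%F %T') pending heal entries: $pending"
    [ "$pending" -eq 0 ] && break
    sleep 10
done

If the counter stays flat for a long time (as Amgad reports later in this thread), a loop like this will never exit, so real automation should also apply a timeout or retry limit.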
From bugzilla at redhat.com Tue Apr 23 09:55:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 09:55:45 +0000 Subject: [Bugs] [Bug 1702240] New: coredump reported by test ./tests/bugs/glusterd/bug-1699339.t Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702240 Bug ID: 1702240 Summary: coredump reported by test ./tests/bugs/glusterd/bug-1699339.t Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: glusterd Assignee: amukherj at redhat.com Reporter: amukherj at redhat.com QA Contact: bmekala at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, rkavunga at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1702185 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1702185 +++ Description of problem: Upstream test ./tests/bugs/glusterd/bug-1699339.t failed the regression with a coredump. backtrace of the core can be tracked from https://build.gluster.org/job/regression-test-with-multiplex/1270/display/redirect?page=changes Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-04-23 07:44:39 UTC --- REVIEW: https://review.gluster.org/22606 (glusterd/shd: Keep a ref on volinfo until attach rpc execute cbk) posted (#1) for review on master by mohammed rafi kc Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702185 [Bug 1702185] coredump reported by test ./tests/bugs/glusterd/bug-1699339.t -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 09:55:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 09:55:45 +0000 Subject: [Bugs] [Bug 1702185] coredump reported by test ./tests/bugs/glusterd/bug-1699339.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702185 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1702240 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702240 [Bug 1702240] coredump reported by test ./tests/bugs/glusterd/bug-1699339.t -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 09:57:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 09:57:11 +0000 Subject: [Bugs] [Bug 1702240] coredump reported by test ./tests/bugs/glusterd/bug-1699339.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702240 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC|bugs at gluster.org | Assignee|amukherj at redhat.com |rkavunga at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 23 11:23:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:23:13 +0000 Subject: [Bugs] [Bug 1702268] New: Memory accounting information is not always accurate Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702268 Bug ID: 1702268 Summary: Memory accounting information is not always accurate Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: jahernan at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: When a translator is terminated, its memory accounting information is not destroyed as there could be some memory blocks referencing it still in use. However the mutexes that protect updates of the memory accounting are destroyed. This causes that future updates of the accounting data may contend and do concurrent updates, causing corruption of the counters. Additionally, accounting of reallocs is not correctly computed. Version-Release number of selected component (if applicable): mainline How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 11:29:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:29:39 +0000 Subject: [Bugs] [Bug 1699866] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699866 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-23 11:29:39 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) merged (#7) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 11:29:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:29:39 +0000 Subject: [Bugs] [Bug 1699917] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699917 Bug 1699917 depends on bug 1699866, which changed state. Bug 1699866 Summary: I/O error on writes to a disperse volume when replace-brick is executed https://bugzilla.redhat.com/show_bug.cgi?id=1699866 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. 
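For the memory accounting problem described in bug 1702268 above, the counters in question are the per-translator "memusage" records that glusterfs writes into its statedumps. A minimal way to look at them is sketched below, assuming a hypothetical volume name and the default statedump directory; it only observes the counters the bug says can be corrupted, it does not work around the problem.

VOL=MYVOL                      # hypothetical volume name
gluster volume statedump $VOL  # ask the brick processes to dump their state

# Dumps land under /var/run/gluster by default (the location can be changed
# with the server.statedump-path option); each memusage section lists size,
# num_allocs, max_size and total_allocs per allocation type.
grep -A5 'memusage' /var/run/gluster/*.dump.* | less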
From bugzilla at redhat.com Tue Apr 23 11:31:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:31:58 +0000 Subject: [Bugs] [Bug 1702270] New: Memory accounting information is not always accurate Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 Bug ID: 1702270 Summary: Memory accounting information is not always accurate Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: core Assignee: atumball at redhat.com Reporter: jahernan at redhat.com QA Contact: rhinduja at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1702268 Target Milestone: --- Classification: Red Hat Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702268 [Bug 1702268] Memory accounting information is not always accurate -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 11:31:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:31:58 +0000 Subject: [Bugs] [Bug 1702268] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702268 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1702270 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 [Bug 1702270] Memory accounting information is not always accurate -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 11:33:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:33:48 +0000 Subject: [Bugs] [Bug 1702270] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|atumball at redhat.com |jahernan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 11:34:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:34:36 +0000 Subject: [Bugs] [Bug 1702270] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 23 11:35:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:35:31 +0000 Subject: [Bugs] [Bug 1702271] New: Memory accounting information is not always accurate Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 Bug ID: 1702271 Summary: Memory accounting information is not always accurate Product: GlusterFS Version: 6 Status: NEW Component: core Assignee: bugs at gluster.org Reporter: jahernan at redhat.com CC: bugs at gluster.org Depends On: 1702268 Blocks: 1702270 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1702268 +++ Description of problem: When a translator is terminated, its memory accounting information is not destroyed as there could be some memory blocks referencing it still in use. However the mutexes that protect updates of the memory accounting are destroyed. This causes that future updates of the accounting data may contend and do concurrent updates, causing corruption of the counters. Additionally, accounting of reallocs is not correctly computed. Version-Release number of selected component (if applicable): mainline How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702268 [Bug 1702268] Memory accounting information is not always accurate https://bugzilla.redhat.com/show_bug.cgi?id=1702270 [Bug 1702270] Memory accounting information is not always accurate -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 11:35:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:35:31 +0000 Subject: [Bugs] [Bug 1702268] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702268 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1702271 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 [Bug 1702271] Memory accounting information is not always accurate -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 11:35:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:35:31 +0000 Subject: [Bugs] [Bug 1702270] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1702271 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 [Bug 1702271] Memory accounting information is not always accurate -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 23 11:38:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:38:24 +0000 Subject: [Bugs] [Bug 1702271] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Blocks|1702270 | Assignee|bugs at gluster.org |jahernan at redhat.com Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 [Bug 1702270] Memory accounting information is not always accurate -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 11:38:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:38:24 +0000 Subject: [Bugs] [Bug 1702270] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1702271 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 [Bug 1702271] Memory accounting information is not always accurate -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 11:55:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:55:21 +0000 Subject: [Bugs] [Bug 1702271] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22607 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 11:55:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:55:22 +0000 Subject: [Bugs] [Bug 1702271] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22607 (core: handle memory accounting correctly) posted (#2) for review on release-6 by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 11:55:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:55:51 +0000 Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1702270, 1702271 --- Comment #11 from Xavi Hernandez --- *** Bug 1702268 has been marked as a duplicate of this bug. 
*** Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 [Bug 1702270] Memory accounting information is not always accurate https://bugzilla.redhat.com/show_bug.cgi?id=1702271 [Bug 1702271] Memory accounting information is not always accurate -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 11:55:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:55:51 +0000 Subject: [Bugs] [Bug 1702270] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1659334 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 [Bug 1659334] FUSE mount seems to be hung and not accessible -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 11:55:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:55:51 +0000 Subject: [Bugs] [Bug 1702271] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1659334 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1659334 [Bug 1659334] FUSE mount seems to be hung and not accessible -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 11:55:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:55:51 +0000 Subject: [Bugs] [Bug 1702268] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702268 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-04-23 11:55:51 --- Comment #1 from Xavi Hernandez --- This bug has been fixed as part of bug #1659334 *** This bug has been marked as a duplicate of bug 1659334 *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 11:55:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:55:53 +0000 Subject: [Bugs] [Bug 1702270] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702270 Bug 1702270 depends on bug 1702268, which changed state. Bug 1702268 Summary: Memory accounting information is not always accurate https://bugzilla.redhat.com/show_bug.cgi?id=1702268 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 23 11:55:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 11:55:53 +0000 Subject: [Bugs] [Bug 1702271] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 Bug 1702271 depends on bug 1702268, which changed state. Bug 1702268 Summary: Memory accounting information is not always accurate https://bugzilla.redhat.com/show_bug.cgi?id=1702268 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 12:08:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:08:19 +0000 Subject: [Bugs] [Bug 1699917] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699917 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22608 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 12:08:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:08:20 +0000 Subject: [Bugs] [Bug 1699917] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699917 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22608 (cluster/ec: fix fd reopen) posted (#1) for review on release-6 by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 12:26:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:26:57 +0000 Subject: [Bugs] [Bug 1702289] New: Promotion failed for a0afd3e3-0109-49b7-9b74-ba77bf653aba.11229 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702289 Bug ID: 1702289 Summary: Promotion failed for a0afd3e3-0109-49b7-9b74-ba77bf653aba.11229 Product: GlusterFS Version: mainline OS: Linux Status: NEW Component: tiering Assignee: bugs at gluster.org Reporter: p.stukalov at drweb.com QA Contact: bugs at gluster.org CC: bugs at gluster.org Target Milestone: --- Classification: Community We have test cluster, with tiering on SSD and sharding. If i create dispersed volume with only SSD, i have 6-10K IOPS on random write. If i create volume with tiering, i have 6-1200 IOPS on random write. Network in cluster - 10Gbe with jumboframes. OS: Debian 4.9.144-3.1 With tiering volume, i have log messages, like that: [2019-04-23 11:38:00.844369] I [MSGID: 109038] [tier.c:1122:tier_migrate_using_query_file] 0-freezer-tier-dht: Promotion failed for a0afd3e3-0109-49b7-9b74-ba77bf653aba.9173(gfid:a7c0d9b3-f680-4f2a-a41c-98326840cb1d) [2019-04-23 11:38:01.055004] W [MSGID: 109023] [dht-rebalance.c:2058:dht_migrate_file] 0-freezer-tier-dht: Migrate file failed:/.shard/a0afd3e3-0109-49b7-9b74-ba77bf653aba.23351: failed to get xattr from freezer-cold-dht [No data available] I think,for some reason, chunks cannot promotion. 
root at dtln-ceph01:/# gluster volume status Status of volume: freezer Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Hot Bricks: Brick dtln-ceph03:/data/ssds/sdo/brick 49152 0 Y 3619528 Brick dtln-ceph02:/data/ssds/sdo/brick 49152 0 Y 3598217 Brick dtln-ceph01:/data/ssds/sdj/brick 49152 0 Y 141971 Cold Bricks: Brick dtln-ceph01:/data/disks/sdd/brick 49153 0 Y 141992 Brick dtln-ceph01:/data/disks/sde/brick 49154 0 Y 142013 Brick dtln-ceph01:/data/disks/sdg/brick 49155 0 Y 142034 Brick dtln-ceph01:/data/disks/sda/brick 49156 0 Y 142055 Brick dtln-ceph01:/data/disks/sdb/brick 49157 0 Y 142076 Brick dtln-ceph01:/data/disks/sdc/brick 49158 0 Y 142109 Brick dtln-ceph01:/data/disks/sdl/brick 49159 0 Y 142130 Brick dtln-ceph01:/data/disks/sdm/brick 49160 0 Y 142151 Brick dtln-ceph01:/data/disks/sdn/brick 49161 0 Y 142179 Brick dtln-ceph01:/data/disks/sdh/brick 49162 0 Y 142200 Brick dtln-ceph01:/data/disks/sdi/brick 49163 0 Y 142221 Brick dtln-ceph01:/data/disks/sdk/brick 49164 0 Y 142242 Brick dtln-ceph02:/data/disks/sdd/brick 49153 0 Y 3598238 Brick dtln-ceph02:/data/disks/sde/brick 49154 0 Y 3598259 Brick dtln-ceph02:/data/disks/sdf/brick 49155 0 Y 3598293 Brick dtln-ceph02:/data/disks/sda/brick 49156 0 Y 3598314 Brick dtln-ceph02:/data/disks/sdb/brick 49157 0 Y 3598341 Brick dtln-ceph02:/data/disks/sdc/brick 49158 0 Y 3598363 Brick dtln-ceph02:/data/disks/sdl/brick 49159 0 Y 3598384 Brick dtln-ceph02:/data/disks/sdm/brick 49160 0 Y 3598411 Brick dtln-ceph02:/data/disks/sdn/brick 49161 0 Y 3598432 Brick dtln-ceph02:/data/disks/sdh/brick 49162 0 Y 3598453 Brick dtln-ceph02:/data/disks/sdi/brick 49163 0 Y 3598474 Brick dtln-ceph02:/data/disks/sdk/brick 49164 0 Y 3598502 Brick dtln-ceph03:/data/disks/sdd/brick 49153 0 Y 3619549 Brick dtln-ceph03:/data/disks/sde/brick 49154 0 Y 3619570 Brick dtln-ceph03:/data/disks/sdf/brick 49155 0 Y 3619603 Brick dtln-ceph03:/data/disks/sda/brick 49156 0 Y 3619631 Brick dtln-ceph03:/data/disks/sdb/brick 49157 0 Y 3619652 Brick dtln-ceph03:/data/disks/sdc/brick 49158 0 Y 3619673 Brick dtln-ceph03:/data/disks/sdl/brick 49159 0 Y 3619694 Brick dtln-ceph03:/data/disks/sdm/brick 49160 0 Y 3619715 Brick dtln-ceph03:/data/disks/sdn/brick 49161 0 Y 3619737 Brick dtln-ceph03:/data/disks/sdh/brick 49162 0 Y 3619758 Brick dtln-ceph03:/data/disks/sdi/brick 49163 0 Y 3619779 Brick dtln-ceph03:/data/disks/sdk/brick 49164 0 Y 3619806 Tier Daemon on localhost N/A N/A Y 142368 Self-heal Daemon on localhost N/A N/A Y 142264 Tier Daemon on dtln-ceph02 N/A N/A Y 3598624 Self-heal Daemon on dtln-ceph02 N/A N/A Y 3598531 Tier Daemon on dtln-ceph03 N/A N/A Y 3619940 Self-heal Daemon on dtln-ceph03 N/A N/A Y 3619840 Task Status of Volume freezer ------------------------------------------------------------------------------ There are no active volume tasks root at dtln-ceph01:/# gluster volume tier freezer status Node Promoted files Demoted files Status run time in h:m:s --------- --------- --------- --------- --------- localhost 94 0 in progress 116:55:27 dtln-ceph03 79 0 in progress 116:55:27 dtln-ceph02 83 0 in progress 116:55:27 root at dtln-ceph01:/# glusterfs --version glusterfs 3.13.2 Repository revision: git://git.gluster.org/glusterfs.git -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
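For the promotion failures in bug 1702289 above, the "failed to get xattr from freezer-cold-dht [No data available]" messages can be investigated directly on a cold-tier brick. The sketch below reuses a brick path and shard name from the report; run it as root on the corresponding brick host and adjust the paths to wherever the shard actually resides.

# Check whether the shard exists on this cold brick and which xattrs it carries.
ls -l /data/disks/sdd/brick/.shard/a0afd3e3-0109-49b7-9b74-ba77bf653aba.23351
getfattr -d -m . -e hex \
    /data/disks/sdd/brick/.shard/a0afd3e3-0109-49b7-9b74-ba77bf653aba.23351

"No data available" is the errno for a missing extended attribute, so if the trusted.* attributes are absent on every cold brick that should hold the shard, that would be consistent with the migration errors above.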
From bugzilla at redhat.com Tue Apr 23 12:41:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:41:37 +0000 Subject: [Bugs] [Bug 1702299] New: Custom xattrs are not healed on newly added brick Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702299 Bug ID: 1702299 Summary: Custom xattrs are not healed on newly added brick Product: GlusterFS Version: mainline Status: NEW Component: distribute Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, tdesala at redhat.com Depends On: 1702298 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702298 [Bug 1702298] Custom xattrs are not healed on newly added brick -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 12:52:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:52:39 +0000 Subject: [Bugs] [Bug 1702303] New: Enable enable fips-mode-rchecksum for new volumes by default Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702303 Bug ID: 1702303 Summary: Enable enable fips-mode-rchecksum for new volumes by default Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: fips-mode-rchecksum option was provided in GD_OP_VERSION_4_0_0 to maintain backward compatibility with older AFR so that a cluster operating at an op version of less than GD_OP_VERSION_4_0_0 used MD5SUM instead of the SHA256 that would be used if this option was enabled. But in a freshly created setup with cluster op-version >=GD_OP_VERSION_4_0_0, we can directly go ahead and use SHA256 without asking the admin to explicitly set the volume option 'on'. In fact in downstream, this created quite a bit of confusion when QE would created a new glusterfs setup on a FIPS enabled machine and would try out self-heal test cases (without setting 'fips-mode-rchecksum' on), leading to crashes due to non-compliance. Ideally this fix should have been done as a part of the original commit: "6daa65356 - posix/afr: handle backward compatibility for rchecksum fop" but I guess it is better late than never. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 12:52:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:52:58 +0000 Subject: [Bugs] [Bug 1702303] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702303 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Status|NEW |ASSIGNED Assignee|bugs at gluster.org |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
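Until the change described above is merged, the behaviour it targets has a manual workaround: on clusters already at op-version >= GD_OP_VERSION_4_0_0 an admin can switch a volume to SHA256 checksums explicitly. A short sketch with a hypothetical volume name:

gluster volume set MYVOL fips-mode-rchecksum on   # use SHA256 instead of MD5
gluster volume get MYVOL fips-mode-rchecksum      # confirm the effective value

The proposed patch simply makes this the default for newly created volumes at a new enough op-version, so the explicit step above would no longer be needed.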
From bugzilla at redhat.com Tue Apr 23 12:54:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:54:23 +0000 Subject: [Bugs] [Bug 1702299] Custom xattrs are not healed on newly added brick In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702299 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22520 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 12:54:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:54:25 +0000 Subject: [Bugs] [Bug 1702299] Custom xattrs are not healed on newly added brick In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702299 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22520 (dht: Custom xattrs are not healed in case of add-brick) posted (#3) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 12:56:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:56:13 +0000 Subject: [Bugs] [Bug 1702303] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702303 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22609 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 12:56:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 12:56:14 +0000 Subject: [Bugs] [Bug 1702303] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702303 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22609 (glusterd: enable fips-mode-rchecksum for new volumes) posted (#1) for review on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 13:46:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 13:46:49 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #65 from Amgad --- I confirm that "Number of entries:" was not decreasing and was stuck with the original number (129) till a second node was completely rolled-back to 3.12.15. If I don't roll back the second node, it stays there forever! -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 23 13:47:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 13:47:32 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #66 from Amgad --- It is clear that some mismatch between the versions! -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 15:58:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 15:58:37 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22610 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 15:58:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 15:58:39 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1608 from Worker Ant --- REVIEW: https://review.gluster.org/22610 (afr-transaction.c : fix Coverity CID 1398627) posted (#1) for review on master by Rinku Kothiya -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 18:11:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 18:11:27 +0000 Subject: [Bugs] [Bug 1546732] Bad stat performance after client upgrade from 3.10 to 3.12 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546732 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ingard at jotta.no) --- Comment #27 from Xavi Hernandez --- ingard, are you still experiencing this issue ? have you tested with newer versions ? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 23 19:43:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 19:43:57 +0000 Subject: [Bugs] [Bug 1702316] Cannot upgrade 5.x volume to 6.1 because of unused 'crypt' and 'bd' xlators In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702316 Renich Bon Ciric changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |renich at woralelandia.com --- Comment #1 from Renich Bon Ciric --- Geo-replication is failing as well due to this: ==> cli.log <== [2019-04-23 19:37:29.048169] I [cli.c:845:main] 0-cli: Started running gluster with version 6.1 [2019-04-23 19:37:29.108778] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0 [2019-04-23 19:37:29.109073] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 ==> cmd_history.log <== [2019-04-23 19:37:30.341565] : volume geo-replication mariadb 11.22.33.44::mariadb create push-pem : FAILED : Passwordless ssh login has not been setup with 11.22.33.44 for user root. 
==> cli.log <== [2019-04-23 19:37:30.341932] I [input.c:31:cli_batch] 0-: Exiting with: -1 ==> glusterd.log <== The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object file: No such file or directory" repeated 2 times between [2019-04-23 19:36:27.419582] and [2019-04-23 19:36:27.419641] The message "E [MSGID: 106316] [glusterd-geo-rep.c:2890:glusterd_verify_slave] 0-management: Not a valid slave" repeated 2 times between [2019-04-23 19:35:42.340661] and [2019-04-23 19:37:30.340518] The message "E [MSGID: 106316] [glusterd-geo-rep.c:3282:glusterd_op_stage_gsync_create] 0-management: 11.22.33.44::mariadb is not a valid slave volume. Error: Passwordless ssh login has not been setup with 11.22.33.44 for user root." repeated 2 times between [2019-04-23 19:35:42.340803] and [2019-04-23 19:37:30.340611] The message "E [MSGID: 106301] [glusterd-syncop.c:1317:gd_stage_op_phase] 0-management: Staging of operation 'Volume Geo-replication Create' failed on localhost : Passwordless ssh login has not been setup with 11.22.33.44 for user root." repeated 2 times between [2019-04-23 19:35:42.340842] and [2019-04-23 19:37:30.340618] -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 24 00:11:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 00:11:39 +0000 Subject: [Bugs] [Bug 1702185] coredump reported by test ./tests/bugs/glusterd/bug-1699339.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702185 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-24 00:11:39 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22606 (glusterd/shd: Keep a ref on volinfo until attach rpc execute cbk) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 24 02:56:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 02:56:58 +0000 Subject: [Bugs] [Bug 1702185] coredump reported by test ./tests/bugs/glusterd/bug-1699339.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702185 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |nbalacha at redhat.com --- Comment #3 from Nithya Balachandran --- (In reply to Mohammed Rafi KC from comment #0) > Description of problem: > > Upstream test ./tests/bugs/glusterd/bug-1699339.t failed the regression with > a coredump. backtrace of the core can be tracked from > https://build.gluster.org/job/regression-test-with-multiplex/1270/display/ > redirect?page=changes This will not be available always. Please make it a point to put the backtrace in a comment in the BZ for all crashes. > > Version-Release number of selected component (if applicable): > > > How reproducible: > > > Steps to Reproduce: > 1. > 2. > 3. > > Actual results: > > > Expected results: > > > Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
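Regarding the geo-replication failure quoted in bug 1702316 above ("Passwordless ssh login has not been setup with 11.22.33.44 for user root"): that particular error is about the usual create push-pem prerequisites rather than the missing crypt/bd xlators. A minimal sketch of those prerequisites, using the host and volume names from the log and run on the master node:

ssh-keygen -t rsa                 # only if root has no SSH key yet
ssh-copy-id root@11.22.33.44      # passwordless root SSH to the slave host
gluster system:: execute gsec_create
gluster volume geo-replication mariadb 11.22.33.44::mariadb create push-pem

The "encryption/crypt.so: cannot open shared object file" warnings in the same glusterd.log are the separate 5.x-to-6.1 upgrade problem this bug is actually tracking.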
From bugzilla at redhat.com Wed Apr 24 03:27:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 03:27:32 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #633 from Worker Ant --- REVIEW: https://review.gluster.org/22302 (core: avoid dynamic TLS allocation when possible) merged (#8) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 24 03:31:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 03:31:56 +0000 Subject: [Bugs] [Bug 1701457] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701457 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-24 03:31:56 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22591 (ctime: Fix log repeated logging during open) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 04:49:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 04:49:22 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #29 from Poornima G --- Can anyone verify if the issue is not seen in 5.6 anymore? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 08:00:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 08:00:36 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #30 from Alberto Bengoa --- (In reply to Poornima G from comment #29) > Can anyone verify if the issue is not seen in 5.6 anymore? I'm planning to test it soon. My environment is partially in production so I need to arrange a maintenance window to do that. I will send an update here as soon as I finish. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 08:19:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 08:19:24 +0000 Subject: [Bugs] [Bug 1667168] Thin Arbiter documentation refers commands don't exist "glustercli' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667168 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22612 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 24 08:19:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 08:19:25 +0000 Subject: [Bugs] [Bug 1667168] Thin Arbiter documentation refers commands don't exist "glustercli' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667168 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22612 (WIP: Thin Arbiter volume create CLI) posted (#1) for review on master by None -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 09:14:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 09:14:48 +0000 Subject: [Bugs] [Bug 1341355] quota information mismatch which glusterfs on zfs environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1341355 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(hgowtham at redhat.c | |om) --- Comment #4 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 09:15:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 09:15:14 +0000 Subject: [Bugs] [Bug 1094328] poor fio rand read performance with read-ahead enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1094328 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |csaba at redhat.com Flags| |needinfo?(csaba at redhat.com) --- Comment #14 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 09:16:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 09:16:26 +0000 Subject: [Bugs] [Bug 1393419] read-ahead not working if open-behind is turned on In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1393419 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(mchangir at redhat.c | |om) --- Comment #20 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 09:16:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 09:16:49 +0000 Subject: [Bugs] [Bug 1411598] Remove own-thread option entirely for SSL and use epoll event infrastructure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1411598 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(moagrawa at redhat.c | |om) --- Comment #6 from Yaniv Kaul --- So what's the next step here? -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Apr 24 09:19:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 09:19:35 +0000 Subject: [Bugs] [Bug 1393419] read-ahead not working if open-behind is turned on In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1393419 Milind Changire changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(mchangir at redhat.c |needinfo?(rgowdapp at redhat.c |om) |om) --- Comment #21 from Milind Changire --- Raghavendra G should be able to answer this aptly. Redirecting needinfo to him. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 09:25:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 09:25:49 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #31 from Hubert --- I'll so some tests probably next week, tcpdump included. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 10:27:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 10:27:43 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 --- Comment #5 from manschwetus at cs-software-gmbh.de --- got another set of cores, one for each system in my gluster setup over night: Core was generated by `/usr/sbin/glusterfs --process-name fuse --volfile-server=localhost --volfile-id'. Program terminated with signal SIGSEGV, Segmentation fault. #0 __GI___pthread_mutex_lock (mutex=0x18) at ../nptl/pthread_mutex_lock.c:65 65 ../nptl/pthread_mutex_lock.c: Datei oder Verzeichnis nicht gefunden. 
[Current thread is 1 (Thread 0x7faee404c700 (LWP 28792))] (gdb) bt #0 __GI___pthread_mutex_lock (mutex=0x18) at ../nptl/pthread_mutex_lock.c:65 #1 0x00007faee544e4b5 in ob_fd_free (ob_fd=0x7faebc054df0) at open-behind.c:198 #2 0x00007faee544edd6 in ob_inode_wake (this=this at entry=0x7faed8020d20, ob_fds=ob_fds at entry=0x7faee404bc90) at open-behind.c:355 #3 0x00007faee544f062 in open_all_pending_fds_and_resume (this=this at entry=0x7faed8020d20, inode=0x7faed037cf08, stub=0x7faebc008858) at open-behind.c:442 #4 0x00007faee544f4ff in ob_rename (frame=frame at entry=0x7faebc1ceae8, this=this at entry=0x7faed8020d20, src=src at entry=0x7faed03d9a70, dst=dst at entry=0x7faed03d9ab0, xdata=xdata at entry=0x0) at open-behind.c:1035 #5 0x00007faeed1b0ad0 in default_rename (frame=frame at entry=0x7faebc1ceae8, this=, oldloc=oldloc at entry=0x7faed03d9a70, newloc=newloc at entry=0x7faed03d9ab0, xdata=xdata at entry=0x0) at defaults.c:2631 #6 0x00007faee501f798 in mdc_rename (frame=frame at entry=0x7faebc210468, this=0x7faed80247d0, oldloc=oldloc at entry=0x7faed03d9a70, newloc=newloc at entry=0x7faed03d9ab0, xdata=xdata at entry=0x0) at md-cache.c:1852 #7 0x00007faeed1c6936 in default_rename_resume (frame=0x7faed02d2318, this=0x7faed8026430, oldloc=0x7faed03d9a70, newloc=0x7faed03d9ab0, xdata=0x0) at defaults.c:1897 #8 0x00007faeed14cc45 in call_resume (stub=0x7faed03d9a28) at call-stub.c:2555 #9 0x00007faee4e10cd8 in iot_worker (data=0x7faed8034780) at io-threads.c:232 #10 0x00007faeec88f6db in start_thread (arg=0x7faee404c700) at pthread_create.c:463 #11 0x00007faeec5b888f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 bCore was generated by `/usr/sbin/glusterfs --process-name fuse --volfile-server=localhost --volfile-id'. Program terminated with signal SIGSEGV, Segmentation fault. t#0 __GI___pthread_mutex_lock (mutex=0x18) at ../nptl/pthread_mutex_lock.c:65 65 ../nptl/pthread_mutex_lock.c: Datei oder Verzeichnis nicht gefunden. 
[Current thread is 1 (Thread 0x7f7944069700 (LWP 24067))] (gdb) bt #0 __GI___pthread_mutex_lock (mutex=0x18) at ../nptl/pthread_mutex_lock.c:65 #1 0x00007f794682c4b5 in ob_fd_free (ob_fd=0x7f79283e94e0) at open-behind.c:198 #2 0x00007f794682cdd6 in ob_inode_wake (this=this at entry=0x7f793801eee0, ob_fds=ob_fds at entry=0x7f7944068c90) at open-behind.c:355 #3 0x00007f794682d062 in open_all_pending_fds_and_resume (this=this at entry=0x7f793801eee0, inode=0x7f79301de788, stub=0x7f7928004578) at open-behind.c:442 #4 0x00007f794682d4ff in ob_rename (frame=frame at entry=0x7f79280ab2b8, this=this at entry=0x7f793801eee0, src=src at entry=0x7f7930558ea0, dst=dst at entry=0x7f7930558ee0, xdata=xdata at entry=0x0) at open-behind.c:1035 #5 0x00007f794e729ad0 in default_rename (frame=frame at entry=0x7f79280ab2b8, this=, oldloc=oldloc at entry=0x7f7930558ea0, newloc=newloc at entry=0x7f7930558ee0, xdata=xdata at entry=0x0) at defaults.c:2631 #6 0x00007f79463fd798 in mdc_rename (frame=frame at entry=0x7f7928363ae8, this=0x7f7938022990, oldloc=oldloc at entry=0x7f7930558ea0, newloc=newloc at entry=0x7f7930558ee0, xdata=xdata at entry=0x0) at md-cache.c:1852 #7 0x00007f794e73f936 in default_rename_resume (frame=0x7f7930298c28, this=0x7f79380245f0, oldloc=0x7f7930558ea0, newloc=0x7f7930558ee0, xdata=0x0) at defaults.c:1897 #8 0x00007f794e6c5c45 in call_resume (stub=0x7f7930558e58) at call-stub.c:2555 #9 0x00007f79461eecd8 in iot_worker (data=0x7f7938032940) at io-threads.c:232 #10 0x00007f794de086db in start_thread (arg=0x7f7944069700) at pthread_create.c:463 #11 0x00007f794db3188f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 Core was generated by `/usr/sbin/glusterfs --process-name fuse --volfile-server=localhost --volfile-id'. Program terminated with signal SIGSEGV, Segmentation fault. #0 __GI___pthread_mutex_lock (mutex=0x18) at ../nptl/pthread_mutex_lock.c:65 65 ../nptl/pthread_mutex_lock.c: Datei oder Verzeichnis nicht gefunden. 
[Current thread is 1 (Thread 0x7f736ce78700 (LWP 42526))] (gdb) bt #0 __GI___pthread_mutex_lock (mutex=0x18) at ../nptl/pthread_mutex_lock.c:65 #1 0x00007f73733499f7 in fd_unref (fd=0x7f73544165a8) at fd.c:515 #2 0x00007f736c3fb618 in client_local_wipe (local=local at entry=0x7f73441ff8c8) at client-helpers.c:124 #3 0x00007f736c44a60a in client4_0_open_cbk (req=, iov=, count=, myframe=0x7f7344038b48) at client-rpc-fops_v2.c:284 #4 0x00007f73730d03d1 in rpc_clnt_handle_reply (clnt=clnt at entry=0x7f7368054490, pollin=pollin at entry=0x7f73601b2730) at rpc-clnt.c:755 #5 0x00007f73730d0773 in rpc_clnt_notify (trans=0x7f7368054750, mydata=0x7f73680544c0, event=, data=0x7f73601b2730) at rpc-clnt.c:922 #6 0x00007f73730cd273 in rpc_transport_notify (this=this at entry=0x7f7368054750, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=) at rpc-transport.c:542 #7 0x00007f736db02474 in socket_event_poll_in (notify_handled=true, this=0x7f7368054750) at socket.c:2522 #8 socket_event_handler (fd=fd at entry=10, idx=idx at entry=4, gen=gen at entry=1, data=data at entry=0x7f7368054750, poll_in=, poll_out=, poll_err=, event_thread_died=0 '\000') at socket.c:2924 #9 0x00007f7373381863 in event_dispatch_epoll_handler (event=0x7f736ce77e54, event_pool=0x55621c2917b0) at event-epoll.c:648 #10 event_dispatch_epoll_worker (data=0x55621c2d6a80) at event-epoll.c:761 #11 0x00007f7372a8b6db in start_thread (arg=0x7f736ce78700) at pthread_create.c:463 #12 0x00007f73727b488f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 24 10:38:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 10:38:59 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |jahernan at redhat.com Flags| |needinfo?(manschwetus at cs-so | |ftware-gmbh.de) --- Comment #6 from Xavi Hernandez --- Can you share these core dumps and reference the exact version of gluster and ubuntu used for each one ? As per your comments, I understand that you are not doing anything special when this happens, right ? no volume reconfiguration or any other management task. Mount logs would also be helpful. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 24 10:46:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 10:46:40 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 manschwetus at cs-software-gmbh.de changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(manschwetus at cs-so | |ftware-gmbh.de) | --- Comment #7 from manschwetus at cs-software-gmbh.de --- Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-47-generic x86_64) administrator at ubuntu-docker:~$ apt list glusterfs-* Listing... 
Done
glusterfs-client/bionic,now 6.1-ubuntu1~bionic1 amd64 [installed,automatic]
glusterfs-common/bionic,now 6.1-ubuntu1~bionic1 amd64 [installed,automatic]
glusterfs-dbg/bionic 6.1-ubuntu1~bionic1 amd64
glusterfs-server/bionic,now 6.1-ubuntu1~bionic1 amd64 [installed]

Mount logs, not sure what you refer to?

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Wed Apr 24 10:50:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 10:50:13 +0000
Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1697971

Xavi Hernandez changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
              Flags|                               |needinfo?(manschwetus at cs-software-gmbh.de)

--- Comment #8 from Xavi Hernandez ---
(In reply to manschwetus from comment #7)
> Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-47-generic x86_64)
>
> administrator at ubuntu-docker:~$ apt list glusterfs-*
> Listing... Done
> glusterfs-client/bionic,now 6.1-ubuntu1~bionic1 amd64 [installed,automatic]
> glusterfs-common/bionic,now 6.1-ubuntu1~bionic1 amd64 [installed,automatic]
> glusterfs-dbg/bionic 6.1-ubuntu1~bionic1 amd64
> glusterfs-server/bionic,now 6.1-ubuntu1~bionic1 amd64 [installed]
>
> Mount logs, not sure what you refer to?

In the server where gluster crashed, you should find a file like
/var/log/glusterfs/swarm-volumes.log. I would also need the coredump files and
the Ubuntu version.

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Wed Apr 24 10:54:40 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 10:54:40 +0000
Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1697971

manschwetus at cs-software-gmbh.de changed:

           What                            |Removed          |Added
----------------------------------------------------------------------------
Attachment #1558139 is obsolete            |0                |1
Attachment #1558139 description            |corefile (3rd)   |corefile (older, uploaded not really intended)

--- Comment #9 from manschwetus at cs-software-gmbh.de ---
Created attachment 1558139
  --> https://bugzilla.redhat.com/attachment.cgi?id=1558139&action=edit
corefile (older, uploaded not really intended)

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Wed Apr 24 10:56:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 10:56:25 +0000
Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1697971

manschwetus at cs-software-gmbh.de changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
              Flags|needinfo?(manschwetus at cs-software-gmbh.de) |

--- Comment #10 from manschwetus at cs-software-gmbh.de ---
Created attachment 1558142
  --> https://bugzilla.redhat.com/attachment.cgi?id=1558142&action=edit
corefile (1st)

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
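For anyone wanting to reproduce the backtraces shown earlier from the attached
core files, a typical gdb invocation is sketched below; the core file name is a
placeholder, and the glusterfs-dbg package listed in the apt output above needs
to be installed so that the symbols resolve:

    gdb /usr/sbin/glusterfs /path/to/corefile \
        -ex 'set pagination off' \
        -ex 'thread apply all bt full' \
        -ex 'quit' > backtrace.txt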
From bugzilla at redhat.com  Wed Apr 24 10:57:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 10:57:10 +0000
Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1697971

--- Comment #11 from manschwetus at cs-software-gmbh.de ---
Created attachment 1558143
  --> https://bugzilla.redhat.com/attachment.cgi?id=1558143&action=edit
corefile (2nd)

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Wed Apr 24 10:58:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 10:58:02 +0000
Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1697971

--- Comment #12 from manschwetus at cs-software-gmbh.de ---
Created attachment 1558144
  --> https://bugzilla.redhat.com/attachment.cgi?id=1558144&action=edit
corefile (3rd)

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Wed Apr 24 11:00:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 11:00:05 +0000
Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1697971

--- Comment #13 from manschwetus at cs-software-gmbh.de ---
administrator at ubuntu-docker:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic

Same for the others.

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Wed Apr 24 11:02:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 11:02:44 +0000
Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1697971

--- Comment #14 from manschwetus at cs-software-gmbh.de ---
Created attachment 1558157
  --> https://bugzilla.redhat.com/attachment.cgi?id=1558157&action=edit
swarm-volumes log of time of crash

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Wed Apr 24 11:56:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 11:56:03 +0000
Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1687051

--- Comment #67 from Sanju ---
Amgad,

Did you change your op-version after downgrading the node? If you're performing
a downgrade, you need to manually edit the op-version to a lower value in the
glusterd.info file on all machines and restart every glusterd, so that glusterd
runs with the lower op-version. You can't set a lower op-version using the
volume set operation.

I would also like to mention that we can't promise anything about downgrades,
as we don't test or support them. If you are going ahead with a downgrade, I
suggest you perform an offline downgrade. After the downgrade, you should
manually edit the op-version in the glusterd.info file and restart glusterd.
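In concrete terms, the manual edit described here would look roughly like the
sketch below, run on every node. The glusterd.info path is the usual default,
and the op-version numbers are only illustrative assumptions (40100 for a 4.1
install, 31202 for a 3.12-series target); verify both against the actual builds
before editing anything.

    grep operating-version /var/lib/glusterd/glusterd.info
    # operating-version=40100    <- example value left over from the 4.1.4 install
    sed -i 's/^operating-version=.*/operating-version=31202/' /var/lib/glusterd/glusterd.info
    systemctl restart glusterd   # or: service glusterd restart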
Even after doing this, things might go wrong, as this is not something we test
or support.

Thanks,
Sanju

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Wed Apr 24 12:00:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 12:00:55 +0000
Subject: [Bugs] [Bug 1411598] Remove own-thread option entirely for SSL and use epoll event infrastructure
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1411598

Mohit Agrawal changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
             Status|POST                           |MODIFIED
              Flags|needinfo?(moagrawa at redhat.com) |

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com  Wed Apr 24 13:22:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 13:22:50 +0000
Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1697971

Xavi Hernandez changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
              Flags|                               |needinfo?(manschwetus at cs-software-gmbh.de)

--- Comment #15 from Xavi Hernandez ---
I'm still analyzing the core dumps, but as a first test, could you disable the
open-behind feature using the following command?

# gluster volume set <volname> open-behind off

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Wed Apr 24 13:43:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 24 Apr 2019 13:43:34 +0000
Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1697971

manschwetus at cs-software-gmbh.de changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
              Flags|needinfo?(manschwetus at cs-software-gmbh.de) |

--- Comment #16 from manschwetus at cs-software-gmbh.de ---
Ok, I modified it, let's see if it helps; hopefully it won't kill the
performance.

sudo gluster volume info swarm-vols

Volume Name: swarm-vols
Type: Replicate
Volume ID: a103c1da-d651-4d65-8f86-a8731e2a670c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.1.81:/gluster/data
Brick2: 192.168.1.86:/gluster/data
Brick3: 192.168.1.85:/gluster/data
Options Reconfigured:
performance.open-behind: off
performance.cache-size: 1GB
cluster.self-heal-daemon: enable
performance.write-behind: off
auth.allow: 127.0.0.1
transport.address-family: inet
nfs.disable: on
performance.cache-max-file-size: 1GB

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
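For reference, one quick way to confirm that the option is actually in effect
on the volume (using the swarm-vols name from the output above; the exact
output formatting may vary by release):

    gluster volume get swarm-vols performance.open-behind
    # Option                     Value
    # ------                     -----
    # performance.open-behind    off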
From bugzilla at redhat.com Wed Apr 24 13:46:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 13:46:54 +0000 Subject: [Bugs] [Bug 1341355] quota information mismatch which glusterfs on zfs environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1341355 hari gowtham changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |WONTFIX Flags|needinfo?(hgowtham at redhat.c | |om) | Last Closed| |2019-04-24 13:46:54 --- Comment #5 from hari gowtham --- While we are about to deprecate quota, it doesn't make sense to work on supporting quota for a file system it didn't have support for. We don't have the bandwidth for working on this bug. Hence i'm closing it as wont fix. If someone can take it to completion, we can reopen this. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 14:33:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 14:33:30 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #21 from Worker Ant --- REVIEW: https://review.gluster.org/22597 (tests: add .t file to increase cli code coverage) merged (#7) on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 24 14:59:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 14:59:18 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #22 from Worker Ant --- REVIEW: https://review.gluster.org/22599 (tests: add .t files to increase cli code coverage) merged (#4) on master by Rishubh Jain -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 24 15:23:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 15:23:36 +0000 Subject: [Bugs] [Bug 1696077] Add pause and resume test case for geo-rep In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696077 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-24 15:23:36 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22498 (tests/geo-rep: Add pause and resume test case for geo-rep) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 24 15:30:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 15:30:51 +0000 Subject: [Bugs] [Bug 1702734] New: ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702734 Bug ID: 1702734 Summary: ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open Product: GlusterFS Version: 6 Status: NEW Component: ctime Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Depends On: 1701457 Blocks: 1701811 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1701457 +++ Description of problem: With patch https://review.gluster.org/#/c/glusterfs/+/22540, the following log is printed many times during open https://github.com/gluster/glusterfs/blob/1ad201a9fd6748d7ef49fb073fcfe8c6858d557d/xlators/storage/posix/src/posix-metadata.c#L625 Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. 2. 3. Actual results: Logs are flooded with above msg with open Expected results: Logs should not be flooded unless there is real issue. Additional info: --- Additional comment from Worker Ant on 2019-04-19 06:10:24 UTC --- REVIEW: https://review.gluster.org/22591 (ctime: Fix log repeated logging during open) posted (#1) for review on master by Kotresh HR --- Additional comment from Worker Ant on 2019-04-24 03:31:56 UTC --- REVIEW: https://review.gluster.org/22591 (ctime: Fix log repeated logging during open) merged (#3) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1701457 [Bug 1701457] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open https://bugzilla.redhat.com/show_bug.cgi?id=1701811 [Bug 1701811] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 24 15:30:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 15:30:51 +0000 Subject: [Bugs] [Bug 1701457] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701457 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1702734 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702734 [Bug 1702734] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 15:31:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 15:31:13 +0000 Subject: [Bugs] [Bug 1702734] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702734 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 24 15:34:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 15:34:25 +0000 Subject: [Bugs] [Bug 1702734] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702734 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22614 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 15:34:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 15:34:26 +0000 Subject: [Bugs] [Bug 1702734] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702734 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22614 (ctime: Fix log repeated logging during open) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 16:18:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 16:18:18 +0000 Subject: [Bugs] [Bug 1512093] Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1512093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-24 16:18:18 --- Comment #9 from Worker Ant --- REVIEW: https://review.gluster.org/22603 (geo-rep: Fix entries and metadata counters in geo-rep status) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 16:44:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 16:44:54 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22615 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed Apr 24 16:44:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 16:44:56 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1609 from Worker Ant --- REVIEW: https://review.gluster.org/22615 (glusterd: coverity fixes) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Wed Apr 24 19:45:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 19:45:09 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22617 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Apr 24 19:45:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 24 Apr 2019 19:45:10 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 --- Comment #11 from Worker Ant --- REVIEW: https://review.gluster.org/22617 (cloudsync: Fix bug in cloudsync-fops-c.py) posted (#1) for review on master by Anuradha Talur -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 25 04:12:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 04:12:37 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 --- Comment #15 from Worker Ant --- REVIEW: https://review.gluster.org/22542 (cluster/dht: Refactor dht lookup functions) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Apr 25 04:14:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 04:14:09 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #634 from Worker Ant --- REVIEW: https://review.gluster.org/22329 (logging.c/h: aggressively remove sprintfs()) merged (#23) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 04:17:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 04:17:02 +0000 Subject: [Bugs] [Bug 1701337] issues with 'building' glusterfs packages if we do 'git clone --depth 1' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701337 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22583 (build-aux/pkg-version: provide option for depth=1) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 25 05:20:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 05:20:20 +0000 Subject: [Bugs] [Bug 1700078] disablle + reenable of bitrot leads to files marked as bad In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1700078 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-25 05:20:20 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22360 (features/bit-rot: Unconditionally sign the files during oneshot crawl) merged (#7) on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Thu Apr 25 06:35:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 06:35:29 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22619 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 06:35:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 06:35:30 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1610 from Worker Ant --- REVIEW: https://review.gluster.org/22619 (glusterd: put coverity annotations) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 06:35:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 06:35:30 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1611 from Worker Ant --- REVIEW: https://review.gluster.org/22615 (glusterd: coverity fixes) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 09:08:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 09:08:40 +0000 Subject: [Bugs] [Bug 1702952] New: remove tier related information from manual pages Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702952 Bug ID: 1702952 Summary: remove tier related information from manual pages Product: GlusterFS Version: mainline Status: NEW Component: tiering Assignee: bugs at gluster.org Reporter: srakonde at redhat.com QA Contact: bugs at gluster.org CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: As tier feature has been deprecated from glusterfs, gluster manual pages should not have any information related to tier Version-Release number of selected component (if applicable): master How reproducible: always Steps to Reproduce: 1. man gluster | grep tier 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. 
You are the assignee for the bug.

From bugzilla at redhat.com  Thu Apr 25 09:11:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 25 Apr 2019 09:11:28 +0000
Subject: [Bugs] [Bug 1702952] remove tier related information from manual pages
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1702952

Worker Ant changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
    External Bug ID|                               |Gluster.org Gerrit 22620

--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Thu Apr 25 09:11:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 25 Apr 2019 09:11:29 +0000
Subject: [Bugs] [Bug 1702952] remove tier related information from manual pages
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1702952

Worker Ant changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
             Status|NEW                            |POST

--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22620 (man/gluster: remove tier information
from gluster manual page) posted (#1) for review on master by Sanju Rakonde

--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Thu Apr 25 09:33:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 25 Apr 2019 09:33:02 +0000
Subject: [Bugs] [Bug 1703007] New: The telnet or something would cause high memory usage for glusterd & glusterfsd
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1703007

            Bug ID: 1703007
           Summary: The telnet or something would cause high memory usage
                    for glusterd & glusterfsd
           Product: GlusterFS
           Version: 5
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: glusterd
          Assignee: bugs at gluster.org
          Reporter: i_chips at qq.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community

Description of problem:

I'm afraid that repeated telnet connections (or similar port probes) cause high
memory usage for glusterd & glusterfsd. For example, if the following script is
executed for days:

while [[ 1 ]]; do
    echo "quit" | telnet xx.xx.xx.xx 24007
    echo "quit" | telnet xx.xx.xx.xx 49152
done

the memory usage of glusterd & glusterfsd eventually grows from 0.5% to 2.9%.

Hope someone could help me. Thanks a lot.

Version-Release number of selected component (if applicable):
GlusterFS 5.6 or below

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Thu Apr 25 09:36:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 25 Apr 2019 09:36:32 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1693692

Worker Ant changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
    External Bug ID|                               |Gluster.org Gerrit 22621

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
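Regarding the glusterd/glusterfsd memory growth reported in bug 1703007 above,
a minimal way to record the trend while the telnet loop runs is sketched below;
this is a generic observation helper, not part of the original report, and the
log path and interval are arbitrary:

    while true; do
        ts=$(date '+%F %T')
        gd=$(ps -o rss= -C glusterd | awk '{s+=$1} END {print s}')
        gfsd=$(ps -o rss= -C glusterfsd | awk '{s+=$1} END {print s}')
        # append timestamp plus total resident memory (KB) of glusterd and the bricks
        echo "$ts glusterd_rss_kb=$gd glusterfsd_rss_kb=$gfsd" >> /tmp/gluster-rss.log
        sleep 300
    done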
From bugzilla at redhat.com  Thu Apr 25 09:45:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 25 Apr 2019 09:45:09 +0000
Subject: [Bugs] [Bug 1698716] Regression job did not vote for https://review.gluster.org/#/c/glusterfs/+/22366/
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1698716

Deepshikha khandelwal changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
             Status|NEW                            |CLOSED
                 CC|                               |dkhandel at redhat.com
         Resolution|---                            |NOTABUG
        Last Closed|                               |2019-04-25 09:45:09

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Thu Apr 25 10:11:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 25 Apr 2019 10:11:23 +0000
Subject: [Bugs] [Bug 1703020] New: The cluster.heal-timeout option is unavailable for ec volume
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1703020

            Bug ID: 1703020
           Summary: The cluster.heal-timeout option is unavailable for ec
                    volume
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: disperse
          Assignee: bugs at gluster.org
          Reporter: kinglongmee at gmail.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community

Description of problem:

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
# gluster volume get openfs1 cluster.heal-timeout
Option                     Value
------                     -----
cluster.heal-timeout       600

Actual results:
[2019-04-25 16:33:57.899839] D [MSGID: 0] [ec-heald.c:334:ec_shd_index_healer] 0-test-disperse-0: finished index sweep on subvol test-client-0
[2019-04-25 16:34:57.000628] D [MSGID: 0] [ec-heald.c:330:ec_shd_index_healer] 0-test-disperse-0: starting index sweep on subvol test-client-0
[2019-04-25 16:34:58.477361] D [MSGID: 0] [ec-heald.c:334:ec_shd_index_healer] 0-test-disperse-0: finished index sweep on subvol test-client-0
[2019-04-25 16:35:58.000540] D [MSGID: 0] [ec-heald.c:330:ec_shd_index_healer] 0-test-disperse-0: starting index sweep on subvol test-client-0
[2019-04-25 16:35:58.954843] D [MSGID: 0] [ec-heald.c:334:ec_shd_index_healer] 0-test-disperse-0: finished index sweep on subvol test-client-0

Expected results:

Additional info:

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Thu Apr 25 10:30:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 25 Apr 2019 10:30:53 +0000
Subject: [Bugs] [Bug 1703020] The cluster.heal-timeout option is unavailable for ec volume
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1703020

Worker Ant changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
    External Bug ID|                               |Gluster.org Gerrit 22622

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
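The debug messages quoted in bug 1703020 above show the ec self-heal daemon
starting an index sweep roughly every 60 seconds even though
cluster.heal-timeout is set to 600. A simple way to eyeball the actual sweep
interval on a node is to look at the self-heal daemon log; the path below is
the usual default and may differ on your installation:

    grep 'starting index sweep' /var/log/glusterfs/glustershd.log | tail -n 5
    # with heal-timeout=600 the consecutive timestamps should be about 600s apart,
    # not ~60s as in the report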
From bugzilla at redhat.com Thu Apr 25 10:30:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 10:30:54 +0000 Subject: [Bugs] [Bug 1703020] The cluster.heal-timeout option is unavailable for ec volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703020 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22622 (cluster/ec: fix shd healer wait timeout) posted (#1) for review on master by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 10:33:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 10:33:31 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Component|fuse |open-behind --- Comment #17 from Xavi Hernandez --- Right now the issue seems related to open-behind, so changing the component accordingly. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 10:38:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 10:38:58 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #24 from Worker Ant --- REVIEW: https://review.gluster.org/22598 (tier/cli: remove tier code to increase code coverage in cli) merged (#8) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 11:24:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 11:24:30 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22623 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 11:32:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 11:32:53 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22625 --- Comment #635 from Worker Ant --- REVIEW: https://review.gluster.org/22623 (tests: improve and fix some test scripts) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 25 11:32:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 11:32:54 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #636 from Worker Ant --- REVIEW: https://review.gluster.org/22625 (core: improve timer accuracy) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 11:41:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 11:41:46 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22626 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 11:41:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 11:41:47 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #637 from Worker Ant --- REVIEW: https://review.gluster.org/22626 (storage/posix: fix fresh file detection delay) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 12:03:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 12:03:29 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22629 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 12:03:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 12:03:30 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #25 from Worker Ant --- REVIEW: https://review.gluster.org/22629 (libglusterfs: remove compound-fop helper functions) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 12:25:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 12:25:36 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22627 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 25 12:25:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 12:25:37 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #26 from Worker Ant --- REVIEW: https://review.gluster.org/22627 (performance/decompounder: remove the translator as the feature is not used anymore) posted (#2) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 13:24:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 13:24:47 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22630 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 13:24:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 13:24:48 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #27 from Worker Ant --- REVIEW: https://review.gluster.org/22630 (tests: add .t files to increase cli code coverage) posted (#1) for review on master by Rishubh Jain -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 13:36:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 13:36:42 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22631 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 13:36:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 13:36:43 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #28 from Worker Ant --- REVIEW: https://review.gluster.org/22631 (tests/cli: add .t file to increase line coverage in cli) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 13:38:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 13:38:34 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22632 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Apr 25 13:38:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 13:38:35 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #638 from Worker Ant --- REVIEW: https://review.gluster.org/22632 (core: reduce some timeouts) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 14:10:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 14:10:49 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22633 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 14:10:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 14:10:50 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #639 from Worker Ant --- REVIEW: https://review.gluster.org/22633 (rpc: implement reconnect back-off strategy) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 03:20:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 03:20:51 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22634 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 03:20:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 03:20:53 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1612 from Worker Ant --- REVIEW: https://review.gluster.org/22634 (glusterd: coverity fixes) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 03:38:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 03:38:43 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22616 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Apr 26 05:03:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 05:03:13 +0000 Subject: [Bugs] [Bug 1703322] New: Need to document about fips-mode-rchecksum in gluster-7 release notes. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703322 Bug ID: 1703322 Summary: Need to document about fips-mode-rchecksum in gluster-7 release notes. Product: GlusterFS Version: 4.1 Status: NEW Component: doc Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: If a client older than glusterfs-4.x (i.e. 3.x clients) accesses a volume which has the `fips-mode-rchecksum` volume option enabled, it can cause erroneous checksum computation/ unwanted behaviour during afr self-heal. This option is to be enabled only when all clients are also >=4.x Once https://review.gluster.org/#/c/glusterfs/+/22609/, which is targeted at gluster-7, is merged, any new volumes created will have the fips-mode-rchecksum volume option on by default. That makes it important to document explicitly in the release notes that this option is to be enabled only if all clients also >=gluster-4.x. Hence creating a place-holder BZ for the same. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 05:35:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 05:35:05 +0000 Subject: [Bugs] [Bug 1702952] remove tier related information from manual pages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702952 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-26 05:35:05 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22620 (man/gluster: remove tier information from gluster manual page) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 06:07:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 06:07:45 +0000 Subject: [Bugs] [Bug 1703329] New: [Plus one scale]: Please create repo for plus one scale work Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703329 Bug ID: 1703329 Summary: [Plus one scale]: Please create repo for plus one scale work Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: aspandey at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Please create repo for plus one scale work under gluster on github Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: repo on gluster github with following name - "plus-one-scale" Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com  Fri Apr 26 06:08:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 26 Apr 2019 06:08:30 +0000
Subject: [Bugs] [Bug 1703329] [gluster-infra]: Please create repo for plus one scale work
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1703329

Ashish Pandey changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
            Summary|[Plus one scale]: Please    |[gluster-infra]: Please
                   |create repo for plus one    |create repo for plus one
                   |scale work                  |scale work

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Fri Apr 26 06:09:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 26 Apr 2019 06:09:07 +0000
Subject: [Bugs] [Bug 1191072] ipv6 enabled on the peer, but dns resolution fails with ipv6 and gluster does not fall back to ipv4
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1191072

Niels de Vos changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
              Flags|needinfo?(ndevos at redhat.com)   |needinfo?(mchangir at redhat.com)

--- Comment #3 from Niels de Vos ---
I think we still want to fix this in upcoming versions. Milind, could you close
this BZ if that is not the case?

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Fri Apr 26 06:12:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 26 Apr 2019 06:12:15 +0000
Subject: [Bugs] [Bug 1191072] ipv6 enabled on the peer, but dns resolution fails with ipv6 and gluster does not fall back to ipv4
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1191072

Milind Changire changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
              Flags|needinfo?(mchangir at redhat.com) |needinfo?(rgowdapp at redhat.com)

--- Comment #4 from Milind Changire ---
Forwarding needinfo to Raghavendra G.

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com  Fri Apr 26 06:16:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 26 Apr 2019 06:16:39 +0000
Subject: [Bugs] [Bug 1353518] packaging: rpmlint warning and errors
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1353518

Niels de Vos changed:

           What    |Removed                        |Added
----------------------------------------------------------------------------
                 CC|                               |moagrawa at redhat.com
              Flags|                               |needinfo?(moagrawa at redhat.com)

--- Comment #2 from Niels de Vos ---
Mohit, is there any reason for an administrator to execute /usr/sbin/gf_attach?
In case this is not common, we do not need to add a man page, but we can move
the executable to /usr/libexec/ instead.

--
You are receiving this mail because:
You are on the CC list for the bug.
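For context on the kind of output being triaged in this packaging bug, the
rpmlint run itself is straightforward; the package file names below are only
illustrative:

    rpmlint glusterfs-server-*.rpm glusterfs-cli-*.rpm
    # typical warnings include no-manual-page-for-binary for helpers shipped in
    # /usr/sbin, which is what moving rarely-invoked tools such as gf_attach to
    # /usr/libexec/ would avoid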
From bugzilla at redhat.com Fri Apr 26 06:17:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 06:17:59 +0000 Subject: [Bugs] [Bug 1353518] packaging: rpmlint warning and errors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1353518 Niels de Vos changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ravishankar at redhat.com Flags| |needinfo?(ravishankar at redha | |t.com) --- Comment #3 from Niels de Vos --- Ravi, is there any reason for an administrator to execute /usr/sbin/glfsheal ? In case this is not common, we do not need to add a man-page, but we can move the executable to /usr/libexec/ instead. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 06:23:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 06:23:39 +0000 Subject: [Bugs] [Bug 1353518] packaging: rpmlint warning and errors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1353518 Niels de Vos changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |avishwan at redhat.com Flags| |needinfo?(avishwan at redhat.c | |om) --- Comment #4 from Niels de Vos --- Aravinda, which of the python scripts does an administrator need to execute for the snapshot-scheduler? Can they be installed under the Python package directory (or maybe /usr/libexec/) instead? It is also not very common to have executables in the path with a .py extension, it would be nice to address that too. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 06:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 06:33:15 +0000 Subject: [Bugs] [Bug 1703329] [gluster-infra]: Please create repo for plus one scale work In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703329 --- Comment #1 from Ashish Pandey --- The name could be "gluster-plus-one-scale" Let me know if you need any information regarding this project. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 06:37:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 06:37:05 +0000 Subject: [Bugs] [Bug 1353518] packaging: rpmlint warning and errors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1353518 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(moagrawa at redhat.c |needinfo?(avishwan at redhat.c |om) |om) |needinfo?(ravishankar at redha | |t.com) | |needinfo?(avishwan at redhat.c | |om) | --- Comment #5 from Ravishankar N --- (In reply to Niels de Vos from comment #3) > Ravi, is there any reason for an administrator to execute /usr/sbin/glfsheal > ? In case this is not common, we do not need to add a man-page, but we can > move the executable to /usr/libexec/ instead. glfsheal is invoked internally by the gluster CLI code with the appropriate arguments when the relevant heal info/ split-brain resolution gluster CLI commands are run. There is no need to expose glfsheal directly to the admins. 
Not sure if we need to make changes to the all invocations in gluster if we move the program to libexec: cli/src/cli-cmd-volume.c: runner_add_args(&runner, SBIN_DIR "/glfsheal", volname, NULL); -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 06:38:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 06:38:42 +0000 Subject: [Bugs] [Bug 1353518] packaging: rpmlint warning and errors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1353518 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(moagrawa at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 06:52:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 06:52:06 +0000 Subject: [Bugs] [Bug 1624701] error-out {inode, entry}lk fops with all-zero lk-owner In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624701 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-04-16 07:08:54 |2019-04-26 06:52:06 --- Comment #12 from Worker Ant --- REVIEW: https://review.gluster.org/22604 (features/locks: error-out {inode,entry}lk fops with all-zero lk-owner) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 07:15:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 07:15:41 +0000 Subject: [Bugs] [Bug 1694820] Issue in heavy rename workload In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-26 07:15:41 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22519 (geo-rep: Fix rename with existing destination with same gfid) merged (#7) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Apr 26 07:28:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 07:28:53 +0000 Subject: [Bugs] [Bug 1703343] New: Bricks fail to come online after node reboot on a scaled setup Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703343 Bug ID: 1703343 Summary: Bricks fail to come online after node reboot on a scaled setup Product: GlusterFS Version: mainline Status: NEW Whiteboard: brick-multiplexing Component: glusterd Keywords: ZStream Severity: high Priority: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, kramdoss at redhat.com, madam at redhat.com, moagrawa at redhat.com, rgeorge at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sarumuga at redhat.com, vbellur at redhat.com Depends On: 1638192 Blocks: 1637968 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1637968 [Bug 1637968] [RHGS] [Glusterd] Bricks fail to come online after node reboot on a scaled setup https://bugzilla.redhat.com/show_bug.cgi?id=1638192 [Bug 1638192] Bricks fail to come online after node reboot on a scaled setup -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 07:28:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 07:28:53 +0000 Subject: [Bugs] [Bug 1703343] Bricks fail to come online after node reboot on a scaled setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703343 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 07:48:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 07:48:16 +0000 Subject: [Bugs] [Bug 1703343] Bricks fail to come online after node reboot on a scaled setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703343 --- Comment #1 from Mohit Agrawal --- Multiple bricks are spawned on a node if the node is reboot during volumes starting from another node in the cluster Reproducer steps 1) Setup a cluster of 3 nodes 2) Enable brick_mux and create and start 50 volumes from node 1 3) Stop all the volumes from any node 4) Start all the volumes from node 2 after put 1 sec delay for i in {1..50}; do gluster v start testvol$i --mode=script; sleep 1; done 5) At the time of volumes are starting on node 2 run command on node 1 pkill -f gluster; glusterd 6) Wait some time to finish volumes startups and check the no. of glusterfsd are running on node1. 
RCA: When glusterd starts, it receives a friend-update request from a peer node carrying version changes for the volumes that were started while the node was down. glusterd deletes the volfiles and references for the old-version volumes from its internal data structures and creates new volfiles. glusterd was not able to attach the volume because these data-structure changes happened after the brick had started, so the data carried in the RPC attach request was not correct; the brick process then sent a disconnect to glusterd, glusterd tried to spawn a new brick, and as a result multiple brick processes were spawned. Regards, Mohit Agrawal -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 07:52:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 07:52:55 +0000 Subject: [Bugs] [Bug 1703343] Bricks fail to come online after node reboot on a scaled setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703343 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22635 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 07:52:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 07:52:56 +0000 Subject: [Bugs] [Bug 1703343] Bricks fail to come online after node reboot on a scaled setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703343 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22635 (glusterd: Multiple bricks are spawned if a node is reboot) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 08:13:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 08:13:48 +0000 Subject: [Bugs] [Bug 1703329] [gluster-infra]: Please create repo for plus one scale work In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703329 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #2 from M. Scherer --- Yes, I would like to get more information, like what is it going to be used for (or more clearly, what the project is exactly), who needs access, etc. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 08:23:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 08:23:27 +0000 Subject: [Bugs] [Bug 1702303] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702303 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-26 08:23:27 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22609 (glusterd: enable fips-mode-rchecksum for new volumes) merged (#4) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug.
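For reference, the reproducer from comment #1 of bug 1703343 above, collected into one runnable sketch; the commands and volume names (testvol1..testvol50) are the ones given in that comment, and pgrep is just one way to do the process count the last step asks for:

# On node 2: start the 50 volumes with a one-second gap between starts (step 4).
for i in {1..50}; do
    gluster volume start testvol$i --mode=script
    sleep 1
done

# On node 1, while the volumes are still starting on node 2 (step 5):
pkill -f gluster; glusterd

# After the startups settle, count brick processes on node 1 (step 6).
# With brick multiplexing enabled a single glusterfsd is expected;
# the bug shows several.
pgrep -c glusterfsd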
From bugzilla at redhat.com Fri Apr 26 11:16:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 11:16:08 +0000 Subject: [Bugs] [Bug 1703329] [gluster-infra]: Please create repo for plus one scale work In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703329 --- Comment #3 from Ashish Pandey --- Problem: To increase the capacity of existing gluster volume (AFR or EC), we need to add more bricks on that volume. Currently it is required to add as many node as it is required to keep the volume fault tolerant, which depends on the configuration of the volume. This in turn requires adding more than 1 node to scale gluster volume. However, it is not always possible to buy and place lot of nodes/serves in one shot and provide it to scale our volume. To solve this we have to come up with some solution so that we can scale out volume even if we add one server with enough bricks on that one server. This tool is going to help user to move the drives from one server to other server so that all the bricks are properly distributed and new bricks can be added to volume with complete fault tolerance. Access is needed by following team members for now - 1 - vbellur at redhat.com 2 - atumball at redhat.com 3 - aspandey at redhat.com github issues related to this - https://github.com/gluster/glusterfs/issues/169 https://github.com/gluster/glusterfs/issues/497 https://github.com/gluster/glusterfs/issues/632 --- Ashish -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 12:08:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 12:08:59 +0000 Subject: [Bugs] [Bug 1703433] New: gluster-block: setup GCOV & LCOV job Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703433 Bug ID: 1703433 Summary: gluster-block: setup GCOV & LCOV job Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: prasanna.kalever at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: ### Kind of issue Infra request: Setup GCOV & LCOV job for gluster-block project which should run nightly or weekly to help visualize test & line coverage achieved, which intern help us understand: * how often each line of code executes * what lines of code are actually executed * how much computing time each section of code uses and work on possible optimizations. ### Other useful information Repo: https://github.com/gluster/gluster-block To start with, we can build like we do with [travis](https://github.com/gluster/gluster-block/blob/master/extras/docker/Dockerfile.fedora29) Need to install run time dependencies: # yum install glusterfs-server targetcli tcmu-runner then start glusterd # systemctl start glusterd Then run simple test case # [./tests/basic.t](https://github.com/gluster/gluster-block/blob/master/tests/basic.t) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
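A rough sketch of what the requested coverage job for gluster-block (bug 1703433 above) could run nightly; the dependency install, glusterd start and basic.t run are taken from the request itself, while the configure flags and the lcov/genhtml steps are assumptions about the project's autotools build:

# Runtime dependencies and glusterd, as listed in the request.
yum install -y glusterfs-server targetcli tcmu-runner
systemctl start glusterd

# Build gluster-block with coverage instrumentation (assumed flags).
./autogen.sh
./configure CFLAGS="-g -O0 --coverage" LDFLAGS="--coverage"
make

# Run the basic test, then collect and render line/function coverage.
./tests/basic.t
lcov --capture --directory . --output-file gluster-block.info
genhtml gluster-block.info --output-directory coverage-html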
From bugzilla at redhat.com Fri Apr 26 12:12:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 12:12:18 +0000 Subject: [Bugs] [Bug 1703435] New: gluster-block: Upstream Jenkins job which get triggered at PR level Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703435 Bug ID: 1703435 Summary: gluster-block: Upstream Jenkins job which get triggered at PR level Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: prasanna.kalever at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: ### Kind of issue Infra request: Need a Jenkins job for gluster-block project which should run per PR (and may be more events like, refresh of PR) to help figure out any possible regressions and to help build overall confidence of upstream master branch. ### Other useful information Repo: https://github.com/gluster/gluster-block To start with, we can build like we do with [travis](https://github.com/gluster/gluster-block/blob/master/extras/docker/Dockerfile.fedora29) Need to install run time dependencies: # yum install glusterfs-server targetcli tcmu-runner then start glusterd # systemctl start glusterd Then run simple test case # [./tests/basic.t](https://github.com/gluster/gluster-block/blob/master/tests/basic.t) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 13:26:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 13:26:11 +0000 Subject: [Bugs] [Bug 1353518] packaging: rpmlint warning and errors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1353518 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(avishwan at redhat.c | |om) | |needinfo?(moagrawa at redhat.c | |om) | --- Comment #6 from Aravinda VK --- (In reply to Niels de Vos from comment #4) > Aravinda, which of the python scripts does an administrator need to execute > for the snapshot-scheduler? Can they be installed under the Python package > directory (or maybe /usr/libexec/) instead? It is also not very common to > have executables in the path with a .py extension, it would be nice to > address that too. I don't have much context on this component at the moment, I will look into the code and move to libexec. `conf.py` and `gcron.py` can be moved to other location but at least one binary(currently it is python file, but we can have symlink) is required in sbin because it is used by the end users.(I think libexec is only used for programs executed by non-users internally by applications) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 13:49:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 13:49:02 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #32 from Alberto Bengoa --- Hello Poornima, I did some tests today and, in my scenario, it seems fixed. What I did this time: - Mounted the new cluster (running 5.6 version) using a client running version 5.5 - Started a find . -type d on a directory with lots of directories. 
- It generated an outgoing traffic (on the client) of around 40mbps [1] Then I upgraded the client to version 5.6 and re-run the tests, and had around 800kbps network traffic[2]. Really good! I've made a couple of tests more, enabling quick read[3][4]. It may have slightly increased my network traffic, but nothing really significant. [1] - https://pasteboard.co/IbVwWTP.png [2] - https://pasteboard.co/IbVxgVU.png [3] - https://pasteboard.co/IbVxuaJ.png [4] - https://pasteboard.co/IbVxCbZ.png This is my current volume info: Volume Name: volume Type: Replicate Volume ID: 1d8f7d2d-bda6-4f1c-aa10-6ad29e0b7f5e Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: fs02tmp:/var/data/glusterfs/volume/brick Brick2: fs01tmp:/var/data/glusterfs/volume/brick Options Reconfigured: network.ping-timeout: 10 performance.flush-behind: on performance.write-behind-window-size: 16MB performance.cache-size: 1900MB performance.io-thread-count: 32 transport.address-family: inet nfs.disable: on performance.client-io-threads: on server.allow-insecure: on server.event-threads: 4 client.event-threads: 4 performance.readdir-ahead: off performance.read-ahead: off performance.open-behind: on performance.write-behind: off performance.stat-prefetch: off performance.quick-read: off performance.strict-o-direct: on performance.io-cache: off performance.read-after-open: yes features.cache-invalidation: on features.cache-invalidation-timeout: 600 performance.cache-invalidation: on performance.md-cache-timeout: 600 network.inode-lru-limit: 200000 Let me know if you need anything else. Cheers, Alberto Bengoa -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 13:16:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 13:16:33 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1613 from Worker Ant --- REVIEW: https://review.gluster.org/22634 (glusterd: coverity fixes) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri Apr 26 15:11:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 15:11:35 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 --- Comment #13 from Worker Ant --- REVIEW: https://review.gluster.org/21771 (cloudsync/cvlt: Cloudsync plugin for commvault store) merged (#21) on master by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 15:39:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 15:39:28 +0000 Subject: [Bugs] [Bug 1703329] [gluster-infra]: Please create repo for plus one scale work In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703329 --- Comment #4 from M. Scherer --- I created the repo, but neither atumball at redhat.com nor aspandey at redhat.com are valid email on github. I rather not guess that, so I would need either the gthub username, or the email used for the account to add them. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
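Alberto's verification for bug 1673058 above reduces to a small client-side routine; the server name and mount point are placeholders, and using sar to watch interface traffic is an assumption (any traffic monitor would do). He measured roughly 40 Mbps with a 5.5 client versus about 800 kbps after upgrading the client to 5.6.

# Mount the volume from the client under test (the volume is literally
# named "volume" in the report; server and mount point are placeholders).
mount -t glusterfs fs01tmp:/volume /mnt/volume

# Generate the metadata-heavy workload from the report.
cd /mnt/volume && find . -type d > /dev/null

# In a second terminal, watch client NIC throughput while the walk runs.
sar -n DEV 1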
From bugzilla at redhat.com Fri Apr 26 17:13:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 17:13:13 +0000 Subject: [Bugs] [Bug 1094328] poor fio rand read performance with read-ahead enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1094328 Csaba Henk changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(csaba at redhat.com) |needinfo?(rgowdapp at redhat.c | |om) --- Comment #15 from Csaba Henk --- I'm suggesting to close it as dupe of BZ 1676479. Asking Raghavendra for ack. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Apr 26 18:09:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 26 Apr 2019 18:09:18 +0000 Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642168 --- Comment #14 from Worker Ant --- REVIEW: https://review.gluster.org/22617 (cloudsync: Fix bug in cloudsync-fops-c.py) merged (#2) on master by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Apr 27 01:59:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 27 Apr 2019 01:59:54 +0000 Subject: [Bugs] [Bug 1703629] New: statedump is not capturing info related to glusterd Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703629 Bug ID: 1703629 Summary: statedump is not capturing info related to glusterd Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: statedump is not capturing the glusterd's mem dump. Version-Release number of selected component (if applicable): mainline How reproducible: always Steps to Reproduce: 1. Take a statdump of glusterd kill -USR1 `pidof glusterd` 2. check for glusterd related information 3. Actual results: glusterd related information is missing Expected results: statedump should capture info related to glusterd Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 27 02:14:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 27 Apr 2019 02:14:03 +0000 Subject: [Bugs] [Bug 1703629] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703629 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |srakonde at redhat.com --- Comment #1 from Sanju --- RCA: (gdb) b gf_proc_dump_single_xlator_info Breakpoint 1 at 0x7f19954b7e20: file statedump.c, line 480. (gdb) c Continuing. 
[Switching to Thread 0x7f198bb2f700 (LWP 10078)] Thread 3 "glfs_sigwait" hit Breakpoint 1, gf_proc_dump_single_xlator_info (trav=trav at entry=0x133ae30) at statedump.c:480 480 { (gdb) n 482 char itable_key[1024] = { (gdb) 480 { (gdb) 482 char itable_key[1024] = { (gdb) 480 { (gdb) 481 glusterfs_ctx_t *ctx = trav->ctx; (gdb) 482 char itable_key[1024] = { (gdb) 486 if (trav->cleanup_starting) (gdb) 489 if (ctx->measure_latency) (gdb) 490 gf_proc_dump_latency_info(trav); (gdb) 492 gf_proc_dump_xlator_mem_info(trav); (gdb) 494 if (GF_PROC_DUMP_IS_XL_OPTION_ENABLED(inode) && (trav->itable)) { (gdb) 499 if (!trav->dumpops) { (gdb) l 494 if (GF_PROC_DUMP_IS_XL_OPTION_ENABLED(inode) && (trav->itable)) { 495 snprintf(itable_key, sizeof(itable_key), "%d.%s.itable", ctx->graph_id, 496 trav->name); 497 } 498 499 if (!trav->dumpops) { 500 return; 501 } 502 503 if (trav->dumpops->priv && GF_PROC_DUMP_IS_XL_OPTION_ENABLED(priv)) (gdb) p trav->dumpops $3 = (struct xlator_dumpops *) 0x0 (gdb) c Continuing. In gf_proc_dump_single_xlator_info() trav->dumpops is null and function is returned to caller at line #500. If we look at xlator_api in glusterd.c file, we missed giving .dumpops value here. That's why trav->dumpops is null. xlator_api_t xlator_api = { .init = init, .fini = fini, .mem_acct_init = mem_acct_init, .op_version = {1}, /* Present from the initial version */ .fops = &fops, .cbks = &cbks, .options = options, .identifier = "glusterd", .category = GF_MAINTAINED, }; We have defined dumpops as below. we should add this information in xlator api. struct xlator_dumpops dumpops = { .priv = glusterd_dump_priv, }; xlator_api_t xlator_api = { .init = init, .fini = fini, .mem_acct_init = mem_acct_init, .op_version = {1}, /* Present from the initial version */ .dumpops = &dumpops, //added here, now trav->dumpops won't be null for glusterd .fops = &fops, .cbks = &cbks, .options = options, .identifier = "glusterd", .category = GF_MAINTAINED, }; Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 27 02:19:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 27 Apr 2019 02:19:18 +0000 Subject: [Bugs] [Bug 1703629] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703629 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22640 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Apr 27 02:19:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 27 Apr 2019 02:19:19 +0000 Subject: [Bugs] [Bug 1703629] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703629 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22640 (glusterd: add glusterd information in statedump) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. 
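To verify the dumpops fix from the RCA in bug 1703629 above, the statedump can simply be retriggered and inspected; the USR1 trigger is the one from the bug's reproduction steps, while the dump file name pattern under /var/run/gluster and the grep are assumptions about how to spot the glusterd sections:

# Trigger a statedump of the running glusterd.
kill -SIGUSR1 $(pidof glusterd)

# Inspect the newest dump file; once dumpops is wired into xlator_api,
# glusterd's private/mem sections should appear instead of the dump
# stopping at the generic xlator information.
dumpfile=$(ls -t /var/run/gluster/glusterdump.* 2>/dev/null | head -1)
grep -i glusterd "$dumpfile" | head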
From bugzilla at redhat.com Sat Apr 27 17:16:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 27 Apr 2019 17:16:21 +0000 Subject: [Bugs] [Bug 1670334] Some memory leaks found in GlusterFS 5.3 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670334 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(i_chips at qq.com) --- Comment #1 from Yaniv Kaul --- Thanks for the report - would you be able to send a patch to Gluster Gerrit? How did you find the leak? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Apr 27 17:17:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 27 Apr 2019 17:17:50 +0000 Subject: [Bugs] [Bug 1122807] [RFE] Log a checksum of the new client volfile after a graph change. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1122807 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Improvement Status|NEW |CLOSED Resolution|--- |DEFERRED Summary|[enhancement]: Log a |[RFE] Log a checksum of the |checksum of the new client |new client volfile after a |volfile after a graph |graph change. |change. | Last Closed| |2019-04-27 17:17:50 --- Comment #3 from Yaniv Kaul --- Closing for the time being due to lack of priority and resources to implement. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 27 17:18:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 27 Apr 2019 17:18:52 +0000 Subject: [Bugs] [Bug 1092178] RPM pre-uninstall scriptlet should not touch running services In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1092178 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-04-27 17:18:52 --- Comment #2 from Yaniv Kaul --- If Fedora packaging allow it, I assume we should keep it. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 27 19:34:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 27 Apr 2019 19:34:50 +0000 Subject: [Bugs] [Bug 1158120] Data corruption due to lack of cache revalidation on open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1158120 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(atumball at redhat.c | |om) --- Comment #5 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Apr 27 19:35:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 27 Apr 2019 19:35:11 +0000 Subject: [Bugs] [Bug 1198746] Volume passwords are visible to remote users In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1198746 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(atumball at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 28 01:06:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 01:06:35 +0000 Subject: [Bugs] [Bug 1094328] poor fio rand read performance with read-ahead enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1094328 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(rgowdapp at redhat.c | |om) | --- Comment #16 from Raghavendra G --- (In reply to Csaba Henk from comment #15) > I'm suggesting to close it as dupe of BZ 1676479. Asking Raghavendra for ack. I agree. Gluster read-ahead at best is redundant. Kernel read-ahead is more intelligent [1]. Since this bug is on fuse-mount, disabling read-ahead will introduce no regression. [1] https://lwn.net/Articles/155510/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Apr 28 05:20:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 05:20:40 +0000 Subject: [Bugs] [Bug 1214644] Upcall: Migrate state during rebalance/tiering In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1214644 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(skoduri at redhat.co | |m) --- Comment #3 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 28 05:21:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 05:21:29 +0000 Subject: [Bugs] [Bug 1193174] flock does not observe group membership In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193174 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |hgowtham at redhat.com Flags| |needinfo?(hgowtham at redhat.c | |om) --- Comment #2 from Yaniv Kaul --- Still relevant? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 28 06:50:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 06:50:08 +0000 Subject: [Bugs] [Bug 1703629] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703629 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1703753 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703753 [Bug 1703753] portmap entries missing in glusterd statedumps -- You are receiving this mail because: You are on the CC list for the bug. 
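A minimal illustration of the approach implied by Raghavendra's comment on bug 1094328 above: gluster's own read-ahead can be turned off per volume so that only kernel read-ahead applies on FUSE mounts (testvol is a placeholder volume name).

# Turn off the gluster read-ahead translator and rely on kernel read-ahead.
gluster volume set testvol performance.read-ahead off

# Confirm the setting took effect.
gluster volume get testvol performance.read-ahead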
From bugzilla at redhat.com Sun Apr 28 06:50:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 06:50:08 +0000 Subject: [Bugs] [Bug 1703629] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703629 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-28 07:07:30 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22640 (glusterd: define dumpops in the xlator_api of glusterd) merged (#2) on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Apr 28 07:22:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 07:22:29 +0000 Subject: [Bugs] [Bug 1703759] New: statedump is not capturing info related to glusterd Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 Bug ID: 1703759 Summary: statedump is not capturing info related to glusterd Product: GlusterFS Version: 6 Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: bugs at gluster.org Depends On: 1703753, 1703629 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1703629 +++ Description of problem: statedump is not capturing the glusterd's mem dump. Version-Release number of selected component (if applicable): mainline How reproducible: always Steps to Reproduce: 1. Take a statdump of glusterd kill -USR1 `pidof glusterd` 2. check for glusterd related information 3. Actual results: glusterd related information is missing Expected results: statedump should capture info related to glusterd Additional info: --- Additional comment from Sanju on 2019-04-27 07:44:03 IST --- RCA: (gdb) b gf_proc_dump_single_xlator_info Breakpoint 1 at 0x7f19954b7e20: file statedump.c, line 480. (gdb) c Continuing. [Switching to Thread 0x7f198bb2f700 (LWP 10078)] Thread 3 "glfs_sigwait" hit Breakpoint 1, gf_proc_dump_single_xlator_info (trav=trav at entry=0x133ae30) at statedump.c:480 480 { (gdb) n 482 char itable_key[1024] = { (gdb) 480 { (gdb) 482 char itable_key[1024] = { (gdb) 480 { (gdb) 481 glusterfs_ctx_t *ctx = trav->ctx; (gdb) 482 char itable_key[1024] = { (gdb) 486 if (trav->cleanup_starting) (gdb) 489 if (ctx->measure_latency) (gdb) 490 gf_proc_dump_latency_info(trav); (gdb) 492 gf_proc_dump_xlator_mem_info(trav); (gdb) 494 if (GF_PROC_DUMP_IS_XL_OPTION_ENABLED(inode) && (trav->itable)) { (gdb) 499 if (!trav->dumpops) { (gdb) l 494 if (GF_PROC_DUMP_IS_XL_OPTION_ENABLED(inode) && (trav->itable)) { 495 snprintf(itable_key, sizeof(itable_key), "%d.%s.itable", ctx->graph_id, 496 trav->name); 497 } 498 499 if (!trav->dumpops) { 500 return; 501 } 502 503 if (trav->dumpops->priv && GF_PROC_DUMP_IS_XL_OPTION_ENABLED(priv)) (gdb) p trav->dumpops $3 = (struct xlator_dumpops *) 0x0 (gdb) c Continuing. In gf_proc_dump_single_xlator_info() trav->dumpops is null and function is returned to caller at line #500. If we look at xlator_api in glusterd.c file, we missed giving .dumpops value here. That's why trav->dumpops is null. 
xlator_api_t xlator_api = { .init = init, .fini = fini, .mem_acct_init = mem_acct_init, .op_version = {1}, /* Present from the initial version */ .fops = &fops, .cbks = &cbks, .options = options, .identifier = "glusterd", .category = GF_MAINTAINED, }; We have defined dumpops as below. we should add this information in xlator api. struct xlator_dumpops dumpops = { .priv = glusterd_dump_priv, }; xlator_api_t xlator_api = { .init = init, .fini = fini, .mem_acct_init = mem_acct_init, .op_version = {1}, /* Present from the initial version */ .dumpops = &dumpops, //added here, now trav->dumpops won't be null for glusterd .fops = &fops, .cbks = &cbks, .options = options, .identifier = "glusterd", .category = GF_MAINTAINED, }; Thanks, Sanju --- Additional comment from Worker Ant on 2019-04-27 07:49:19 IST --- REVIEW: https://review.gluster.org/22640 (glusterd: add glusterd information in statedump) posted (#1) for review on master by Sanju Rakonde --- Additional comment from Worker Ant on 2019-04-28 12:37:30 IST --- REVIEW: https://review.gluster.org/22640 (glusterd: define dumpops in the xlator_api of glusterd) merged (#2) on master by Sanju Rakonde Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703629 [Bug 1703629] statedump is not capturing info related to glusterd https://bugzilla.redhat.com/show_bug.cgi?id=1703753 [Bug 1703753] portmap entries missing in glusterd statedumps -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 28 07:22:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 07:22:29 +0000 Subject: [Bugs] [Bug 1703629] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703629 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1703759 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 [Bug 1703759] statedump is not capturing info related to glusterd -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Apr 28 07:25:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 07:25:33 +0000 Subject: [Bugs] [Bug 1703759] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22641 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 28 07:25:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 07:25:34 +0000 Subject: [Bugs] [Bug 1703759] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22641 (glusterd: define dumpops in the xlator_api of glusterd) posted (#1) for review on release-6 by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sun Apr 28 07:32:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 07:32:16 +0000 Subject: [Bugs] [Bug 1703759] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Blocks| |1701203 (glusterfs-6.2) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1701203 [Bug 1701203] GlusterFS 6.2 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 28 07:32:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 07:32:16 +0000 Subject: [Bugs] [Bug 1701203] GlusterFS 6.2 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701203 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1703759 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 [Bug 1703759] statedump is not capturing info related to glusterd -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 28 07:39:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 07:39:52 +0000 Subject: [Bugs] [Bug 1214654] Self-heal: Migrate lease_locks as part of self-heal process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1214654 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(skoduri at redhat.co | |m) --- Comment #1 from Yaniv Kaul --- Status? -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 28 13:42:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 13:42:31 +0000 Subject: [Bugs] [Bug 1695480] Global Thread Pool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695480 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Improvement, Performance Priority|unspecified |high OS|Unspecified |Linux Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Apr 28 19:09:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 19:09:25 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22642 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sun Apr 28 19:09:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 28 Apr 2019 19:09:26 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #640 from Worker Ant --- REVIEW: https://review.gluster.org/22642 ([RFE][WIP][DNM]store: store all key-values in one shot) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 02:59:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 02:59:00 +0000 Subject: [Bugs] [Bug 1699023] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699023 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |amukherj at redhat.com Resolution|--- |DUPLICATE Last Closed| |2019-04-29 02:59:00 --- Comment #1 from Atin Mukherjee --- Duplicate of 1699025 *** This bug has been marked as a duplicate of bug 1699025 *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 02:59:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 02:59:00 +0000 Subject: [Bugs] [Bug 1699025] Brick is not able to detach successfully in brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699025 --- Comment #3 from Atin Mukherjee --- *** Bug 1699023 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 03:02:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:02:47 +0000 Subject: [Bugs] [Bug 1695099] The number of glusterfs processes keeps increasing, using all available resources In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695099 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(christian.ihle at dr | |ift.oslo.kommune.no) --- Comment #5 from Atin Mukherjee --- Please let us know if you have tested 5.6 and see this problem disappearing. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 03:05:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:05:43 +0000 Subject: [Bugs] [Bug 1698566] shd crashed while executing ./tests/bugs/core/bug-1432542-mpx-restart-crash.t in CI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698566 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com, | |rkavunga at redhat.com Flags| |needinfo?(rkavunga at redhat.c | |om) --- Comment #1 from Atin Mukherjee --- Rafi - could you take a look into it? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 29 03:08:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:08:35 +0000 Subject: [Bugs] [Bug 1703007] The telnet or something would cause high memory usage for glusterd & glusterfsd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703007 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(i_chips at qq.com) --- Comment #1 from Atin Mukherjee --- It's not clear to me how would a telnet cause an increase in the memory consumption of glusterd process and more over what's the motivation behind this? Could you help us in providing the following by a small test: Before and after telnet, capture statedumps of glusterd process (kill -SIGUSR1 $(pidof glusterd) and find the files in /var/run/gluster and share back here? Also do you see any logs in the glusterd and brick log files when you attempt to do telnet. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 03:21:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:21:31 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED --- Comment #16 from Nithya Balachandran --- Marking this Modified as I am done with the changes for now. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 03:22:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:22:36 +0000 Subject: [Bugs] [Bug 1703897] New: Refactor dht lookup code Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 Bug ID: 1703897 Summary: Refactor dht lookup code Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: distribute Keywords: Reopened Severity: medium Priority: high Assignee: spalai at redhat.com Reporter: nbalacha at redhat.com QA Contact: tdesala at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1590385 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1590385 +++ Description of problem: Refactor the dht lookup code in order to make it easier to maintain. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2018-06-12 14:24:47 UTC --- REVIEW: https://review.gluster.org/20246 (cluster/dht: refactor dht_lookup) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2018-06-14 07:09:47 UTC --- REVIEW: https://review.gluster.org/20267 (cluster/dht: Minor code cleanup) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2018-06-20 02:40:18 UTC --- COMMIT: https://review.gluster.org/20267 committed in master by "N Balachandran" with a commit message- cluster/dht: Minor code cleanup Removed extra variable. 
Change-Id: If43c47f6630454aeadab357a36d061ec0b53cdb5 updates: bz#1590385 Signed-off-by: N Balachandran --- Additional comment from Worker Ant on 2018-06-21 05:36:13 UTC --- COMMIT: https://review.gluster.org/20246 committed in master by "Amar Tumballi" with a commit message- cluster/dht: refactor dht_lookup The dht lookup code is getting difficult to maintain due to its size. Refactoring the code will make it easier to modify it in future. Change-Id: Ic7cb5bf4f018504dfaa7f0d48cf42ab0aa34abdd updates: bz#1590385 Signed-off-by: N Balachandran --- Additional comment from Worker Ant on 2018-08-02 16:20:48 UTC --- REVIEW: https://review.gluster.org/20622 (cluster/dht: refactor dht_lookup_cbk) posted (#1) for review on master by N Balachandran --- Additional comment from Shyamsundar on 2018-10-23 15:11:13 UTC --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report. glusterfs-5.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html [2] https://www.gluster.org/pipermail/gluster-users/ --- Additional comment from Nithya Balachandran on 2018-10-29 02:58:27 UTC --- Reopening as this is an umbrella BZ for many more changes to the rebalance process. --- Additional comment from Worker Ant on 2018-12-06 13:57:38 UTC --- REVIEW: https://review.gluster.org/21816 (cluster/dht: refactor dht_lookup_cbk) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2018-12-26 12:41:37 UTC --- REVIEW: https://review.gluster.org/21816 (cluster/dht: refactor dht_lookup_cbk) posted (#7) for review on master by N Balachandran --- Additional comment from Nithya Balachandran on 2018-12-26 12:56:32 UTC --- Reopening this as there will be more changes. --- Additional comment from Worker Ant on 2019-03-25 10:29:50 UTC --- REVIEW: https://review.gluster.org/22407 (cluster/dht: refactor dht lookup functions) posted (#1) for review on master by N Balachandran --- Additional comment from Shyamsundar on 2019-03-25 16:30:27 UTC --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ --- Additional comment from Worker Ant on 2019-04-06 01:41:34 UTC --- REVIEW: https://review.gluster.org/22407 (cluster/dht: refactor dht lookup functions) merged (#10) on master by N Balachandran --- Additional comment from Worker Ant on 2019-04-10 09:03:15 UTC --- REVIEW: https://review.gluster.org/22542 (cluster/dht: Refactor dht lookup functions) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2019-04-25 04:12:37 UTC --- REVIEW: https://review.gluster.org/22542 (cluster/dht: Refactor dht lookup functions) merged (#3) on master by Amar Tumballi --- Additional comment from Nithya Balachandran on 2019-04-29 03:21:31 UTC --- Marking this Modified as I am done with the changes for now. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 [Bug 1590385] Refactor dht lookup code -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 03:22:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:22:36 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1703897 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 [Bug 1703897] Refactor dht lookup code -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 03:28:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:28:32 +0000 Subject: [Bugs] [Bug 1698131] multiple glusterfsd processes being launched for the same brick, causing transport endpoint not connected In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698131 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-29 03:28:32 --- Comment #5 from Atin Mukherjee --- >From glusterfs/glusterd.log-20190407 I can see the following: [2019-04-02 22:03:45.520037] I [glusterd-utils.c:6301:glusterd_brick_start] 0-management: starting a fresh brick process for brick /v0/bricks/gv0 [2019-04-02 22:03:45.522039] I [rpc-clnt.c:1000:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-04-02 22:03:45.586328] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume gvOvirt. Starting local bricks. 
[2019-04-02 22:03:45.586480] I [glusterd-utils.c:6214:glusterd_brick_start] 0-management: discovered already-running brick /v0/gbOvirt/b0 [2019-04-02 22:03:45.586495] I [MSGID: 106142] [glusterd-pmap.c:290:pmap_registry_bind] 0-pmap: adding brick /v0/gbOvirt/b0 on port 49157 [2019-04-02 22:03:45.586519] I [rpc-clnt.c:1000:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-04-02 22:03:45.662116] E [MSGID: 101012] [common-utils.c:4075:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/gv0/boneyard-san-v0-bricks-gv0.pid [2019-04-02 22:03:45.662164] I [glusterd-utils.c:6301:glusterd_brick_start] 0-management: starting a fresh brick process for brick /v0/bricks/gv0 Which indicates that we attempted to start two processes for the same brick but this was with glusterfs-5.5 version which doesn't have the fix as mentioned in comment 2. Post this cluster has been upgraded to 6.0, I don't see such event. So this is already fixed and I am closing the bug. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 03:28:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:28:33 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1698131, which changed state. Bug 1698131 Summary: multiple glusterfsd processes being launched for the same brick, causing transport endpoint not connected https://bugzilla.redhat.com/show_bug.cgi?id=1698131 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 03:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:33:15 +0000 Subject: [Bugs] [Bug 1670334] Some memory leaks found in GlusterFS 5.3 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670334 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Atin Mukherjee --- https://review.gluster.org/#/c/glusterfs/+/22619/5/xlators/mgmt/glusterd/src/glusterd-mountbroker.c covers this part. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 03:33:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 03:33:29 +0000 Subject: [Bugs] [Bug 1670334] Some memory leaks found in GlusterFS 5.3 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670334 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(i_chips at qq.com) | -- You are receiving this mail because: You are on the CC list for the bug. 
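For the duplicate-brick symptom analysed in bug 1698131 above, a quick check on an affected node is to count how many glusterfsd processes reference the same brick path; the brick path below is the one from the quoted glusterd log, and using ps/grep for the check is an assumption, not part of the original report.

# Count brick processes whose command line references the same brick path.
# A count greater than 1 reproduces the "two processes for one brick" state.
ps -ef | grep '[g]lusterfsd' | grep -c '/v0/bricks/gv0'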
From bugzilla at redhat.com Mon Apr 29 04:56:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 04:56:34 +0000 Subject: [Bugs] [Bug 1703897] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|spalai at redhat.com |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 05:26:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 05:26:38 +0000 Subject: [Bugs] [Bug 1703435] gluster-block: Upstream Jenkins job which get triggered at PR level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703435 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com Assignee|bugs at gluster.org |dkhandel at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 05:27:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 05:27:39 +0000 Subject: [Bugs] [Bug 1698694] regression job isn't voting back to gerrit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698694 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-04-29 05:27:39 --- Comment #5 from Deepshikha khandelwal --- It is fixed now. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 05:28:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 05:28:59 +0000 Subject: [Bugs] [Bug 1703433] gluster-block: setup GCOV & LCOV job In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703433 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com Assignee|bugs at gluster.org |dkhandel at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Apr 25 13:36:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 25 Apr 2019 13:36:43 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #29 from Worker Ant --- REVIEW: https://review.gluster.org/22627 (performance/decompounder: remove the translator as the feature is not used anymore) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 29 05:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 05:30:27 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22628 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 05:30:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 05:30:28 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #30 from Worker Ant --- REVIEW: https://review.gluster.org/22628 (protocol: remove compound fop) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 05:52:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 05:52:52 +0000 Subject: [Bugs] [Bug 1702185] coredump reported by test ./tests/bugs/glusterd/bug-1699339.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702185 --- Comment #4 from Mohammed Rafi KC --- Backtrace: Thread 1 (Thread 0x7feb839dd700 (LWP 1191)): #0 0x00007feb9107fef9 in vfprintf () from /lib64/libc.so.6 No symbol table info available. #1 0x00007feb910aac33 in vasprintf () from /lib64/libc.so.6 No symbol table info available. #2 0x00007feb92a444b1 in _gf_msg (domain=0x2198c20 "management", file=0x7feb86c96298 "/home/jenkins/root/workspace/regression-test-with-multiplex/xlators/mgmt/glusterd/src/glusterd-svc-helper.c", function=0x7feb86c968e0 <__FUNCTION__.31158> "glusterd_svc_attach_cbk", line=684, level=GF_LOG_INFO, errnum=0, trace=0, msgid=106617, fmt=0x7feb86c964f8 "svc %s of volume %s attached successfully to pid %d") at /home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/logging.c:2113 ret = 0 msgstr = 0x0 ap = {{gp_offset = 48, fp_offset = 48, overflow_arg_area = 0x7feb839dc908, reg_save_area = 0x7feb839dc820}} this = 0x2197b90 ctx = 0x214d010 callstr = '\000' passcallstr = 0 log_inited = 1 __PRETTY_FUNCTION__ = "_gf_msg" #3 0x00007feb86c4086b in glusterd_svc_attach_cbk (req=0x7feb6c02be88, iov=0x7feb6c02bec0, count=1, v_frame=0x7feb6c01a5c8) at /home/jenkins/root/workspace/regression-test-with-multiplex/xlators/mgmt/glusterd/src/glusterd-svc-helper.c:682 frame = 0x7feb6c01a5c8 volinfo = 0x0 shd = 0x0 svc = 0x2244fd0 parent_svc = 0x0 mux_proc = 0x0 conf = 0x21e6290 flag = 0x7feb6c01f0d0 this = 0x2197b90 pid = -1 ret = 16 rsp = {op_ret = 0, op_errno = 0, spec = 0x7feb74099370 "", xdata = {xdata_len = 0, xdata_val = 0x0}} __FUNCTION__ = "glusterd_svc_attach_cbk" #4 0x00007feb927e154b in rpc_clnt_handle_reply (clnt=0x7feb6c005710, pollin=0x7feb74083af0) at /home/jenkins/root/workspace/regression-test-with-multiplex/rpc/rpc-lib/src/rpc-clnt.c:764 conn = 0x7feb6c005740 saved_frame = 0x7feb6c035c38 ret = 0 req = 0x7feb6c02be88 xid = 30 __FUNCTION__ = "rpc_clnt_handle_reply" #5 0x00007feb927e1a74 in rpc_clnt_notify (trans=0x7feb6c040db0, mydata=0x7feb6c005740, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7feb74083af0) at 
/home/jenkins/root/workspace/regression-test-with-multiplex/rpc/rpc-lib/src/rpc-clnt.c:931 conn = 0x7feb6c005740 clnt = 0x7feb6c005710 ret = -1 req_info = 0x0 pollin = 0x7feb74083af0 clnt_mydata = 0x0 old_THIS = 0x2197b90 __FUNCTION__ = "rpc_clnt_notify" #6 0x00007feb927dda5b in rpc_transport_notify (this=0x7feb6c040db0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7feb74083af0) at /home/jenkins/root/workspace/regression-test-with-multiplex/rpc/rpc-lib/src/rpc-transport.c:549 ret = -1 __FUNCTION__ = "rpc_transport_notify" #7 0x00007feb85d30c79 in socket_event_poll_in_async (xl=0x2197b90, async=0x7feb74083c18) at /home/jenkins/root/workspace/regression-test-with-multiplex/rpc/rpc-transport/socket/src/socket.c:2569 pollin = 0x7feb74083af0 this = 0x7feb6c040db0 priv = 0x7feb6c040a30 #8 0x00007feb85d2844c in gf_async (async=0x7feb74083c18, xl=0x2197b90, cbk=0x7feb85d30c22 ) at /home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/glusterfs/async.h:189 __FUNCTION__ = "gf_async" #9 0x00007feb85d30e07 in socket_event_poll_in (this=0x7feb6c040db0, notify_handled=true) at /home/jenkins/root/workspace/regression-test-with-multiplex/rpc/rpc-transport/socket/src/socket.c:2610 ret = 0 pollin = 0x7feb74083af0 priv = 0x7feb6c040a30 ctx = 0x214d010 #10 0x00007feb85d31db0 in socket_event_handler (fd=69, idx=31, gen=4, data=0x7feb6c040db0, poll_in=1, poll_out=0, poll_err=0, event_thread_died=0 '\000') at /home/jenkins/root/workspace/regression-test-with-multiplex/rpc/rpc-transport/socket/src/socket.c:3001 this = 0x7feb6c040db0 priv = 0x7feb6c040a30 ret = 0 ctx = 0x214d010 socket_closed = false notify_handled = false __FUNCTION__ = "socket_event_handler" #11 0x00007feb92abeca4 in event_dispatch_epoll_handler (event_pool=0x2183e90, event=0x7feb839dce80) at /home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/event-epoll.c:648 ev_data = 0x7feb839dce84 slot = 0x21c79d0 handler = 0x7feb85d3190b data = 0x7feb6c040db0 idx = 31 gen = 4 ret = 0 fd = 69 handled_error_previously = false __FUNCTION__ = "event_dispatch_epoll_handler" #12 0x00007feb92abf1bd in event_dispatch_epoll_worker (data=0x2203eb0) at /home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/event-epoll.c:761 event = {events = 1, data = {ptr = 0x40000001f, fd = 31, u32 = 31, u64 = 17179869215}} ret = 1 ev_data = 0x2203eb0 event_pool = 0x2183e90 myindex = 1 timetodie = 0 gen = 0 poller_death_notify = {next = 0x0, prev = 0x0} slot = 0x0 tmp = 0x0 __FUNCTION__ = "event_dispatch_epoll_worker" #13 0x00007feb91869dd5 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #14 0x00007feb91130ead in clone () from /lib64/libc.so.6 ~ RCA: During glusterd restart, we start a new shd daemon if not already running and attach all subsequent shd graphs to the existing daemon. So when the glusterd wait for an attach request to be processed, there is a chance that the volinfo might be freed, by a thread which handles handshake or even an epoll thread that stops and delete the volinfo. So we have to keep a ref on volinfo when we send an attach request. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
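The RCA above is an object-lifetime problem around an asynchronous RPC: the attach callback runs later on an epoll/RPC thread and may dereference a volinfo that a concurrent handshake or volume-stop path has already freed. The proposed fix is the usual ref-before-submit pattern; here is a minimal sketch with hypothetical names (volinfo_t, volinfo_ref/volinfo_unref, send_attach_request stand in for the real glusterd structures and RPC machinery):

/* Sketch of the ref-before-async-request pattern described in the RCA.
 * All names are illustrative, not the real glusterd API. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    atomic_int refcount;
    char volname[64];
} volinfo_t;

static volinfo_t *
volinfo_ref(volinfo_t *v)
{
    atomic_fetch_add(&v->refcount, 1);
    return v;
}

static void
volinfo_unref(volinfo_t *v)
{
    if (atomic_fetch_sub(&v->refcount, 1) == 1) {
        printf("freeing volinfo for %s\n", v->volname);
        free(v);                    /* last reference dropped */
    }
}

/* Callback that would be invoked later by an epoll/RPC thread. */
static void
attach_cbk(void *cookie)
{
    volinfo_t *v = cookie;
    printf("svc attached for volume %s\n", v->volname); /* safe: ref held */
    volinfo_unref(v);               /* drop the ref taken at submit time */
}

static void
send_attach_request(volinfo_t *v)
{
    /* Take a ref *before* handing the pointer to the async machinery so a
     * concurrent volume delete cannot free it under the callback. */
    volinfo_ref(v);
    attach_cbk(v);                  /* stand-in for the asynchronous reply */
}

int
main(void)
{
    volinfo_t *v = calloc(1, sizeof(*v));
    atomic_init(&v->refcount, 1);   /* owner reference */
    strcpy(v->volname, "gv0");

    send_attach_request(v);
    volinfo_unref(v);               /* owner drops its reference */
    return 0;
}

The owner's reference and the in-flight request's reference are independent, so the object can only disappear after both the deleting thread and the callback have dropped theirs.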
From bugzilla at redhat.com Mon Apr 29 06:00:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 06:00:46 +0000 Subject: [Bugs] [Bug 1698566] shd crashed while executing ./tests/bugs/core/bug-1432542-mpx-restart-crash.t in CI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698566 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |rkavunga at redhat.com Flags|needinfo?(rkavunga at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 06:14:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 06:14:05 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22643 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 06:14:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 06:14:06 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1614 from Worker Ant --- REVIEW: https://review.gluster.org/22643 (cloudsync/plugin: coverity fixes) posted (#1) for review on master by Susant Palai -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 06:51:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 06:51:38 +0000 Subject: [Bugs] [Bug 1193174] flock does not observe group membership In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193174 hari gowtham changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |jthottan at redhat.com Flags|needinfo?(hgowtham at redhat.c |needinfo?(jthottan at redhat.c |om) |om) --- Comment #3 from hari gowtham --- I'm adding Jiffin from NFS, he would be the better person to answer this. @Jiffin, Please do the needful. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 07:29:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 07:29:59 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #31 from Worker Ant --- REVIEW: https://review.gluster.org/22621 (nl-cache:add test to increase code coverage) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 29 07:31:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 07:31:32 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #641 from Worker Ant --- REVIEW: https://review.gluster.org/22626 (storage/posix: fix fresh file detection delay) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 07:57:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 07:57:07 +0000 Subject: [Bugs] [Bug 1703948] New: Self-heal daemon resources are not cleaned properly after a ec fini Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 Bug ID: 1703948 Summary: Self-heal daemon resources are not cleaned properly after a ec fini Product: GlusterFS Version: mainline Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: We were not properly cleaning self-heal daemon resources during ec fini. With shd multiplexing, it is absolutely necessary to cleanup all the resources during ec fini. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 07:57:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 07:57:37 +0000 Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |rkavunga at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 07:59:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 07:59:32 +0000 Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22644 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Apr 29 07:59:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 07:59:33 +0000 Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22644 (ec/shd: Cleanup self heal daemon resources during ec fini) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 09:04:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 09:04:01 +0000 Subject: [Bugs] [Bug 1703897] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 Sayalee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |saraut at redhat.com QA Contact|tdesala at redhat.com |saraut at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 09:34:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 09:34:18 +0000 Subject: [Bugs] [Bug 1703329] [gluster-infra]: Please create repo for plus one scale work In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703329 --- Comment #5 from Ashish Pandey --- Hi, Please find the details as follows - 1 - Ashish Pandey github user name - aspandey github email id - ashishpandey.cdac at gmail.com 2 - Amar Tumballi github user name - amarts email - amarts at gmail.com 3 - Vijay Bellur github user name - vbellur email - vbellur at redhat.com I am extremely sorry for the inconvenience. I just did not think about the github account details and sent the redhat email. --- Ashish -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 10:14:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 10:14:25 +0000 Subject: [Bugs] [Bug 1703329] [gluster-infra]: Please create repo for plus one scale work In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703329 --- Comment #6 from M. Scherer --- No need to be sorry, if we do not say what we need, people can't know. We should have a form or some way to tell people what we need, you shouldn't have to guess that :/ I have added folks to the repo, tell me if anything is missing, otherwise, i will close the bug later. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 12:10:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 12:10:08 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #32 from Worker Ant --- REVIEW: https://review.gluster.org/22629 (libglusterfs: remove compound-fop helper functions) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 29 12:32:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 12:32:43 +0000 Subject: [Bugs] [Bug 1704252] New: Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704252 Bug ID: 1704252 Summary: Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Ideally, glusterd should spawn the bulkvoldict thread only when the number of volumes is greater than 100 and brick_mux is enabled. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Set up a 1x3 cluster environment and enable brick_mux 2. Stop/start glusterd on one node 3. Check the messages in glusterd.log; it shows the logs below Create thread 1 to populate dict data for volume start index is 1 end index is 2 [[glusterd-utils.c:3559:glusterd_add_volumes_to_export_dict] 0-management: Finished dictionary popluation in all threads Actual results: A bulkvoldict thread is created even though the volume count is 1 Expected results: No bulkvoldict thread should be created when the volume count is below the threshold Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 12:32:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 12:32:56 +0000 Subject: [Bugs] [Bug 1704252] Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704252 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 13:20:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 13:20:17 +0000 Subject: [Bugs] [Bug 1676479] read-ahead and io-cache degrading performance on sequential read In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676479 Csaba Henk changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1426045, 845300 CC| |hchen at redhat.com --- Comment #2 from Csaba Henk --- *** Bug 1094328 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 13:20:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 13:20:17 +0000 Subject: [Bugs] [Bug 1094328] poor fio rand read performance with read-ahead enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1094328 Csaba Henk changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-04-29 13:20:17 --- Comment #17 from Csaba Henk --- *** This bug has been marked as a duplicate of bug 1676479 *** -- You are receiving this mail because: You are on the CC list for the bug.
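To make the condition requested in bug 1704252 above concrete, here is a minimal sketch of the intended decision; only the 100-volume cut-off and the brick_mux condition come from the bug description, everything else (function names, the thread-count cap) is illustrative rather than the actual glusterd_add_volumes_to_export_dict() logic:

/* Sketch: split the volume-dictionary population across worker threads
 * only when brick multiplexing is on and the volume count is large;
 * otherwise populate the dict inline in the calling thread. */
#include <stdbool.h>
#include <stdio.h>

#define BULK_THRESHOLD 100   /* cut-off taken from the bug description */

static void
populate_volumes_inline(int volume_count)
{
    printf("populating dict for %d volume(s) in the caller thread\n",
           volume_count);
}

static void
populate_volumes_threaded(int volume_count, int max_threads)
{
    int threads = (volume_count + BULK_THRESHOLD - 1) / BULK_THRESHOLD;

    if (threads > max_threads)
        threads = max_threads;
    printf("spawning %d bulkvoldict thread(s) for %d volumes\n",
           threads, volume_count);
    /* ... create threads, each handling a [start, end) volume slice ... */
}

static void
add_volumes_to_export_dict(int volume_count, bool brick_mux_enabled)
{
    if (!brick_mux_enabled || volume_count <= BULK_THRESHOLD) {
        populate_volumes_inline(volume_count);  /* the single-volume case */
        return;
    }
    populate_volumes_threaded(volume_count, 4);
}

int
main(void)
{
    add_volumes_to_export_dict(1, true);    /* 1x3 volume, brick_mux on  */
    add_volumes_to_export_dict(250, true);  /* large cluster: use threads */
    return 0;
}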
From bugzilla at redhat.com Mon Apr 29 14:38:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 14:38:59 +0000 Subject: [Bugs] [Bug 1703629] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703629 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1703753 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703753 [Bug 1703753] portmap entries missing in glusterd statedumps -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 14:38:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 14:38:59 +0000 Subject: [Bugs] [Bug 1703759] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1703753 Depends On|1703753 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703753 [Bug 1703753] portmap entries missing in glusterd statedumps -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 14:40:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 14:40:04 +0000 Subject: [Bugs] [Bug 1703897] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST CC| |amukherj at redhat.com Blocks| |1703759 Flags| |needinfo?(nbalacha at redhat.c | |om) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 [Bug 1703759] statedump is not capturing info related to glusterd -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Apr 29 14:40:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 14:40:04 +0000 Subject: [Bugs] [Bug 1703759] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1703897 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 [Bug 1703897] Refactor dht lookup code -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 18:51:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 18:51:12 +0000 Subject: [Bugs] [Bug 1428083] Repair cluster prove tests for FB environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428083 Vijay Bellur changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(vbellur at redhat.co | |m) | --- Comment #2 from Vijay Bellur --- I will check the applicability of the proposed patch (https://review.gluster.org/#/c/glusterfs/+/16225/) and update the bug. Thanks! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 29 18:53:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 18:53:01 +0000 Subject: [Bugs] [Bug 1428097] Repair more cluster tests in FB IPv6 environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428097 Vijay Bellur changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(vbellur at redhat.co | |m) | --- Comment #2 from Vijay Bellur --- Will review the relevance of https://review.gluster.org/#/c/glusterfs/+/16353/ and update this bug. Thanks! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 18:53:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 18:53:11 +0000 Subject: [Bugs] [Bug 1428097] Repair more cluster tests in FB IPv6 environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428097 Vijay Bellur changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |vbellur at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 18:53:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 18:53:55 +0000 Subject: [Bugs] [Bug 1428083] Repair cluster prove tests for FB environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428083 Vijay Bellur changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |vbellur at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 21:29:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 21:29:14 +0000 Subject: [Bugs] [Bug 1430623] pthread mutexes and condition variables are not destroyed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1430623 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|ijamali at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Apr 29 21:29:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 21:29:15 +0000 Subject: [Bugs] [Bug 1507896] glfs_init returns incorrect errno on faliure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1507896 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|ijamali at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Apr 29 21:29:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 21:29:16 +0000 Subject: [Bugs] [Bug 1611546] Log file glustershd.log being filled with errors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1611546 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|ijamali at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 03:13:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 03:13:37 +0000 Subject: [Bugs] [Bug 1703897] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1703759 |1696807 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 [Bug 1703759] statedump is not capturing info related to glusterd -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 30 03:13:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 03:13:37 +0000 Subject: [Bugs] [Bug 1703759] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1703897 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 [Bug 1703897] Refactor dht lookup code -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 03:13:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 03:13:38 +0000 Subject: [Bugs] [Bug 1703897] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Target Release|--- |RHGS 3.5.0 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 30 04:22:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 04:22:23 +0000 Subject: [Bugs] [Bug 1703897] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703897 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(nbalacha at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 30 06:27:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 06:27:07 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #642 from Worker Ant --- REVIEW: https://review.gluster.org/22601 (options.c,h: minor changes to GF_OPTION_RECONF) merged (#6) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 06:27:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 06:27:35 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1615 from Worker Ant --- REVIEW: https://review.gluster.org/22643 (cloudsync/plugin: coverity fixes) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 07:42:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 07:42:31 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 --- Comment #18 from manschwetus at cs-software-gmbh.de --- Ok, as we haven't had a crash since we disable open-behind it seems to be valid, to say that disabling open-behind bypasses the issue. As I disabled write-behind due to another defect report, is it worth to re enable it or would you expect problems with it? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 10:00:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 10:00:41 +0000 Subject: [Bugs] [Bug 1158120] Data corruption due to lack of cache revalidation on open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1158120 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Fixed In Version| |glusterfs-6.1 Resolution|--- |WORKSFORME Flags|needinfo?(rgowdapp at redhat.c | |om) | |needinfo?(pgurusid at redhat.c | |om) | |needinfo?(atumball at redhat.c | |om) | Last Closed| |2019-04-30 10:00:41 --- Comment #6 from Amar Tumballi --- This is no more relevant as we did multiple fixes in our caching layers and fixed may issues for hosting DB workload on GlusterFS. Closing as WORKSFORME on GlusterFS-6.1 release. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 11:57:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 11:57:19 +0000 Subject: [Bugs] [Bug 1703329] [gluster-infra]: Please create repo for plus one scale work In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703329 --- Comment #7 from Ashish Pandey --- Hi, I think you may close the bug as repo has been created and can be used. Thanks!! --- Ashish -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 12:21:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 12:21:57 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #60 from abhays --- Hi @Nithya, Any updates on this issue? 
Seems that the same test cases are failing in the Glusterfs v6.1 with additional ones:- ./tests/bugs/replicate/bug-1655854-support-dist-to-rep3-arb-conversion.t ./tests/features/fuse-lru-limit.t And one query we have with respect to these failures whether they affect the main functionality of Glusterfs or they can be ignored for now? Please let us know. Also, s390x systems have been added on the gluster-ci. Any updates regards to that? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 12:22:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 12:22:48 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #61 from abhays --- (In reply to abhays from comment #52) > (In reply to Raghavendra Bhat from comment #50) > > Hi, > > > > Thanks for the logs. From the logs saw that the following things are > > happening. > > > > 1) The scrubbing is started > > > > 2) Scrubber always decides whether a file is corrupted or not by comparing > > the stored on-disk signature (gets by getxattr) with its own calculated > > signature of the file. > > > > 3) Here, while getting the on-disk signature, getxattr is failing with > > ENOMEM (i.e. Cannot allocate memory) because of the endianness. > > > > 4) Further testcases in the test fail because, they expect the bad-file > > extended attribute to be present which scrubber could not set because of the > > above error (i.e. had it been able to successfully get the signature of the > > file via getxattr, it would have been able to compare the signature with its > > own calculated signature and set the bad-file extended attribute to indicate > > the file is corrupted). > > > > > > Looking at the code to come up with a fix to address this. > > Thanks for the reply @Raghavendra. We are also looking into the same. Any Updates on this @Raghavendra? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 13:11:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 13:11:15 +0000 Subject: [Bugs] [Bug 1704252] Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704252 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22647 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 30 13:11:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 13:11:17 +0000 Subject: [Bugs] [Bug 1704252] Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704252 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-04-30 13:11:17 --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22647 (glusterd: Fix bulkvoldict thread logic in brick multiplexing) merged (#6) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 30 13:13:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 13:13:54 +0000 Subject: [Bugs] [Bug 1704769] New: Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704769 Bug ID: 1704769 Summary: Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: glusterd Assignee: amukherj at redhat.com Reporter: moagrawa at redhat.com QA Contact: bmekala at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1704252 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1704252 +++ Description of problem: Ideally, glusterd should spawn bulvoldict thread only while no. of volumes are high more than 100 and brick_mux is enabled. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1.Setup 1x3 cluster environment and enable brick_mux 2.Stop/start glusterd on one node 3. Check the messages in glusterd.log, it is showing below logs Create thread 1 to populate dict data for volume start index is 1 end index is 2 [[glusterd-utils.c:3559:glusterd_add_volumes_to_export_dict] 0-management: Finished dictionary popluation in all threads Actual results: dict thread is creating even no. of volume is 1 Expected results: No need to create dict thread if volume count is lower Additional info: --- Additional comment from Worker Ant on 2019-04-30 13:11:17 UTC --- REVIEW: https://review.gluster.org/22647 (glusterd: Fix bulkvoldict thread logic in brick multiplexing) merged (#6) on master by Atin Mukherjee Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1704252 [Bug 1704252] Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Apr 30 13:13:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 13:13:54 +0000 Subject: [Bugs] [Bug 1704252] Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704252 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1704769 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1704769 [Bug 1704769] Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 30 13:14:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 13:14:51 +0000 Subject: [Bugs] [Bug 1704769] Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704769 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|amukherj at redhat.com |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 30 13:15:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 13:15:23 +0000 Subject: [Bugs] [Bug 1704769] Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704769 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- CC|bugs at gluster.org | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Apr 30 18:05:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 18:05:40 +0000 Subject: [Bugs] [Bug 1704888] New: delete the snapshots and volume at the end of uss.t Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704888 Bug ID: 1704888 Summary: delete the snapshots and volume at the end of uss.t Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: rabhat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: The current uss.t test in the test infrastructure from the glusterfs codebase performs multiple tests and at the end leaves the volume(s) and snap(s) to be cleaned up by the cleanup () function. While this is functionaly correct, it might take more time and effort for cleanup function to release all the resources (in a hard way). So, delete all the snapshots and the volume after the tests. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Apr 30 18:13:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 18:13:27 +0000 Subject: [Bugs] [Bug 1704888] delete the snapshots and volume at the end of uss.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704888 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22649 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 18:13:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 18:13:28 +0000 Subject: [Bugs] [Bug 1704888] delete the snapshots and volume at the end of uss.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704888 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22649 (tests: delete the snapshots and the volume after the tests) posted (#2) for review on master by Raghavendra Bhat -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 30 20:05:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 30 Apr 2019 20:05:01 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #62 from Raghavendra Bhat --- (In reply to abhays from comment #61) > (In reply to abhays from comment #52) > > (In reply to Raghavendra Bhat from comment #50) > > > Hi, > > > > > > Thanks for the logs. From the logs saw that the following things are > > > happening. > > > > > > 1) The scrubbing is started > > > > > > 2) Scrubber always decides whether a file is corrupted or not by comparing > > > the stored on-disk signature (gets by getxattr) with its own calculated > > > signature of the file. > > > > > > 3) Here, while getting the on-disk signature, getxattr is failing with > > > ENOMEM (i.e. Cannot allocate memory) because of the endianness. > > > > > > 4) Further testcases in the test fail because, they expect the bad-file > > > extended attribute to be present which scrubber could not set because of the > > > above error (i.e. had it been able to successfully get the signature of the > > > file via getxattr, it would have been able to compare the signature with its > > > own calculated signature and set the bad-file extended attribute to indicate > > > the file is corrupted). > > > > > > > > > Looking at the code to come up with a fix to address this. > > > > Thanks for the reply @Raghavendra. We are also looking into the same. > > Any Updates on this @Raghavendra? I am still working on a fix for this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
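The quoted analysis attributes the scrubber failures on s390x to a getxattr of the stored signature returning ENOMEM because of endianness. As a generic illustration of that class of bug (this is not the actual bit-rot code; the little-endian on-disk order and the 64-bit length field are assumptions made purely for the example), a length that is read back without byte-order conversion turns a small value into an absurdly large one on a big-endian host, and the subsequent buffer allocation fails with ENOMEM:

/* Generic illustration (not the actual bit-rot code) of how a length field
 * stored in one byte order and read back without conversion shows up as
 * ENOMEM on a big-endian host such as s390x. */
#include <endian.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    uint64_t sig_len = 48;                /* a plausible signature length */
    unsigned char ondisk[8];

    /* Writer stores the field in a fixed little-endian on-disk order. */
    uint64_t le = htole64(sig_len);
    memcpy(ondisk, &le, sizeof(le));

    /* Buggy reader: copies the raw bytes into a host-order integer.
     * On a big-endian host this yields 48 << 56, i.e. ~3.5e18. */
    uint64_t raw;
    memcpy(&raw, ondisk, sizeof(raw));
    printf("reader without conversion sees length %llu\n",
           (unsigned long long)raw);
    if (raw > (64ULL << 20))              /* clearly bogus size */
        printf("allocating %llu bytes would fail: %s\n",
               (unsigned long long)raw, strerror(ENOMEM));

    /* Correct reader: convert from the declared on-disk order first. */
    printf("converted length = %llu\n",
           (unsigned long long)le64toh(raw));
    return 0;
}

The conventional fix is to declare one on-disk byte order and convert at every read and write boundary, as the last line does; whatever shape the real scrubber fix takes, it has to remove the host-order assumption.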
From bugzilla at redhat.com Mon Apr 29 21:29:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 29 Apr 2019 21:29:18 +0000 Subject: [Bugs] [Bug 1507896] glfs_init returns incorrect errno on faliure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1507896 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |StudentProject Priority|high |medium Status|POST |NEW QA Contact|bugs at gluster.org | -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Apr 23 13:37:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 23 Apr 2019 13:37:10 -0000 Subject: [Bugs] [Bug 1702316] New: Cannot upgrade 5.x volume to 6.1 because of unused 'crypt' and 'bd' xlators Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702316 Bug ID: 1702316 Summary: Cannot upgrade 5.x volume to 6.1 because of unused 'crypt' and 'bd' xlators Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: medium Assignee: bugs at gluster.org Reporter: rob.dewit at coosto.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: After upgrade from 5.3 to 6.1, gluster refuses to start bricks that apparently have 'crypt' and 'bd' xlators. None of these have been provided at creation and according to 'gluster get VOLUME all' they are not used. Version-Release number of selected component (if applicable): 6.1 [2019-04-23 10:36:44.325141] I [MSGID: 100030] [glusterfsd.c:2849:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 6.1 (args: /usr/sbin/glusterd --pid-file=/run/glusterd.pid) [2019-04-23 10:36:44.325505] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 31705 [2019-04-23 10:36:44.327314] I [MSGID: 106478] [glusterd.c:1422:init] 0-management: Maximum allowed open file descriptors set to 65536 [2019-04-23 10:36:44.327354] I [MSGID: 106479] [glusterd.c:1478:init] 0-management: Using /var/lib/glusterd as working directory [2019-04-23 10:36:44.327363] I [MSGID: 106479] [glusterd.c:1484:init] 0-management: Using /var/run/gluster as pid file working directory [2019-04-23 10:36:44.330126] I [socket.c:931:__socket_server_bind] 0-socket.management: process started listening on port (36203) [2019-04-23 10:36:44.330258] E [rpc-transport.c:297:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/6.1/rpc-transport/rdma.so: cannot open shared object file: No such file or directory [2019-04-23 10:36:44.330267] W [rpc-transport.c:301:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine [2019-04-23 10:36:44.330274] W [rpcsvc.c:1985:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed [2019-04-23 10:36:44.330281] E [MSGID: 106244] [glusterd.c:1785:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport [2019-04-23 10:36:44.331976] I [socket.c:902:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 13 [2019-04-23 10:36:46.805843] I [MSGID: 106513] [glusterd-store.c:2394:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 50000 [2019-04-23 10:36:46.878878] I [MSGID: 106544] [glusterd.c:152:glusterd_uuid_init] 0-management: retrieved UUID: 
5104ed01-f959-4a82-bbd6-17d4dd177ec2 [2019-04-23 10:36:46.881463] E [mem-pool.c:351:__gf_free] (-->/usr/lib64/glusterfs/6.1/xlator/mgmt/glusterd.so(+0x49190) [0x7fb0ecb64190] -->/usr/lib64/glusterfs/6.1/xlator/mgmt/glusterd.so(+0x48f72) [0x7fb0ecb63f 72] -->/usr/lib64/libglusterfs.so.0(__gf_free+0x21d) [0x7fb0f25091dd] ) 0-: Assertion failed: mem_acct->rec[header->type].size >= header->size [2019-04-23 10:36:46.908134] I [MSGID: 106498] [glusterd-handler.c:3669:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2019-04-23 10:36:46.910052] I [MSGID: 106498] [glusterd-handler.c:3669:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2019-04-23 10:36:46.910135] W [MSGID: 106061] [glusterd-handler.c:3472:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout [2019-04-23 10:36:46.910167] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-04-23 10:36:46.911425] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 Final graph: +------------------------------------------------------------------------------+ 1: volume management 2: type mgmt/glusterd 3: option rpc-auth.auth-glusterfs on 4: option rpc-auth.auth-unix on 5: option rpc-auth.auth-null on 6: option rpc-auth-allow-insecure on 7: option transport.listen-backlog 1024 8: option event-threads 1 9: option ping-timeout 0 10: option transport.socket.read-fail-log off 11: option transport.socket.keepalive-interval 2 12: option transport.socket.keepalive-time 10 13: option transport-type rdma 14: option working-directory /var/lib/glusterd 15: end-volume 16: +------------------------------------------------------------------------------+ [2019-04-23 10:36:46.911405] W [MSGID: 106061] [glusterd-handler.c:3472:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout [2019-04-23 10:36:46.914845] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0 [2019-04-23 10:36:47.265981] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: a6ff7d5b-1e8d-4cdc-97cf-4e03b89462a3, host: 10.10.0.25, port: 0 [2019-04-23 10:36:47.271481] I [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a fresh brick process for brick /local.mnt/glfs/brick [2019-04-23 10:36:47.273759] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-04-23 10:36:47.336220] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600 [2019-04-23 10:36:47.336328] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped [2019-04-23 10:36:47.336383] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: nfs service is stopped [2019-04-23 10:36:47.336735] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600 [2019-04-23 10:36:47.337733] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: glustershd already stopped [2019-04-23 10:36:47.337755] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is stopped [2019-04-23 10:36:47.337804] I [MSGID: 106567] [glusterd-svc-mgmt.c:220:glusterd_svc_start] 0-management: Starting glustershd service [2019-04-23 10:36:48.340193] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600 [2019-04-23 10:36:48.340446] I [MSGID: 106131] 
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped [2019-04-23 10:36:48.340482] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: quotad service is stopped [2019-04-23 10:36:48.340525] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600 [2019-04-23 10:36:48.340662] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped [2019-04-23 10:36:48.340686] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is stopped [2019-04-23 10:36:48.340721] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600 [2019-04-23 10:36:48.340851] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped [2019-04-23 10:36:48.340865] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is stopped [2019-04-23 10:36:48.340913] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600 [2019-04-23 10:36:48.341005] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600 [2019-04-23 10:36:48.342056] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: a6ff7d5b-1e8d-4cdc-97cf-4e03b89462a3 [2019-04-23 10:36:48.342125] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 88496e0c-298b-47ef-98a1-a884ca68d7d4, host: 10.10.0.208, port: 0 [2019-04-23 10:36:48.378690] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 88496e0c-298b-47ef-98a1-a884ca68d7d4 [2019-04-23 10:37:15.410095] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object file: No such file or directory The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object file: No such file or directory" repeated 2 times between [2019-04-23 10:37:15.410095] and [2019-04-23 10:37:15.410162] [2019-04-23 10:37:15.417228] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2019-04-23 10:37:15.417228] and [2019-04-23 10:37:15.417319] [2019-04-23 10:37:15.449809] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/storage/bd.so: cannot open shared object file: No such file or directory [2019-04-23 12:23:14.757482] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object file: No such file or directory [2019-04-23 12:23:14.765810] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api [2019-04-23 12:23:14.801394] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/storage/bd.so: cannot open shared object file: No such file or directory The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 
0-xlator: /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object file: No such file or directory" repeated 2 times between [2019-04-23 12:23:14.757482] and [2019-04-23 12:23:14.757578] The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2019-04-23 12:23:14.765810] and [2019-04-23 12:23:14.765864] [2019-04-23 12:29:45.957524] I [MSGID: 106488] [glusterd-handler.c:1559:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-04-23 12:30:06.917403] I [MSGID: 106488] [glusterd-handler.c:1559:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-04-23 12:38:25.514866] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object file: No such file or directory [2019-04-23 12:38:25.522473] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api [2019-04-23 12:38:25.555952] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/storage/bd.so: cannot open shared object file: No such file or directory The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object file: No such file or directory" repeated 2 times between [2019-04-23 12:38:25.514866] and [2019-04-23 12:38:25.514931] The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2019-04-23 12:38:25.522473] and [2019-04-23 12:38:25.522545] [2019-04-23 12:52:00.569988] W [glusterfsd.c:1570:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7504) [0x7fb0f1310504] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xd5) [0x409f45] -->/usr/sbin/glusterd(cleanup_and_exit+0x57) [0x409db7] ) 0-: received signum (15), shutting down Option Value ------ ----- cluster.lookup-unhashed on cluster.lookup-optimize on cluster.min-free-disk 10% cluster.min-free-inodes 5% cluster.rebalance-stats off cluster.subvols-per-directory (null) cluster.readdir-optimize on cluster.rsync-hash-regex (null) cluster.extra-hash-regex (null) cluster.dht-xattr-name trusted.glusterfs.dht cluster.randomize-hash-range-by-gfid off cluster.rebal-throttle normal cluster.lock-migration off cluster.force-migration off cluster.local-volume-name (null) cluster.weighted-rebalance on cluster.switch-pattern (null) cluster.entry-change-log on cluster.read-subvolume (null) cluster.read-subvolume-index -1 cluster.read-hash-mode 1 cluster.background-self-heal-count 8 cluster.metadata-self-heal on cluster.data-self-heal on cluster.entry-self-heal on cluster.self-heal-daemon enable cluster.heal-timeout 600 cluster.self-heal-window-size 1 cluster.data-change-log on cluster.metadata-change-log on cluster.data-self-heal-algorithm (null) cluster.eager-lock on disperse.eager-lock on disperse.other-eager-lock on disperse.eager-lock-timeout 1 disperse.other-eager-lock-timeout 1 cluster.quorum-type auto cluster.quorum-count (null) cluster.choose-local true cluster.self-heal-readdir-size 1KB cluster.post-op-delay-secs 1 cluster.ensure-durability on cluster.consistent-metadata no cluster.heal-wait-queue-length 128 
Option Value
------ -----
cluster.lookup-unhashed on
cluster.lookup-optimize on
cluster.min-free-disk 10%
cluster.min-free-inodes 5%
cluster.rebalance-stats off
cluster.subvols-per-directory (null)
cluster.readdir-optimize on
cluster.rsync-hash-regex (null)
cluster.extra-hash-regex (null)
cluster.dht-xattr-name trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid off
cluster.rebal-throttle normal
cluster.lock-migration off
cluster.force-migration off
cluster.local-volume-name (null)
cluster.weighted-rebalance on
cluster.switch-pattern (null)
cluster.entry-change-log on
cluster.read-subvolume (null)
cluster.read-subvolume-index -1
cluster.read-hash-mode 1
cluster.background-self-heal-count 8
cluster.metadata-self-heal on
cluster.data-self-heal on
cluster.entry-self-heal on
cluster.self-heal-daemon enable
cluster.heal-timeout 600
cluster.self-heal-window-size 1
cluster.data-change-log on
cluster.metadata-change-log on
cluster.data-self-heal-algorithm (null)
cluster.eager-lock on
disperse.eager-lock on
disperse.other-eager-lock on
disperse.eager-lock-timeout 1
disperse.other-eager-lock-timeout 1
cluster.quorum-type auto
cluster.quorum-count (null)
cluster.choose-local true
cluster.self-heal-readdir-size 1KB
cluster.post-op-delay-secs 1
cluster.ensure-durability on
cluster.consistent-metadata no
cluster.heal-wait-queue-length 128
cluster.favorite-child-policy none
cluster.full-lock yes
cluster.stripe-block-size 128KB
cluster.stripe-coalesce true
diagnostics.latency-measurement off
diagnostics.dump-fd-stats off
diagnostics.count-fop-hits off
diagnostics.brick-log-level CRITICAL
diagnostics.client-log-level CRITICAL
diagnostics.brick-sys-log-level CRITICAL
diagnostics.client-sys-log-level CRITICAL
diagnostics.brick-logger (null)
diagnostics.client-logger (null)
diagnostics.brick-log-format (null)
diagnostics.client-log-format (null)
diagnostics.brick-log-buf-size 5
diagnostics.client-log-buf-size 5
diagnostics.brick-log-flush-timeout 120
diagnostics.client-log-flush-timeout 120
diagnostics.stats-dump-interval 0
diagnostics.fop-sample-interval 0
diagnostics.stats-dump-format json
diagnostics.fop-sample-buf-size 65535
diagnostics.stats-dnscache-ttl-sec 86400
performance.cache-max-file-size 0
performance.cache-min-file-size 0
performance.cache-refresh-timeout 1
performance.cache-priority
performance.cache-size 32MB
performance.io-thread-count 16
performance.high-prio-threads 16
performance.normal-prio-threads 16
performance.low-prio-threads 16
performance.least-prio-threads 1
performance.enable-least-priority on
performance.iot-watchdog-secs (null)
performance.iot-cleanup-disconnected-reqs off
performance.iot-pass-through false
performance.io-cache-pass-through false
performance.cache-size 128MB
performance.qr-cache-timeout 1
performance.cache-invalidation on
performance.ctime-invalidation false
performance.flush-behind on
performance.nfs.flush-behind on
performance.write-behind-window-size 1MB
performance.resync-failed-syncs-after-fsync off
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct off
performance.nfs.strict-o-direct off
performance.strict-write-ordering off
performance.nfs.strict-write-ordering off
performance.write-behind-trickling-writes on
performance.aggregate-size 128KB
performance.nfs.write-behind-trickling-writes on
performance.lazy-open yes
performance.read-after-open yes
performance.open-behind-pass-through false
performance.read-ahead-page-count 4
performance.read-ahead-pass-through false
performance.readdir-ahead-pass-through false
performance.md-cache-pass-through false
performance.md-cache-timeout 600
performance.cache-swift-metadata true
performance.cache-samba-metadata false
performance.cache-capability-xattrs true
performance.cache-ima-xattrs true
performance.md-cache-statfs off
performance.xattr-cache-list
performance.nl-cache-pass-through false
features.encryption off
encryption.master-key (null)
encryption.data-key-size 256
encryption.block-size 4096
network.frame-timeout 1800
network.ping-timeout 42
network.tcp-window-size (null)
network.remote-dio disable
client.event-threads 2
client.tcp-user-timeout 0
client.keepalive-time 20
client.keepalive-interval 2
client.keepalive-count 9
network.tcp-window-size (null)
network.inode-lru-limit 200000
auth.allow *
auth.reject (null)
transport.keepalive 1
server.allow-insecure on
server.root-squash off
server.anonuid 65534
server.anongid 65534
server.statedump-path /var/run/gluster
server.outstanding-rpc-limit 64
server.ssl (null)
auth.ssl-allow *
server.manage-gids off
server.dynamic-auth on
client.send-gids on
server.gid-timeout 300
server.own-thread (null)
server.event-threads 1
server.tcp-user-timeout 0
server.keepalive-time 20
server.keepalive-interval 2
server.keepalive-count 9
transport.listen-backlog 1024
ssl.own-cert (null)
ssl.private-key (null)
ssl.ca-list (null)
ssl.crl-path (null)
ssl.certificate-depth (null)
ssl.cipher-list (null)
ssl.dh-param (null)
ssl.ec-curve (null)
transport.address-family inet
performance.write-behind on
performance.read-ahead on
performance.readdir-ahead on
performance.io-cache on
performance.quick-read on
performance.open-behind on
performance.nl-cache off
performance.stat-prefetch on
performance.client-io-threads off
performance.nfs.write-behind on
performance.nfs.read-ahead off
performance.nfs.io-cache off
performance.nfs.quick-read off
performance.nfs.stat-prefetch off
performance.nfs.io-threads off
performance.force-readdirp true
performance.cache-invalidation on
features.uss off
features.snapshot-directory .snaps
features.show-snapshot-directory off
features.tag-namespaces off
network.compression off
network.compression.window-size -15
network.compression.mem-level 8
network.compression.min-size 0
network.compression.compression-level -1
network.compression.debug false
features.default-soft-limit 80%
features.soft-timeout 60
features.hard-timeout 5
features.alert-time 86400
features.quota-deem-statfs off
geo-replication.indexing off
geo-replication.indexing off
geo-replication.ignore-pid-check off
geo-replication.ignore-pid-check off
features.quota off
features.inode-quota off
features.bitrot disable
debug.trace off
debug.log-history no
debug.log-file no
debug.exclude-ops (null)
debug.include-ops (null)
debug.error-gen off
debug.error-failure (null)
debug.error-number (null)
debug.random-failure off
debug.error-fops (null)
nfs.enable-ino32 no
nfs.mem-factor 15
nfs.export-dirs on
nfs.export-volumes on
nfs.addr-namelookup off
nfs.dynamic-volumes off
nfs.register-with-portmap on
nfs.outstanding-rpc-limit 16
nfs.port 2049
nfs.rpc-auth-unix on
nfs.rpc-auth-null on
nfs.rpc-auth-allow all
nfs.rpc-auth-reject none
nfs.ports-insecure off
nfs.trusted-sync off
nfs.trusted-write off
nfs.volume-access read-write
nfs.export-dir
nfs.disable on
nfs.nlm on
nfs.acl on
nfs.mount-udp off
nfs.mount-rmtab /var/lib/glusterd/nfs/rmtab
nfs.rpc-statd /sbin/rpc.statd
nfs.server-aux-gids off
nfs.drc off
nfs.drc-size 0x20000
nfs.read-size (1 * 1048576ULL)
nfs.write-size (1 * 1048576ULL)
nfs.readdir-size (1 * 1048576ULL)
nfs.rdirplus on
nfs.event-threads 1
nfs.exports-auth-enable (null)
nfs.auth-refresh-interval-sec (null)
nfs.auth-cache-ttl-sec (null)
features.read-only off
features.worm off
features.worm-file-level off
features.worm-files-deletable on
features.default-retention-period 120
features.retention-mode relax
features.auto-commit-period 180
storage.linux-aio off
storage.batch-fsync-mode reverse-fsync
storage.batch-fsync-delay-usec 0
storage.owner-uid -1
storage.owner-gid -1
storage.node-uuid-pathinfo off
storage.health-check-interval 30
storage.build-pgfid off
storage.gfid2path on
storage.gfid2path-separator :
storage.reserve 1
storage.health-check-timeout 10
storage.fips-mode-rchecksum off
storage.force-create-mode 0000
storage.force-directory-mode 0000
storage.create-mask 0777
storage.create-directory-mask 0777
storage.max-hardlinks 100
storage.ctime off
storage.bd-aio off
config.gfproxyd off
cluster.server-quorum-type off
cluster.server-quorum-ratio 0
changelog.changelog off
changelog.changelog-dir {{ brick.path }}/.glusterfs/changelogs
changelog.encoding ascii
changelog.rollover-time 15
changelog.fsync-interval 5
changelog.changelog-barrier-timeout 120
changelog.capture-del-path off
features.barrier disable
features.barrier-timeout 120
features.trash off
features.trash-dir .trashcan
features.trash-eliminate-path (null)
features.trash-max-filesize 5MB
features.trash-internal-op off
cluster.enable-shared-storage disable
locks.trace off
locks.mandatory-locking off
cluster.disperse-self-heal-daemon enable
cluster.quorum-reads no
client.bind-insecure (null)
features.timeout 45
features.failover-hosts (null)
features.shard off
features.shard-block-size 64MB
features.shard-lru-limit 16384
features.shard-deletion-rate 100
features.scrub-throttle lazy
features.scrub-freq biweekly
features.scrub false
features.expiry-time 120
features.cache-invalidation on
features.cache-invalidation-timeout 600
features.leases off
features.lease-lock-recall-timeout 60
disperse.background-heals 8
disperse.heal-wait-qlength 128
cluster.heal-timeout 600
dht.force-readdirp on
disperse.read-policy gfid-hash
cluster.shd-max-threads 1
cluster.shd-wait-qlength 1024
cluster.locking-scheme full
cluster.granular-entry-heal no
features.locks-revocation-secs 0
features.locks-revocation-clear-all false
features.locks-revocation-max-blocked 0
features.locks-monkey-unlocking false
features.locks-notify-contention no
features.locks-notify-contention-delay 5
disperse.shd-max-threads 1
disperse.shd-wait-qlength 1024
disperse.cpu-extensions auto
disperse.self-heal-window-size 1
cluster.use-compound-fops off
performance.parallel-readdir off
performance.rda-request-size 131072
performance.rda-low-wmark 4096
performance.rda-high-wmark 128KB
performance.rda-cache-limit 10MB
performance.nl-cache-positive-entry false
performance.nl-cache-limit 10MB
performance.nl-cache-timeout 60
cluster.brick-multiplex off
cluster.max-bricks-per-process 0
disperse.optimistic-change-log on
disperse.stripe-cache 4
cluster.halo-enabled False
cluster.halo-shd-max-latency 99999
cluster.halo-nfsd-max-latency 5
cluster.halo-max-latency 5
cluster.halo-max-replicas 99999
cluster.halo-min-replicas 2
cluster.daemon-log-level INFO
debug.delay-gen off
delay-gen.delay-percentage 10%
delay-gen.delay-duration 100000
delay-gen.enable
disperse.parallel-writes on
features.sdfs on
features.cloudsync off
features.utime off
ctime.noatime on
feature.cloudsync-storetype (null)
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.