From bugzilla at redhat.com Mon Apr 1 02:41:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 02:41:14 +0000
Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes
using RDMA transport
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692441
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(amukherj at redhat.com) |
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 03:45:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 03:45:02 +0000
Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal
deamon (shd) when a volume is stopped unless it is the last volume
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659708
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed|2019-03-25 16:32:41 |2019-04-01 03:45:02
--- Comment #14 from Worker Ant ---
REVIEW: https://review.gluster.org/22075 (mgmt/shd: Implement multiplexing in
self heal daemon) merged (#25) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 04:35:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 04:35:10 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22455
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 04:35:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 04:35:11 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #6 from Worker Ant ---
REVIEW: https://review.gluster.org/22455 (posix-acl: remove default functions,
and use library fn instead) posted (#1) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 05:32:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:32:03 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22458
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 05:32:04 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:32:04 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #7 from Worker Ant ---
REVIEW: https://review.gluster.org/22458 (tests: enhance the auth.allow test to
validate all failures of 'login' module) posted (#1) for review on master by
Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 05:59:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:59:09 +0000
Subject: [Bugs] [Bug 1694561] New: gfapi: do not block epoll thread for
upcall notifications
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
Bug ID: 1694561
Summary: gfapi: do not block epoll thread for upcall
notifications
Product: GlusterFS
Version: 6
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Severity: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org, pasik at iki.fi
Depends On: 1693575
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1693575 +++
Description of problem:
With https://review.gluster.org/#/c/glusterfs/+/21783/, we made changes to
offload the processing of upcall notifications to a synctask so as not to block
epoll threads. However, it seems the purpose was not fully achieved.
In "glfs_cbk_upcall_data" -> "synctask_new1", after creating the synctask, the
calling thread waits on synctask_join until the syncfn finishes if no callback
is defined. So even with those changes, epoll threads stay blocked until the
upcalls are processed.
Hence the right fix now is to define a callback function for that synctask,
"glfs_cbk_upcall_syncop", so that the epoll/notify threads are unblocked
completely and upcall processing can happen in parallel in synctask threads.
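To make the blocking vs. callback distinction concrete, here is a minimal,
self-contained pthread sketch (an analogy only, not glusterfs code and not the
real synctask API; all names in it are made up): waiting on pthread_join stands
in for synctask_join blocking the notifying thread, while detaching the worker
stands in for handing completion to a callback.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *process_upcall(void *arg)   /* stands in for the syncfn */
{
    sleep(1);                            /* simulate slow upcall processing */
    printf("upcall '%s' processed\n", (const char *)arg);
    return NULL;
}

/* Old behaviour: the caller (think epoll/notify thread) joins and is blocked. */
static void handle_upcall_blocking(const char *name)
{
    pthread_t worker;
    pthread_create(&worker, NULL, process_upcall, (void *)name);
    pthread_join(worker, NULL);          /* notifying thread stuck here */
}

/* Fixed behaviour: the caller returns immediately; work completes in parallel. */
static void handle_upcall_async(const char *name)
{
    pthread_t worker;
    pthread_create(&worker, NULL, process_upcall, (void *)name);
    pthread_detach(worker);              /* notifying thread is not blocked */
}

int main(void)
{
    handle_upcall_blocking("blocking");  /* returns only after ~1s of "processing" */
    handle_upcall_async("async");        /* returns at once */
    sleep(2);                            /* give the detached worker time to finish */
    return 0;
}

(Compile with -pthread; the blocking variant holds its caller for the full
processing time, which is the behaviour being removed here.)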
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--- Additional comment from Soumya Koduri on 2019-03-28 09:28:58 UTC ---
Users have complained about the nfs-ganesha process getting stuck here:
https://github.com/nfs-ganesha/nfs-ganesha/issues/335
--- Additional comment from Worker Ant on 2019-03-28 09:34:11 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for
upcall processing) posted (#1) for review on master by soumya k
--- Additional comment from Worker Ant on 2019-03-29 07:25:10 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for
upcall processing) merged (#4) on master by Amar Tumballi
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1693575
[Bug 1693575] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 05:59:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:59:09 +0000
Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693575
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1694561
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
[Bug 1694561] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 05:59:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:59:32 +0000
Subject: [Bugs] [Bug 1694562] New: gfapi: do not block epoll thread for
upcall notifications
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694562
Bug ID: 1694562
Summary: gfapi: do not block epoll thread for upcall
notifications
Product: GlusterFS
Version: 5
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Severity: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org, pasik at iki.fi
Depends On: 1693575
Blocks: 1694561
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1693575 +++
Description of problem:
With https://review.gluster.org/#/c/glusterfs/+/21783/, we made changes to
offload the processing of upcall notifications to a synctask so as not to block
epoll threads. However, it seems the purpose was not fully achieved.
In "glfs_cbk_upcall_data" -> "synctask_new1", after creating the synctask, the
calling thread waits on synctask_join until the syncfn finishes if no callback
is defined. So even with those changes, epoll threads stay blocked until the
upcalls are processed.
Hence the right fix now is to define a callback function for that synctask,
"glfs_cbk_upcall_syncop", so that the epoll/notify threads are unblocked
completely and upcall processing can happen in parallel in synctask threads.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--- Additional comment from Soumya Koduri on 2019-03-28 09:28:58 UTC ---
Users have complained about the nfs-ganesha process getting stuck here:
https://github.com/nfs-ganesha/nfs-ganesha/issues/335
--- Additional comment from Worker Ant on 2019-03-28 09:34:11 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for
upcall processing) posted (#1) for review on master by soumya k
--- Additional comment from Worker Ant on 2019-03-29 07:25:10 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for
upcall processing) merged (#4) on master by Amar Tumballi
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1693575
[Bug 1693575] gfapi: do not block epoll thread for upcall notifications
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
[Bug 1694561] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 05:59:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:59:32 +0000
Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693575
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1694562
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694562
[Bug 1694562] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 05:59:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:59:32 +0000
Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1694562
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694562
[Bug 1694562] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 05:59:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:59:50 +0000
Subject: [Bugs] [Bug 1694563] New: gfapi: do not block epoll thread for
upcall notifications
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694563
Bug ID: 1694563
Summary: gfapi: do not block epoll thread for upcall
notifications
Product: GlusterFS
Version: 4.1
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Severity: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org, pasik at iki.fi
Depends On: 1693575
Blocks: 1694561, 1694562
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1693575 +++
Description of problem:
With https://review.gluster.org/#/c/glusterfs/+/21783/, we made changes to
offload the processing of upcall notifications to a synctask so as not to block
epoll threads. However, it seems the purpose was not fully achieved.
In "glfs_cbk_upcall_data" -> "synctask_new1", after creating the synctask, the
calling thread waits on synctask_join until the syncfn finishes if no callback
is defined. So even with those changes, epoll threads stay blocked until the
upcalls are processed.
Hence the right fix now is to define a callback function for that synctask,
"glfs_cbk_upcall_syncop", so that the epoll/notify threads are unblocked
completely and upcall processing can happen in parallel in synctask threads.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--- Additional comment from Soumya Koduri on 2019-03-28 09:28:58 UTC ---
Users have complained about the nfs-ganesha process getting stuck here:
https://github.com/nfs-ganesha/nfs-ganesha/issues/335
--- Additional comment from Worker Ant on 2019-03-28 09:34:11 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for
upcall processing) posted (#1) for review on master by soumya k
--- Additional comment from Worker Ant on 2019-03-29 07:25:10 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for
upcall processing) merged (#4) on master by Amar Tumballi
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1693575
[Bug 1693575] gfapi: do not block epoll thread for upcall notifications
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
[Bug 1694561] gfapi: do not block epoll thread for upcall notifications
https://bugzilla.redhat.com/show_bug.cgi?id=1694562
[Bug 1694562] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 05:59:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:59:50 +0000
Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693575
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1694563
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694563
[Bug 1694563] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 05:59:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:59:50 +0000
Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1694563
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694563
[Bug 1694563] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 05:59:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 05:59:50 +0000
Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694562
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1694563
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694563
[Bug 1694563] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:02:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:02:09 +0000
Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22459
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:02:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:02:10 +0000
Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22459 (gfapi: Unblock epoll thread for
upcall processing) posted (#1) for review on release-6 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:02:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:02:17 +0000
Subject: [Bugs] [Bug 1694565] New: gfapi: do not block epoll thread for
upcall notifications
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694565
Bug ID: 1694565
Summary: gfapi: do not block epoll thread for upcall
notifications
Product: Red Hat Gluster Storage
Version: rhgs-3.5
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Severity: high
Assignee: pgurusid at redhat.com
Reporter: skoduri at redhat.com
QA Contact: vdas at redhat.com
CC: bugs at gluster.org, jthottan at redhat.com, pasik at iki.fi,
rhs-bugs at redhat.com, sankarshan at redhat.com,
skoduri at redhat.com, storage-qa-internal at redhat.com
Depends On: 1693575
Blocks: 1694561, 1694562, 1694563
Target Milestone: ---
Classification: Red Hat
+++ This bug was initially created as a clone of Bug #1693575 +++
Description of problem:
With https://review.gluster.org/#/c/glusterfs/+/21783/, we made changes to
offload the processing of upcall notifications to a synctask so as not to block
epoll threads. However, it seems the purpose was not fully achieved.
In "glfs_cbk_upcall_data" -> "synctask_new1", after creating the synctask, the
calling thread waits on synctask_join until the syncfn finishes if no callback
is defined. So even with those changes, epoll threads stay blocked until the
upcalls are processed.
Hence the right fix now is to define a callback function for that synctask,
"glfs_cbk_upcall_syncop", so that the epoll/notify threads are unblocked
completely and upcall processing can happen in parallel in synctask threads.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--- Additional comment from Soumya Koduri on 2019-03-28 09:28:58 UTC ---
Users have complained about the nfs-ganesha process getting stuck here:
https://github.com/nfs-ganesha/nfs-ganesha/issues/335
--- Additional comment from Worker Ant on 2019-03-28 09:34:11 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for
upcall processing) posted (#1) for review on master by soumya k
--- Additional comment from Worker Ant on 2019-03-29 07:25:10 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for
upcall processing) merged (#4) on master by Amar Tumballi
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1693575
[Bug 1693575] gfapi: do not block epoll thread for upcall notifications
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
[Bug 1694561] gfapi: do not block epoll thread for upcall notifications
https://bugzilla.redhat.com/show_bug.cgi?id=1694562
[Bug 1694562] gfapi: do not block epoll thread for upcall notifications
https://bugzilla.redhat.com/show_bug.cgi?id=1694563
[Bug 1694563] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 06:02:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:02:17 +0000
Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693575
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1694565
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694565
[Bug 1694565] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 06:02:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:02:17 +0000
Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1694565
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694565
[Bug 1694565] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:02:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:02:17 +0000
Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694562
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1694565
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694565
[Bug 1694565] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:02:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:02:17 +0000
Subject: [Bugs] [Bug 1694563] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694563
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1694565
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694565
[Bug 1694565] gfapi: do not block epoll thread for upcall notifications
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:03:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:03:19 +0000
Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694562
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22460
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:03:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:03:20 +0000
Subject: [Bugs] [Bug 1694562] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694562
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22460 (gfapi: Unblock epoll thread for
upcall processing) posted (#1) for review on release-5 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:19:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:19:43 +0000
Subject: [Bugs] [Bug 1694565] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694565
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|pgurusid at redhat.com |skoduri at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 06:33:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:33:33 +0000
Subject: [Bugs] [Bug 1694563] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694563
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22461
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:33:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:33:34 +0000
Subject: [Bugs] [Bug 1694563] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694563
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22461 (gfapi: Unblock epoll thread for
upcall processing) posted (#1) for review on release-4.1 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 06:50:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 06:50:09 +0000
Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling
upgrade.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694010
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |nbalacha at redhat.com
Flags| |needinfo?(hgowtham at redhat.com)
--- Comment #2 from Nithya Balachandran ---
Please explain why this happens and how the workaround solves the issue.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 07:58:40 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 07:58:40 +0000
Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling
upgrade.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694010
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |amukherj at redhat.com
Flags| |needinfo?(hgowtham at redhat.com)
--- Comment #3 from Atin Mukherjee ---
> When we do a rolling upgrade of the cluster from 3.12, 4.1 or 5.5 to 6, the upgraded node goes into disconnected state.
Isn't this only seen from 3.12 to 6 upgrade?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 08:01:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 08:01:25 +0000
Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling
upgrade.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694010
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(hgowtham at redhat.com) |
|needinfo?(hgowtham at redhat.com) |
--- Comment #4 from hari gowtham ---
Hi Nithya,
The RCA for this is yet to be done.
I didn't find anything fishy in the logs.
As I had to move forward with the testing, I tried the usual workaround of
flushing the iptables to check whether it fixes the disconnects, and yes, it
did bring the peers back into the connected state.
The reason why this is happening is yet to be discovered.
Regards,
Hari.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 08:04:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 08:04:41 +0000
Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling
upgrade.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694010
--- Comment #5 from hari gowtham ---
(In reply to Atin Mukherjee from comment #3)
> > When we do a rolling upgrade of the cluster from 3.12, 4.1 or 5.5 to 6, the upgraded node goes into disconnected state.
>
> Isn't this only seen from 3.12 to 6 upgrade?
No, Atin. The issue happened with all the versions.
It could as well be some network issue with the machines I tried it on; I'm not
sure.
The point to note here is: sometimes just a glusterd restart fixed it, and in
some scenarios it needed an iptables flush followed by a glusterd restart. But
I found that an iptables flush together with a glusterd restart fixed it in
every scenario I tried.
I have not yet found time to debug this further.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 08:22:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 08:22:32 +0000
Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling
upgrade.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694010
--- Comment #6 from Atin Mukherjee ---
FYI, I tested a rolling upgrade from the latest glusterfs 3.12 to glusterfs 6
without any issues.
Can someone else please try as well?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 08:41:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 08:41:27 +0000
Subject: [Bugs] [Bug 1694565] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694565
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #2 from Soumya Koduri ---
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/166586/1
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 09:03:01 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:03:01 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #8 from Worker Ant ---
REVIEW: https://review.gluster.org/22441 (tests: add statedump to playground)
merged (#2) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:04:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:04:00 +0000
Subject: [Bugs] [Bug 1694610] New: glusterd leaking memory when issued
gluster vol status all tasks continuosly
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694610
Bug ID: 1694610
Summary: glusterd leaking memory when issued gluster vol status
all tasks continuosly
Product: GlusterFS
Version: 6
Hardware: x86_64
OS: Linux
Status: NEW
Component: glusterd
Severity: high
Priority: medium
Assignee: bugs at gluster.org
Reporter: srakonde at redhat.com
CC: amukherj at redhat.com, bmekala at redhat.com,
bugs at gluster.org, nchilaka at redhat.com,
rhs-bugs at redhat.com, sankarshan at redhat.com,
srakonde at redhat.com, storage-qa-internal at redhat.com,
vbellur at redhat.com
Depends On: 1691164
Blocks: 1686255
Target Milestone: ---
Classification: Community
Description of problem:
glusterd leaks memory when "gluster vol status tasks" is issued continuously
for 12 hours. The memory usage increases from 250 MB to 1.1 GB; the increase
has been 750 MB.
Version-Release number of selected component (if applicable):
glusterfs-3.12.2
How reproducible:
1/1
Steps to Reproduce:
1. On a six node cluster with brick-multiplexing enabled
2. Created 150 disperse volumes and 250 replica volumes and started them
3. Taken memory footprint from all the nodes
4. Issued "while true; do gluster volume status all tasks; sleep 2; done" with
a time gap of 2 seconds
Actual results:
Seen a memory increase of glusterd on Node N1 from 260MB to 1.1GB
Expected results:
glusterd memory shouldn't leak
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1686255
[Bug 1686255] glusterd leaking memory when issued gluster vol status all tasks
continuosly
https://bugzilla.redhat.com/show_bug.cgi?id=1691164
[Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks
continuosly
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:04:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:04:00 +0000
Subject: [Bugs] [Bug 1691164] glusterd leaking memory when issued gluster
vol status all tasks continuosly
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691164
Sanju changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1694610
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694610
[Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks
continuosly
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:05:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:05:13 +0000
Subject: [Bugs] [Bug 1694612] New: glusterd leaking memory when issued
gluster vol status all tasks continuosly
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694612
Bug ID: 1694612
Summary: glusterd leaking memory when issued gluster vol status
all tasks continuosly
Product: GlusterFS
Version: 5
Hardware: x86_64
OS: Linux
Status: NEW
Component: glusterd
Severity: high
Priority: medium
Assignee: bugs at gluster.org
Reporter: srakonde at redhat.com
CC: amukherj at redhat.com, bmekala at redhat.com,
bugs at gluster.org, nchilaka at redhat.com,
rhs-bugs at redhat.com, sankarshan at redhat.com,
srakonde at redhat.com, storage-qa-internal at redhat.com,
vbellur at redhat.com
Depends On: 1691164
Blocks: 1686255, 1694610
Target Milestone: ---
Classification: Community
Description of problem:
glusterd leaks memory when "gluster vol status tasks" is issued continuously
for 12 hours. The memory usage increases from 250 MB to 1.1 GB; the increase
has been 750 MB.
Version-Release number of selected component (if applicable):
glusterfs-3.12.2
How reproducible:
1/1
Steps to Reproduce:
1. On a six node cluster with brick-multiplexing enabled
2. Created 150 disperse volumes and 250 replica volumes and started them
3. Taken memory footprint from all the nodes
4. Issued "while true; do gluster volume status all tasks; sleep 2; done" with
a time gap of 2 seconds
Actual results:
Seen a memory increase of glusterd on Node N1 from 260MB to 1.1GB
Expected results:
glusterd memory shouldn't leak
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1686255
[Bug 1686255] glusterd leaking memory when issued gluster vol status all tasks
continuosly
https://bugzilla.redhat.com/show_bug.cgi?id=1691164
[Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks
continuosly
https://bugzilla.redhat.com/show_bug.cgi?id=1694610
[Bug 1694610] glusterd leaking memory when issued gluster vol status all tasks
continuosly
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:05:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:05:13 +0000
Subject: [Bugs] [Bug 1691164] glusterd leaking memory when issued gluster
vol status all tasks continuosly
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691164
Sanju changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1694612
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694612
[Bug 1694612] glusterd leaking memory when issued gluster vol status all tasks
continuosly
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:05:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:05:13 +0000
Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster
vol status all tasks continuosly
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694610
Sanju changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1694612
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694612
[Bug 1694612] glusterd leaking memory when issued gluster vol status all tasks
continuosly
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:07:40 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:07:40 +0000
Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster
vol status all tasks continuosly
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694610
--- Comment #1 from Sanju ---
Root cause:
A key set in the dictionary priv->glusterd_txn_opinfo is leaked in every
'volume status all' transaction when the CLI fetches the list of volume names
as part of the first transaction.
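For illustration only, here is a minimal, self-contained C sketch of the kind
of leak described above (it is not the actual glusterd code; txn_opinfo_set,
txn_opinfo_clear and the list structure are made-up stand-ins for the real
dictionary): an entry is stored per transaction but never removed unless a
cleanup step runs when the transaction completes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef FIXED
#define FIXED 0        /* build with -DFIXED=1 to enable the cleanup step */
#endif

/* stands in for priv->glusterd_txn_opinfo; a simple list instead of a dict */
struct txn_opinfo {
    char txn_id[64];
    struct txn_opinfo *next;
};

static struct txn_opinfo *txn_table;

static void txn_opinfo_set(const char *txn_id)
{
    struct txn_opinfo *e = calloc(1, sizeof(*e));
    snprintf(e->txn_id, sizeof(e->txn_id), "%s", txn_id);
    e->next = txn_table;
    txn_table = e;
}

static void txn_opinfo_clear(const char *txn_id)
{
    for (struct txn_opinfo **pp = &txn_table; *pp; pp = &(*pp)->next) {
        if (strcmp((*pp)->txn_id, txn_id) == 0) {
            struct txn_opinfo *dead = *pp;
            *pp = dead->next;
            free(dead);
            return;
        }
    }
}

int main(void)
{
    char txn_id[64];

    /* stands in for "gluster volume status all tasks" issued in a loop */
    for (int i = 0; i < 1000000; i++) {
        snprintf(txn_id, sizeof(txn_id), "txn-%d", i);
        txn_opinfo_set(txn_id);        /* an entry is stored for the transaction */
        if (FIXED)
            txn_opinfo_clear(txn_id);  /* the cleanup step the leak is missing */
    }

    size_t held = 0;
    for (struct txn_opinfo *e = txn_table; e; e = e->next)
        held++;
    printf("opinfo entries still held: %zu\n", held);
    return 0;
}

Built as-is it keeps every entry and memory grows without bound; built with
-DFIXED=1 the per-transaction cleanup keeps the table bounded.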
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:07:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:07:54 +0000
Subject: [Bugs] [Bug 1694612] glusterd leaking memory when issued gluster
vol status all tasks continuosly
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694612
--- Comment #1 from Sanju ---
Root cause:
A key set in the dictionary priv->glusterd_txn_opinfo is leaked in every
'volume status all' transaction when the CLI fetches the list of volume names
as part of the first transaction.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:12:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:12:51 +0000
Subject: [Bugs] [Bug 1694612] glusterd leaking memory when issued gluster
vol status all tasks continuosly
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694612
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22466
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:12:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:12:52 +0000
Subject: [Bugs] [Bug 1694612] glusterd leaking memory when issued gluster
vol status all tasks continuosly
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694612
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22466 (glusterd: fix txn-id mem leak) posted
(#1) for review on release-5 by Sanju Rakonde
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:14:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:14:31 +0000
Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster
vol status all tasks continuosly
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694610
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22467
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:14:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:14:32 +0000
Subject: [Bugs] [Bug 1694610] glusterd leaking memory when issued gluster
vol status all tasks continuosly
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694610
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22467 (glusterd: fix txn-id mem leak) posted
(#1) for review on release-6 by Sanju Rakonde
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 09:26:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:26:48 +0000
Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal
deamon (shd) when a volume is stopped unless it is the last volume
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659708
Mohammed Rafi KC changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |POST
Resolution|NEXTRELEASE |---
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 09:29:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:29:54 +0000
Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal
deamon (shd) when a volume is stopped unless it is the last volume
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659708
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22468
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 09:29:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:29:55 +0000
Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal
deamon (shd) when a volume is stopped unless it is the last volume
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659708
--- Comment #15 from Worker Ant ---
REVIEW: https://review.gluster.org/22468 (client/fini: return fini after rpc
cleanup) posted (#1) for review on master by mohammed rafi kc
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 09:37:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 09:37:41 +0000
Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to
activate host in DC with glusterfs 3.12 domains
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672318
Netbulae changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(amukherj at redhat.com) |
|needinfo?(info at netbulae.com) |
--- Comment #26 from Netbulae ---
(In reply to Atin Mukherjee from comment #24)
> [2019-03-18 11:29:01.000279] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.14
>
> Why did we get a disconnect. Was glusterd service at *.14 not running?
>
> [2019-03-18 11:29:01.000330] I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: connecting to next volfile server *.*.*.15
> [2019-03-18 11:29:01.002495] E [rpc-clnt.c:346:saved_frames_unwind] (-->
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fb4beddbfbb] (-->
> /lib64/libgfrpc.so.0(+0xce11)[0x7fb4beba4e11] (-->
> /lib64/libgfrpc.so.0(+0xcf2e)[0x7fb4beba4f2e] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7fb4beba6531] (-->
> /lib64/libgfrpc.so.0(+0xf0d8)[0x7fb4beba70d8] ))))) 0-glusterfs: forced
> unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at
> 2019-03-18 11:13:29.445101 (xid=0x2)
>
> The above log seems to be the culprit here.
>
> [2019-03-18 11:29:01.002517] E [glusterfsd-mgmt.c:2136:mgmt_getspec_cbk]
> 0-mgmt: failed to fetch volume file (key:/ssd9)
>
> And the above log is the after effect.
>
>
> I have few questions:
>
> 1. Does the mount fail everytime?
Yes. It also stays the same when we switch the primary storage domain to
another one.
> 2. Do you see any change in the behaviour when the primary volfile server is
> changed?
No. I have a different primary volfile server across the volumes to spread the
load a bit more; the effect is always the same.
> 3. What are the gluster version in the individual peers?
All nodes and servers are on 3.12.15
>
> (Keeping the needinfo intact for now, but request Sahina to get us these
> details to work on).
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 10:03:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 10:03:52 +0000
Subject: [Bugs] [Bug 1694637] New: Geo-rep: Rename to an existing file name
destroys its content on slave
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694637
Bug ID: 1694637
Summary: Geo-rep: Rename to an existing file name destroys its
content on slave
Product: GlusterFS
Version: 5
OS: Linux
Status: NEW
Component: geo-replication
Severity: high
Assignee: bugs at gluster.org
Reporter: homma at allworks.co.jp
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Renaming a file to an existing file name on master results in an empty file on
slave.
Version-Release number of selected component (if applicable):
glusterfs 5.5-1.el7 from centos-gluster5 repository
How reproducible:
Always
Steps to Reproduce:
1. On the geo-rep master, create temporary files and rename them to existing
files repeatedly:
for n in {0..9}; do for i in {0..9}; do printf "%04d\n" $n > file$i.tmp; mv file$i.tmp file$i; done; done
2. List the created files on master and slave.
Actual results:
On master
$ ls -l
total 6
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file0
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file1
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file2
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file3
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file4
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file5
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file6
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file7
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file8
-rw-rw-r-- 1 centos centos 5 Apr 1 18:08 file9
On slave
$ ls -l
total 1
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file0
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file0.tmp
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file1
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file1.tmp
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file2
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file2.tmp
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file3
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file3.tmp
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file4
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file4.tmp
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file5
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file5.tmp
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file6
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file6.tmp
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file7
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file7.tmp
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file8
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file8.tmp
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file9
-rw-rw-r-- 1 centos centos 0 Apr 1 18:08 file9.tmp
Expected results:
Files are successfully renamed with correct contents on slave.
Additional info:
I have a 2-node replicated volume on master, and a single-node volume on slave.
Master volume:
Volume Name: www
Type: Replicate
Volume ID: bc99bbd2-20f9-4440-b51e-a1e105adfdf3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: fs01.localdomain:/glusterfs/www/brick1/brick
Brick2: fs02.localdomain:/glusterfs/www/brick1/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.build-pgfid: on
server.manage-gids: on
network.ping-timeout: 3
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
Slave volume:
Volume Name: www
Type: Distribute
Volume ID: 026a58f5-9696-4d9e-9674-74771526e880
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: fs21.localdomain:/glusterfs/www/brick1/brick
Options Reconfigured:
storage.build-pgfid: on
server.manage-gids: on
network.ping-timeout: 3
transport.address-family: inet
nfs.disable: on
Many messages as follows appear in gsyncd.log on master:
[2019-04-01 09:08:06.994154] I [master(worker
/glusterfs/www/brick1/brick):813:fix_possible_entry_failures] _GMaster: Entry
not present on master. Fixing gfid mismatch in slave. Deleting the entry
retry_count=1 entry=({'stat': {}, 'entry1':
'.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0', 'gfid':
'54ff5e4c-8565-4246-aa1d-0b2b59a8d577', 'link': None, 'entry':
'.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0.tmp', 'op': 'RENAME'}, 17,
{'slave_isdir': False, 'gfid_mismatch': True, 'slave_name': None, 'slave_gfid':
'df891073-b19c-481c-9916-f96790ff4d31', 'name_mismatch': False, 'dst': True})
[2019-04-01 09:08:07.33778] I [master(worker
/glusterfs/www/brick1/brick):813:fix_possible_entry_failures] _GMaster: Entry
not present on master. Fixing gfid mismatch in slave. Deleting the entry
retry_count=1 entry=({'uid': 1000, 'gfid':
'c2836641-1000-48b0-865e-2c9ea6815baf', 'gid': 1000, 'mode': 4294934964,
'entry': '.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0.tmp', 'op':
'CREATE'}, 17, {'slave_isdir': False, 'gfid_mismatch': True, 'slave_name':
None, 'slave_gfid': '54ff5e4c-8565-4246-aa1d-0b2b59a8d577', 'name_mismatch':
False, 'dst': False})
[2019-04-01 09:08:07.319814] I [master(worker
/glusterfs/www/brick1/brick):904:fix_possible_entry_failures] _GMaster: Fixing
ENOENT error in slave. Create parent directory on slave. retry_count=1
entry=({'stat': {'atime': 1554109682.6345513, 'gid': 1000, 'mtime':
1554109682.6455512, 'mode': 33204, 'uid': 1000}, 'entry1':
'.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0', 'gfid':
'5755b878-9ba6-4da4-aa27-28cf6defd06e', 'link': None, 'entry':
'.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0.tmp', 'op': 'RENAME'}, 2,
{'slave_isdir': False, 'gfid_mismatch': False, 'slave_name': None,
'slave_gfid': None, 'name_mismatch': False, 'dst': False})
[2019-04-01 09:08:13.855005] E [master(worker
/glusterfs/www/brick1/brick):784:log_failures] _GMaster: ENTRY FAILED
data=({'uid': 1000, 'gfid': '5755b878-9ba6-4da4-aa27-28cf6defd06e', 'gid':
1000, 'mode': 4294934964, 'entry':
'.gfid/1915ab69-f1cd-42bf-8e75-0507ac765b58/file0.tmp', 'op': 'CREATE'}, 17,
{'slave_isdir': False, 'gfid_mismatch': True, 'slave_name': None, 'slave_gfid':
'54ff5e4c-8565-4246-aa1d-0b2b59a8d577', 'name_mismatch': False, 'dst': False})
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 10:13:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 10:13:42 +0000
Subject: [Bugs] [Bug 1624701] error-out {inode,
entry}lk fops with all-zero lk-owner
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1624701
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22469 (cluster/afr: Send inodelk/entrylk
with non-zero lk-owner) posted (#1) for review on master by Pranith Kumar
Karampuri
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 10:13:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 10:13:41 +0000
Subject: [Bugs] [Bug 1624701] error-out {inode,
entry}lk fops with all-zero lk-owner
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1624701
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22469
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 12:14:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 12:14:53 +0000
Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to
activate host in DC with glusterfs 3.12 domains
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672318
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(info at netbulae.com)
--- Comment #27 from Atin Mukherjee ---
Since I'm unable to reproduce this even after multiple attempts, the only way I
can make progress is to ask you to test different combinations. I understand
this might be frustrating, but I have no other way to pinpoint the issue at
this point. In my local setup I tried every possible option to simulate this,
but had no success.
As I explained in comment 24, it seems the client couldn't get the volfile from
the glusterd instance running on *.15. However, since there is no log entry at
INFO level in glusterd that would indicate the cause of this failure, may I
request that you do the following, if possible?
1. Run 'pkill glusterd; glusterd' on the *.15 node.
2. Attempt the mount from the client.
3. Find the 'failed to fetch volume file' failure log and note its timestamp.
Map this timestamp to the glusterd log and send us a snippet of the log entries
around it.
4. Run the 'gluster v info' command on all the nodes and paste back the output.
5. Provide the output of 'gluster v get all cluster.op-version' from one of the
nodes.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 12:54:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 12:54:54 +0000
Subject: [Bugs] [Bug 1694139] Error waiting for job
'heketi-storage-copy-job' to complete on one-node k3s deployment.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694139
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |amukherj at redhat.com
Flags| |needinfo?(it.sergm at gmail.co
| |m)
--- Comment #1 from Atin Mukherjee ---
Could you elaborate on the problem a bit more? Are you seeing a volume mount
failing, or something wrong with the clustering? From a quick scan through the
bug report, I don't see anything problematic from the glusterd end.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 12:55:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 12:55:50 +0000
Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on
brick_mux environmet
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1684404
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-04-01 12:55:50
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 13:10:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 13:10:20 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22471
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 13:10:21 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 13:10:21 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #605 from Worker Ant ---
REVIEW: https://review.gluster.org/22471 (build: conditional rpcbind for gnfs
in glusterd.service) posted (#1) for review on master by Kaleb KEITHLEY
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 13:39:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 13:39:22 +0000
Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling
upgrade.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694010
Sanju changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
CC| |srakonde at redhat.com
Resolution|--- |NOTABUG
Last Closed| |2019-04-01 13:39:22
--- Comment #7 from Sanju ---
I've tested a rolling upgrade from 3.12 to 6 but haven't seen any issue. The
cluster is in a healthy state and all peers are in the connected state. Based
on my experience and comment 6, I'm closing this as not a bug. Please feel free
to re-open the bug if you face it again.
Thanks,
Sanju
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 14:30:01 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 14:30:01 +0000
Subject: [Bugs] [Bug 1690254] Volume create fails with "Commit failed"
message if volumes is created using 3 nodes with glusterd restarts on 4th
node.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1690254
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
CC| |amukherj at redhat.com
Resolution|--- |NOTABUG
Last Closed| |2019-04-01 14:30:01
--- Comment #1 from Atin Mukherjee ---
The current behavior is as per design. Please remember that in GD1, every node
has to participate in the transaction, and the commit phase should succeed
irrespective of whether the bricks are hosted on m out of n nodes in the
trusted storage pool.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 14:33:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 14:33:42 +0000
Subject: [Bugs] [Bug 1690753] Volume stop when quorum not met is successful
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1690753
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Keywords| |Triaged
CC| |amukherj at redhat.com
Assignee|bugs at gluster.org |risjain at redhat.com
--- Comment #1 from Atin Mukherjee ---
This looks like a bug and should be an easy fix.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 17:45:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 17:45:28 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22473
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 17:45:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 17:45:30 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #606 from Worker Ant ---
REVIEW: https://review.gluster.org/22473 ([WIP][RFC]mem-pool: set ptr to 0x0
after free'ed.) posted (#1) for review on master by Yaniv Kaul
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 19:01:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 19:01:58 +0000
Subject: [Bugs] [Bug 1694820] New: Issue in heavy rename workload
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694820
Bug ID: 1694820
Summary: Issue in heavy rename workload
Product: GlusterFS
Version: mainline
Status: NEW
Component: geo-replication
Assignee: bugs at gluster.org
Reporter: sunkumar at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
This problem only exists in heavy RENAME workloads where parallel renames are
frequent or a RENAME is done with an existing destination.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Run frequent RENAME on master mount and check for sync in slave.
Ex - while true; do uuid="`uuidgen`"; echo "some data" > "test$uuid"; mv
"test$uuid" "test" -f; done
Actual results:
Does not sync renames properly and creates multiple files on the slave.
Expected results:
Should sync renames.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 19:02:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 19:02:18 +0000
Subject: [Bugs] [Bug 1694820] Issue in heavy rename workload
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694820
Sunny Kumar changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |sunkumar at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Apr 1 19:10:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 19:10:44 +0000
Subject: [Bugs] [Bug 1694820] Issue in heavy rename workload
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694820
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22474
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Apr 1 19:10:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 01 Apr 2019 19:10:45 +0000
Subject: [Bugs] [Bug 1694820] Issue in heavy rename workload
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694820
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22474 (geo-rep: fix rename with existing
gfid) posted (#1) for review on master by Sunny Kumar
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 04:37:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 04:37:39 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
asender at testlabs.com.au changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |asender at testlabs.com.au
--- Comment #11 from asender at testlabs.com.au ---
You can try this simple test to reproduce the problem.
On Master
[svc_sp_st_script at hplispnfs30079 conf]$ touch test.txt
[svc_sp_st_script at hplispnfs30079 conf]$ vi test.txt
a
b
c
d
[svc_sp_st_script at hplispnfs30079 conf]$ ll test.txt
-rw-r----- 1 svc_sp_st_script domain users 8 Apr 2 14:59 test.txt
On Slave
[root at hplispnfs40079 conf]# ll test.txt
-rw-r----- 1 svc_sp_st_script domain users 8 Apr 2 14:59 test.txt
[root at hplispnfs40079 conf]# cat test.txt
a
b
c
d
On Master
[svc_sp_st_script at hplispnfs30079 conf]$ mv test.txt test-moved.txt
[svc_sp_st_script at hplispnfs30079 conf]$ ll test-moved.txt
-rw-r----- 1 svc_sp_st_script domain users 8 Apr 2 14:59 test-moved.txt
On Slave
The file is not deleted; test-moved.txt does not exist and has not been replicated.
[root at hplispnfs40079 conf]# ll testfile
-rw-r----- 1 svc_sp_st_script domain users 6 Apr 2 14:52 testfile
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 04:38:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 04:38:35 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
--- Comment #12 from asender at testlabs.com.au ---
I also tried setting use_tarssh:true but this did not change the behavior.
[root at hplispnfs30079 conf]# gluster volume geo-replication common
hplispnfs40079::common config
access_mount:false
allow_network:
change_detector:changelog
change_interval:5
changelog_archive_format:%Y%m
changelog_batch_size:727040
changelog_log_file:/var/log/glusterfs/geo-replication/common_hplispnfs40079_common/changes-${local_id}.log
changelog_log_level:INFO
checkpoint:0
chnagelog_archive_format:%Y%m
cli_log_file:/var/log/glusterfs/geo-replication/cli.log
cli_log_level:INFO
connection_timeout:60
georep_session_working_dir:/var/lib/glusterd/geo-replication/common_hplispnfs40079_common/
gluster_cli_options:
gluster_command:gluster
gluster_command_dir:/usr/sbin
gluster_log_file:/var/log/glusterfs/geo-replication/common_hplispnfs40079_common/mnt-${local_id}.log
gluster_log_level:INFO
gluster_logdir:/var/log/glusterfs
gluster_params:aux-gfid-mount acl
gluster_rundir:/var/run/gluster
glusterd_workdir:/var/lib/glusterd
gsyncd_miscdir:/var/lib/misc/gluster/gsyncd
ignore_deletes:false
isolated_slaves:
log_file:/var/log/glusterfs/geo-replication/common_hplispnfs40079_common/gsyncd.log
log_level:INFO
log_rsync_performance:false
master_disperse_count:1
master_replica_count:1
max_rsync_retries:10
meta_volume_mnt:/var/run/gluster/shared_storage
pid_file:/var/run/gluster/gsyncd-common-hplispnfs40079-common.pid
remote_gsyncd:/usr/libexec/glusterfs/gsyncd
replica_failover_interval:1
rsync_command:rsync
rsync_opt_existing:true
rsync_opt_ignore_missing_args:true
rsync_options:
rsync_ssh_options:
slave_access_mount:false
slave_gluster_command_dir:/usr/sbin
slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/common_hplispnfs40079_common/mnt-${master_node}-${master_brick_id}.log
slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/common_hplispnfs40079_common/mnt-mbr-${master_node}-${master_brick_id}.log
slave_gluster_log_level:INFO
slave_gluster_params:aux-gfid-mount acl
slave_log_file:/var/log/glusterfs/geo-replication-slaves/common_hplispnfs40079_common/gsyncd.log
slave_log_level:INFO
slave_timeout:120
special_sync_mode:
ssh_command:ssh
ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem
ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/tar_ssh.pem
ssh_port:22
state_file:/var/lib/glusterd/geo-replication/common_hplispnfs40079_common/monitor.status
state_socket_unencoded:
stime_xattr_prefix:trusted.glusterfs.bb691a2e-801c-435b-a905-11ad249d43a7.ab3b208f-8cd1-4a2d-bf56-4a98434605c5
sync_acls:true
sync_jobs:3
sync_xattrs:true
tar_command:tar
use_meta_volume:true
use_rsync_xattrs:false
use_tarssh:true
working_dir:/var/lib/misc/gluster/gsyncd/common_hplispnfs40079_common/
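For reference, a session option such as use_tarssh is typically set with
something like the following (a sketch using the master volume and slave host
from the output above):
gluster volume geo-replication common hplispnfs40079::common config use_tarssh true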
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 05:09:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 05:09:18 +0000
Subject: [Bugs] [Bug 1694920] New: Inconsistent locking in presence of
disconnects
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694920
Bug ID: 1694920
Summary: Inconsistent locking in presence of disconnects
Product: GlusterFS
Version: mainline
Hardware: x86_64
OS: Linux
Status: NEW
Component: protocol
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: rgowdapp at redhat.com
CC: bkunal at redhat.com, ccalhoun at redhat.com,
james.c.buckley at vumc.org, kdhananj at redhat.com,
nchilaka at redhat.com, pkarampu at redhat.com,
ravishankar at redhat.com, rgowdapp at redhat.com,
rhinduja at redhat.com, rhs-bugs at redhat.com,
rkavunga at redhat.com, sankarshan at redhat.com,
storage-qa-internal at redhat.com
Depends On: 1689375
Target Milestone: ---
Group: redhat
Classification: Community
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 05:32:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 05:32:05 +0000
Subject: [Bugs] [Bug 1694925] New: GF_LOG_OCCASSIONALLY API doesn't log at
first instance
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694925
Bug ID: 1694925
Summary: GF_LOG_OCCASSIONALLY API doesn't log at first instance
Product: GlusterFS
Version: mainline
Status: NEW
Component: logging
Assignee: bugs at gluster.org
Reporter: amukherj at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
GF_LOG_OCCASSIONALLY doesn't log on the first instance; rather, it logs only on
every 42nd iteration. This isn't effective because in some cases the code flow
might not hit the same log as many as 42 times, and we'd end up suppressing the
log entirely.
Version-Release number of selected component (if applicable):
Mainline
How reproducible:
Always
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 05:35:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 05:35:14 +0000
Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694925
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22475
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 05:35:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 05:35:15 +0000
Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694925
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY
API) posted (#1) for review on master by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 06:38:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:38:07 +0000
Subject: [Bugs] [Bug 1694943] New: parallel-readdir slows down directory
listing
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694943
Bug ID: 1694943
Summary: parallel-readdir slows down directory listing
Product: GlusterFS
Version: mainline
Status: NEW
Component: core
Assignee: bugs at gluster.org
Reporter: nbalacha at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
While running tests with the upstream master (HEAD at commit
dfa255ae7f2dab4fb3d84c67a0452c5b32455877), I noticed that enabling
parallel-readdir seems to increase the time taken for a directory listing:
Numbers from a pure distribute 3 brick volume:
Volume Name: pvol
Type: Distribute
Volume ID: c39c8c16-82d3-4b0b-8050-9c3d22c800ea
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: server1:/mnt/bricks/fsgbench0002/brick-0
Brick2: server1:/mnt/bricks/fsgbench0003/brick-0
Brick3: server1:/mnt/bricks/fsgbench0004/brick-0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
The volume was mounted on /mnt/nithya and I created 10K directories and 10K
files in the volume root:
With readdir-ahead enabled:
----------------------------
[root at server2 nithya]# time ll |wc -l
20001
real 0m11.434s
user 0m0.116s
sys 0m0.241s
[root at server2 nithya]# time ll |wc -l
20001
real 0m6.825s
user 0m0.111s
sys 0m0.265s
With readdir-ahead and parallel-readdir enabled:
------------------------------------------------
[root at server2 nithya]# time ll |wc -l
20001
real 0m15.609s
user 0m0.148s
sys 0m0.379s
[root at server2 nithya]# time ll |wc -l
20001
real 0m9.930s
user 0m0.107s
sys 0m0.295s
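For reference, a rough sketch of how the two configurations above can be
toggled and re-measured (volume name and mount point are taken from this
report; the drop_caches step is an assumption to keep cold-cache runs
comparable):
# readdir-ahead only
gluster volume set pvol performance.parallel-readdir off
gluster volume set pvol performance.readdir-ahead on
echo 3 > /proc/sys/vm/drop_caches   # assumption: clear caches between runs
time ls -l /mnt/nithya | wc -l
# readdir-ahead plus parallel-readdir
gluster volume set pvol performance.parallel-readdir on
echo 3 > /proc/sys/vm/drop_caches
time ls -l /mnt/nithya | wc -l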
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 06:38:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:38:27 +0000
Subject: [Bugs] [Bug 1694943] parallel-readdir slows down directory listing
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694943
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs at gluster.org |rgowdapp at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 06:48:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:48:08 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |khiremat at redhat.com
Depends On| |1583018
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1583018
[Bug 1583018] changelog: Changelog is not capturing rename of files
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 06:48:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:48:08 +0000
Subject: [Bugs] [Bug 1583018] changelog: Changelog is not capturing rename
of files
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1583018
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1660225
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
[Bug 1660225] geo-rep does not replicate mv or rename of file
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 06:48:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:48:46 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
--- Comment #13 from Kotresh HR ---
This issue is fixed in upstream and in the 5.x and 6.x series.
Patch: https://review.gluster.org/#/c/glusterfs/+/20093/
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 06:49:40 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:49:40 +0000
Subject: [Bugs] [Bug 1694139] Error waiting for job
'heketi-storage-copy-job' to complete on one-node k3s deployment.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694139
it.sergm at gmail.com changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(it.sergm at gmail.co |
|m) |
--- Comment #2 from it.sergm at gmail.com ---
The thing is, I don't see any exception there either, and the heketidbstorage
volume can be mounted manually (no files inside), but it still isn't working
with k3s: the pod seemingly cannot mount the needed volumes before starting.
K3s itself works fine and can deploy other workloads with no errors.
I could be wrong, but there is only one volume listed from the gluster pod:
[root at k3s-gluster /]# gluster volume list
heketidbstorage
but judging from the pod's error there should be more:
3d12h Warning FailedMount Pod Unable to mount
volumes for pod
"heketi-storage-copy-job-qzpr7_kube-system(36e1b013-5200-11e9-a826-227e2ba50104)":
timeout expired waiting for volumes to attach or mount for pod
"kube-system"/"heketi-storage-copy-job-qzpr7". list of unmounted
volumes=[heketi-storage]. list of unattached volumes=[heketi-storage
heketi-storage-secret default-token-98jvk]
Here is the list of volumes on gluster pod:
[root at k3s-gluster /]# df -h
Filesystem
Size Used Avail Use% Mounted on
overlay
9.8G 6.9G 2.5G 74% /
udev
3.9G 0 3.9G 0% /dev
/dev/vda2
9.8G 6.9G 2.5G 74% /run
tmpfs
798M 1.3M 797M 1% /run/lvm
tmpfs
3.9G 0 3.9G 0% /dev/shm
tmpfs
3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs
3.9G 12K 3.9G 1% /run/secrets/kubernetes.io/serviceaccount
/dev/mapper/vg_fef96eab984d116ab3815e7479781110-brick_65d5aa6369e265d641f3557e6c9736b7
2.0G 33M 2.0G 2%
/var/lib/heketi/mounts/vg_fef96eab984d116ab3815e7479781110/brick_65d5aa6369e265d641f3557e6c9736b7
[root at k3s-gluster /]# blkid
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/vda1: PARTUUID="258e4699-a592-442c-86d7-3d7ee4a0dfb7"
/dev/vda2: UUID="b394d2be-6b9e-11e8-82ca-22c5fe683ae4" TYPE="ext4"
PARTUUID="97104384-f79f-4a39-b3d4-56d717673a18"
/dev/vdb: UUID="RUR8Cw-eVYg-H26e-yQ4g-7YCe-NzNg-ocJazb" TYPE="LVM2_member"
/dev/mapper/vg_fef96eab984d116ab3815e7479781110-brick_65d5aa6369e265d641f3557e6c9736b7:
UUID="ab0e969f-ae85-459c-914f-b008aeafb45e" TYPE="xfs"
Also, here is what I've found on the main node - the host IP is NULL (by the
way, I've changed the topology before, with external and private IPs - nothing
changed for this):
root at k3s-gluster:~# cat /var/log/glusterfs/cli.log.1
[2019-03-29 08:54:07.634136] I [cli.c:773:main] 0-cli: Started running gluster
with version 4.1.7
[2019-03-29 08:54:07.678012] I [MSGID: 101190]
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2019-03-29 08:54:07.678105] I [socket.c:2632:socket_event_handler]
0-transport: EPOLLERR - disconnecting now
[2019-03-29 08:54:07.678268] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs:
error returned while attempting to connect to host:(null), port:0
[2019-03-29 08:54:07.721606] I [cli-rpc-ops.c:1169:gf_cli_create_volume_cbk]
0-cli: Received resp to create volume
[2019-03-29 08:54:07.721773] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2019-03-29 08:54:07.817416] I [cli.c:773:main] 0-cli: Started running gluster
with version 4.1.7
[2019-03-29 08:54:07.861767] I [MSGID: 101190]
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2019-03-29 08:54:07.861943] I [socket.c:2632:socket_event_handler]
0-transport: EPOLLERR - disconnecting now
[2019-03-29 08:54:07.862016] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs:
error returned while attempting to connect to host:(null), port:0
[2019-03-29 08:54:08.009116] I [cli-rpc-ops.c:1472:gf_cli_start_volume_cbk]
0-cli: Received resp to start volume
[2019-03-29 08:54:08.009314] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2019-03-29 14:18:51.209759] I [cli.c:773:main] 0-cli: Started running gluster
with version 4.1.7
[2019-03-29 14:18:51.256846] I [MSGID: 101190]
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2019-03-29 14:18:51.256985] I [socket.c:2632:socket_event_handler]
0-transport: EPOLLERR - disconnecting now
[2019-03-29 14:18:51.257093] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs:
error returned while attempting to connect to host:(null), port:0
[2019-03-29 14:18:51.259408] I [cli-rpc-ops.c:875:gf_cli_get_volume_cbk] 0-cli:
Received resp to get vol: 0
[2019-03-29 14:18:51.259587] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs:
error returned while attempting to connect to host:(null), port:0
[2019-03-29 14:18:51.260102] I [cli-rpc-ops.c:875:gf_cli_get_volume_cbk] 0-cli:
Received resp to get vol: 0
[2019-03-29 14:18:51.260143] I [input.c:31:cli_batch] 0-: Exiting with: 0
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 06:49:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:49:42 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
--- Comment #14 from Kotresh HR ---
Workaround:
The issue affects only single-distribute volumes, i.e. 1*2 and 1*3 volumes.
It doesn't affect n*2 or n*3 volumes where n>1. So one way to fix it is to
convert the single-distribute volume into a two-distribute volume, or to
upgrade to a later version if waiting for the next 4.1.x release isn't an
option. (A sketch of the conversion is included below.)
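A rough sketch of the conversion step (hypothetical hostnames and brick paths;
the replica count must match the existing volume, and a rebalance is normally
run after adding bricks):
# turn a 1x3 replicate volume into a 2x3 distributed-replicate volume
gluster volume add-brick <volname> replica 3 \
    newhost1:/bricks/b1 newhost2:/bricks/b1 newhost3:/bricks/b1
gluster volume rebalance <volname> start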
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 06:55:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:55:56 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22476
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 06:55:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:55:56 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #15 from Worker Ant ---
REVIEW: https://review.gluster.org/22476 (cluster/dht: Fix rename journal in
changelog) posted (#1) for review on release-4.1 by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 06:57:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 06:57:53 +0000
Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling
upgrade.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694010
--- Comment #8 from Nithya Balachandran ---
(In reply to Sanju from comment #7)
> I've tested rolling upgrade from 3.12 to 6, but haven't seen any issue. The
> cluster is in a healthy state and all peers are in connected state. Based on
> my experience and comment 6, I'm closing this as not a bug. Please, feel
> free to re-open the bug if you face it.
>
> Thanks,
> Sanju
What about the upgrades from the other versions? This BZ refers to upgrades to
release 6 from 3.12, 4 and 5.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 08:15:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 08:15:51 +0000
Subject: [Bugs] [Bug 1694976] New: On Fedora 29 GlusterFS 4.1 repo has
bad/missing rpm signs
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694976
Bug ID: 1694976
Summary: On Fedora 29 GlusterFS 4.1 repo has bad/missing rpm
signs
Product: GlusterFS
Version: 4.1
OS: Linux
Status: NEW
Component: unclassified
Severity: high
Assignee: bugs at gluster.org
Reporter: bence at noc.elte.hu
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
On Fedora 29, upgrading glusterfs from 4.1.7 to 4.1.8 failed because of
missing/bad rpm signatures.
Version-Release number of selected component (if applicable):
GlusterFS 4.1.8 on Fedora 29
How reproducible:
Installed glusterfs 4.1.7 earlier from the
https://download.gluster.org/pub/gluster/glusterfs/4.1/ repo. Now the upgrade
to 4.1.8 fails.
Steps to Reproduce:
1. Install Fedora 29 Workstation
2. Disable glusterfs* packages from base/updates
3. Setup repo glusterfs-41-fedora as follows:
cat /etc/yum.repos.d/glusterfs-41-fedora.repo
[glusterfs-fedora]
name=GlusterFS is a clustered file-system capable of scaling to several
petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/4.1/LATEST/Fedora/fedora-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub
4. install glusterfs client packages 4.1.7 version
5. upgrade to 4.1.8 released on 2019-03-28 11:21
Actual results:
# dnf update
Last metadata expiration check: 0:00:23 ago on Tue 02 Apr 2019 09:55:04 AM
CEST.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Upgrading
glusterfs x86_64 4.1.8-1.fc29 glusterfs-fedora 618 k
glusterfs-api x86_64 4.1.8-1.fc29 glusterfs-fedora 82 k
glusterfs-cli x86_64 4.1.8-1.fc29 glusterfs-fedora 189 k
glusterfs-client-xlators x86_64 4.1.8-1.fc29 glusterfs-fedora 942 k
glusterfs-fuse x86_64 4.1.8-1.fc29 glusterfs-fedora 126 k
glusterfs-libs x86_64 4.1.8-1.fc29 glusterfs-fedora 379 k
Transaction Summary
================================================================================
Upgrade 6 Packages
Total download size: 2.3 M
Is this ok [y/N]: y
Downloading Packages:
(1/6): glusterfs-api-4.1.8-1.fc29.x86_64.rpm 64 kB/s | 82 kB 00:01
(2/6): glusterfs-cli-4.1.8-1.fc29.x86_64.rpm 145 kB/s | 189 kB 00:01
(3/6): glusterfs-4.1.8-1.fc29.x86_64.rpm 375 kB/s | 618 kB 00:01
(4/6): glusterfs-fuse-4.1.8-1.fc29.x86_64.rpm 290 kB/s | 126 kB 00:00
(5/6): glusterfs-client-xlators-4.1.8-1.fc29.x8 1.7 MB/s | 942 kB 00:00
(6/6): glusterfs-libs-4.1.8-1.fc29.x86_64.rpm 1.3 MB/s | 379 kB 00:00
--------------------------------------------------------------------------------
Total 1.2 MB/s | 2.3 MB 00:01
warning:
/var/cache/dnf/glusterfs-fedora-80772cffdd565d3f/packages/glusterfs-4.1.8-1.fc29.x86_64.rpm:
Header V4 RSA/SHA256 Signature, key ID c2f8238c: NOKEY
GlusterFS is a clustered file-system capable of 2.9 kB/s | 1.7 kB 00:00
Importing GPG key 0x78FA6D97:
Userid : "Gluster Packager "
Fingerprint: EED3 351A FD72 E543 7C05 0F03 88F6 CDEE 78FA 6D97
From : http://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub
Is this ok [y/N]: y
Key imported successfully
Import of key(s) didn't help, wrong key(s)?
Public key for glusterfs-4.1.8-1.fc29.x86_64.rpm is not installed. Failing
package is: glusterfs-4.1.8-1.fc29.x86_64
GPG Keys are configured as:
https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub
Public key for glusterfs-api-4.1.8-1.fc29.x86_64.rpm is not installed. Failing
package is: glusterfs-api-4.1.8-1.fc29.x86_64
GPG Keys are configured as:
https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub
Public key for glusterfs-cli-4.1.8-1.fc29.x86_64.rpm is not installed. Failing
package is: glusterfs-cli-4.1.8-1.fc29.x86_64
GPG Keys are configured as:
https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub
Public key for glusterfs-client-xlators-4.1.8-1.fc29.x86_64.rpm is not
installed. Failing package is: glusterfs-client-xlators-4.1.8-1.fc29.x86_64
GPG Keys are configured as:
https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub
Public key for glusterfs-fuse-4.1.8-1.fc29.x86_64.rpm is not installed. Failing
package is: glusterfs-fuse-4.1.8-1.fc29.x86_64
GPG Keys are configured as:
https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub
Public key for glusterfs-libs-4.1.8-1.fc29.x86_64.rpm is not installed. Failing
package is: glusterfs-libs-4.1.8-1.fc29.x86_64
GPG Keys are configured as:
https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub
The downloaded packages were saved in cache until the next successful
transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: GPG check FAILED
Expected results:
Upgrade to 4.1.8 succeeds.
Additional info:
CentOS 7 from the GlusterFS 4.1 repo upgrades as expected as of 01/04/2019.
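As a possible interim workaround (a sketch only, not a fix for the signing
problem itself): the output above suggests the packages are signed with a key
(c2f8238c) that differs from the published rsa.pub, so re-importing the
published key is unlikely to help; skipping the GPG check for this one
transaction, at your own risk, should let the upgrade through:
rpm --import https://download.gluster.org/pub/gluster/glusterfs/4.1/rsa.pub   # likely ineffective here, see above
dnf --nogpgcheck update 'glusterfs*'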
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 10:17:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 10:17:39 +0000
Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling
upgrade.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694010
--- Comment #9 from Sanju ---
(In reply to Nithya Balachandran from comment #8)
> What about the upgrades from the other versions? This BZ refers to upgrades
> to release 6 from 3.12, 4 and 5.
I did test upgrade to release 6 from 4 and 5. Haven't seen any issue.
Thanks,
Sanju
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 10:57:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 10:57:31 +0000
Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694925
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-04-02 10:57:31
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY
API) merged (#2) on master by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 12:19:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 12:19:19 +0000
Subject: [Bugs] [Bug 1624701] error-out {inode,
entry}lk fops with all-zero lk-owner
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1624701
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22469 (cluster/afr: Send inodelk/entrylk
with non-zero lk-owner) merged (#3) on master by Pranith Kumar Karampuri
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 12:42:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 12:42:46 +0000
Subject: [Bugs] [Bug 1482909] RFE : Enable glusterfs md cache for nfs-ganesha
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1482909
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1695072
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695072
[Bug 1695072] Doc changes for [RFE]nfs-ganesha: optimize FSAL_GLUSTER upcall
mechanism
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Apr 2 13:21:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 13:21:23 +0000
Subject: [Bugs] [Bug 1695099] New: The number of glusterfs processes keeps
increasing, using all available resources
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695099
Bug ID: 1695099
Summary: The number of glusterfs processes keeps increasing,
using all available resources
Product: GlusterFS
Version: 5
Hardware: x86_64
OS: Linux
Status: NEW
Component: glusterd
Severity: high
Assignee: bugs at gluster.org
Reporter: christian.ihle at drift.oslo.kommune.no
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
During normal operations, CPU and memory usage gradually increase to 100%,
consumed by a large number of glusterfs processes. The result is slowness and
resource starvation. The issue started happening with GlusterFS 5.2 and did not
improve with 5.5. We did not see this issue in 3.12.
Version-Release number of selected component (if applicable):
GlusterFS 5.2 and 5.5
Heketi 8.0.0
CentOS 7.6
How reproducible:
Users of the cluster hit this issue pretty often by creating and deleting
volumes quickly, from Kubernetes (using Heketi to control GlusterFS). Sometimes
we hit 100% resource usage several times a day.
Steps to Reproduce:
1. Create volume
2. Delete volume
3. Repeat quickly
Actual results:
CPU usage and memory usage increase, and the number of glusterfs processes
increases. I have to log in to each node in the cluster and kill old processes
to make the nodes responsive again; otherwise the nodes eventually freeze from
resource starvation.
Expected results:
CPU and memory usage should only spike shortly, and not continue to increase,
and there should be only one glusterfs process.
Additional info:
I found some issues that look similar:
* https://github.com/gluster/glusterfs/issues/625
* https://github.com/heketi/heketi/issues/1439
Log output from a time when resource usage increased
(/var/log/glusterfs/glusterd.log):
[2019-04-01 12:07:23.377715] W [MSGID: 101095]
[xlator.c:180:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/5.5/xlator/nfs/server.so: cannot open shared object file:
Ingen slik fil eller filkatalog
[2019-04-01 12:07:23.684561] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6f9a) [0x7fd04cc46f9a]
-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management:
Ran script: /var/lib/glusterd/hooks/1/create/post/S10selinux-label-brick.sh
--volname=vol_45653f46dbc8953f876a009b4ea8dd26
[2019-04-01 12:07:26.931683] I [rpc-clnt.c:1000:rpc_clnt_connection_init]
0-snapd: setting frame-timeout to 600
[2019-04-01 12:07:26.932340] I [rpc-clnt.c:1000:rpc_clnt_connection_init]
0-gfproxyd: setting frame-timeout to 600
[2019-04-01 12:07:26.932667] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped
[2019-04-01 12:07:26.932707] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: nfs service is
stopped
[2019-04-01 12:07:26.932731] I [MSGID: 106599]
[glusterd-nfs-svc.c:81:glusterd_nfssvc_manager] 0-management: nfs/server.so
xlator is not installed
[2019-04-01 12:07:26.963055] I [MSGID: 106568]
[glusterd-proc-mgmt.c:92:glusterd_proc_stop] 0-management: Stopping glustershd
daemon running in pid: 16020
[2019-04-01 12:07:27.963708] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is
stopped
[2019-04-01 12:07:27.963951] I [MSGID: 106567]
[glusterd-svc-mgmt.c:220:glusterd_svc_start] 0-management: Starting glustershd
service
[2019-04-01 12:07:28.985311] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2019-04-01 12:07:28.985478] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is
stopped
[2019-04-01 12:07:28.989024] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already
stopped
[2019-04-01 12:07:28.989098] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is
stopped
[2019-04-01 12:07:29.299841] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6f9a) [0x7fd04cc46f9a]
-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management:
Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
--volname=vol_45653f46dbc8953f876a009b4ea8dd26 --first=no --version=1
--volume-op=start --gd-workdir=/var/lib/glusterd
[2019-04-01 12:07:29.338437] E [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6f9a) [0x7fd04cc46f9a]
-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe69c3) [0x7fd04cc469c3]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management:
Failed to execute script:
/var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
--volname=vol_45653f46dbc8953f876a009b4ea8dd26 --first=no --version=1
--volume-op=start --gd-workdir=/var/lib/glusterd
[2019-04-01 12:07:52.658922] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x3b2dd) [0x7fd04cb9b2dd]
-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management:
Ran script: /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
--volname=vol_c5112b1e28a7bbc96640a8572009c6f0 --last=no
[2019-04-01 12:07:52.679220] E [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x3b2dd) [0x7fd04cb9b2dd]
-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe69c3) [0x7fd04cc469c3]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management:
Failed to execute script: /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
--volname=vol_c5112b1e28a7bbc96640a8572009c6f0 --last=no
[2019-04-01 12:07:52.681081] I [MSGID: 106542]
[glusterd-utils.c:8440:glusterd_brick_signal] 0-glusterd: sending signal 15 to
brick with pid 27595
[2019-04-01 12:07:53.732699] I [MSGID: 106599]
[glusterd-nfs-svc.c:81:glusterd_nfssvc_manager] 0-management: nfs/server.so
xlator is not installed
[2019-04-01 12:07:53.791560] I [MSGID: 106568]
[glusterd-proc-mgmt.c:92:glusterd_proc_stop] 0-management: Stopping glustershd
daemon running in pid: 18583
[2019-04-01 12:07:53.791857] I [MSGID: 106143]
[glusterd-pmap.c:389:pmap_registry_remove] 0-pmap: removing brick
/var/lib/heketi/mounts/vg_799fbf11286fbf497605bbe58c3e9dfa/brick_08bfe132dad6099ab387555298466ca3/brick
on port 49162
[2019-04-01 12:07:53.822032] I [MSGID: 106006]
[glusterd-svc-mgmt.c:356:glusterd_svc_common_rpc_notify] 0-management:
glustershd has disconnected from glusterd.
[2019-04-01 12:07:54.792497] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is
stopped
[2019-04-01 12:07:54.792736] I [MSGID: 106567]
[glusterd-svc-mgmt.c:220:glusterd_svc_start] 0-management: Starting glustershd
service
[2019-04-01 12:07:55.812655] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2019-04-01 12:07:55.812837] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is
stopped
[2019-04-01 12:07:55.816580] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already
stopped
[2019-04-01 12:07:55.816672] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is
stopped
[2019-04-01 12:07:59.829927] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x3b2dd) [0x7fd04cb9b2dd]
-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management:
Ran script: /var/lib/glusterd/hooks/1/delete/pre/S10selinux-del-fcontext.sh
--volname=vol_c5112b1e28a7bbc96640a8572009c6f0
[2019-04-01 12:07:59.951300] I [MSGID: 106495]
[glusterd-handler.c:3118:__glusterd_handle_getwd] 0-glusterd: Received getwd
req
[2019-04-01 12:07:59.967584] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6f9a) [0x7fd04cc46f9a]
-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe6a65) [0x7fd04cc46a65]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fd058156955] ) 0-management:
Ran script: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post
--volname=vol_c5112b1e28a7bbc96640a8572009c6f0
[2019-04-01 12:07:53.732626] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped
[2019-04-01 12:07:53.732677] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: nfs service is
stopped
Examples of errors about deleted volumes from /var/log/glusterfs/glustershd.log
- we get gigabytes of these every day:
[2019-04-02 09:57:08.997572] E [MSGID: 108006]
[afr-common.c:5314:__afr_handle_child_down_event]
10-vol_3be8a34875cc37098593d4bc8740477b-replicate-0: All subvolumes are down.
Going offline until at least one of them comes back up.
[2019-04-02 09:57:09.033441] E [MSGID: 108006]
[afr-common.c:5314:__afr_handle_child_down_event]
26-vol_2399e6ef0347ac569a0b1211f1fd109d-replicate-0: All subvolumes are down.
Going offline until at least one of them comes back up.
[2019-04-02 09:57:09.036003] E [MSGID: 108006]
[afr-common.c:5314:__afr_handle_child_down_event]
40-vol_fafddd8a937a550fbefc6c54830ce44f-replicate-0: All subvolumes are down.
Going offline until at least one of them comes back up.
[2019-04-02 09:57:09.077109] E [MSGID: 108006]
[afr-common.c:5314:__afr_handle_child_down_event]
2-vol_bca47201841f5b50d341eb2bedf5cd46-replicate-0: All subvolumes are down.
Going offline until at least one of them comes back up.
[2019-04-02 09:57:09.103495] E [MSGID: 108006]
[afr-common.c:5314:__afr_handle_child_down_event]
24-vol_fafddd8a937a550fbefc6c54830ce44f-replicate-0: All subvolumes are down.
Going offline until at least one of them comes back up.
[2019-04-02 09:57:09.455818] E [MSGID: 108006]
[afr-common.c:5314:__afr_handle_child_down_event]
30-vol_fafddd8a937a550fbefc6c54830ce44f-replicate-0: All subvolumes are down.
Going offline until at least one of them comes back up.
[2019-04-02 09:57:09.511070] E [MSGID: 108006]
[afr-common.c:5314:__afr_handle_child_down_event]
14-vol_cf3700764dfdce40d60b89fde7e1a643-replicate-0: All subvolumes are down.
Going offline until at least one of them comes back up.
[2019-04-02 09:57:09.490714] E [MSGID: 108006]
[afr-common.c:5314:__afr_handle_child_down_event]
0-vol_c5112b1e28a7bbc96640a8572009c6f0-replicate-0: All subvolumes are down.
Going offline until at least one of them comes back up.
Example of concurrent glusterfs processes on a node:
root 4559 16.8 6.5 14882048 1060288 ? Ssl Apr01 206:00
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 6507 14.7 6.1 14250324 998280 ? Ssl Apr01 178:33
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 6743 0.0 1.2 4780344 201708 ? Ssl Apr01 0:35
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 7660 17.0 6.3 14859244 1027432 ? Ssl Apr01 206:32
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 7789 0.1 1.5 5390364 250200 ? Ssl Apr01 1:08
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 9259 16.4 6.3 14841432 1029512 ? Ssl Apr01 198:12
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 12394 14.0 5.6 13549044 918424 ? Ssl Apr01 167:46
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 14980 9.2 4.7 11657716 778876 ? Ssl Apr01 110:10
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 16032 8.2 4.4 11040436 716020 ? Ssl Apr01 97:39
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 23961 6.3 3.7 9807736 610408 ? Ssl Apr01 62:03
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 25560 2.8 3.0 8474704 503488 ? Ssl Apr01 27:33
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 26293 3.2 1.2 4812208 200896 ? Ssl 09:26 0:35
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 28205 1.3 1.8 5992016 300012 ? Ssl Apr01 13:31
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 29186 1.4 2.1 6669800 352440 ? Ssl Apr01 13:59
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 30485 0.9 0.6 3527080 101552 ? Ssl 09:35 0:05
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 31171 1.0 0.6 3562360 104908 ? Ssl 09:35 0:05
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
root 32086 0.6 0.3 2925412 54852 ? Ssl 09:35 0:03
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/b1de56a0bbcc8779.socket --xlator-option
*replicate*.node-uuid=24419492-0d80-4a2a-9420-1dd92515eaf1 --process-name
glustershd
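As a stopgap for the manual cleanup mentioned above (a sketch, assuming the pid
file points at the glustershd instance glusterd currently tracks; verify before
killing anything):
# list all glustershd instances and read the pid glusterd tracks
pgrep -af "process-name glustershd"
keep=$(cat /var/run/gluster/glustershd/glustershd.pid)
# kill every glustershd process except the tracked one
for pid in $(pgrep -f "process-name glustershd"); do
    [ "$pid" != "$keep" ] && kill "$pid"
done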
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 20:14:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 20:14:09 +0000
Subject: [Bugs] [Bug 1695327] New: regression test fails with brick mux
enabled.
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695327
Bug ID: 1695327
Summary: regression test fails with brick mux enabled.
Product: GlusterFS
Version: mainline
Status: NEW
Component: tests
Assignee: bugs at gluster.org
Reporter: rabhat at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The test "tests/bitrot/bug-1373520.t" fails with the following error when it is
run with brick-multiplexing enabled.
[root at workstation glusterfs]# prove -rfv tests/bitrot/bug-1373520.t
tests/bitrot/bug-1373520.t ..
1..31
ok 1, LINENUM:8
ok 2, LINENUM:9
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:15
ok 7, LINENUM:16
volume set: failed: Volume patchy is not of replicate type
ok 8, LINENUM:23
ok 9, LINENUM:24
ok 10, LINENUM:25
ok 11, LINENUM:28
ok 12, LINENUM:29
ok 13, LINENUM:32
ok 14, LINENUM:33
ok 15, LINENUM:36
ok 16, LINENUM:38
ok 17, LINENUM:41
getfattr: Removing leading '/' from absolute path names
ok 18, LINENUM:47
ok 19, LINENUM:48
ok 20, LINENUM:49
ok 21, LINENUM:50
ok 22, LINENUM:52
ok 23, LINENUM:53
ok 24, LINENUM:54
ok 25, LINENUM:55
ok 26, LINENUM:58
ok 27, LINENUM:61
ok 28, LINENUM:67
stat: cannot stat '/d/backends/patchy5/FILE1': No such file or directory
stat: cannot stat '/d/backends/patchy5/FILE1': No such file or directory
not ok 29 Got "0" instead of "512", LINENUM:70
FAILED COMMAND: 512 path_size /d/backends/patchy5/FILE1
ok 30, LINENUM:71
not ok 31 Got "0" instead of "512", LINENUM:72
FAILED COMMAND: 512 path_size /d/backends/patchy5/HL_FILE1
Failed 2/31 subtests
Test Summary Report
-------------------
tests/bitrot/bug-1373520.t (Wstat: 0 Tests: 31 Failed: 2)
Failed tests: 29, 31
Files=1, Tests=31, 218 wallclock secs ( 0.03 usr 0.01 sys + 2.50 cusr 3.52
csys = 6.06 CPU)
Result: FAIL
Version-Release number of selected component (if applicable):
How reproducible:
Run the above testcase with brick multiplexing enabled.
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
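A minimal sketch of reproducing this locally (assuming a development setup with
the test framework available; the regression harness may have its own switch
for brick multiplexing, but the cluster-wide option below is one way to enable
it before running the test):
gluster volume set all cluster.brick-multiplex on
prove -rfv tests/bitrot/bug-1373520.t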
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 20:26:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 20:26:28 +0000
Subject: [Bugs] [Bug 1695327] regression test fails with brick mux enabled.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695327
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22481
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 20:26:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 20:26:29 +0000
Subject: [Bugs] [Bug 1695327] regression test fails with brick mux enabled.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695327
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22481 (tests/bitrot: enable self-heal daemon
before accessing the files) posted (#1) for review on master by Raghavendra
Bhat
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 02:37:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 02:37:15 +0000
Subject: [Bugs] [Bug 1695390] New: GF_LOG_OCCASSIONALLY API doesn't log at
first instance
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695390
Bug ID: 1695390
Summary: GF_LOG_OCCASSIONALLY API doesn't log at first instance
Product: GlusterFS
Version: 6
Status: NEW
Component: logging
Assignee: bugs at gluster.org
Reporter: amukherj at redhat.com
CC: bugs at gluster.org
Depends On: 1694925
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1694925 +++
Description of problem:
GF_LOG_OCCASSIONALLY doesn't log on the first instance; it logs only on every
42nd iteration. This isn't effective because in some cases the code flow may
never hit the same log point 42 times, so the message ends up being suppressed
entirely.
Version-Release number of selected component (if applicable):
Mainline
How reproducible:
Always
--- Additional comment from Worker Ant on 2019-04-02 05:35:15 UTC ---
REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY
API) posted (#1) for review on master by Atin Mukherjee
--- Additional comment from Worker Ant on 2019-04-02 10:57:31 UTC ---
REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY
API) merged (#2) on master by Atin Mukherjee
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694925
[Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first instance
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 02:37:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 02:37:15 +0000
Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694925
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1695390
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695390
[Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 02:38:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 02:38:53 +0000
Subject: [Bugs] [Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694925
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1695391
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695391
[Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first instance
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 02:38:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 02:38:53 +0000
Subject: [Bugs] [Bug 1695391] New: GF_LOG_OCCASSIONALLY API doesn't log at
first instance
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695391
Bug ID: 1695391
Summary: GF_LOG_OCCASSIONALLY API doesn't log at first instance
Product: GlusterFS
Version: 5
Status: NEW
Component: logging
Assignee: bugs at gluster.org
Reporter: amukherj at redhat.com
CC: bugs at gluster.org
Depends On: 1694925
Blocks: 1695390
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1694925 +++
Description of problem:
GF_LOG_OCCASSIONALLY doesn't log on the first instance; it logs only on every
42nd iteration. This isn't effective because in some cases the code flow may
never hit the same log point 42 times, so the message ends up being suppressed
entirely.
Version-Release number of selected component (if applicable):
Mainline
How reproducible:
Always
--- Additional comment from Worker Ant on 2019-04-02 05:35:15 UTC ---
REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY
API) posted (#1) for review on master by Atin Mukherjee
--- Additional comment from Worker Ant on 2019-04-02 10:57:31 UTC ---
REVIEW: https://review.gluster.org/22475 (logging: Fix GF_LOG_OCCASSIONALLY
API) merged (#2) on master by Atin Mukherjee
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694925
[Bug 1694925] GF_LOG_OCCASSIONALLY API doesn't log at first instance
https://bugzilla.redhat.com/show_bug.cgi?id=1695390
[Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first instance
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 02:38:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 02:38:53 +0000
Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695390
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1695391
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695391
[Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first instance
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 02:39:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 02:39:22 +0000
Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695390
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22482
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 02:39:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 02:39:23 +0000
Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695390
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22482 (logging: Fix GF_LOG_OCCASSIONALLY
API) posted (#1) for review on release-6 by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 02:42:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 02:42:28 +0000
Subject: [Bugs] [Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695391
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22483
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 02:42:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 02:42:29 +0000
Subject: [Bugs] [Bug 1695391] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695391
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22483 (logging: Fix GF_LOG_OCCASSIONALLY
API) posted (#1) for review on release-5 by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 03:02:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 03:02:09 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22484
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 03:02:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 03:02:10 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #607 from Worker Ant ---
REVIEW: https://review.gluster.org/22484 (glusterd: remove redundant
glusterd_check_volume_exists () calls) posted (#1) for review on master by Atin
Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 03:55:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 03:55:41 +0000
Subject: [Bugs] [Bug 1695399] New: With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695399
Bug ID: 1695399
Summary: With parallel-readdir enabled, deleting a directory
containing stale linkto files fails with "Directory
not empty"
Product: GlusterFS
Version: 5
Status: NEW
Component: distribute
Assignee: bugs at gluster.org
Reporter: nbalacha at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
This bug was initially created as a copy of Bug #1672851
I am copying this bug because:
Description of problem:
If parallel-readdir is enabled on a volume, rm -rf fails with "Directory
not empty" if the directory contains stale linkto files.
Version-Release number of selected component (if applicable):
How reproducible:
Consistently
Steps to Reproduce:
1. Create a 3 brick distribute volume
2. Enable parallel-readdir and readdir-ahead on the volume
3. Fuse mount the volume and mkdir dir0
4. Create some files inside dir0 and rename them so linkto files are created on
the bricks
5. Check the bricks to see which files have linkto files. Delete the data files
directly on the bricks, leaving the linkto files behind. These are now stale
linkto files.
6. Remount the volume
7. rm -rf dir0
Actual results:
[root@rhgs313-6 fuse1]# rm -rf dir0/
rm: cannot remove 'dir0/': Directory not empty
Expected results:
dir0 should be deleted without errors
Additional info:
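A hedged sketch of steps 4-5 above (brick paths, mount point and file names are
placeholders, not taken from the original report):
# on the fuse mount: create files and rename them so some of them gain linkto files
cd /mnt/vol/dir0
for i in $(seq 1 10); do touch file$i; mv file$i renamed$i; done
# on a brick: linkto files are the zero-byte entries carrying the DHT linkto xattr
getfattr -d -m . -e hex /bricks/brick1/dir0/renamed1
# delete the corresponding data file directly on the brick that holds it,
# leaving the linkto file behind so it becomes stale
rm /bricks/brick2/dir0/renamed1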
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 03:57:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 03:57:00 +0000
Subject: [Bugs] [Bug 1695399] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695399
--- Comment #1 from Nithya Balachandran ---
RCA:
rm -rf works by first listing and unlinking all entries in the directory and
then calling an rmdir on it.
As DHT readdirp does not return linkto files in the listing, they are not
unlinked as part of the rm -rf itself. dht_rmdir handles this by performing a
readdirp internally on the directory and deleting all stale linkto files before
proceeding with the actual rmdir operation.
When parallel-readdir is enabled, the rda xlator is loaded below dht in the
graph and proactively lists and caches entries when an opendir is performed.
Entries are returned from this cache for any subsequent readdirp calls on the
directory that was opened.
DHT uses the presence of the trusted.glusterfs.dht.linkto xattr to determine
whether a file is a linkto file. As this call to opendir does not set
trusted.glusterfs.dht.linkto in the list of requested xattrs for the opendir
call, the cached entries do not contain this xattr value. As none of the
entries returned will have the xattr, DHT believes they are all data files and
fails the rmdir with ENOTEMPTY.
Turning off parallel-readdir allows the rm -rf to succeed.
Upstream master: https://review.gluster.org/22160
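As a hedged illustration (volume name and the brick-side file name are
placeholders): the workaround is to disable parallel-readdir, and a stale
linkto file on a brick can be recognised by the xattr mentioned above.
# workaround: disable parallel-readdir before retrying the rm -rf
gluster volume set <volname> performance.parallel-readdir off
# a linkto file on a brick carries the DHT linkto xattr
getfattr -n trusted.glusterfs.dht.linkto -e text /d/backends/patchy1/stale-file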
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:07:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:07:32 +0000
Subject: [Bugs] [Bug 1695399] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695399
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22485
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:07:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:07:33 +0000
Subject: [Bugs] [Bug 1695399] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695399
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22485 (cluster/dht: Request linkto xattrs in
dht_rmdir opendir) posted (#1) for review on release-5 by N Balachandran
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:07:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:07:45 +0000
Subject: [Bugs] [Bug 1695399] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695399
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs at gluster.org |nbalacha at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:10:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:10:06 +0000
Subject: [Bugs] [Bug 1695403] New: rm -rf fails with "Directory not empty"
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695403
Bug ID: 1695403
Summary: rm -rf fails with "Directory not empty"
Product: GlusterFS
Version: 5
Status: NEW
Component: distribute
Assignee: bugs at gluster.org
Reporter: nbalacha at redhat.com
CC: bugs at gluster.org
Depends On: 1676400
Blocks: 1458215, 1661258, 1677260, 1686272
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1676400 +++
Description of problem:
When 2 clients run rm -rf concurrently, the operation sometimes fails
with "Directory not empty".
ls on the directory from the gluster mount point does not show any entries;
however, directories are left behind on some of the bricks.
Version-Release number of selected component (if applicable):
How reproducible:
Rare. This is a race condition.
Steps to Reproduce:
Steps:
1. Create 3x (2+1) arbiter volume and fuse mount it. Make sure lookup-optimize
is enabled.
2. mkdir -p dir0/dir1/dir2.
3. Unmount and remount the volume to ensure a fresh lookup is sent. GDB into
the fuse process and set a breakpoint at dht_lookup.
4. from the client mount:
rm -rf mra_sources
5. When gdb breaks at dht_lookup for dir0/dir1/dir2, set a breakpoint at
dht_lookup_cbk. Allow the process to continue until it hits dht_lookup_cbk.
dht_lookup_cbk will return with op_ret = 0 .
6. Delete dir0/dir1/dir2 from every brick on the non-hashed subvols.
7. Set a breakpoint in dht_selfheal_dir_mkdir and allow gdb to continue.
8. When the process breaks at dht_selfheal_dir_mkdir, delete the directory from
the hashed subvolume bricks.
9. In dht_selfheal_dir_mkdir_lookup_cbk, set a breakpoint at line :
if (local->selfheal.hole_cnt == layout->cnt) {
When gdb breaks at this point, set local->selfheal.hole_cnt to a value
different from that of layout->cnt. Allow gdb to proceed.
DHT will create the directories only on the non-hashed subvolumes as the layout
has not been updated to indicate that the dir no longer exists on the hashed
subvolume. This directory will no longer be visible on the mount point causing
the rm -rf to fail.
Actual results:
[root@server fuse1]# rm -rf mra_sources
rm: cannot remove 'dir0/dir1': Directory not empty
Expected results:
rm -rf should succeed.
Additional info:
As lookup-optimize is enabled, subsequent lookups cannot heal the directory.
The same steps with lookup-optimize disabled will work, as a subsequent lookup
will look up the entry everywhere even if it does not exist on the hashed subvol.
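A hedged sketch of how the lookup-optimize setting and the gdb attach used in
the steps above might be driven (the volume name and the fuse client PID are
placeholders):
# verify/enable lookup-optimize on the volume
gluster volume get <volname> cluster.lookup-optimize
gluster volume set <volname> cluster.lookup-optimize on
# attach gdb to the fuse client process and set the first breakpoint
gdb -p <fuse-client-pid> -ex 'break dht_lookup' -ex 'continue'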
--- Additional comment from Nithya Balachandran on 2019-02-12 08:08:31 UTC ---
RCA for the invisible directory left behind with concurrent rm -rf :
--------------------------------------------------------------------
dht_selfheal_dir_mkdir_lookup_cbk (...) {
...
1381         this_call_cnt = dht_frame_return (frame);
1382
1383         LOCK (&frame->lock);
1384         {
1385                 if ((op_ret < 0) &&
1386                     (op_errno == ENOENT || op_errno == ESTALE)) {
1387                         local->selfheal.hole_cnt = !local->selfheal.hole_cnt ? 1
1388                                                    : local->selfheal.hole_cnt + 1;
1389                 }
1390
1391                 if (!op_ret) {
1392                         dht_iatt_merge (this, &local->stbuf, stbuf, prev);
1393                 }
1394                 check_mds = dht_dict_get_array (xattr, conf->mds_xattr_key,
1395                                                 mds_xattr_val, 1, &errst);
1396                 if (dict_get (xattr, conf->mds_xattr_key) && check_mds && !errst) {
1397                         dict_unref (local->xattr);
1398                         local->xattr = dict_ref (xattr);
1399                 }
1400
1401         }
1402         UNLOCK (&frame->lock);
1403
1404         if (is_last_call (this_call_cnt)) {
1405                 if (local->selfheal.hole_cnt == layout->cnt) {
1406                         gf_msg_debug (this->name, op_errno,
1407                                       "Lookup failed, an rmdir could have "
1408                                       "deleted this entry %s", loc->name);
1409                         local->op_errno = op_errno;
1410                         goto err;
1411                 } else {
1412                         for (i = 0; i < layout->cnt; i++) {
1413                                 if (layout->list[i].err == ENOENT ||
1414                                     layout->list[i].err == ESTALE ||
1415                                     local->selfheal.force_mkdir)
1416                                         missing_dirs++;
1417                         }
There are 2 problems here:
1. The layout is not updated with the new subvol status on error.
In this case, the initial lookup found a directory on the hashed subvol so only
2 entries in the layout indicate missing directories. However, by the time the
selfheal code is executed, the racing rmdir has deleted the directory from all
the subvols. At this point, the directory does not exist on any subvol and
dht_selfheal_dir_mkdir_lookup_cbk gets an error from all 3 subvols,
but this new status is not updated in the layout which still has only 2 missing
dirs marked.
2. this_call_cnt = dht_frame_return (frame); is called before the frame is
processed. So, with a call count of 3, it is possible that the second response
has reached line 1404 before the third one has started processing its return
values. At this point,
local->selfheal.hole_cnt != layout->cnt, so control goes to line 1412.
At line 1412, since we are still using the old layout, only the directories on
the non-hashed subvols are considered when incrementing missing_dirs and for
the healing.
The combination of these two causes the selfheal code to start healing the
directories on the non-hashed subvols. It succeeds in creating the dirs on the
non-hashed subvols. However, to set the layout, dht takes an inodelk on the
hashed subvol, which fails because the directory does not exist there. We
therefore end up with directories on the non-hashed subvols with no layouts
set.
--- Additional comment from Worker Ant on 2019-02-12 08:34:01 UTC ---
REVIEW: https://review.gluster.org/22195 (cluster/dht: Fix lookup selfheal and
rmdir race) posted (#1) for review on master by N Balachandran
--- Additional comment from Worker Ant on 2019-02-13 18:20:26 UTC ---
REVIEW: https://review.gluster.org/22195 (cluster/dht: Fix lookup selfheal and
rmdir race) merged (#3) on master by Raghavendra G
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1458215
[Bug 1458215] Slave reports ENOTEMPTY when rmdir is executed on master
https://bugzilla.redhat.com/show_bug.cgi?id=1676400
[Bug 1676400] rm -rf fails with "Directory not empty"
https://bugzilla.redhat.com/show_bug.cgi?id=1677260
[Bug 1677260] rm -rf fails with "Directory not empty"
https://bugzilla.redhat.com/show_bug.cgi?id=1686272
[Bug 1686272] fuse mount logs inundated with [dict.c:471:dict_get]
(-->/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0x6228d)
[0x7f9029d8628d]
-->/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x202f7)
[0x7f9029aa12f7] -->/lib64/libglusterfs.so.0(
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:10:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:10:06 +0000
Subject: [Bugs] [Bug 1676400] rm -rf fails with "Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1676400
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1695403
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695403
[Bug 1695403] rm -rf fails with "Directory not empty"
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 04:10:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:10:06 +0000
Subject: [Bugs] [Bug 1677260] rm -rf fails with "Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1677260
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1695403
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695403
[Bug 1695403] rm -rf fails with "Directory not empty"
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:10:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:10:42 +0000
Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695403
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |nbalacha at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:13:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:13:29 +0000
Subject: [Bugs] [Bug 1691616] client log flooding with intentional socket
shutdown message when a brick is down
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691616
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-04-03 04:13:29
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22395 (transport/socket: log shutdown msg
occasionally) merged (#5) on master by Raghavendra G
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:13:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:13:30 +0000
Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
Bug 1672818 depends on bug 1691616, which changed state.
Bug 1691616 Summary: client log flooding with intentional socket shutdown message when a brick is down
https://bugzilla.redhat.com/show_bug.cgi?id=1691616
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:14:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:14:42 +0000
Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695403
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22486
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 04:14:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:14:43 +0000
Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695403
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22486 (cluster/dht: Fix lookup selfheal and
rmdir race) posted (#1) for review on release-5 by N Balachandran
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 04:16:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:16:17 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
--- Comment #16 from asender at testlabs.com.au ---
(In reply to Kotresh HR from comment #13)
> This issue is fixed in upstream and 5.x and 6.x series
>
> Patch: https://review.gluster.org/#/c/glusterfs/+/20093/
We are having the issue in replicate mode (using replica 2).
Adrian Sender
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 04:28:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:28:18 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #9 from Worker Ant ---
REVIEW: https://review.gluster.org/22455 (posix-acl: remove default functions,
and use library fn instead) merged (#3) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:29:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:29:15 +0000
Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal
deamon (shd) when a volume is stopped unless it is the last volume
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659708
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21960
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 04:31:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:31:19 +0000
Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692101
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-04-03 04:31:19
--- Comment #4 from Worker Ant ---
REVIEW: https://review.gluster.org/22403 (client-rpc: Fix the payload being
sent on the wire) merged (#3) on release-6 by Poornima G
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:31:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:31:19 +0000
Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692093
Bug 1692093 depends on bug 1692101, which changed state.
Bug 1692101 Summary: Network throughput usage increased x5
https://bugzilla.redhat.com/show_bug.cgi?id=1692101
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:31:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:31:42 +0000
Subject: [Bugs] [Bug 1694561] gfapi: do not block epoll thread for upcall
notifications
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694561
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-04-03 04:31:42
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22459 (gfapi: Unblock epoll thread for
upcall processing) merged (#2) on release-6 by Amar Tumballi
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:32:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:32:03 +0000
Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot
allocate memory"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694002
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-04-03 04:32:03
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22447 (geo-rep: Fix syncing multiple rename
of symlink) merged (#3) on release-6 by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 04:37:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:37:12 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #608 from Worker Ant ---
REVIEW: https://review.gluster.org/22387 (changelog: remove unused code.)
merged (#6) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:40:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:40:34 +0000
Subject: [Bugs] [Bug 1579615] [geo-rep]: [Errno 39] Directory not empty
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1579615
Bug 1579615 depends on bug 1575553, which changed state.
Bug 1575553 Summary: [geo-rep]: [Errno 39] Directory not empty
https://bugzilla.redhat.com/show_bug.cgi?id=1575553
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |INSUFFICIENT_DATA
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:37:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:37:12 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #609 from Worker Ant ---
REVIEW: https://review.gluster.org/22439 (rpclib: slow floating point math and
libm) merged (#3) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:54:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:54:35 +0000
Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695390
--- Comment #2 from Worker Ant ---
REVISION POSTED: https://review.gluster.org/22482 (logging: Fix
GF_LOG_OCCASSIONALLY API) posted (#2) for review on release-6 by Raghavendra G
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:54:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:54:36 +0000
Subject: [Bugs] [Bug 1695390] GF_LOG_OCCASSIONALLY API doesn't log at first
instance
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695390
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID|Gluster.org Gerrit 22482 |
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 04:54:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:54:38 +0000
Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket
shutdown message when a brick is down
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1679904
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22482
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 04:54:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:54:39 +0000
Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket
shutdown message when a brick is down
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1679904
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22482 (logging: Fix GF_LOG_OCCASSIONALLY
API) posted (#2) for review on release-6 by Raghavendra G
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 04:56:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:56:39 +0000
Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket
shutdown message when a brick is down
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1679904
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22487
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 04:56:40 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 04:56:40 +0000
Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket
shutdown message when a brick is down
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1679904
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22487 (transport/socket: log shutdown msg
occasionally) posted (#1) for review on release-6 by Raghavendra G
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 05:00:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 05:00:09 +0000
Subject: [Bugs] [Bug 1695416] New: client log flooding with intentional
socket shutdown message when a brick is down
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695416
Bug ID: 1695416
Summary: client log flooding with intentional socket shutdown
message when a brick is down
Product: GlusterFS
Version: 5
Status: NEW
Component: core
Assignee: bugs at gluster.org
Reporter: rgowdapp at redhat.com
CC: amukherj at redhat.com, bugs at gluster.org,
mchangir at redhat.com, pasik at iki.fi
Depends On: 1679904, 1691616
Blocks: 1691620, 1672818 (glusterfs-6.0)
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1691616 +++
+++ This bug was initially created as a clone of Bug #1679904 +++
Description of problem:
client log flooding with intentional socket shutdown message when a brick is
down
[2019-02-22 08:24:42.472457] I [socket.c:811:__socket_shutdown]
0-test-vol-client-0: intentional socket shutdown(5)
Version-Release number of selected component (if applicable):
glusterfs-6
How reproducible:
Always
Steps to Reproduce:
1. 1 X 3 volume created and started over a 3 node cluster
2. mount a fuse client
3. kill a brick
4. Observe that the fuse client log is flooded with the intentional socket
shutdown message every 3 seconds.
Actual results:
Expected results:
Additional info:
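A minimal sketch of the reproduction (node names, brick paths, mount point and
log file name are placeholders):
gluster volume create test-vol replica 3 n1:/bricks/b1 n2:/bricks/b1 n3:/bricks/b1
gluster volume start test-vol
mount -t glusterfs n1:/test-vol /mnt/test
# kill one brick process on its node, then watch the fuse client log
kill -KILL <brick-pid>
grep 'intentional socket shutdown' /var/log/glusterfs/mnt-test.log | tail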
--- Additional comment from Worker Ant on 2019-03-22 05:14:20 UTC ---
REVIEW: https://review.gluster.org/22395 (transport/socket: move shutdown msg
to DEBUG loglevel) posted (#1) for review on master by Raghavendra G
--- Additional comment from Worker Ant on 2019-04-03 04:13:29 UTC ---
REVIEW: https://review.gluster.org/22395 (transport/socket: log shutdown msg
occasionally) merged (#5) on master by Raghavendra G
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
[Bug 1672818] GlusterFS 6.0 tracker
https://bugzilla.redhat.com/show_bug.cgi?id=1679904
[Bug 1679904] client log flooding with intentional socket shutdown message when
a brick is down
https://bugzilla.redhat.com/show_bug.cgi?id=1691616
[Bug 1691616] client log flooding with intentional socket shutdown message when
a brick is down
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 05:00:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 05:00:09 +0000
Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket
shutdown message when a brick is down
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1679904
Raghavendra G changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1695416
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695416
[Bug 1695416] client log flooding with intentional socket shutdown message when
a brick is down
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 05:00:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 05:00:09 +0000
Subject: [Bugs] [Bug 1691616] client log flooding with intentional socket
shutdown message when a brick is down
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1691616
Raghavendra G changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1695416
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695416
[Bug 1695416] client log flooding with intentional socket shutdown message when
a brick is down
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 05:00:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 05:00:09 +0000
Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
Raghavendra G changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1695416
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695416
[Bug 1695416] client log flooding with intentional socket shutdown message when
a brick is down
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 05:56:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 05:56:54 +0000
Subject: [Bugs] [Bug 1695436] New: geo-rep session creation fails with IPV6
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695436
Bug ID: 1695436
Summary: geo-rep session creation fails with IPV6
Product: GlusterFS
Version: 6
Hardware: x86_64
OS: Linux
Status: NEW
Component: geo-replication
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: avishwan at redhat.com
CC: amukherj at redhat.com, avishwan at redhat.com,
bugs at gluster.org, csaba at redhat.com,
khiremat at redhat.com, rhs-bugs at redhat.com,
sankarshan at redhat.com, sasundar at redhat.com,
storage-qa-internal at redhat.com
Depends On: 1688833
Blocks: 1688231, 1688239
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1688833 +++
+++ This bug was initially created as a clone of Bug #1688231 +++
Description of problem:
-----------------------
This issue is seen with the RHHI-V use case. VM images are stored in the gluster
volumes and geo-replicated to the secondary site for DR.
When IPv6 is used, the additional mount option
--xlator-option=transport.address-family=inet6 is required. But when geo-rep
checks the slave space with gverify.sh, this mount option is not passed, so it
fails to mount either the master or the slave volume.
Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHGS 3.4.4 ( glusterfs-3.12.2-47 )
How reproducible:
-----------------
Always
Steps to Reproduce:
-------------------
1. Create geo-rep session from the master to slave
Actual results:
--------------
Creation of geo-rep session fails at gverify.sh
Expected results:
-----------------
Creation of geo-rep session should be successful
Additional info:
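For illustration (the mount point is a placeholder): a client mounted by hand
works when the address family is passed explicitly, which is exactly the option
gverify.sh omits, as seen in the log snippet below.
glusterfs --xlator-option=transport.address-family=inet6 \
    --volfile-server slave.lab.eng.blr.redhat.com --volfile-id slave /mnt/slave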
--- Additional comment from SATHEESARAN on 2019-03-13 11:49:02 UTC ---
[root@ ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2620:52:0:4624:5054:ff:fee9:57f8 master.lab.eng.blr.redhat.com
2620:52:0:4624:5054:ff:fe6d:d816 slave.lab.eng.blr.redhat.com
[root@ ~]# gluster volume info
Volume Name: master
Type: Distribute
Volume ID: 9cf0224f-d827-4028-8a45-37f7bfaf1c78
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: master.lab.eng.blr.redhat.com:/gluster/brick1/master
Options Reconfigured:
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
user.cifs: off
features.shard: on
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet6
nfs.disable: on
[root@localhost ~]# gluster volume geo-replication master
slave.lab.eng.blr.redhat.com::slave create push-pem
Unable to mount and fetch slave volume details. Please check the log:
/var/log/glusterfs/geo-replication/gverify-slavemnt.log
geo-replication command failed
Snip from gverify-slavemnt.log
[2019-03-13 11:46:28.746494] I [MSGID: 100030] [glusterfsd.c:2646:main]
0-glusterfs: Started running glusterfs version 3.12.2 (args: glusterfs
--xlator-option=*dht.lookup-unhashed=off --volfile-server
slave.lab.eng.blr.redhat.com --volfile-id slave -l
/var/log/glusterfs/geo-replication/gverify-slavemnt.log /tmp/gverify.sh.y1TCoY)
[2019-03-13 11:46:28.750595] W [MSGID: 101002] [options.c:995:xl_opt_validate]
0-glusterfs: option 'address-family' is deprecated, preferred is
'transport.address-family', continuing with correction
[2019-03-13 11:46:28.753702] E [MSGID: 101075]
[common-utils.c:482:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2)
(Name or service not known)
[2019-03-13 11:46:28.753725] E [name.c:267:af_inet_client_get_remote_sockaddr]
0-glusterfs: DNS resolution failed on host slave.lab.eng.blr.redhat.com
[2019-03-13 11:46:28.753953] I [glusterfsd-mgmt.c:2337:mgmt_rpc_notify]
0-glusterfsd-mgmt: disconnected from remote-host: slave.lab.eng.blr.redhat.com
[2019-03-13 11:46:28.753980] I [glusterfsd-mgmt.c:2358:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile servers
[2019-03-13 11:46:28.753998] I [MSGID: 101190]
[event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 0
[2019-03-13 11:46:28.754073] I [MSGID: 101190]
[event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2019-03-13 11:46:28.754154] W [glusterfsd.c:1462:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_notify+0xab) [0x7fc39d379bab]
-->glusterfs(+0x11fcd) [0x56427db95fcd] -->glusterfs(cleanup_and_exit+0x6b)
[0x56427db8eb2b] ) 0-: received signum (1), shutting down
[2019-03-13 11:46:28.754197] I [fuse-bridge.c:6611:fini] 0-fuse: Unmounting
'/tmp/gverify.sh.y1TCoY'.
[2019-03-13 11:46:28.760213] I [fuse-bridge.c:6616:fini] 0-fuse: Closing fuse
connection to '/tmp/gverify.sh.y1TCoY'.
--- Additional comment from Worker Ant on 2019-03-14 14:51:56 UTC ---
REVIEW: https://review.gluster.org/22363 (WIP geo-rep: IPv6 support) posted
(#1) for review on master by Aravinda VK
--- Additional comment from Worker Ant on 2019-03-15 14:59:56 UTC ---
REVIEW: https://review.gluster.org/22363 (geo-rep: IPv6 support) merged (#3) on
master by Aravinda VK
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1688231
[Bug 1688231] geo-rep session creation fails with IPV6
https://bugzilla.redhat.com/show_bug.cgi?id=1688239
[Bug 1688239] geo-rep session creation fails with IPV6
https://bugzilla.redhat.com/show_bug.cgi?id=1688833
[Bug 1688833] geo-rep session creation fails with IPV6
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 05:56:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 05:56:54 +0000
Subject: [Bugs] [Bug 1688833] geo-rep session creation fails with IPV6
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1688833
Aravinda VK changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1695436
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695436
[Bug 1695436] geo-rep session creation fails with IPV6
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 06:05:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 06:05:08 +0000
Subject: [Bugs] [Bug 1695436] geo-rep session creation fails with IPV6
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695436
Aravinda VK changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |avishwan at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 06:30:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 06:30:07 +0000
Subject: [Bugs] [Bug 1695436] geo-rep session creation fails with IPV6
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695436
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22488
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 06:30:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 06:30:08 +0000
Subject: [Bugs] [Bug 1695436] geo-rep session creation fails with IPV6
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695436
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22488 (geo-rep: IPv6 support) posted (#1)
for review on release-6 by Aravinda VK
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 06:30:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 06:30:59 +0000
Subject: [Bugs] [Bug 1695445] New: ssh-port config set is failing
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695445
Bug ID: 1695445
Summary: ssh-port config set is failing
Product: GlusterFS
Version: 6
Status: NEW
Component: geo-replication
Assignee: bugs at gluster.org
Reporter: avishwan at redhat.com
CC: bugs at gluster.org
Depends On: 1692666
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1692666 +++
Description of problem:
If a non-standard ssh port is used, geo-rep can be configured to use that ssh
port as below (volume and host names are placeholders):
```
gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config
ssh-port 2222
```
But this command fails even if a valid value is passed.
```
$ gluster v geo gv1 centos.sonne::gv2 config ssh-port 2222
geo-replication config-set failed for gv1 centos.sonne::gv2
geo-replication command failed
```
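Once the validation is fixed, the same invocation is expected to succeed, and
the configured value can be read back by omitting it (a hedged sketch reusing
the names from the example above):
```
$ gluster v geo gv1 centos.sonne::gv2 config ssh-port 2222
$ gluster v geo gv1 centos.sonne::gv2 config ssh-port
```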
--- Additional comment from Worker Ant on 2019-03-26 08:00:05 UTC ---
REVIEW: https://review.gluster.org/22418 (geo-rep: fix integer config
validation) posted (#1) for review on master by Aravinda VK
--- Additional comment from Worker Ant on 2019-03-27 14:35:10 UTC ---
REVIEW: https://review.gluster.org/22418 (geo-rep: fix integer config
validation) merged (#2) on master by Amar Tumballi
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1692666
[Bug 1692666] ssh-port config set is failing
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 06:30:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 06:30:59 +0000
Subject: [Bugs] [Bug 1692666] ssh-port config set is failing
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692666
Aravinda VK changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1695445
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695445
[Bug 1695445] ssh-port config set is failing
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 06:31:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 06:31:13 +0000
Subject: [Bugs] [Bug 1695445] ssh-port config set is failing
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695445
Aravinda VK changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |avishwan at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 06:33:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 06:33:17 +0000
Subject: [Bugs] [Bug 1695445] ssh-port config set is failing
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695445
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22489
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 06:33:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 06:33:18 +0000
Subject: [Bugs] [Bug 1695445] ssh-port config set is failing
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695445
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22489 (geo-rep: fix integer config
validation) posted (#1) for review on release-6 by Aravinda VK
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 07:24:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 07:24:59 +0000
Subject: [Bugs] [Bug 1695099] The number of glusterfs processes keeps
increasing, using all available resources
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695099
--- Comment #1 from Christian Ihle ---
Example of how to reliably reproduce the issue from Kubernetes.
1. kubectl apply -f pvc.yaml
2. kubectl delete -f pvc.yaml
There will almost always be a few more glusterfs-processes running after doing
this.
pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-replicated-2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-replicated-2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-replicated-2
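A quick, hedged way to observe the leak on a storage node is to count the
glusterfs processes before and after the apply/delete cycle:
# count and list glusterfs processes on the node
pgrep -c glusterfs
pgrep -a glusterfs | head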
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 08:09:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 08:09:22 +0000
Subject: [Bugs] [Bug 1695480] New: Global Thread Pool
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695480
Bug ID: 1695480
Summary: Global Thread Pool
Product: GlusterFS
Version: mainline
Status: NEW
Component: core
Assignee: bugs at gluster.org
Reporter: jahernan at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The Global Thread Pool provides lower contention and increased performance in
some cases, but it has been observed that sometimes there is a huge increase in
the number of requests going to the disks in parallel, which seems to cause a
performance degradation.
In fact, it seems that sending the same number of requests from fewer threads
gives higher performance.
The current implementation already does some dynamic adjustment of the number
of active threads based on the current number of requests, but it doesn't
consider the load on the back-end file systems. This means that as long as more
requests come, the number of threads is scaled accordingly, which could have a
negative impact if the back-end is already saturated.
The way to control that in current version is to manually adjust the maximum
number of threads that can be used, which effectively limits the load on
back-end file systems even if more requests are coming, but this is only useful
for volumes whose workload is homogeneous and constant.
To make it more versatile, the maximum number of threads need to be
automatically self-adjusted to adapt dynamically to the current load so that it
can be useful in a general case.
Version-Release number of selected component (if applicable): mainline
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
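As an illustration of the manual control mentioned above, tuning could look
roughly like the following. This is only a sketch: the option names below are
assumptions and may not match the options actually exposed by the global
thread pool.
# hypothetical example: enable global threading and cap the worker threads
gluster volume set <volname> config.global-threading on
gluster volume set <volname> config.client-threads 16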
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 08:15:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 08:15:37 +0000
Subject: [Bugs] [Bug 1695484] New: smoke fails with "Build root is locked by
another process"
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695484
Bug ID: 1695484
Summary: smoke fails with "Build root is locked by another
process"
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: pkarampu at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Please check https://build.gluster.org/job/devrpm-fedora/15405/console for more
details. Smoke is failing with the reason mentioned in the subject.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 08:35:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 08:35:11 +0000
Subject: [Bugs] [Bug 1695484] smoke fails with "Build root is locked by
another process"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695484
Deepshikha khandelwal changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |dkhandel at redhat.com
--- Comment #1 from Deepshikha khandelwal ---
It happens mainly because the previously running build was aborted by a new
patchset, so no cleanup was done.
Re-triggering might help.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 08:39:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 08:39:23 +0000
Subject: [Bugs] [Bug 1695099] The number of glusterfs processes keeps
increasing, using all available resources
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695099
--- Comment #2 from Christian Ihle ---
I have been experimenting with setting "max_inflight_operations" to 1 in
Heketi, as mentioned in https://github.com/heketi/heketi/issues/1439
Example of how to configure this:
https://github.com/heketi/heketi/blob/8417f25f474b0b16e1936a66f9b63bcedfba6e4c/tests/functional/TestSmokeTest/config/heketi.json
I am not able to reproduce the issue anymore when the value is set to 1.
The number of glusterfs processes varies between 0 and 2 during volume changes,
but always settles on a single process afterwards.
This seems to be an easy workaround, but hopefully the bug will be fixed so I
can revert to concurrent Heketi.
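A minimal sketch of the workaround described above (the config path and
service name are assumptions for a typical Heketi install):
# check the current value in heketi.json (location may differ on your setup)
grep -n max_inflight_operations /etc/heketi/heketi.json
# after editing the value to 1, restart Heketi so it takes effect
systemctl restart heketi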
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 08:41:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 08:41:52 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #10 from Worker Ant ---
REVIEW: https://review.gluster.org/22443 (sdfs: enable pass-through) merged
(#2) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 09:50:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 09:50:43 +0000
Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695403
nravinas at redhat.com changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |high
Group| |redhat
CC| |nravinas at redhat.com
Severity|unspecified |high
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 10:04:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 10:04:23 +0000
Subject: [Bugs] [Bug 1692957] build: link libgfrpc with MATH_LIB (libm, -lm)
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692957
Kaleb KEITHLEY changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NOTABUG
Last Closed| |2019-04-03 10:04:23
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 10:04:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 10:04:24 +0000
Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692394
Bug 1692394 depends on bug 1692957, which changed state.
Bug 1692957 Summary: build: link libgfrpc with MATH_LIB (libm, -lm)
https://bugzilla.redhat.com/show_bug.cgi?id=1692957
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NOTABUG
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 10:04:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 10:04:24 +0000
Subject: [Bugs] [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm)
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692959
Bug 1692959 depends on bug 1692957, which changed state.
Bug 1692957 Summary: build: link libgfrpc with MATH_LIB (libm, -lm)
https://bugzilla.redhat.com/show_bug.cgi?id=1692957
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NOTABUG
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 10:29:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 10:29:53 +0000
Subject: [Bugs] [Bug 1695484] smoke fails with "Build root is locked by
another process"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695484
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #2 from M. Scherer ---
Mhh, then shouldn't we clean up when something stops the build?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 10:30:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 10:30:46 +0000
Subject: [Bugs] [Bug 1692957] rpclib: slow floating point math and libm
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692957
Kaleb KEITHLEY changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |ASSIGNED
Resolution|NOTABUG |---
Assignee|bugs at gluster.org |kkeithle at redhat.com
Summary|build: link libgfrpc with |rpclib: slow floating point
|MATH_LIB (libm, -lm) |math and libm
Keywords| |Reopened
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 10:30:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 10:30:46 +0000
Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692394
Bug 1692394 depends on bug 1692957, which changed state.
Bug 1692957 Summary: rpclib: slow floating point math and libm
https://bugzilla.redhat.com/show_bug.cgi?id=1692957
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |ASSIGNED
Resolution|NOTABUG |---
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 10:30:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 10:30:46 +0000
Subject: [Bugs] [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm)
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1692959
Bug 1692959 depends on bug 1692957, which changed state.
Bug 1692957 Summary: rpclib: slow floating point math and libm
https://bugzilla.redhat.com/show_bug.cgi?id=1692957
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |ASSIGNED
Resolution|NOTABUG |---
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 11:09:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 11:09:00 +0000
Subject: [Bugs] [Bug 1695403] rm -rf fails with "Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695403
nravinas at redhat.com changed:
What |Removed |Added
----------------------------------------------------------------------------
Group|redhat |
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 11:23:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 11:23:06 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22491
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 11:23:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 11:23:07 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #11 from Worker Ant ---
REVIEW: https://review.gluster.org/22491 (tests: make sure to traverse all of
meta dir) posted (#1) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 11:38:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 11:38:07 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22492
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 11:38:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 11:38:08 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #610 from Worker Ant ---
REVIEW: https://review.gluster.org/22492 (tests: shard read test correction)
posted (#1) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 12:25:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 12:25:57 +0000
Subject: [Bugs] [Bug 1644322] flooding log with "glusterfs-fuse: read from
/dev/fuse returned -1 (Operation not permitted)"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1644322
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22494
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 12:25:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 12:25:58 +0000
Subject: [Bugs] [Bug 1644322] flooding log with "glusterfs-fuse: read from
/dev/fuse returned -1 (Operation not permitted)"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1644322
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22494 (fuse: rate limit reading from fuse
device upon receiving EPERM) posted (#1) for review on master by Csaba Henk
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 14:19:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 14:19:24 +0000
Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot
allocate memory"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1694002
Bug 1694002 depends on bug 1693648, which changed state.
Bug 1693648 Summary: Geo-re: Geo replication failing in "cannot allocate memory"
https://bugzilla.redhat.com/show_bug.cgi?id=1693648
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Apr 3 15:09:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 15:09:52 +0000
Subject: [Bugs] [Bug 1695484] smoke fails with "Build root is locked by
another process"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695484
--- Comment #3 from M. Scherer ---
So indeed, https://build.gluster.org/job/devrpm-fedora/15404/ aborted the patch
test, then https://build.gluster.org/job/devrpm-fedora/15405/ failed, but the
next run worked.
Maybe the problem is that it takes more than 30 seconds to clean the build or
something similar. Maybe we need to add some more time, but I can't seem to
find a log to evaluate how long it takes when things are cancelled. Let's keep
this open; if the issue arises again we can collect the log and see if there
is a pattern.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Apr 3 21:55:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 03 Apr 2019 21:55:30 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
--- Comment #17 from perplexed767 ---
(In reply to Kotresh HR from comment #14)
> Workaround:
> The issue affects only single distribute volumes i.e 1*2 and 1*3 volumes.
> It doesn't affect n*2 or n*3 volumes where n>1. So one way to fix is to
> convert
> single distribute to two distribute volume or upgrade to later versions
> if it can't be waited until next 4.1.x release.
Great, thanks. Is it planned to be backported to 4.x? My OS (SLES 12.2) does
not currently support 5.x Gluster, so I would have to upgrade the OS to SLES
12.3.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 04:28:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 04:28:46 +0000
Subject: [Bugs] [Bug 1696046] New: Log level changes do not take effect
until the process is restarted
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696046
Bug ID: 1696046
Summary: Log level changes do not take effect until the process
is restarted
Product: GlusterFS
Version: mainline
Status: NEW
Component: core
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: moagrawa at redhat.com
CC: amukherj at redhat.com, bmekala at redhat.com,
bugs at gluster.org, nbalacha at redhat.com,
rhs-bugs at redhat.com, sankarshan at redhat.com,
vbellur at redhat.com
Depends On: 1695081
Target Milestone: ---
Classification: Community
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1695081
[Bug 1695081] Log level changes do not take effect until the process is
restarted
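For context, an example of the kind of runtime log-level change this refers to
(volume name is a placeholder):
gluster volume set <volname> diagnostics.client-log-level DEBUG
gluster volume set <volname> diagnostics.brick-log-level DEBUG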
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 04:29:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 04:29:02 +0000
Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the
process is restarted
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696046
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs at gluster.org |moagrawa at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 04:43:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 04:43:23 +0000
Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the
process is restarted
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696046
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22495
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 04:43:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 04:43:24 +0000
Subject: [Bugs] [Bug 1696046] Log level changes do not take effect until the
process is restarted
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696046
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22495 (core: Log level changes do not effect
on running client process) posted (#1) for review on master by MOHIT AGRAWAL
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 06:10:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 06:10:30 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
--- Comment #18 from Kotresh HR ---
I have backported the patch https://review.gluster.org/#/c/glusterfs/+/22476/.
It's not merged yet.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 06:44:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 06:44:07 +0000
Subject: [Bugs] [Bug 1696075] New: Client lookup is unable to heal missing
directory GFID entry
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696075
Bug ID: 1696075
Summary: Client lookup is unable to heal missing directory GFID
entry
Product: GlusterFS
Version: 6
Status: NEW
Component: replicate
Assignee: bugs at gluster.org
Reporter: anepatel at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
When the directory's gfid entry is missing on a few backend bricks, client
heal is unable to re-create the gfid entries after doing a stat from the client.
The automated test-case passes on downstream 3.4.4 but is failing on upstream
gluster 6.
Version-Release number of selected component (if applicable):
Latest gluster 6
How reproducible:
Always,
Steps to Reproduce:
1. Create a 2x3 distributed-replicated volume, and fuse mount it
2. Create an empty directory from the mount point
3. Verify the gfid entry is present on all backend bricks for this dir
4. Delete the gfid entry on 5 out of the 6 backend bricks, brick{1..6}
5. Now trigger heal from the mount point:
# ls -l
# find . | xargs stat
6. Check the backend bricks; the gfid entry should be healed on all the bricks.
Actual results:
At step 6, gfid entry is not created after client lookup.
Expected results:
Client lookup should trigger heal and gfid should be healed
Additional info:
There is also a recent fix per BZ#1661258, in which DHT delegates the task to
AFR when the gfid is missing on all bricks in a subvolume, as per my
understanding.
The test-case is automated and can be found at
https://review.gluster.org/c/glusto-tests/+/22480/
The test passes Downstream but fails upstream, the glusto logs for the failure
can be found at
https://ci.centos.org/job/gluster_glusto-patch-check/1277/artifact/glustomain.log
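One hedged way to inspect the directory's gfid on a brick for steps 3, 4 and 6
(the brick path below is only an example; the bug may also involve the
corresponding .glusterfs entry):
# the directory should carry a trusted.gfid xattr on every brick
getfattr -n trusted.gfid -e hex /bricks/brick1/dir1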
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 06:44:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 06:44:29 +0000
Subject: [Bugs] [Bug 1696075] Client lookup is unable to heal missing
directory GFID entry
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696075
Anees Patel changed:
What |Removed |Added
----------------------------------------------------------------------------
QA Contact| |anepatel at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 06:45:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 06:45:59 +0000
Subject: [Bugs] [Bug 1696077] New: Add pause and resume test case for geo-rep
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696077
Bug ID: 1696077
Summary: Add pause and resume test case for geo-rep
Product: GlusterFS
Version: mainline
Status: NEW
Component: geo-replication
Assignee: bugs at gluster.org
Reporter: sacharya at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
There is no pause and resume test case for geo-rep
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 07:17:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 07:17:44 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22496
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 07:17:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 07:17:45 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #611 from Worker Ant ---
REVIEW: https://review.gluster.org/22496 (cluster/afr: Invalidate inode on
change of split-brain-choice) posted (#1) for review on master by Pranith Kumar
Karampuri
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 08:28:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:28:38 +0000
Subject: [Bugs] [Bug 1696136] New: gluster fuse mount crashed,
when deleting 2T image file from oVirt Manager UI
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696136
Bug ID: 1696136
Summary: gluster fuse mount crashed, when deleting 2T image
file from oVirt Manager UI
Product: GlusterFS
Version: mainline
Hardware: x86_64
OS: Linux
Status: NEW
Component: sharding
Keywords: Triaged
Severity: urgent
Priority: urgent
Assignee: bugs at gluster.org
Reporter: kdhananj at redhat.com
QA Contact: bugs at gluster.org
CC: amukherj at redhat.com, bkunal at redhat.com,
bugs at gluster.org, pasik at iki.fi, rhs-bugs at redhat.com,
sabose at redhat.com, sankarshan at redhat.com,
sasundar at redhat.com, storage-qa-internal at redhat.com,
ykaul at redhat.com
Depends On: 1694595
Blocks: 1694604
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1694595 +++
Description of problem:
------------------------
When deleting the 2TB image file, the gluster fuse mount process crashed
Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.12.2-47
How reproducible:
-----------------
1/1
Steps to Reproduce:
-------------------
1. Create an image file of 2T from the oVirt Manager UI
2. Delete the same image file after it is created successfully
Actual results:
---------------
Fuse mount crashed
Expected results:
-----------------
All should work fine and no fuse mount crashes
--- Additional comment from SATHEESARAN on 2019-04-01 08:33:14 UTC ---
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash:
2019-04-01 07:57:53
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.12.2
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x9d)[0x7fc72c186b9d]
/lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7fc72c191114]
/lib64/libc.so.6(+0x36280)[0x7fc72a7c2280]
/usr/lib64/glusterfs/3.12.2/xlator/features/shard.so(+0x9627)[0x7fc71f8ba627]
/usr/lib64/glusterfs/3.12.2/xlator/features/shard.so(+0x9ef1)[0x7fc71f8baef1]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x3ae9c)[0x7fc71fb15e9c]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0x9e8c)[0x7fc71fd88e8c]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0xb79b)[0x7fc71fd8a79b]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0xc226)[0x7fc71fd8b226]
/usr/lib64/glusterfs/3.12.2/xlator/protocol/client.so(+0x17cbc)[0x7fc72413fcbc]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7fc72bf2ca00]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x26b)[0x7fc72bf2cd6b]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fc72bf28ae3]
/usr/lib64/glusterfs/3.12.2/rpc-transport/socket.so(+0x7586)[0x7fc727043586]
/usr/lib64/glusterfs/3.12.2/rpc-transport/socket.so(+0x9bca)[0x7fc727045bca]
/lib64/libglusterfs.so.0(+0x8a870)[0x7fc72c1e5870]
/lib64/libpthread.so.0(+0x7dd5)[0x7fc72afc2dd5]
/lib64/libc.so.6(clone+0x6d)[0x7fc72a889ead]
--- Additional comment from SATHEESARAN on 2019-04-01 08:37:56 UTC ---
1. RHHI-V Information
----------------------
RHV 4.3.3
RHGS 3.4.4
2. Cluster Information
-----------------------
[root at rhsqa-grafton11 ~]# gluster pe s
Number of Peers: 2
Hostname: rhsqa-grafton10.lab.eng.blr.redhat.com
Uuid: 46807597-245c-4596-9be3-f7f127aa4aa2
State: Peer in Cluster (Connected)
Other names:
10.70.45.32
Hostname: rhsqa-grafton12.lab.eng.blr.redhat.com
Uuid: 8a3bc1a5-07c1-4e1c-aa37-75ab15f29877
State: Peer in Cluster (Connected)
Other names:
10.70.45.34
3. Volume information
-----------------------
Affected volume: data
[root at rhsqa-grafton11 ~]# gluster volume info data
Volume Name: data
Type: Replicate
Volume ID: 9d5a9d10-f192-49ed-a6f0-c912224869e8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: rhsqa-grafton10.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick2: rhsqa-grafton11.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick3: rhsqa-grafton12.lab.eng.blr.redhat.com:/gluster_bricks/data/data
(arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
[root at rhsqa-grafton11 ~]# gluster volume status data
Status of volume: data
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick rhsqa-grafton10.lab.eng.blr.redhat.co
m:/gluster_bricks/data/data 49154 0 Y 23403
Brick rhsqa-grafton11.lab.eng.blr.redhat.co
m:/gluster_bricks/data/data 49154 0 Y 23285
Brick rhsqa-grafton12.lab.eng.blr.redhat.co
m:/gluster_bricks/data/data 49154 0 Y 23296
Self-heal Daemon on localhost N/A N/A Y 16195
Self-heal Daemon on rhsqa-grafton12.lab.eng
.blr.redhat.com N/A N/A Y 52917
Self-heal Daemon on rhsqa-grafton10.lab.eng
.blr.redhat.com N/A N/A Y 43829
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694595
[Bug 1694595] gluster fuse mount crashed, when deleting 2T image file from RHV
Manager UI
https://bugzilla.redhat.com/show_bug.cgi?id=1694604
[Bug 1694604] gluster fuse mount crashed, when deleting 2T image file from RHV
Manager UI
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 08:29:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:29:38 +0000
Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed,
when deleting 2T image file from oVirt Manager UI
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696136
Krutika Dhananjay changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |kdhananj at redhat.com
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 08:35:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:35:59 +0000
Subject: [Bugs] [Bug 1696077] Add pause and resume test case for geo-rep
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696077
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22498
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 08:36:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:36:00 +0000
Subject: [Bugs] [Bug 1696077] Add pause and resume test case for geo-rep
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696077
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22498 (tests/geo-rep: Add pause and resume
test case for geo-rep) posted (#1) for review on master by Shwetha K Acharya
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 08:42:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:42:09 +0000
Subject: [Bugs] [Bug 1696147] New: Multiple shd processes are running on
brick_mux environmet
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
Bug ID: 1696147
Summary: Multiple shd processes are running on brick_mux
environmet
Product: GlusterFS
Version: 5
Hardware: x86_64
Status: NEW
Component: glusterd
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: moagrawa at redhat.com
CC: amukherj at redhat.com, bugs at gluster.org, pasik at iki.fi
Depends On: 1683880
Blocks: 1672818 (glusterfs-6.0), 1684404
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1683880 +++
Description of problem:
Multiple shd processes are running after creating 100 volumes in a brick_mux
environment
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create a 1x3 volume
2. Enable brick_mux
3. Run the below commands
n1=
n2=
n3=
for i in {1..10}; do
  for h in {1..20}; do
    gluster v create vol-$i-$h rep 3 \
      $n1:/home/dist/brick$h/vol-$i-$h $n2:/home/dist/brick$h/vol-$i-$h \
      $n3:/home/dist/brick$h/vol-$i-$h force
    gluster v start vol-$i-$h
    sleep 1
  done
done
for k in $(gluster v list | grep -v heketi); do
  gluster v stop $k --mode=script; sleep 2
  gluster v delete $k --mode=script; sleep 2
done
Actual results:
Multiple shd processes are running and consuming system resources
Expected results:
Only one shd process should be running
Additional info:
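A quick illustrative check for the symptom (run on each node; matching on the
glustershd volfile id is an assumption about how shd appears in the process
list):
# count self-heal daemon processes; only one is expected per node
pgrep -fc glustershd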
--- Additional comment from Mohit Agrawal on 2019-03-01 08:23:03 UTC ---
Upstream patch is posted to resolve the same
https://review.gluster.org/#/c/glusterfs/+/22290/
--- Additional comment from Atin Mukherjee on 2019-03-06 15:30:41 UTC ---
(In reply to Mohit Agrawal from comment #1)
> Upstream patch is posted to resolve the same
> https://review.gluster.org/#/c/glusterfs/+/22290/
this is an upstream bug only :-) Once the mainline patch is merged and we
backport it to release-6 branch, the bug status will be corrected.
--- Additional comment from Worker Ant on 2019-03-12 11:21:18 UTC ---
REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes
are spawned on brick_mux environment) posted (#2) for review on release-6 by
MOHIT AGRAWAL
--- Additional comment from Worker Ant on 2019-03-12 20:53:28 UTC ---
REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes
are spawned on brick_mux environment) merged (#3) on release-6 by Shyamsundar
Ranganathan
--- Additional comment from Shyamsundar on 2019-03-25 16:33:26 UTC ---
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-6.0, please open a new bug report.
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for
several distributions should become available in the near future. Keep an eye
on the Gluster Users mailinglist [2] and the update infrastructure for your
distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
[Bug 1672818] GlusterFS 6.0 tracker
https://bugzilla.redhat.com/show_bug.cgi?id=1683880
[Bug 1683880] Multiple shd processes are running on brick_mux environmet
https://bugzilla.redhat.com/show_bug.cgi?id=1684404
[Bug 1684404] Multiple shd processes are running on brick_mux environmet
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 08:42:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:42:09 +0000
Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on
brick_mux environmet
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1683880
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1696147
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
[Bug 1696147] Multiple shd processes are running on brick_mux environmet
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 08:42:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:42:09 +0000
Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1696147
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
[Bug 1696147] Multiple shd processes are running on brick_mux environmet
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 08:42:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:42:09 +0000
Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on
brick_mux environmet
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1684404
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1696147
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
[Bug 1696147] Multiple shd processes are running on brick_mux environmet
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 08:42:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:42:27 +0000
Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on
brick_mux environmet
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs at gluster.org |moagrawa at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 08:44:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:44:39 +0000
Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on
brick_mux environmet
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22499
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 08:44:40 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:44:40 +0000
Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on
brick_mux environmet
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22499 (glusterfsd: Multiple shd processes
are spawned on brick_mux environment) posted (#1) for review on release-5 by
MOHIT AGRAWAL
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 08:48:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 08:48:42 +0000
Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and
files listing
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670382
joao.bauto at neuro.fchampalimaud.org changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |joao.bauto at neuro.fchampalim
| |aud.org
--- Comment #9 from joao.bauto at neuro.fchampalimaud.org ---
So I think I'm hitting this bug also.
I have an 8-brick distributed volume where Windows and Linux clients mount the
volume via Samba, and headless compute servers use the Gluster native fuse
mount. With parallel-readdir on, if a Windows client creates a new folder, the
folder is indeed created but is invisible to that Windows client. Accessing the
same Samba share from a Linux client, the folder is visible and behaves
normally. The same folder is also visible when mounting via Gluster native
fuse.
The Windows client can list existing directories and rename them, while for
files everything seems to be working fine.
Gluster servers: CentOS 7.5 with Gluster 5.3 and Samba 4.8.3-4.el7.0.1 from
@fasttrack
Clients tested: Windows 10, Ubuntu 18.10, CentOS 7.5
Volume Name: tank
Type: Distribute
Volume ID: 9582685f-07fa-41fd-b9fc-ebab3a6989cf
Status: Started
Snapshot Count: 0
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: swp-gluster-01:/tank/volume1/brick
Brick2: swp-gluster-02:/tank/volume1/brick
Brick3: swp-gluster-03:/tank/volume1/brick
Brick4: swp-gluster-04:/tank/volume1/brick
Brick5: swp-gluster-01:/tank/volume2/brick
Brick6: swp-gluster-02:/tank/volume2/brick
Brick7: swp-gluster-03:/tank/volume2/brick
Brick8: swp-gluster-04:/tank/volume2/brick
Options Reconfigured:
performance.parallel-readdir: on
performance.readdir-ahead: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
storage.batch-fsync-delay-usec: 0
performance.write-behind-window-size: 32MB
performance.stat-prefetch: on
performance.read-ahead: on
performance.read-ahead-page-count: 16
performance.rda-request-size: 131072
performance.quick-read: on
performance.open-behind: on
performance.nl-cache-timeout: 600
performance.nl-cache: on
performance.io-thread-count: 64
performance.io-cache: off
performance.flush-behind: on
performance.client-io-threads: off
performance.write-behind: off
performance.cache-samba-metadata: on
network.inode-lru-limit: 0
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
cluster.readdir-optimize: on
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 16
features.quota-deem-statfs: on
nfs.disable: on
features.quota: on
features.inode-quota: on
cluster.enable-shared-storage: disable
Cheers
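One way to check whether parallel-readdir is the trigger (a sketch only; note
this changes volume behaviour while testing) is to turn the option off and
repeat the folder creation from the Windows client:
gluster volume set tank performance.parallel-readdir off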
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 11:20:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 11:20:31 +0000
Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-04-04 11:20:31
--- Comment #19 from Worker Ant ---
REVIEW: https://review.gluster.org/22476 (cluster/dht: Fix rename journal in
changelog) merged (#1) on release-4.1 by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 16:28:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 16:28:37 +0000
Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed,
when deleting 2T image file from oVirt Manager UI
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696136
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22507
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 16:28:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 16:28:38 +0000
Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed,
when deleting 2T image file from oVirt Manager UI
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696136
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22507 (features/shard: Fix crash during
background shard deletion in a specific case) posted (#1) for review on master
by Krutika Dhananjay
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 19:44:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 19:44:23 +0000
Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1642168
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-04-04 19:44:23
--- Comment #7 from Worker Ant ---
REVIEW: https://review.gluster.org/21585 (libglusterfs: define macros needed
for cloudsync) merged (#9) on master by Vijay Bellur
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Apr 4 21:06:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 21:06:32 +0000
Subject: [Bugs] [Bug 1689799] [cluster/ec] : Fix handling of heal info cases
without locks
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1689799
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22372 (cluster/ec: Fix handling of heal info
cases without locks) merged (#5) on master by Xavi Hernandez
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Apr 2 20:26:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 02 Apr 2019 20:26:29 +0000
Subject: [Bugs] [Bug 1695327] regression test fails with brick mux enabled.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1695327
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-04-04 21:10:59
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22481 (tests/bitrot: enable self-heal daemon
before accessing the files) merged (#2) on master by Xavi Hernandez
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 22:05:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 22:05:58 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22509
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Apr 4 22:05:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 04 Apr 2019 22:05:59 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #612 from Worker Ant ---
REVIEW: https://review.gluster.org/22509 (ec: increase line coverage of ec)
posted (#1) for review on master by Xavi Hernandez
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 03:46:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:46:28 +0000
Subject: [Bugs] [Bug 1696512] New: glusterfs build is failing on rhel-6
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696512
Bug ID: 1696512
Summary: glusterfs build is failing on rhel-6
Product: GlusterFS
Version: mainline
Status: NEW
Component: build
Assignee: bugs at gluster.org
Reporter: moagrawa at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
glusterfs build is failing on RHEL 6.
Version-Release number of selected component (if applicable):
How reproducible:
Run make for glusterfs on RHEL-6
make throws the error below:
.libs/glusterd_la-glusterd-utils.o: In function `glusterd_get_volopt_content':
/root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:13333:
undefined reference to `dlclose'
.libs/glusterd_la-glusterd-utils.o: In function
`glusterd_get_value_for_vme_entry':
/root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:12890:
undefined reference to `dlclose'
.libs/glusterd_la-glusterd-volgen.o: In function `_gd_get_option_type':
/root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-volgen.c:6902:
undefined reference to `dlclose'
.libs/glusterd_la-glusterd-quota.o: In function
`_glusterd_validate_quota_opts':
/root/gluster_upstream/glusterfs/xlators/mgmt/glusterd/src/glusterd-quota.c:1947:
undefined reference to `dlclose'
collect2: ld returned 1 exit status
Steps to Reproduce:
1.
2.
3.
Actual results:
glusterfs build is failing
Expected results:
the build should not fail
Additional info:
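The undefined references to dlclose suggest that -ldl is missing from
glusterd's link line when building on RHEL-6 (an assumption based on the
errors above). A quick check from the build tree, using the path shown in the
errors:
# look for -ldl among the generated link flags for glusterd
grep -n -- -ldl xlators/mgmt/glusterd/src/Makefile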
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 03:46:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:46:45 +0000
Subject: [Bugs] [Bug 1696512] glusterfs build is failing on rhel-6
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696512
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs at gluster.org |moagrawa at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 03:52:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:52:09 +0000
Subject: [Bugs] [Bug 1696512] glusterfs build is failing on rhel-6
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696512
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22510
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 03:52:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:52:10 +0000
Subject: [Bugs] [Bug 1696512] glusterfs build is failing on rhel-6
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696512
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22510 (build: glusterfs build is failing on
RHEL-6) posted (#1) for review on master by MOHIT AGRAWAL
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 03:56:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:56:38 +0000
Subject: [Bugs] [Bug 1696513] New: Multiple shd processes are running on
brick_mux environmet
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696513
Bug ID: 1696513
Summary: Multiple shd processes are running on brick_mux
environmet
Product: GlusterFS
Version: 4.1
Hardware: x86_64
Status: NEW
Component: glusterd
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: moagrawa at redhat.com
CC: amukherj at redhat.com, bugs at gluster.org, pasik at iki.fi
Depends On: 1683880
Blocks: 1696147, 1672818 (glusterfs-6.0), 1684404
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1683880 +++
Description of problem:
Multiple shd processes are running after creating 100 volumes in a brick_mux
environment
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create a 1x3 volume
2. Enable brick_mux
3. Run the below commands
n1=
n2=
n3=
for i in {1..10}; do
  for h in {1..20}; do
    gluster v create vol-$i-$h rep 3 \
      $n1:/home/dist/brick$h/vol-$i-$h $n2:/home/dist/brick$h/vol-$i-$h \
      $n3:/home/dist/brick$h/vol-$i-$h force
    gluster v start vol-$i-$h
    sleep 1
  done
done
for k in $(gluster v list | grep -v heketi); do
  gluster v stop $k --mode=script; sleep 2
  gluster v delete $k --mode=script; sleep 2
done
Actual results:
Multiple shd processes are running and consuming system resources
Expected results:
Only one shd process should be running
Additional info:
--- Additional comment from Mohit Agrawal on 2019-03-01 08:23:03 UTC ---
Upstream patch is posted to resolve the same
https://review.gluster.org/#/c/glusterfs/+/22290/
--- Additional comment from Atin Mukherjee on 2019-03-06 15:30:41 UTC ---
(In reply to Mohit Agrawal from comment #1)
> Upstream patch is posted to resolve the same
> https://review.gluster.org/#/c/glusterfs/+/22290/
this is an upstream bug only :-) Once the mainline patch is merged and we
backport it to release-6 branch, the bug status will be corrected.
--- Additional comment from Worker Ant on 2019-03-12 11:21:18 UTC ---
REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes
are spawned on brick_mux environment) posted (#2) for review on release-6 by
MOHIT AGRAWAL
--- Additional comment from Worker Ant on 2019-03-12 20:53:28 UTC ---
REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes
are spawned on brick_mux environment) merged (#3) on release-6 by Shyamsundar
Ranganathan
--- Additional comment from Shyamsundar on 2019-03-25 16:33:26 UTC ---
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-6.0, please open a new bug report.
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for
several distributions should become available in the near future. Keep an eye
on the Gluster Users mailinglist [2] and the update infrastructure for your
distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
[Bug 1672818] GlusterFS 6.0 tracker
https://bugzilla.redhat.com/show_bug.cgi?id=1683880
[Bug 1683880] Multiple shd processes are running on brick_mux environmet
https://bugzilla.redhat.com/show_bug.cgi?id=1684404
[Bug 1684404] Multiple shd processes are running on brick_mux environment
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
[Bug 1696147] Multiple shd processes are running on brick_mux environment
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 03:56:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:56:38 +0000
Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on
brick_mux environment
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1683880
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1696513
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1696513
[Bug 1696513] Multiple shd processes are running on brick_mux environment
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 03:56:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:56:38 +0000
Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on
brick_mux environment
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1696513
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1696513
[Bug 1696513] Multiple shd processes are running on brick_mux environment
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 03:56:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:56:38 +0000
Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1696513
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1696513
[Bug 1696513] Multiple shd processes are running on brick_mux environment
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 03:56:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:56:38 +0000
Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on
brick_mux environment
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1684404
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1696513
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1696513
[Bug 1696513] Multiple shd processes are running on brick_mux environment
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 03:56:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 03:56:57 +0000
Subject: [Bugs] [Bug 1696513] Multiple shd processes are running on
brick_mux environment
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696513
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs at gluster.org |moagrawa at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 04:21:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 04:21:25 +0000
Subject: [Bugs] [Bug 1696513] Multiple shd processes are running on
brick_mux environment
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696513
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22511
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 04:21:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 04:21:26 +0000
Subject: [Bugs] [Bug 1696513] Multiple shd processes are running on
brick_mux environment
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696513
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22511 (glusterfsd: Multiple shd processes
are spawned on brick_mux environment) posted (#2) for review on release-4.1 by
MOHIT AGRAWAL
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 04:51:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 04:51:44 +0000
Subject: [Bugs] [Bug 1696518] New: builder203 does not have a valid hostname
set
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696518
Bug ID: 1696518
Summary: builder203 does not have a valid hostname set
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: dkhandel at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
After reinstallation, builder203 on AWS does not have a valid hostname set,
and hence its network services might behave weirdly.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 05:55:21 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 05:55:21 +0000
Subject: [Bugs] [Bug 1696518] builder203 does not have a valid hostname set
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696518
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com
--- Comment #1 from M. Scherer ---
Can you be a bit more specific:
- which network service behaves weirdly?
I also set the hostname (using hostnamectl), so maybe this requires a
reboot and/or a different hostname.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 06:05:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 06:05:06 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22512
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 06:05:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 06:05:07 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #613 from Worker Ant ---
REVIEW: https://review.gluster.org/22512 ([WIP]glusterd-volgen.c: skip fetching
skip-CLIOT in a loop.) posted (#1) for review on master by Yaniv Kaul
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 07:02:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 07:02:14 +0000
Subject: [Bugs] [Bug 1696518] builder203 does not have a valid hostname set
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696518
--- Comment #2 from M. Scherer ---
So, answering myself: rpc.statd didn't start after the reboot, and the hostname
was ip-172-31-38-158.us-east-2.compute.internal. After "hostnamectl
set-hostname builder203.int.aws.gluster.org", that's better. Guess we need to
automate that (I had used builder203.aws.gluster.org, which was wrong).
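A minimal sketch of what that automation could look like (the hostname and the
rpc-statd unit name come from the comment above; the exact unit name is an
assumption and may differ per distribution):
#!/bin/bash
# Post-reinstall fixup sketch for a builder: set the internal AWS hostname
# and bring rpc.statd back up (rpc-statd unit name assumed)
set -eu
hostnamectl set-hostname builder203.int.aws.gluster.org
systemctl restart rpc-statd
hostnamectl status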
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 08:35:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 08:35:03 +0000
Subject: [Bugs] [Bug 1696599] New: Fops hang when inodelk fails on the first
fop
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696599
Bug ID: 1696599
Summary: Fops hang when inodelk fails on the first fop
Product: GlusterFS
Version: mainline
Status: NEW
Component: replicate
Assignee: bugs at gluster.org
Reporter: pkarampu at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Steps:
glusterd
gluster peer probe localhost.localdomain
peer probe: success. Probe on localhost not needed
gluster --mode=script --wignore volume create r3 replica 3
localhost.localdomain:/home/gfs/r3_0 localhost.localdomain:/home/gfs/r3_1
localhost.localdomain:/home/gfs/r3_2
volume create: r3: success: please start the volume to access data
gluster --mode=script volume start r3
volume start: r3: success
mkdir: cannot create directory '/mnt/r3': File exists
mount -t glusterfs localhost.localdomain:/r3 /mnt/r3
First terminal:
# cd /mnt/r3
# touch abc
Attach gdb to the mount process and put a breakpoint on the function
afr_lock() (see the sketch after these steps).
From the second terminal:
# exec 200>abc
# echo abc >&200
# When the break point is hit, on a third terminal execute "gluster volume
stop r3"
# quit gdb
# execute "gluster volume start r3 force"
# On the first terminal execute "echo abc >&200" again; this command hangs.
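The gdb step can be done roughly like this (a sketch; the pgrep pattern for
the fuse client process is an assumption, and glusterfs debuginfo must be
installed for the afr_lock symbol to resolve):
# Attach gdb to the fuse mount process for /mnt/r3 and break in afr_lock()
# (pgrep pattern assumed; adjust it to the actual client command line)
MOUNT_PID=$(pgrep -f 'glusterfs.*/mnt/r3')
gdb -p "$MOUNT_PID" -ex 'break afr_lock' -ex 'continue'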
Version-Release number of selected component (if applicable):
How reproducible:
Always
Actual results:
Expected results:
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 08:35:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 08:35:20 +0000
Subject: [Bugs] [Bug 1696599] Fops hang when inodelk fails on the first fop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696599
Pranith Kumar K changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1688395
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 08:37:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 08:37:53 +0000
Subject: [Bugs] [Bug 1696599] Fops hang when inodelk fails on the first fop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696599
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22515
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 08:37:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 08:37:54 +0000
Subject: [Bugs] [Bug 1696599] Fops hang when inodelk fails on the first fop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696599
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from
owners_list on failure of lock-acquisition) posted (#1) for review on master by
Pranith Kumar Karampuri
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 09:06:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 09:06:48 +0000
Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed,
when deleting 2T image file from oVirt Manager UI
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696136
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22517
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 09:06:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 09:06:49 +0000
Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed,
when deleting 2T image file from oVirt Manager UI
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696136
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22517 (features/shard: Fix extra unref when
inode object is lru'd out and added back) posted (#1) for review on master by
Krutika Dhananjay
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 09:09:04 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 09:09:04 +0000
Subject: [Bugs] [Bug 1642168] changes to cloudsync xlator
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1642168
anuradha.stalur at gmail.com changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |POST
Resolution|NEXTRELEASE |---
Keywords| |Reopened
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 09:17:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 09:17:45 +0000
Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1673058
Sandro Bonazzola changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1677319 (Gluster_5_Affecting_oVirt_4.3)
Dependent Products| |Red Hat Enterprise Virtualization Manager
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1677319
[Bug 1677319] [Tracker] Gluster 5 issues affecting oVirt 4.3
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 09:49:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 09:49:30 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #614 from Worker Ant ---
REVIEW: https://review.gluster.org/22496 (cluster/afr: Invalidate inode on
change of split-brain-choice) merged (#3) on master by Pranith Kumar Karampuri
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 10:28:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 10:28:37 +0000
Subject: [Bugs] [Bug 1696633] New: GlusterFS v4.1.5 Tests from /tests/bugs/
module failing on Intel
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1696633
Bug ID: 1696633
Summary: GlusterFS v4.1.5 Tests from /tests/bugs/ module
failing on Intel
Product: GlusterFS
Version: 4.1
Hardware: x86_64
OS: Linux
Status: NEW
Component: tests
Severity: high
Assignee: bugs at gluster.org
Reporter: chandranaik2 at gmail.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Some of the tests from the /tests/bugs/ module are failing on x86 on "SUSE
Linux Enterprise Server 12 SP3" with GlusterFS v4.1.5.
The failing tests from the /tests/bugs/ module are listed below:
glusterfs-server/bug-887145.t
nfs/bug-974972.t
rpc/bug-847624.t
rpc/bug-954057.t
shard/bug-1251824.t
shard/bug-1468483.t
shard/zero-flag.t
How reproducible:
Run the tests with ./run-tests.sh, or run individual tests with
prove -vf <test file>
Steps to Reproduce:
1. Build GlusterFS v4.1.5
2. Run the tests as below
./run-tests.sh (full suite) or prove -vf <test file> (individual test)
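For example, one of the reported failing tests can be run on its own like this
(a sketch; the checkout path is a placeholder):
# Run a single failing test from the glusterfs source tree (placeholder path)
cd /path/to/glusterfs
prove -vf tests/bugs/shard/zero-flag.t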
Actual results:
Tests fail
Expected results:
Tests should pass
Additional info:
Failure Details:
glusterfs-server/bug-887145.t -
Subtests 21-24 fail with "touch: cannot touch '/mnt/glusterfs/0/dir/file':
Permission denied", whereas subtest 26 fails with "rmdir: failed to
remove '/mnt/nfs/0/dir/*': No such file or directory".
nfs/bug-974972.t
Subtest 14 fails with "rm: cannot remove '/var/run/gluster/': Is a directory".
rpc/bug-847624.t
Subtest 9, which runs "dbench -t 10 10", fails.
rpc/bug-954057.t
Subtest 16 fails to create the directory '/mnt/glusterfs/0/nobody/other':
Permission denied.
shard/bug-1251824.t, shard/bug-1468483.t
Subtests 14-26 and 40-42 fail for user 'test_user:test_user' in the test.
shard/zero-flag.t
Subtests fail as below:
TEST 17 (line 40): 2097152 echo
not ok 17 Got "" instead of "2097152", LINENUM:40
Please let us know if these are known failures on Intel.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 13:39:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 13:39:17 +0000
Subject: [Bugs] [Bug 1670303] api: bad GFAPI_4.1.6 block
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670303
Shyamsundar changed:
What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-4.1.8
Resolution|NEXTRELEASE |CURRENTRELEASE
--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-4.1.8, please open a new bug report.
glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages
for several distributions should become available in the near future. Keep an
eye on the Gluster Users mailinglist [2] and the update infrastructure for your
distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html
[2] https://www.gluster.org/pipermail/gluster-users/
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 13:39:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 13:39:17 +0000
Subject: [Bugs] [Bug 1672249] quorum count value not updated in nfs-server
vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672249
Shyamsundar changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Fixed In Version| |glusterfs-4.1.8
Resolution|--- |CURRENTRELEASE
Last Closed|2019-02-18 14:41:34 |2019-04-05 13:39:17
--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-4.1.8, please open a new bug report.
glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages
for several distributions should become available in the near future. Keep an
eye on the Gluster Users mailinglist [2] and the update infrastructure for your
distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html
[2] https://www.gluster.org/pipermail/gluster-users/
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 13:39:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 13:39:17 +0000
Subject: [Bugs] [Bug 1673265] Fix timeouts so the tests pass on AWS
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1673265
Shyamsundar changed:
What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-4.1.8
Resolution|NEXTRELEASE |CURRENTRELEASE
--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-4.1.8, please open a new bug report.
glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages
for several distributions should become available in the near future. Keep an
eye on the Gluster Users mailinglist [2] and the update infrastructure for your
distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html
[2] https://www.gluster.org/pipermail/gluster-users/
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Apr 5 13:39:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 13:39:17 +0000
Subject: [Bugs] [Bug 1687746] [geo-rep]: Checksum mismatch when 2x2 vols are
converted to arbiter
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1687746
Shyamsundar changed:
What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-4.1.8
Resolution|NEXTRELEASE |CURRENTRELEASE
--- Comment #2 from Shyamsundar ---
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-4.1.8, please open a new bug report.
glusterfs-4.1.8 has been announced on the Gluster mailinglists [1], packages
for several distributions should become available in the near future. Keep an
eye on the Gluster Users mailinglist [2] and the update infrastructure for your
distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-April/000122.html
[2] https://www.gluster.org/pipermail/gluster-users/
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Apr 5 13:39:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 05 Apr 2019 13:39:17 +0000
Subject: [Bugs] [Bug 1691292] glusterfs FUSE client crashing every few days
with 'Failed to dispatch handler'
In-Reply-To: