From bugzilla at redhat.com Tue Jan 1 15:50:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 01 Jan 2019 15:50:17 +0000
Subject: [Bugs] [Bug 1138841] allow the use of the CIDR format with
auth.allow
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1138841
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |POST
Resolution|UPSTREAM |---
Keywords| |Reopened
External Bug ID| |Gluster.org Gerrit 21970
--- Comment #5 from Worker Ant ---
REVIEW: https://review.gluster.org/21970 (Added a function to validate CIDR IP)
posted (#1) for review on master by Rinku Kothiya
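For illustration only (this is not the syntax from the patch under review, and
"myvol" is a placeholder volume name): once CIDR support lands, setting
auth.allow with a subnet would presumably look like the first command below,
whereas today an explicit address list or wildcard is needed.
# hypothetical usage after the CIDR change is merged
gluster volume set myvol auth.allow 192.168.10.0/24
# what auth.allow accepts today: explicit addresses or a wildcard
gluster volume set myvol auth.allow 192.168.10.1,192.168.10.2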
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Jan 1 18:52:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 01 Jan 2019 18:52:17 +0000
Subject: [Bugs] [Bug 1623107] FUSE client's memory leak
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1623107
--- Comment #33 from Znamensky Pavel ---
(In reply to Nithya Balachandran from comment #31)
> Then it is likely to be because the fuse client does not invalidate inodes.
> Does your workload access a lot of files? The earlier statedump showed
> around 3 million inodes in memory.
>
>...
>
> https://review.gluster.org/#/c/glusterfs/+/19778/ has a fix to invalidate
> inodes but is not targeted for release 5 as yet.
Nithya, you're right!
I built glusterfs from the current master
(https://github.com/gluster/glusterfs/tree/d9a8ccd354df6db94477bf9ecb09735194523665)
with the new invalidate inodes mechanism that you mentioned before, and RSS
memory consumption indeed became much lower.
And as you supposed, our apps quite often access a lot of files.
Here are two tests with clients on v6dev and v4.1 (the server is still on v4.1
and read-ahead=on)
The first test with default --lru-limit=0 (just did `find /in/big/dir -type
f`):
v4.1 - ~3GB RSS:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 633 6.6 18.5 3570216 3056136 ? Ssl 19:44 6:25
/usr/sbin/glusterfs --read-only --process-name fuse --volfile-server=srv
--volfile-id=/st1 /mnt/st1
v6dev - ~1.5GB RSS:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 10851 16.5 9.2 2071036 1526456 ? Ssl 19:45 15:50
/usr/sbin/glusterfs --read-only --process-name fuse --volfile-server=srv
--volfile-id=/st1 /mnt/st1
It looks good. Let's do the next test.
The second test with --lru-limit=10_000 for v6dev:
v4.1 - ~3GB RSS:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 3589 4.7 18.6 3570216 3060364 ? Ssl 13:11 18:40
/usr/sbin/glusterfs --process-name fuse --volfile-server=srv --volfile-id=/st1
/mnt/st1
v6dev - ~170MB RSS:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 24152 14.2 1.0 758768 173704 ? Ssl 13:58 49:06
/usr/sbin/glusterfs --lru-limit=10000 --process-name fuse --volfile-server=srv
--volfile-id=/st1 /mnt/st1
170MB vs. 3GB!
It's incredible!
Unfortunately, the new version has a drawback - CPU time increased 2.5x.
At the moment it doesn't matter for us.
Anyway, I'm sure this change solves our problem. And of course, we're looking
forward to a stable version with it.
Thank you a lot!
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Jan 1 20:37:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 01 Jan 2019 20:37:11 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #510 from Worker Ant ---
REVIEW: https://review.gluster.org/21971 (all: toward better string copies)
posted (#1) for review on master by Kaleb KEITHLEY
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Jan 1 20:37:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 01 Jan 2019 20:37:13 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21971
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 03:55:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 03:55:56 +0000
Subject: [Bugs] [Bug 1138841] allow the use of the CIDR format with
auth.allow
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1138841
Mohit Agrawal changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|moagrawa at redhat.com |rkothiya at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 05:44:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 05:44:24 +0000
Subject: [Bugs] [Bug 1660732] create gerrit for github project
glusterfs-containers-tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660732
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-02 05:44:24
--- Comment #4 from Nigel Babu ---
Alright. Valerii is now in the committers group for
glusterfs-container-tests.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 05:51:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 05:51:33 +0000
Subject: [Bugs] [Bug 1623107] FUSE client's memory leak
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1623107
Travers Carter changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |tcarter at noggin.com.au
--- Comment #34 from Travers Carter ---
We are seeing what looks like the same issue on glusterfs 4.1.5.
I'm not sure if further information is still needed, given the last few
comments, but I've collected client statedumps from three systems, along
with the gluster volume info, here:
https://s3.amazonaws.com/public-rhbz/glusterfs-client-4.1.5-statedumps.zip
This includes 2 x client statedumps from each of 3 systems, taken roughly 30 to
60 minutes apart.
The "webserver" and "appserver-active" gluster clients were restarted after
setting readdir-ahead to off as suggested earlier in the ticket (this didn't
seem to help much in this case), but the "webserver" client has already reached
about 15GiB VIRT in just over 48 hours.
We had also historically seen somewhat slower, but still significant fuse
client memory leaks on v3.x (I think 3.11 or 3.12), but not (or at least not
significant) on 3.7.11 or 4.0.2 with very similar workloads.
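For anyone collecting similar data, a minimal sketch of how these fuse-client
statedumps are typically captured, assuming the default statedump path
(/var/run/gluster); the pgrep pattern is only an example and should be
adjusted to the actual mount:
# ask the fuse client process to dump its state; -o picks the oldest match
PID=$(pgrep -o -f '/usr/sbin/glusterfs.*--process-name fuse')
kill -USR1 "$PID"
# dumps are written as glusterdump.<pid>.dump.<timestamp> by default
ls -l /var/run/gluster/glusterdump."$PID".dump.*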
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 06:15:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 06:15:52 +0000
Subject: [Bugs] [Bug 1662830] New: [RFE] Enable parallel-readdir by default
for all gluster volumes
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662830
Bug ID: 1662830
Summary: [RFE] Enable parallel-readdir by default for all
gluster volumes
Product: GlusterFS
Version: mainline
Status: NEW
Component: core
Keywords: FutureFeature, Performance, ZStream
Severity: high
Assignee: bugs at gluster.org
Reporter: rgowdapp at redhat.com
CC: bugs at gluster.org
Depends On: 1510724
Target Milestone: ---
Classification: Community
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1510724
[Bug 1510724] [RFE] Enable parallel-readdir by default for all gluster volumes
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 06:22:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 06:22:16 +0000
Subject: [Bugs] [Bug 1662830] [RFE] Enable parallel-readdir by default for
all gluster volumes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662830
--- Comment #1 from Raghavendra G ---
For some performance data, see:
1.
https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf
2. https://www.spinics.net/lists/gluster-users/msg34956.html
3. https://bugzilla.redhat.com/show_bug.cgi?id=1628807#c35
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 06:22:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 06:22:44 +0000
Subject: [Bugs] [Bug 1662830] [RFE] Enable parallel-readdir by default for
all gluster volumes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662830
Raghavendra G changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |rgowdapp at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 06:42:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 06:42:18 +0000
Subject: [Bugs] [Bug 1662830] [RFE] Enable parallel-readdir by default for
all gluster volumes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662830
--- Comment #2 from Raghavendra G ---
Also see:
1. https://lists.gluster.org/pipermail/gluster-devel/2018-September/055419.html
2. https://lists.gnu.org/archive/html/gluster-devel/2013-09/msg00034.html
>From a mail to gluster-devel titled "serialized readdir(p) across subvols and
effect on performance"
All,
As many of us are aware, readdir(p)s are serialized across DHT subvols. One of
the intuitive first reactions for this algorithm is that readdir(p) is going to
be slow.
However, this is only partly true: reading the contents of a directory is
normally split into multiple readdir(p) calls, and most of the time (when a
directory is sufficiently large that its dentries and inode data on each
subvol exceed a typical readdir(p) buffer size - 128KB when readdir-ahead is
enabled and 4KB on fuse when readdir-ahead is disabled) a single readdir(p)
request is served from a single subvolume (or two subvolumes in the worst
case), and hence a single readdir(p) is not serialized across all subvolumes.
Having said that, there are definitely cases where a single readdir(p) request
can be serialized across many subvolumes. The best example is a readdir(p)
request on an empty directory. Other relevant examples are directories that
don't have enough dentries to fill a single readdir(p) buffer on each
subvolume of DHT. This is where performance.parallel-readdir helps. Also,
note that this is the same reason why making the cache-size of each
readdir-ahead instance (loaded as a parent of each DHT subvolume) much bigger
than a single readdir(p) buffer size won't really improve performance in
proportion to cache-size when performance.parallel-readdir is enabled.
Though this is not a new observation [1] (I stumbled upon [1] after realizing
the above myself independently while working on performance.parallel-readdir),
I feel this is a common misconception (I ran into a similar argument while
trying to explain the DHT architecture to someone new to Glusterfs recently)
and hence thought of writing out a mail to clarify it.
[1] https://lists.gnu.org/archive/html/gluster-devel/2013-09/msg00034.html
regards,
Raghavendra
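For reference, a minimal sketch of how parallel-readdir is toggled per volume
today (the patch under review makes it the default); "myvol" is a placeholder,
and parallel-readdir only takes effect with readdir-ahead enabled:
gluster volume set myvol performance.readdir-ahead on
gluster volume set myvol performance.parallel-readdir on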
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 06:53:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 06:53:34 +0000
Subject: [Bugs] [Bug 1662830] [RFE] Enable parallel-readdir by default for
all gluster volumes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662830
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/21973 (performance/parallel-readdir: enable
by default) posted (#1) for review on master by Raghavendra G
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 07:02:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 07:02:46 +0000
Subject: [Bugs] [Bug 1662838] New: FUSE mount seems to be hung and not
accessible
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662838
Bug ID: 1662838
Summary: FUSE mount seems to be hung and not accessible
Product: Red Hat Gluster Storage
Status: NEW
Component: fuse
Severity: high
Assignee: csaba at redhat.com
Reporter: tdesala at redhat.com
QA Contact: rhinduja at redhat.com
CC: bugs at gluster.org, nbalacha at redhat.com,
rhs-bugs at redhat.com, sankarshan at redhat.com,
storage-qa-internal at redhat.com, tdesala at redhat.com
Depends On: 1659334
Target Milestone: ---
Classification: Red Hat
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1659334
[Bug 1659334] FUSE mount seems to be hung and not accessible
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 07:02:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 07:02:46 +0000
Subject: [Bugs] [Bug 1659334] FUSE mount seems to be hung and not accessible
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659334
Prasad Desala changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1662838
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1662838
[Bug 1662838] FUSE mount seems to be hung and not accessible
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 07:02:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 07:02:48 +0000
Subject: [Bugs] [Bug 1662838] FUSE mount seems to be hung and not accessible
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662838
Red Hat Bugzilla Rules Engine changed:
What |Removed |Added
----------------------------------------------------------------------------
Keywords| |ZStream
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 07:15:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 07:15:47 +0000
Subject: [Bugs] [Bug 1662838] FUSE mount seems to be hung and not accessible
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662838
nchilaka changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |nchilaka at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 07:29:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 07:29:02 +0000
Subject: [Bugs] [Bug 1654270] glusterd crashed with seg fault possibly
during node reboot while volume creates and deletes were happening
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654270
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |POST
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/21974 (glusterd: kill the process without
releasing the cleanup mutex lock) posted (#1) for review on master by Sanju
Rakonde
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 07:29:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 07:29:03 +0000
Subject: [Bugs] [Bug 1654270] glusterd crashed with seg fault possibly
during node reboot while volume creates and deletes were happening
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654270
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21974
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 07:29:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 07:29:09 +0000
Subject: [Bugs] [Bug 1362129] rename of a file can cause data loss in an
replica/arbiter volume configuration
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1362129
Ravishankar N changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(ravishankar at redha |
|t.com) |
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 08:41:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 08:41:31 +0000
Subject: [Bugs] [Bug 1624724] ctime: Enable ctime feature by default and
also improve usability by providing single option to enable
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1624724
Raghavendra G changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |ASSIGNED
CC| |rgowdapp at redhat.com
Resolution|NEXTRELEASE |---
Keywords| |Reopened
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 08:41:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 08:41:53 +0000
Subject: [Bugs] [Bug 1662838] FUSE mount seems to be hung and not accessible
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662838
Vijay Avuthu changed:
What |Removed |Added
----------------------------------------------------------------------------
Keywords| |Automation
CC| |vavuthu at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 10:54:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 10:54:16 +0000
Subject: [Bugs] [Bug 1654103] Invalid memory read after freed in
dht_rmdir_readdirp_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654103
Sayalee changed:
What |Removed |Added
----------------------------------------------------------------------------
QA Contact|tdesala at redhat.com |saraut at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 10:54:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 10:54:29 +0000
Subject: [Bugs] [Bug 1659439] Memory leak: dict_t leak in rda_opendir
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659439
Sayalee changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |saraut at redhat.com
QA Contact|tdesala at redhat.com |saraut at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 11:09:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 11:09:03 +0000
Subject: [Bugs] [Bug 1662906] New: Longevity: glusterfsd(brick process)
crashed when we do volume creates and deletes
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662906
Bug ID: 1662906
Summary: Longevity: glusterfsd(brick process) crashed when we
do volume creates and deletes
Product: GlusterFS
Version: mainline
Status: NEW
Component: core
Keywords: ZStream
Severity: urgent
Priority: high
Assignee: bugs at gluster.org
Reporter: moagrawa at redhat.com
CC: bugs at gluster.org
Depends On: 1662828
Target Milestone: ---
Classification: Community
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1662828
[Bug 1662828] Longevity: glusterfsd(brick process) crashed when we do volume
creates and deletes
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 11:10:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 11:10:59 +0000
Subject: [Bugs] [Bug 1662906] Longevity: glusterfsd(brick process) crashed
when we do volume creates and deletes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662906
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/21976 (core: brick process is crashed at the
time of spawn thread) posted (#1) for review on master by MOHIT AGRAWAL
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 11:11:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 11:11:00 +0000
Subject: [Bugs] [Bug 1662906] Longevity: glusterfsd(brick process) crashed
when we do volume creates and deletes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662906
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21976
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 11:28:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 11:28:10 +0000
Subject: [Bugs] [Bug 1654270] glusterd crashed with seg fault possibly
during node reboot while volume creates and deletes were happening
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654270
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |MODIFIED
--- Comment #4 from Worker Ant ---
REVIEW: https://review.gluster.org/21974 (glusterd: kill the process without
releasing the cleanup mutex lock) posted (#1) for review on master by Sanju
Rakonde
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 12:29:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 12:29:50 +0000
Subject: [Bugs] [Bug 1662838] FUSE mount seems to be hung and not accessible
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662838
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(tdesala at redhat.co
| |m)
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 12:54:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 12:54:26 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #511 from Worker Ant ---
REVIEW: https://review.gluster.org/21977 (timer-wheel: run the timer function
outside of locked region) posted (#1) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 12:54:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 12:54:27 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21977
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 12:59:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 12:59:18 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #512 from Worker Ant ---
REVIEW: https://review.gluster.org/21978 (syncop: move CALLOC -> MALLOC) posted
(#1) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 12:59:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 12:59:19 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21978
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Jan 2 13:16:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 13:16:56 +0000
Subject: [Bugs] [Bug 1662838] FUSE mount seems to be hung and not accessible
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662838
Prasad Desala changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(tdesala at redhat.co |
|m) |
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 13:40:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 13:40:47 +0000
Subject: [Bugs] [Bug 1138841] allow the use of the CIDR format with
auth.allow
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1138841
--- Comment #6 from Worker Ant ---
REVIEW: https://review.gluster.org/21980 (Modified few functions to isolate
cidr feature) posted (#1) for review on master by Rinku Kothiya
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Jan 2 13:40:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 02 Jan 2019 13:40:48 +0000
Subject: [Bugs] [Bug 1138841] allow the use of the CIDR format with
auth.allow
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1138841
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21980
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 00:35:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 00:35:03 +0000
Subject: [Bugs] [Bug 1105277] Failure to execute gverify.sh.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1105277
--- Comment #8 from vnosov ---
Here is some additional info about the geo-replication failure to use the log
file /var/log/glusterfs/cli.log.
This problem shows up on the geo-replication slave system. The log file
/var/log/glusterfs/cli.log
is created and updated by the gluster process that runs on the slave system.
This gives the log file the following attributes:
[root at SC-10-10-63-182 log]# ls -l /var/log/glusterfs/cli.log
-rw------- 1 root root 72629 Dec 31 15:24 /var/log/glusterfs/cli.log
If geo-replication is based on SSH access to the slave as a non-"root" user,
for example "nasgorep" from group "nasgorep",
all handling of /var/log/glusterfs/cli.log on the slave, including by the
slave's gluster,
is successful when the log file has these attributes:
[root at SC-10-10-63-182 log]# ls -l /var/log/glusterfs/cli.log
-rw-rw---- 1 root nasgorep 41553 Jan 2 16:00 /var/log/glusterfs/cli.log
The problem is that GlusterFS 5.2 neither provides these settings for the log
file nor
lets geo-replication use it currently.
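A workaround sketch based on the attributes shown above (the "nasgorep" group
is just the example from this comment; adjust to the actual geo-replication
user's group):
chgrp nasgorep /var/log/glusterfs/cli.log
chmod 0660 /var/log/glusterfs/cli.log
# expected result, matching the working case above:
ls -l /var/log/glusterfs/cli.log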
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 04:00:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 04:00:44 +0000
Subject: [Bugs] [Bug 1663077] New: memory leak in mgmt handshake
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663077
Bug ID: 1663077
Summary: memory leak in mgmt handshake
Product: GlusterFS
Version: mainline
Status: NEW
Component: glusterd
Assignee: bugs at gluster.org
Reporter: zhhuan at gmail.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Found a memory leak in mgmt handling handshake.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 04:02:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 04:02:27 +0000
Subject: [Bugs] [Bug 1663077] memory leak in mgmt handshake
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663077
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/21981 (glusterd: fix memory leak in
handshake) posted (#1) for review on master by Zhang Huan
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 04:02:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 04:02:28 +0000
Subject: [Bugs] [Bug 1663077] memory leak in mgmt handshake
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663077
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21981
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 05:26:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 05:26:06 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #513 from Worker Ant ---
REVIEW: https://review.gluster.org/21982 (extras: Add readdir-ahead to samba
group command) posted (#1) for review on master by Anoop C S
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 05:26:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 05:26:07 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21982
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 05:43:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 05:43:36 +0000
Subject: [Bugs] [Bug 1623107] FUSE client's memory leak
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1623107
--- Comment #35 from Travers Carter ---
I believe that I'm able to artificially trigger this using fs_mark, for
example:
mkdir /srv/gluster/fsmark
cd /srv/gluster/fsmark
fs_mark -L 500 -d $PWD -v -S 0 -D 128 -n 1000 -s $[8*1024]
That's 500 rounds of 128 threads each creating and deleting 1000 8KiB files
each in a per-thread subdirectory, where /srv/gluster is a gluster volume
mounted with the fuse client.
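One hedged way to watch the fuse client's memory while that fs_mark run is in
progress (the pgrep pattern is an example; adjust it to the mount being
tested):
watch -n 30 'ps -o pid,rss,vsz,cmd -p "$(pgrep -o -f "glusterfs.*--process-name fuse")"'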
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 05:58:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 05:58:35 +0000
Subject: [Bugs] [Bug 1623107] FUSE client's memory leak
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1623107
--- Comment #36 from Nithya Balachandran ---
(In reply to Znamensky Pavel from comment #33)
> (In reply to Nithya Balachandran from comment #31)
> > Then it is likely to be because the fuse client does not invalidate inodes.
> > Does your workload access a lot of files? The earlier statedump showed
> > around 3 million inodes in memory.
> >
> >...
> >
> > https://review.gluster.org/#/c/glusterfs/+/19778/ has a fix to invalidate
> > inodes but is not targeted for release 5 as yet.
>
>
> Nithya, you're right!
> I built glusterfs from the current master
> (https://github.com/gluster/glusterfs/tree/
> d9a8ccd354df6db94477bf9ecb09735194523665) with the new invalidate inodes
> mechanism that you mentioned before, and RSS memory consumption indeed
> became much lower.
> And as you supposed our apps quite often access a lot of files.
> Here are two tests with clients on v6dev and v4.1 (the server is still on
> v4.1 and read-ahead=on)
>
> The first test with default --lru-limit=0 (just did `find /in/big/dir -type
> f`):
>
> v4.1 - ~3GB RSS:
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 633 6.6 18.5 3570216 3056136 ? Ssl 19:44 6:25
> /usr/sbin/glusterfs --read-only --process-name fuse --volfile-server=srv
> --volfile-id=/st1 /mnt/st1
>
> v6dev - ~1.5GB RSS:
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 10851 16.5 9.2 2071036 1526456 ? Ssl 19:45 15:50
> /usr/sbin/glusterfs --read-only --process-name fuse --volfile-server=srv
> --volfile-id=/st1 /mnt/st1
>
> It looks good. Let's do the next test.
> The second test with --lru-limit=10_000 for v6dev:
>
> v4.1 - ~3GB RSS:
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 3589 4.7 18.6 3570216 3060364 ? Ssl 13:11 18:40
> /usr/sbin/glusterfs --process-name fuse --volfile-server=srv
> --volfile-id=/st1 /mnt/st1
>
> v6dev - ~170MB RSS:
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 24152 14.2 1.0 758768 173704 ? Ssl 13:58 49:06
> /usr/sbin/glusterfs --lru-limit=10000 --process-name fuse
> --volfile-server=srv --volfile-id=/st1 /mnt/st1
>
> 170MB vs. 3GB!
> It's incredible!
> Unfortunately, the new version has a drawback - CPU time increased 2.5x
> times. At the moment it doesn't matter for us.
> Anyway, I'm sure this change solves our problem. And of course, we're
> looking forward to a stable version with it.
> Thank you a lot!
Thank you for testing this. I'm glad to hear the patch is working as expected
to keep the memory use down.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 06:06:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 06:06:23 +0000
Subject: [Bugs] [Bug 1623107] FUSE client's memory leak
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1623107
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|mchangir at redhat.com |sunkumar at redhat.com
--- Comment #37 from Amar Tumballi ---
> Unfortunately, the new version has a drawback - CPU time increased 2.5x
> times. At the moment it doesn't matter for us.
> Anyway, I'm sure this change solves our problem. And of course, we're
> looking forward to a stable version with it.
While a release with this patch merged/tested is another 50 days away, we
surely would like to reduce the CPU load you see too. Whenever you get time,
if you can capture CPU info with "perf record -ag --call-graph=dwarf -o
perf.data -p <pid of the glusterfs process>", and then check "perf report" to
see what actually caused the CPU usage, it will help us resolve that too.
Also note that lru-limit=10000 may not be a good value when many files are
accessed. I recommend something like 64k at least. But it depends on your
memory needs too. So if you can give 512MB - 1GB RAM to glusterfs, it's better
at least for performance.
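To make that concrete, a sketch combining both suggestions; the mount options
are copied from comment #33, and the lru-limit value and pgrep pattern are
placeholders to adjust:
# fuse client with a larger lru-limit (~64k as suggested above)
/usr/sbin/glusterfs --lru-limit=65536 --process-name fuse \
    --volfile-server=srv --volfile-id=/st1 /mnt/st1
# capture a CPU profile of the running client, then inspect it
perf record -ag --call-graph=dwarf -o perf.data -p "$(pgrep -o -f 'glusterfs.*--volfile-id=/st1')"
perf report -i perf.data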
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 06:12:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 06:12:45 +0000
Subject: [Bugs] [Bug 1663089] New: Make GD2 container nightly and push it
docker hub
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663089
Bug ID: 1663089
Summary: Make GD2 container nightly and push it docker hub
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: amukherj at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
During the GCS scale testing effort, we identified a couple of major issues in
GD2 for which the PRs were posted and merged last night, but apparently they
missed the window of yesterday's nightly build, and hence we're blocked
till this evening for picking up the GD2 container image.
If we can build the container from the latest GD2 head and push it to docker
hub right away, it'd be great and we should get unblocked.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 06:15:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 06:15:05 +0000
Subject: [Bugs] [Bug 1663089] Make GD2 container nightly and push it docker
hub
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663089
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |nigelb at redhat.com
--- Comment #1 from Nigel Babu ---
Did it make it to the GD2 nightly RPM build?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 07:00:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 07:00:46 +0000
Subject: [Bugs] [Bug 1663089] Make GD2 container nightly and push it docker
hub
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663089
--- Comment #2 from Atin Mukherjee ---
As per https://ci.centos.org/view/Gluster/job/gluster_gd2-nightly-rpms/ , it
seems like the last build was 6 hours 49 minutes ago, which means the required
PRs should be included in the rpms.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 07:28:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 07:28:42 +0000
Subject: [Bugs] [Bug 1663102] New: Change default value for client side heal
to off for replicate volumes
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663102
Bug ID: 1663102
Summary: Change default value for client side heal to off for
replicate volumes
Product: GlusterFS
Version: mainline
Status: NEW
Component: replicate
Assignee: bugs at gluster.org
Reporter: sheggodu at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Client-side heals on AFR volumes slow down systems when top-level
directories need healing. Relying on server-side heal by default keeps the
system in a stable state.
This bug is raised to set the default value for client-side heal to "off" for
AFR volumes.
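For context, a sketch of the per-volume client-side heal options this refers
to ("myvol" is a placeholder); the change proposed by this bug would make
"off" the default, but an admin can already turn them off explicitly:
gluster volume set myvol cluster.data-self-heal off
gluster volume set myvol cluster.metadata-self-heal off
gluster volume set myvol cluster.entry-self-heal off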
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 07:29:01 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 07:29:01 +0000
Subject: [Bugs] [Bug 1663102] Change default value for client side heal to
off for replicate volumes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663102
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |sheggodu at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 07:37:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 07:37:22 +0000
Subject: [Bugs] [Bug 1663102] Change default value for client side heal to
off for replicate volumes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663102
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/21938 (cluster/afr: Disable client side
heals in AFR by default.) posted (#6) for review on master by Sunil Kumar
Acharya
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 07:37:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 07:37:23 +0000
Subject: [Bugs] [Bug 1663102] Change default value for client side heal to
off for replicate volumes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663102
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21938
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 09:11:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:11:51 +0000
Subject: [Bugs] [Bug 1651323] Tracker bug for all leases related issues
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1651323
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21985
--- Comment #12 from Worker Ant ---
REVIEW: https://review.gluster.org/21985 (gfapi: Access fs->oldvolfile under
mutex lock) posted (#1) for review on release-5 by soumya k
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:12:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:12:44 +0000
Subject: [Bugs] [Bug 1662838] FUSE mount seems to be hung and not accessible
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662838
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |UPSTREAM
Last Closed| |2019-01-03 09:12:44
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 08:50:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 08:50:23 +0000
Subject: [Bugs] [Bug 1663077] memory leak in mgmt handshake
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663077
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |MODIFIED
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/21981 (glusterd: fix memory leak in
handshake) posted (#1) for review on master by Zhang Huan
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:27:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:27:46 +0000
Subject: [Bugs] [Bug 1660577] [Ganesha] Ganesha failed on one node while
exporting volumes in loop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660577
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Version|4.1 |mainline
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 09:28:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:28:03 +0000
Subject: [Bugs] [Bug 1660577] [Ganesha] Ganesha failed on one node while
exporting volumes in loop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660577
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |high
Hardware|Unspecified |All
OS|Unspecified |All
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 09:29:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:29:00 +0000
Subject: [Bugs] [Bug 1663131] New: [Ganesha] Ganesha failed on one node
while exporting volumes in loop
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663131
Bug ID: 1663131
Summary: [Ganesha] Ganesha failed on one node while exporting
volumes in loop
Product: GlusterFS
Version: 5
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Keywords: ZStream
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org
Depends On: 1660577
Blocks: 1658132
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1660577 +++
+++ This bug was initially created as a clone of Bug #1658132 +++
Description of problem:
-----------------------
Ganesha entered a failed state on one node of the four-node cluster while
exporting volumes in a loop. Tried to export 109 volumes one after the other
in a loop.
===============================================================================
Version-Release number of selected component (if applicable):
-------------------------------------------------------------
nfs-ganesha-2.5.5-10.el7rhgs.x86_64
nfs-ganesha-gluster-2.5.5-10.el7rhgs.x86_64
glusterfs-ganesha-3.12.2-28.el7rhgs.x86_64
===============================================================================
How reproducible:
-----------------
1/1
===============================================================================
Steps to Reproduce:
-------------------
1. Create 4 node ganesha cluster.
2. Create and start 100 or more volumes.
3. Verify status of all volumes.
4. Export volumes one after the other in a loop.
===============================================================================
Actual results:
---------------
Ganesha entered failed state in one of the nodes.
===============================================================================
Expected results:
-----------------
No failure should be observed.
==============================================================================
Additional info:
----------------
* All volumes were exported on other 3 nodes in the 4 node cluster.
* The failure observed is on a different node than the one from where export
operation was executed.
Setup is kept in same state and can be shared if required.
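A hedged sketch of step 4 of the reproduction above, assuming the
ganesha.enable volume option used by the gluster/NFS-Ganesha integration
(volume selection and export mechanism may differ in the actual test):
for v in $(gluster volume list); do
    gluster volume set "$v" ganesha.enable on
done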
--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-12-11
10:35:37 UTC ---
This bug is automatically being proposed for a Z-stream release of Red Hat
Gluster Storage 3 under active development and open for bug fixes, by setting
the release flag 'rhgs?3.4.z' to '?'.
If this bug should be proposed for a different release, please manually change
the proposed release flag.
--- Additional comment from Jilju Joy on 2018-12-11 10:37:00 UTC ---
Logs and sos report will be shared shortly.
--- Additional comment from Jilju Joy on 2018-12-11 11:59:20 UTC ---
Logs and sosreport :
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/jj/1658132/
--- Additional comment from Soumya Koduri on 2018-12-11 16:26:13 UTC ---
(gdb) bt
#0 __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:74
#1 0x00007f5d18130664 in glfs_mgmt_getspec_cbk (req=,
iov=, count=, myframe=0x7f5b74002cb0) at
glfs-mgmt.c:625
#2 0x00007f5c8e9e8960 in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7f5a08cc5760, pollin=pollin at entry=0x7f5b7f09acb0) at
rpc-clnt.c:778
#3 0x00007f5c8e9e8d03 in rpc_clnt_notify (trans=,
mydata=0x7f5a08cc5790, event=, data=0x7f5b7f09acb0) at
rpc-clnt.c:971
#4 0x00007f5c8e9e4a73 in rpc_transport_notify (this=this at entry=0x7f5a08cc5930,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f5b7f09acb0)
at rpc-transport.c:538
#5 0x00007f5c849e5576 in socket_event_poll_in (this=this at entry=0x7f5a08cc5930,
notify_handled=) at socket.c:2322
#6 0x00007f5c849e7b1c in socket_event_handler (fd=565, idx=0, gen=1,
data=0x7f5a08cc5930, poll_in=1, poll_out=0, poll_err=0) at socket.c:2474
#7 0x00007f5c8ec7e824 in event_dispatch_epoll_handler (event=0x7f59e1f44500,
event_pool=0x7f5a08cb74f0) at event-epoll.c:583
#8 event_dispatch_epoll_worker (data=0x7f5b760922a0) at event-epoll.c:659
#9 0x00007f5d20e44dd5 in start_thread (arg=0x7f59e1f45700) at
pthread_create.c:307
#10 0x00007f5d2050fead in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb) f 1
#1 0x00007f5d18130664 in glfs_mgmt_getspec_cbk (req=,
iov=, count=, myframe=0x7f5b74002cb0) at
glfs-mgmt.c:625
625 (memcmp (fs->oldvolfile, rsp.spec, size) == 0)) {
(gdb) l
620
621 ret = 0;
622 size = rsp.op_ret;
623
624 if ((size == fs->oldvollen) &&
625 (memcmp (fs->oldvolfile, rsp.spec, size) == 0)) {
626 gf_msg (frame->this->name, GF_LOG_INFO, 0,
627 API_MSG_VOLFILE_INFO,
628 "No change in volfile, continuing");
629 goto out;
(gdb) p fs->olvollen
There is no member named olvollen.
(gdb) p fs->oldvollen
$1 = 1674
(gdb) p size
$2 = 1674
(gdb) p fs->oldvolfile
$3 = 0x7f5b76097cd0 "volume testvol82201-client-0\n type protocol/client\n
option send-gids true\n option transport.socket.keepalive-count 9\n
option transport.socket.keepalive-interval 2\n option transport.sock"...
(gdb) p rsp.spec
$4 = 0x7f5b7f9da9d0 "volume testvol82201-client-0\n type protocol/client\n
option send-gids true\n option transport.socket.keepalive-count 9\n
option transport.socket.keepalive-interval 2\n option transport.sock"...
(gdb)
The crash happened while doing memcmp of fs->oldvolfile and the new volfile
received in the response (rsp.spec). The contents of both the variables seem
fine in the core.
>From code reading observed that we update fs->oldvollen and fs->oldvolfile
under fs->mutex lock, but that lock is not taken while reading those values
here in glfs_mgmt_spec_cbk. That could have resulted in the crash while
accessing un/partially intialized variable.
@Jilju,
Are you able to consistently reproduce this issue?
--- Additional comment from Daniel Gryniewicz on 2018-12-11 16:33:41 UTC ---
Are the buffers smaller than 1674? It might be going off the end of one of the
buffers.
--- Additional comment from Jilju Joy on 2018-12-12 04:50:00 UTC ---
(In reply to Soumya Koduri from comment #4)
> (gdb) bt
> #0 __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:74
> #1 0x00007f5d18130664 in glfs_mgmt_getspec_cbk (req=,
> iov=, count=, myframe=0x7f5b74002cb0) at
> glfs-mgmt.c:625
> #2 0x00007f5c8e9e8960 in rpc_clnt_handle_reply
> (clnt=clnt at entry=0x7f5a08cc5760, pollin=pollin at entry=0x7f5b7f09acb0) at
> rpc-clnt.c:778
> #3 0x00007f5c8e9e8d03 in rpc_clnt_notify (trans=,
> mydata=0x7f5a08cc5790, event=, data=0x7f5b7f09acb0) at
> rpc-clnt.c:971
> #4 0x00007f5c8e9e4a73 in rpc_transport_notify
> (this=this at entry=0x7f5a08cc5930,
> event=event at entry=RPC_TRANSPORT_MSG_RECEIVED,
> data=data at entry=0x7f5b7f09acb0) at rpc-transport.c:538
> #5 0x00007f5c849e5576 in socket_event_poll_in
> (this=this at entry=0x7f5a08cc5930, notify_handled=) at
> socket.c:2322
> #6 0x00007f5c849e7b1c in socket_event_handler (fd=565, idx=0, gen=1,
> data=0x7f5a08cc5930, poll_in=1, poll_out=0, poll_err=0) at socket.c:2474
> #7 0x00007f5c8ec7e824 in event_dispatch_epoll_handler
> (event=0x7f59e1f44500, event_pool=0x7f5a08cb74f0) at event-epoll.c:583
> #8 event_dispatch_epoll_worker (data=0x7f5b760922a0) at event-epoll.c:659
> #9 0x00007f5d20e44dd5 in start_thread (arg=0x7f59e1f45700) at
> pthread_create.c:307
> #10 0x00007f5d2050fead in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
> (gdb) f 1
> #1 0x00007f5d18130664 in glfs_mgmt_getspec_cbk (req=,
> iov=, count=, myframe=0x7f5b74002cb0) at
> glfs-mgmt.c:625
> 625 (memcmp (fs->oldvolfile, rsp.spec, size) == 0)) {
> (gdb) l
> 620
> 621 ret = 0;
> 622 size = rsp.op_ret;
> 623
> 624 if ((size == fs->oldvollen) &&
> 625 (memcmp (fs->oldvolfile, rsp.spec, size) == 0)) {
> 626 gf_msg (frame->this->name, GF_LOG_INFO, 0,
> 627 API_MSG_VOLFILE_INFO,
> 628 "No change in volfile, continuing");
> 629 goto out;
> (gdb) p fs->olvollen
> There is no member named olvollen.
> (gdb) p fs->oldvollen
> $1 = 1674
> (gdb) p size
> $2 = 1674
> (gdb) p fs->oldvolfile
> $3 = 0x7f5b76097cd0 "volume testvol82201-client-0\n type
> protocol/client\n option send-gids true\n option
> transport.socket.keepalive-count 9\n option
> transport.socket.keepalive-interval 2\n option transport.sock"...
> (gdb) p rsp.spec
> $4 = 0x7f5b7f9da9d0 "volume testvol82201-client-0\n type
> protocol/client\n option send-gids true\n option
> transport.socket.keepalive-count 9\n option
> transport.socket.keepalive-interval 2\n option transport.sock"...
> (gdb)
>
>
> The crash happened while doing memcmp of fs->oldvolfile and the new volfile
> received in the response (rsp.spec). The contents of both the variables seem
> fine in the core.
>
> From code reading observed that we update fs->oldvollen and fs->oldvolfile
> under fs->mutex lock, but that lock is not taken while reading those values
> here in glfs_mgmt_spec_cbk. That could have resulted in the crash while
> accessing un/partially intialized variable.
>
> @Jilju,
>
> Are you able to consistently reproduce this issue?
Hi Soumya,
The first occurrence is reported here. I kept the setup in the same state to
help with debugging. I can share the setup if required, or I can try to
reproduce.
--- Additional comment from Worker Ant on 2018-12-18 17:05:42 UTC ---
REVIEW: https://review.gluster.org/21882 (gfapi: Access fs->oldvolfile under
mutex lock) posted (#1) for review on master by soumya k
--- Additional comment from Worker Ant on 2018-12-26 02:17:03 UTC ---
REVIEW: https://review.gluster.org/21882 (gfapi: Access fs->oldvolfile under
mutex lock) posted (#2) for review on master by Amar Tumballi
--- Additional comment from Worker Ant on 2018-12-26 10:33:07 UTC ---
REVIEW: https://review.gluster.org/21927 (gfapi: nit cleanup related to
releasing fs->mutex lock) posted (#1) for review on master by soumya k
--- Additional comment from Worker Ant on 2018-12-31 16:10:41 UTC ---
REVIEW: https://review.gluster.org/21927 (gfapi: nit cleanup related to
releasing fs->mutex lock) posted (#2) for review on master by Kaleb KEITHLEY
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1658132
[Bug 1658132] [Ganesha] Ganesha failed on one node while exporting volumes in
loop
https://bugzilla.redhat.com/show_bug.cgi?id=1660577
[Bug 1660577] [Ganesha] Ganesha failed on one node while exporting volumes in
loop
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:29:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:29:00 +0000
Subject: [Bugs] [Bug 1660577] [Ganesha] Ganesha failed on one node while
exporting volumes in loop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660577
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1663131
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1663131
[Bug 1663131] [Ganesha] Ganesha failed on one node while exporting volumes in
loop
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 09:29:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:29:28 +0000
Subject: [Bugs] [Bug 1663132] New: [Ganesha] Ganesha failed on one node
while exporting volumes in loop
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663132
Bug ID: 1663132
Summary: [Ganesha] Ganesha failed on one node while exporting
volumes in loop
Product: GlusterFS
Version: 4.1
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Keywords: ZStream
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org
Depends On: 1660577
Blocks: 1658132, 1663131
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1660577 +++
+++ This bug was initially created as a clone of Bug #1658132 +++
Description of problem:
-----------------------
Ganesha entered a failed state on one node of the four-node cluster while
exporting volumes in a loop. Tried to export 109 volumes one after the other in
a loop.
===============================================================================
Version-Release number of selected component (if applicable):
-------------------------------------------------------------
nfs-ganesha-2.5.5-10.el7rhgs.x86_64
nfs-ganesha-gluster-2.5.5-10.el7rhgs.x86_64
glusterfs-ganesha-3.12.2-28.el7rhgs.x86_64
===============================================================================
How reproducible:
-----------------
1/1
===============================================================================
Steps to Reproduce:
-------------------
1. Create 4 node ganesha cluster.
2. Create and start 100 or more volumes.
3. Verify status of all volumes.
4. Export volumes one after the other in a loop.
===============================================================================
Actual results:
---------------
Ganesha entered a failed state on one of the nodes.
===============================================================================
Expected results:
-----------------
No failure should be observed.
==============================================================================
Additional info:
----------------
* All volumes were exported on the other 3 nodes in the 4-node cluster.
* The failure was observed on a node different from the one where the export
operation was executed.
The setup is kept in the same state and can be shared if required.
--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-12-11
10:35:37 UTC ---
This bug is automatically being proposed for a Z-stream release of Red Hat
Gluster Storage 3 under active development and open for bug fixes, by setting
the release flag 'rhgs?3.4.z' to '?'.
If this bug should be proposed for a different release, please manually change
the proposed release flag.
--- Additional comment from Jilju Joy on 2018-12-11 10:37:00 UTC ---
Logs and sos report will be shared shortly.
--- Additional comment from Jilju Joy on 2018-12-11 11:59:20 UTC ---
Logs and sosreport :
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/jj/1658132/
--- Additional comment from Soumya Koduri on 2018-12-11 16:26:13 UTC ---
(gdb) bt
#0 __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:74
#1 0x00007f5d18130664 in glfs_mgmt_getspec_cbk (req=,
iov=, count=, myframe=0x7f5b74002cb0) at
glfs-mgmt.c:625
#2 0x00007f5c8e9e8960 in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7f5a08cc5760, pollin=pollin at entry=0x7f5b7f09acb0) at
rpc-clnt.c:778
#3 0x00007f5c8e9e8d03 in rpc_clnt_notify (trans=,
mydata=0x7f5a08cc5790, event=, data=0x7f5b7f09acb0) at
rpc-clnt.c:971
#4 0x00007f5c8e9e4a73 in rpc_transport_notify (this=this at entry=0x7f5a08cc5930,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f5b7f09acb0)
at rpc-transport.c:538
#5 0x00007f5c849e5576 in socket_event_poll_in (this=this at entry=0x7f5a08cc5930,
notify_handled=) at socket.c:2322
#6 0x00007f5c849e7b1c in socket_event_handler (fd=565, idx=0, gen=1,
data=0x7f5a08cc5930, poll_in=1, poll_out=0, poll_err=0) at socket.c:2474
#7 0x00007f5c8ec7e824 in event_dispatch_epoll_handler (event=0x7f59e1f44500,
event_pool=0x7f5a08cb74f0) at event-epoll.c:583
#8 event_dispatch_epoll_worker (data=0x7f5b760922a0) at event-epoll.c:659
#9 0x00007f5d20e44dd5 in start_thread (arg=0x7f59e1f45700) at
pthread_create.c:307
#10 0x00007f5d2050fead in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb) f 1
#1 0x00007f5d18130664 in glfs_mgmt_getspec_cbk (req=,
iov=, count=, myframe=0x7f5b74002cb0) at
glfs-mgmt.c:625
625 (memcmp (fs->oldvolfile, rsp.spec, size) == 0)) {
(gdb) l
620
621 ret = 0;
622 size = rsp.op_ret;
623
624 if ((size == fs->oldvollen) &&
625 (memcmp (fs->oldvolfile, rsp.spec, size) == 0)) {
626 gf_msg (frame->this->name, GF_LOG_INFO, 0,
627 API_MSG_VOLFILE_INFO,
628 "No change in volfile, continuing");
629 goto out;
(gdb) p fs->olvollen
There is no member named olvollen.
(gdb) p fs->oldvollen
$1 = 1674
(gdb) p size
$2 = 1674
(gdb) p fs->oldvolfile
$3 = 0x7f5b76097cd0 "volume testvol82201-client-0\n type protocol/client\n
option send-gids true\n option transport.socket.keepalive-count 9\n
option transport.socket.keepalive-interval 2\n option transport.sock"...
(gdb) p rsp.spec
$4 = 0x7f5b7f9da9d0 "volume testvol82201-client-0\n type protocol/client\n
option send-gids true\n option transport.socket.keepalive-count 9\n
option transport.socket.keepalive-interval 2\n option transport.sock"...
(gdb)
The crash happened while doing a memcmp of fs->oldvolfile and the new volfile
received in the response (rsp.spec). The contents of both variables seem
fine in the core.
From code reading, we observed that fs->oldvollen and fs->oldvolfile are
updated under the fs->mutex lock, but that lock is not taken while reading
those values here in glfs_mgmt_getspec_cbk. That could have resulted in the
crash while accessing an uninitialized or partially initialized variable.
@Jilju,
Are you able to consistently reproduce this issue?
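To illustrate the locking discipline the proposed fix aims for, here is a
minimal standalone sketch (not the actual gfapi code; the struct and function
names cached_volfile, volfile_update and volfile_changed are invented for the
example): both the writer that caches the volfile and the reader that compares
against it take the same mutex, so the reader can never observe a half-updated
length/buffer pair.

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical cache of the last-seen volfile, mirroring the role of
 * fs->oldvolfile / fs->oldvollen guarded by fs->mutex in gfapi. */
struct cached_volfile {
    pthread_mutex_t mutex;
    char *data;
    size_t len;
};

/* Writer: replace the cached volfile while holding the mutex. */
static int
volfile_update(struct cached_volfile *c, const char *spec, size_t len)
{
    char *copy = malloc(len);
    if (!copy)
        return -1;
    memcpy(copy, spec, len);

    pthread_mutex_lock(&c->mutex);
    free(c->data);
    c->data = copy;
    c->len = len;
    pthread_mutex_unlock(&c->mutex);
    return 0;
}

/* Reader: compare against the cache under the same mutex, so the length
 * and the buffer are always read as a consistent pair. */
static bool
volfile_changed(struct cached_volfile *c, const char *spec, size_t len)
{
    bool changed;

    pthread_mutex_lock(&c->mutex);
    changed = (c->len != len) || (c->data == NULL) ||
              (memcmp(c->data, spec, len) != 0);
    pthread_mutex_unlock(&c->mutex);

    return changed;
}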
--- Additional comment from Daniel Gryniewicz on 2018-12-11 16:33:41 UTC ---
Are the buffers smaller than 1674? It might be going off the end of one of the
buffers.
--- Additional comment from Jilju Joy on 2018-12-12 04:50:00 UTC ---
(In reply to Soumya Koduri from comment #4)
> (gdb) bt
> #0 __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:74
> #1 0x00007f5d18130664 in glfs_mgmt_getspec_cbk (req=,
> iov=, count=, myframe=0x7f5b74002cb0) at
> glfs-mgmt.c:625
> #2 0x00007f5c8e9e8960 in rpc_clnt_handle_reply
> (clnt=clnt at entry=0x7f5a08cc5760, pollin=pollin at entry=0x7f5b7f09acb0) at
> rpc-clnt.c:778
> #3 0x00007f5c8e9e8d03 in rpc_clnt_notify (trans=,
> mydata=0x7f5a08cc5790, event=, data=0x7f5b7f09acb0) at
> rpc-clnt.c:971
> #4 0x00007f5c8e9e4a73 in rpc_transport_notify
> (this=this at entry=0x7f5a08cc5930,
> event=event at entry=RPC_TRANSPORT_MSG_RECEIVED,
> data=data at entry=0x7f5b7f09acb0) at rpc-transport.c:538
> #5 0x00007f5c849e5576 in socket_event_poll_in
> (this=this at entry=0x7f5a08cc5930, notify_handled=) at
> socket.c:2322
> #6 0x00007f5c849e7b1c in socket_event_handler (fd=565, idx=0, gen=1,
> data=0x7f5a08cc5930, poll_in=1, poll_out=0, poll_err=0) at socket.c:2474
> #7 0x00007f5c8ec7e824 in event_dispatch_epoll_handler
> (event=0x7f59e1f44500, event_pool=0x7f5a08cb74f0) at event-epoll.c:583
> #8 event_dispatch_epoll_worker (data=0x7f5b760922a0) at event-epoll.c:659
> #9 0x00007f5d20e44dd5 in start_thread (arg=0x7f59e1f45700) at
> pthread_create.c:307
> #10 0x00007f5d2050fead in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
> (gdb) f 1
> #1 0x00007f5d18130664 in glfs_mgmt_getspec_cbk (req=,
> iov=, count=, myframe=0x7f5b74002cb0) at
> glfs-mgmt.c:625
> 625 (memcmp (fs->oldvolfile, rsp.spec, size) == 0)) {
> (gdb) l
> 620
> 621 ret = 0;
> 622 size = rsp.op_ret;
> 623
> 624 if ((size == fs->oldvollen) &&
> 625 (memcmp (fs->oldvolfile, rsp.spec, size) == 0)) {
> 626 gf_msg (frame->this->name, GF_LOG_INFO, 0,
> 627 API_MSG_VOLFILE_INFO,
> 628 "No change in volfile, continuing");
> 629 goto out;
> (gdb) p fs->olvollen
> There is no member named olvollen.
> (gdb) p fs->oldvollen
> $1 = 1674
> (gdb) p size
> $2 = 1674
> (gdb) p fs->oldvolfile
> $3 = 0x7f5b76097cd0 "volume testvol82201-client-0\n type
> protocol/client\n option send-gids true\n option
> transport.socket.keepalive-count 9\n option
> transport.socket.keepalive-interval 2\n option transport.sock"...
> (gdb) p rsp.spec
> $4 = 0x7f5b7f9da9d0 "volume testvol82201-client-0\n type
> protocol/client\n option send-gids true\n option
> transport.socket.keepalive-count 9\n option
> transport.socket.keepalive-interval 2\n option transport.sock"...
> (gdb)
>
>
> The crash happened while doing a memcmp of fs->oldvolfile and the new volfile
> received in the response (rsp.spec). The contents of both variables seem
> fine in the core.
>
> From code reading, we observed that fs->oldvollen and fs->oldvolfile are
> updated under the fs->mutex lock, but that lock is not taken while reading
> those values here in glfs_mgmt_getspec_cbk. That could have resulted in the
> crash while accessing an uninitialized or partially initialized variable.
>
> @Jilju,
>
> Are you able to consistently reproduce this issue?
Hi Soumya,
The first occurrence is reported here. I have kept the setup in the same state
to aid debugging. I can share the setup if required, or I can try to
reproduce.
--- Additional comment from Worker Ant on 2018-12-18 17:05:42 UTC ---
REVIEW: https://review.gluster.org/21882 (gfapi: Access fs->oldvolfile under
mutex lock) posted (#1) for review on master by soumya k
--- Additional comment from Worker Ant on 2018-12-26 02:17:03 UTC ---
REVIEW: https://review.gluster.org/21882 (gfapi: Access fs->oldvolfile under
mutex lock) posted (#2) for review on master by Amar Tumballi
--- Additional comment from Worker Ant on 2018-12-26 10:33:07 UTC ---
REVIEW: https://review.gluster.org/21927 (gfapi: nit cleanup related to
releasing fs->mutex lock) posted (#1) for review on master by soumya k
--- Additional comment from Worker Ant on 2018-12-31 16:10:41 UTC ---
REVIEW: https://review.gluster.org/21927 (gfapi: nit cleanup related to
releasing fs->mutex lock) posted (#2) for review on master by Kaleb KEITHLEY
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1658132
[Bug 1658132] [Ganesha] Ganesha failed on one node while exporting volumes in
loop
https://bugzilla.redhat.com/show_bug.cgi?id=1660577
[Bug 1660577] [Ganesha] Ganesha failed on one node while exporting volumes in
loop
https://bugzilla.redhat.com/show_bug.cgi?id=1663131
[Bug 1663131] [Ganesha] Ganesha failed on one node while exporting volumes in
loop
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:29:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:29:28 +0000
Subject: [Bugs] [Bug 1660577] [Ganesha] Ganesha failed on one node while
exporting volumes in loop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1660577
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1663132
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1663132
[Bug 1663132] [Ganesha] Ganesha failed on one node while exporting volumes in
loop
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 09:29:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:29:28 +0000
Subject: [Bugs] [Bug 1663131] [Ganesha] Ganesha failed on one node while
exporting volumes in loop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663131
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1663132
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1663132
[Bug 1663132] [Ganesha] Ganesha failed on one node while exporting volumes in
loop
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:31:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:31:12 +0000
Subject: [Bugs] [Bug 1651323] Tracker bug for all leases related issues
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1651323
--- Comment #13 from Worker Ant ---
REVISION POSTED: https://review.gluster.org/21985 (gfapi: Access fs->oldvolfile
under mutex lock) posted (#3) for review on release-5 by soumya k
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:31:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:31:13 +0000
Subject: [Bugs] [Bug 1651323] Tracker bug for all leases related issues
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1651323
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID|Gluster.org Gerrit 21985 |
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:31:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:31:14 +0000
Subject: [Bugs] [Bug 1663131] [Ganesha] Ganesha failed on one node while
exporting volumes in loop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663131
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/21985 (gfapi: Access fs->oldvolfile under
mutex lock) posted (#3) for review on release-5 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:31:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:31:15 +0000
Subject: [Bugs] [Bug 1663131] [Ganesha] Ganesha failed on one node while
exporting volumes in loop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663131
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21985
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:44:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:44:17 +0000
Subject: [Bugs] [Bug 1663132] [Ganesha] Ganesha failed on one node while
exporting volumes in loop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663132
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/21986 (gfapi: Access fs->oldvolfile under
mutex lock) posted (#1) for review on release-4.1 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:44:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:44:18 +0000
Subject: [Bugs] [Bug 1663132] [Ganesha] Ganesha failed on one node while
exporting volumes in loop
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663132
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21986
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:58:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:58:23 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #514 from Worker Ant ---
REVIEW: https://review.gluster.org/21987 (glfs-fops.c: fix the bad string
length for snprintf) posted (#1) for review on master by Kinglong Mee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 09:58:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 09:58:28 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21987
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 10:07:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 10:07:56 +0000
Subject: [Bugs] [Bug 1663089] Make GD2 container nightly and push it docker
hub
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663089
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-03 10:07:56
--- Comment #3 from Nigel Babu ---
Alright, Deepshika retriggered the Jenkins job and we're good now.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 11:03:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 11:03:57 +0000
Subject: [Bugs] [Bug 1657743] Very high memory usage (25GB) on Gluster FUSE
mountpoint
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1657743
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |nbalacha at redhat.com
Assignee|bugs at gluster.org |sunkumar at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 11:12:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 11:12:44 +0000
Subject: [Bugs] [Bug 1662557] glusterfs process crashes,
causing "Transport endpoint not connected".
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662557
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |nbalacha at redhat.com,
| |rob.dewit at coosto.com
Flags| |needinfo?(rob.dewit at coosto.
| |com)
--- Comment #2 from Nithya Balachandran ---
Can you try installing the debuginfo packages for the gluster version you are
running and rerun bt on the core dump?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 11:41:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 11:41:23 +0000
Subject: [Bugs] [Bug 1662557] glusterfs process crashes,
causing "Transport endpoint not connected".
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662557
robdewit changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(rob.dewit at coosto. |
|com) |
--- Comment #3 from robdewit ---
(gdb) bt
#0 0x00007fe0a5936e30 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1 0x00007fe0a6b0c795 in __gf_free (free_ptr=0x7fe0843ac610) at mem-pool.c:333
#2 0x00007fe0a6ad51ee in dict_destroy (this=0x7fe0843abe78) at dict.c:701
#3 0x00007fe0a6ad5315 in dict_unref (this=) at dict.c:753
#4 0x00007fe0a0866124 in afr_local_cleanup (local=0x7fe0843ade18,
this=) at afr-common.c:2091
#5 0x00007fe0a083fee1 in afr_transaction_done (frame=,
this=) at afr-transaction.c:369
#6 0x00007fe0a08437f1 in afr_unlock_common_cbk
(frame=frame at entry=0x7fe0843ac7b8, this=this at entry=0x7fe09c0110c0,
op_ret=op_ret at entry=0, xdata=,
op_errno=, cookie=) at afr-lk-common.c:243
#7 0x00007fe0a0844562 in afr_unlock_inodelk_cbk (frame=0x7fe0843ac7b8,
cookie=, this=0x7fe09c0110c0, op_ret=0, op_errno=,
xdata=) at afr-lk-common.c:281
#8 0x00007fe0a0b101d0 in client4_0_finodelk_cbk (req=,
iov=, count=, myframe=)
at client-rpc-fops_v2.c:1398
#9 0x00007fe0a68ae534 in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7fe09c053bd0, pollin=pollin at entry=0x7fe09c115750) at
rpc-clnt.c:755
#10 0x00007fe0a68aee77 in rpc_clnt_notify (trans=0x7fe09c053e90,
mydata=0x7fe09c053c00, event=, data=0x7fe09c115750) at
rpc-clnt.c:923
#11 0x00007fe0a68aaf13 in rpc_transport_notify (this=this at entry=0x7fe09c053e90,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7fe09c115750)
at rpc-transport.c:525
#12 0x00007fe0a19c2a23 in socket_event_poll_in (notify_handled=true,
this=0x7fe09c053e90) at socket.c:2504
#13 socket_event_handler (fd=-1676585136, idx=1, gen=4, data=0x7fe09c053e90,
poll_in=, poll_out=, poll_err=0) at socket.c:2905
#14 0x00007fe0a6b43aeb in event_dispatch_epoll_handler (event=0x7fe0a1531ed0,
event_pool=0x17f40b0) at event-epoll.c:591
#15 event_dispatch_epoll_worker (data=0x1830840) at event-epoll.c:668
#16 0x00007fe0a5934504 in start_thread () from /lib64/libpthread.so.0
#17 0x00007fe0a521c19f in clone () from /lib64/libc.so.6
Somehow the version in this bug report has been reset to 3.12, but it is
actually version 5.2.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 11:42:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 11:42:22 +0000
Subject: [Bugs] [Bug 1654103] Invalid memory read after freed in
dht_rmdir_readdirp_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654103
--- Comment #9 from Sayalee ---
Ran the planned test cases from the test plan shared in comment 8 and didn't
see any issues on glusterfs version 3.12.2-34.
Moving this BZ to Verified.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 11:42:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 11:42:35 +0000
Subject: [Bugs] [Bug 1654103] Invalid memory read after freed in
dht_rmdir_readdirp_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654103
Sayalee changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ON_QA |VERIFIED
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 12:16:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 12:16:10 +0000
Subject: [Bugs] [Bug 1662557] glusterfs process crashes,
causing "Transport endpoint not connected".
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662557
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Component|fuse |replicate
Version|3.12 |5
--- Comment #4 from Nithya Balachandran ---
Assigning this to the AFR team.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 12:20:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 12:20:11 +0000
Subject: [Bugs] [Bug 1662557] glusterfs process crashes,
causing "Transport endpoint not connected".
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662557
Ravishankar N changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |ravishankar at redhat.com
Flags| |needinfo?(rob.dewit at coosto.
| |com)
--- Comment #5 from Ravishankar N ---
Quick question: Is the back trace identical to what is shared in comment #3 for
all crashes?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 12:22:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 12:22:53 +0000
Subject: [Bugs] [Bug 1663205] New: List dictionary is too slow
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663205
Bug ID: 1663205
Summary: List dictionary is too slow
Product: GlusterFS
Version: 4.1
Hardware: x86_64
OS: Linux
Status: NEW
Component: fuse
Severity: high
Assignee: bugs at gluster.org
Reporter: 1490889344 at qq.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
I create a distribute volume and mount to a dictionary. Then I put 25,000 files
to the dictionary. After finished, I write a program to list the dictionary. I
found the program spent 20s. It's unbelievable. And I copy the dictionary to
root dictionary. And running the program again. The time display just less than
1s. So I think there is some problems in glusterFS. Then I do some more test
for glusterFS. I found that the spent time is normal when the dictionary
contains 20,000 files, but when the number is more than 20,000, it's easy to
show bad performance. Finally, I found the reason of bad performance is stat
function for every file. I don't know why the stat function is spent lots of
time when the dictionary contains 25,000 files. I hope someone can help me.
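For reference, below is a minimal sketch of the kind of listing program
described above, readdir() on the directory followed by stat() on every entry;
this is a reconstruction for illustration only, not the reporter's actual
program, and the per-file stat() loop is the part that dominates the runtime
on the FUSE mount.

#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

/* List a directory and stat() every entry.
 * Usage: ./listdir /path/to/mounted/dir */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    DIR *dir = opendir(argv[1]);
    if (!dir) {
        perror("opendir");
        return 1;
    }

    long count = 0;
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        char path[4096];
        struct stat st;
        snprintf(path, sizeof(path), "%s/%s", argv[1], ent->d_name);
        /* The per-file stat() is where most of the time goes on the
         * gluster mount once the directory grows past ~20,000 entries. */
        if (stat(path, &st) == 0)
            count++;
    }
    closedir(dir);

    clock_gettime(CLOCK_MONOTONIC, &end);
    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("stat'd %ld entries in %.2f seconds\n", count, secs);
    return 0;
}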
GlusterFS version:
glusterfs 4.1.6
Volume info:
Volume Name: gv0
Type: Distribute
Volume ID: 7cfccb92-5b9d-4483-8212-0f02cd1197d6
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: dlaas-184:/data/glusterFS/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 12:23:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 12:23:26 +0000
Subject: [Bugs] [Bug 1662557] glusterfs process crashes,
causing "Transport endpoint not connected".
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662557
--- Comment #6 from Ravishankar N ---
Also, please attach the core file to the bug.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 12:25:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 12:25:15 +0000
Subject: [Bugs] [Bug 1662368] [ovirt-gluster] Fuse mount crashed while
deleting a 1 TB image file from ovirt
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662368
SATHEESARAN changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1663208
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1663208
[Bug 1663208] [RHV-RHGS] Fuse mount crashed while deleting a 1 TB image file
from RHV
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 12:32:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 12:32:51 +0000
Subject: [Bugs] [Bug 1662557] glusterfs process crashes,
causing "Transport endpoint not connected".
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662557
robdewit changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(rob.dewit at coosto. |
|com) |
--- Comment #7 from robdewit ---
Good question! It turns out this is not always the case. I checked some other
coredumps:
coredump 1 - same backtrace
coredump 2 - untraceable
coredump 3 - A different backtrace:
(gdb) bt
#0 0x00007f2f2a1d32c0 in ?? () from /lib64/libuuid.so.1
#1 0x00007f2f2a1d24e0 in uuid_compare () from /lib64/libuuid.so.1
#2 0x00007f2f2aa57691 in gf_uuid_compare (u2=0x7f2f115e67f8
"\270x\274\226Z\301F\006\256\221\230\005\031\321N\342\001",
u1=0x7f2eff90 ) at compat-uuid.h:25
#3 __inode_find (table=table at entry=0x7f2f20063b80,
gfid=gfid at entry=0x7f2f115e67f8
"\270x\274\226Z\301F\006\256\221\230\005\031\321N\342\001") at inode.c:892
#4 0x00007f2f2aa57d79 in inode_find (table=table at entry=0x7f2f20063b80,
gfid=gfid at entry=0x7f2f115e67f8
"\270x\274\226Z\301F\006\256\221\230\005\031\321N\342\001")
at inode.c:917
#5 0x00007f2f24a1ae72 in unserialize_rsp_direntp_v2 (this=0x7f2f2000e980,
fd=, rsp=rsp at entry=0x7f2f1e164a70, entries=0x7f2f1e164aa0)
at client-helpers.c:338
#6 0x00007f2f24a59005 in client_post_readdirp_v2 (this=,
rsp=0x7f2f1e164a70, fd=, entries=,
xdata=0x7f2f1e164a68)
at client-common.c:3533
#7 0x00007f2f24a6b226 in client4_0_readdirp_cbk (req=,
iov=0x7f2f0b99d508, count=, myframe=0x7f2ef4a691f8) at
client-rpc-fops_v2.c:2333
#8 0x00007f2f2a814534 in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7f2f2004f530, pollin=pollin at entry=0x7f2f114a8290) at
rpc-clnt.c:755
#9 0x00007f2f2a814e77 in rpc_clnt_notify (trans=0x7f2f2004f860,
mydata=0x7f2f2004f560, event=, data=0x7f2f114a8290) at
rpc-clnt.c:923
#10 0x00007f2f2a810f13 in rpc_transport_notify (this=this at entry=0x7f2f2004f860,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f2f114a8290)
at rpc-transport.c:525
#11 0x00007f2f25928a23 in socket_event_poll_in (notify_handled=true,
this=0x7f2f2004f860) at socket.c:2504
#12 socket_event_handler (fd=290095760, idx=2, gen=4, data=0x7f2f2004f860,
poll_in=, poll_out=, poll_err=0) at socket.c:2905
#13 0x00007f2f2aaa9aeb in event_dispatch_epoll_handler (event=0x7f2f1e164ed0,
event_pool=0x7510b0) at event-epoll.c:591
#14 event_dispatch_epoll_worker (data=0x7f2f2004f310) at event-epoll.c:668
#15 0x00007f2f2989a504 in start_thread () from /lib64/libpthread.so.0
#16 0x00007f2f2918219f in clone () from /lib64/libc.so.6
coredump 4 - yet another backtrace:
(gdb) bt
#0 0x00007ff2249a58a4 in _int_free () from /lib64/libc.so.6
#1 0x00007ff2249aac9e in free () from /lib64/libc.so.6
#2 0x00007ff22631d6af in __gf_free (free_ptr=) at
mem-pool.c:356
#3 0x00007ff223bf1410 in free_fuse_state (state=0x7ff1f4760430) at
fuse-helpers.c:81
#4 0x00007ff223bf70a9 in fuse_err_cbk (frame=0x7ff1f471b1d8, cookie=, this=0x18dddb0, op_ret=0, op_errno=0, xdata=)
at fuse-bridge.c:1434
#5 0x00007ff21aebc29d in io_stats_flush_cbk (frame=0x7ff206a0b088,
cookie=, this=, op_ret=0, op_errno=0, xdata=0x0)
at io-stats.c:2286
#6 0x00007ff226385b29 in default_flush_cbk (frame=0x7ff1f4737f58,
cookie=, this=, op_ret=0, op_errno=0, xdata=0x0)
at defaults.c:1159
#7 0x00007ff21b926f77 in ra_flush_cbk (frame=0x7ff1f4737238, cookie=, this=, op_ret=0, op_errno=0, xdata=0x0) at
read-ahead.c:539
#8 0x00007ff21bb390dd in wb_flush_helper (frame=0x7ff2071e7488,
this=, fd=, xdata=0x0) at write-behind.c:1987
#9 0x00007ff22631a055 in call_resume_keep_stub (stub=0x7ff1f4744da8) at
call-stub.c:2563
#10 0x00007ff21bb3c999 in wb_do_winds (wb_inode=wb_inode at entry=0x7ff1f4742730,
tasks=tasks at entry=0x7ff220d42640) at write-behind.c:1737
#11 0x00007ff21bb3ca9c in wb_process_queue
(wb_inode=wb_inode at entry=0x7ff1f4742730) at write-behind.c:1778
#12 0x00007ff21bb41a07 in wb_fulfill_cbk (frame=frame at entry=0x7ff21d48e7c8,
cookie=, this=, op_ret=op_ret at entry=123,
op_errno=op_errno at entry=0, prebuf=prebuf at entry=0x7ff21d4ac610,
postbuf=postbuf at entry=0x7ff21d4ac6a8, xdata=xdata at entry=0x7ff21d490168) at
write-behind.c:1105
#13 0x00007ff21bdbde86 in dht_writev_cbk (frame=frame at entry=0x7ff21c08d7c8,
cookie=, this=, op_ret=123, op_errno=0,
prebuf=prebuf at entry=0x7ff21d4ac610, postbuf=postbuf at entry=0x7ff21d4ac6a8,
xdata=0x7ff21d490168) at dht-inode-write.c:140
#14 0x00007ff22003e21e in afr_writev_unwind (frame=frame at entry=0x7ff21d4a3ee8,
this=this at entry=0x7ff21c0110c0) at afr-inode-write.c:234
#15 0x00007ff22003e7e6 in afr_writev_wind_cbk (this=0x7ff21c0110c0,
frame=0x7ff21d49ab08, cookie=, op_ret=,
op_errno=,
prebuf=, postbuf=, xdata=) at
afr-inode-write.c:388
#16 afr_writev_wind_cbk (frame=0x7ff21d49ab08, cookie=,
this=0x7ff21c0110c0, op_ret=, op_errno=,
prebuf=,
postbuf=0x7ff220d42980, xdata=0x7ff21d49ae58) at afr-inode-write.c:354
#17 0x00007ff220313748 in client4_0_writev_cbk (req=,
iov=, count=, myframe=0x7ff21d483a58) at
client-rpc-fops_v2.c:685
#18 0x00007ff2260bf534 in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7ff21c04f530, pollin=pollin at entry=0x7ff21d49e650) at
rpc-clnt.c:755
#19 0x00007ff2260bfe77 in rpc_clnt_notify (trans=0x7ff21c04f860,
mydata=0x7ff21c04f560, event=, data=0x7ff21d49e650) at
rpc-clnt.c:923
#20 0x00007ff2260bbf13 in rpc_transport_notify (this=this at entry=0x7ff21c04f860,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7ff21d49e650)
at rpc-transport.c:525
#21 0x00007ff2211d3a23 in socket_event_poll_in (notify_handled=true,
this=0x7ff21c04f860) at socket.c:2504
#22 socket_event_handler (fd=491382352, idx=2, gen=4, data=0x7ff21c04f860,
poll_in=, poll_out=, poll_err=0) at socket.c:2905
#23 0x00007ff226354aeb in event_dispatch_epoll_handler (event=0x7ff220d42ed0,
event_pool=0x18d70b0) at event-epoll.c:591
#24 event_dispatch_epoll_worker (data=0x1913840) at event-epoll.c:668
#25 0x00007ff225145504 in start_thread () from /lib64/libpthread.so.0
#26 0x00007ff224a2d19f in clone () from /lib64/libc.so.6
coredump 5 - Another one:
(gdb) bt
#0 0x00007fad93d3ce30 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1 0x00007fad94eea73e in gf_log_set_log_buf_size (buf_size=buf_size at entry=0)
at logging.c:273
#2 0x00007fad94eea8df in gf_log_disable_suppression_before_exit
(ctx=0x1334010) at logging.c:444
#3 0x00007fad94ef0f94 in gf_print_trace (signum=11, ctx=0x1334010) at
common-utils.c:922
#4
#5 0x00007fad94f0fd52 in fd_destroy (bound=true, fd=0x7fad64f216c8) at
fd.c:478
#6 fd_unref (fd=0x7fad64f216c8) at fd.c:529
#7 0x00007fad8eeba0e8 in client_local_wipe (local=local at entry=0x7fad8a924358)
at client-helpers.c:124
#8 0x00007fad8ef161e0 in client4_0_finodelk_cbk (req=,
iov=, count=, myframe=)
at client-rpc-fops_v2.c:1398
#9 0x00007fad94cb4534 in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7fad8804f530, pollin=pollin at entry=0x7fad8a917950) at
rpc-clnt.c:755
#10 0x00007fad94cb4e77 in rpc_clnt_notify (trans=0x7fad8804f860,
mydata=0x7fad8804f560, event=, data=0x7fad8a917950) at
rpc-clnt.c:923
#11 0x00007fad94cb0f13 in rpc_transport_notify (this=this at entry=0x7fad8804f860,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7fad8a917950)
at rpc-transport.c:525
#12 0x00007fad8fdc8a23 in socket_event_poll_in (notify_handled=true,
this=0x7fad8804f860) at socket.c:2504
#13 socket_event_handler (fd=-1970177712, idx=2, gen=4, data=0x7fad8804f860,
poll_in=, poll_out=, poll_err=0) at socket.c:2905
#14 0x00007fad94f49aeb in event_dispatch_epoll_handler (event=0x7fad8f937ed0,
event_pool=0x136b0b0) at event-epoll.c:591
#15 event_dispatch_epoll_worker (data=0x13a7840) at event-epoll.c:668
#16 0x00007fad93d3a504 in start_thread () from /lib64/libpthread.so.0
#17 0x00007fad9362219f in clone () from /lib64/libc.so.6
coredump 6 - And another:
(gdb) bt
#0 0x00007f3c6caace30 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1 0x00007f3c6dc82795 in __gf_free (free_ptr=0x7f3c39e43cb0) at mem-pool.c:333
#2 0x00007f3c6dc65d90 in __inode_ctx_free (inode=inode at entry=0x7f3c39e435a8)
at inode.c:322
#3 0x00007f3c6dc66e12 in __inode_destroy (inode=0x7f3c39e435a8) at inode.c:338
#4 inode_table_prune (table=table at entry=0x7f3c58010950) at inode.c:1535
#5 0x00007f3c6dc671ec in inode_unref (inode=0x7f3c39e435a8) at inode.c:542
#6 0x00007f3c679dbf97 in afr_local_cleanup (local=0x7f3c39e1f3e8,
this=) at afr-common.c:1995
#7 0x00007f3c679b5ee1 in afr_transaction_done (frame=,
this=) at afr-transaction.c:369
#8 0x00007f3c679b97f1 in afr_unlock_common_cbk
(frame=frame at entry=0x7f3c3a11d168, this=this at entry=0x7f3c600110c0,
op_ret=op_ret at entry=0, xdata=0x0,
op_errno=, cookie=) at afr-lk-common.c:243
#9 0x00007f3c679b98ae in afr_unlock_entrylk_cbk (frame=0x7f3c3a11d168,
cookie=, this=0x7f3c600110c0, op_ret=0, op_errno=,
xdata=) at afr-lk-common.c:366
#10 0x00007f3c67c857bd in client4_0_entrylk_cbk (req=,
iov=, count=, myframe=) at
client-rpc-fops_v2.c:1446
#11 0x00007f3c6da24534 in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7f3c60058e20, pollin=pollin at entry=0x7f3c5a5898e0) at
rpc-clnt.c:755
#12 0x00007f3c6da24e77 in rpc_clnt_notify (trans=0x7f3c600590e0,
mydata=0x7f3c60058e50, event=, data=0x7f3c5a5898e0) at
rpc-clnt.c:923
#13 0x00007f3c6da20f13 in rpc_transport_notify (this=this at entry=0x7f3c600590e0,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f3c5a5898e0)
at rpc-transport.c:525
#14 0x00007f3c68b38a23 in socket_event_poll_in (notify_handled=true,
this=0x7f3c600590e0) at socket.c:2504
#15 socket_event_handler (fd=1515755744, idx=4, gen=1, data=0x7f3c600590e0,
poll_in=, poll_out=, poll_err=0) at socket.c:2905
#16 0x00007f3c6dcb9aeb in event_dispatch_epoll_handler (event=0x7f3c65c04ed0,
event_pool=0x81a0b0) at event-epoll.c:591
#17 event_dispatch_epoll_worker (data=0x7f3c60043ad0) at event-epoll.c:668
#18 0x00007f3c6caaa504 in start_thread () from /lib64/libpthread.so.0
#19 0x00007f3c6c39219f in clone () from /lib64/libc.so.6
coredump 7 - ...
(gdb) bt
#0 0x00007f916b526b88 in list_add (head=0x7f91389ba228, new=0x7f91389b9d78) at
../../../../libglusterfs/src/list.h:31
#1 wb_set_invalidate (wb_inode=0x7f91389b9d10, set=) at
write-behind.c:246
#2 wb_fulfill_cbk (frame=frame at entry=0x7f91617a2208, cookie=,
this=, op_ret=op_ret at entry=811, op_errno=op_errno at entry=0,
prebuf=prebuf at entry=0x7f91617ade00, postbuf=postbuf at entry=0x7f91617ade98,
xdata=xdata at entry=0x7f9160484c38) at write-behind.c:1095
#3 0x00007f916b7a2e86 in dht_writev_cbk (frame=frame at entry=0x7f91617b8838,
cookie=, this=, op_ret=811, op_errno=0,
prebuf=prebuf at entry=0x7f91617ade00, postbuf=postbuf at entry=0x7f91617ade98,
xdata=0x7f9160484c38) at dht-inode-write.c:140
#4 0x00007f916ba0c21e in afr_writev_unwind (frame=frame at entry=0x7f916100d918,
this=this at entry=0x7f91640110c0) at afr-inode-write.c:234
#5 0x00007f916ba0c7e6 in afr_writev_wind_cbk (this=0x7f91640110c0,
frame=0x7f91604865e8, cookie=, op_ret=,
op_errno=,
prebuf=, postbuf=, xdata=) at
afr-inode-write.c:388
#6 afr_writev_wind_cbk (frame=0x7f91604865e8, cookie=,
this=0x7f91640110c0, op_ret=, op_errno=,
prebuf=,
postbuf=0x7f916946c980, xdata=0x7f91614414b8) at afr-inode-write.c:354
#7 0x00007f916bce1748 in client4_0_writev_cbk (req=,
iov=, count=, myframe=0x7f915cfa2f98) at
client-rpc-fops_v2.c:685
#8 0x00007f9171a8d534 in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7f9164050110, pollin=pollin at entry=0x7f9160481290) at
rpc-clnt.c:755
#9 0x00007f9171a8de77 in rpc_clnt_notify (trans=0x7f91640503d0,
mydata=0x7f9164050140, event=, data=0x7f9160481290) at
rpc-clnt.c:923
#10 0x00007f9171a89f13 in rpc_transport_notify (this=this at entry=0x7f91640503d0,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f9160481290)
at rpc-transport.c:525
#11 0x00007f916cba1a23 in socket_event_poll_in (notify_handled=true,
this=0x7f91640503d0) at socket.c:2504
#12 socket_event_handler (fd=1615336080, idx=2, gen=4, data=0x7f91640503d0,
poll_in=, poll_out=, poll_err=0) at socket.c:2905
#13 0x00007f9171d22aeb in event_dispatch_epoll_handler (event=0x7f916946ced0,
event_pool=0x24db0b0) at event-epoll.c:591
#14 event_dispatch_epoll_worker (data=0x7f9164048dc0) at event-epoll.c:668
#15 0x00007f9170b13504 in start_thread () from /lib64/libpthread.so.0
#16 0x00007f91703fb19f in clone () from /lib64/libc.so.6
coredump 8 -
(gdb) bt
#0 0x00007f24560fbe30 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1 0x00007f24572a973e in gf_log_set_log_buf_size (buf_size=buf_size at entry=0)
at logging.c:273
#2 0x00007f24572a98df in gf_log_disable_suppression_before_exit (ctx=0x840010)
at logging.c:444
#3 0x00007f24572aff94 in gf_print_trace (signum=11, ctx=0x840010) at
common-utils.c:922
#4
#5 0x00007f24572ced52 in fd_destroy (bound=true, fd=0x7f24380d3f98) at
fd.c:478
#6 fd_unref (fd=0x7f24380d3f98) at fd.c:529
#7 0x00007f24512790e8 in client_local_wipe (local=local at entry=0x7f243c0ad548)
at client-helpers.c:124
#8 0x00007f24512d51e0 in client4_0_finodelk_cbk (req=,
iov=, count=, myframe=)
at client-rpc-fops_v2.c:1398
#9 0x00007f2457073534 in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7f244c050110, pollin=pollin at entry=0x7f2444165d30) at
rpc-clnt.c:755
#10 0x00007f2457073e77 in rpc_clnt_notify (trans=0x7f244c0503d0,
mydata=0x7f244c050140, event=, data=0x7f2444165d30) at
rpc-clnt.c:923
#11 0x00007f245706ff13 in rpc_transport_notify (this=this at entry=0x7f244c0503d0,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f2444165d30)
at rpc-transport.c:525
#12 0x00007f2452187a23 in socket_event_poll_in (notify_handled=true,
this=0x7f244c0503d0) at socket.c:2504
#13 socket_event_handler (fd=1142316336, idx=2, gen=4, data=0x7f244c0503d0,
poll_in=, poll_out=, poll_err=0) at socket.c:2905
#14 0x00007f2457308aeb in event_dispatch_epoll_handler (event=0x7f244b1b7ed0,
event_pool=0x8770b0) at event-epoll.c:591
#15 event_dispatch_epoll_worker (data=0x7f244c043ad0) at event-epoll.c:668
#16 0x00007f24560f9504 in start_thread () from /lib64/libpthread.so.0
#17 0x00007f24559e119f in clone () from /lib64/libc.so.6
If you really need the info - I have some 20 more coredumps, I suspect they all
have different traces...
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 12:36:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 12:36:45 +0000
Subject: [Bugs] [Bug 1644389] [GSS] Directory listings on fuse mount are
very slow due to small number of getdents() entries
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1644389
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(ccalhoun at redhat.c
| |om)
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 12:41:21 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 12:41:21 +0000
Subject: [Bugs] [Bug 1662557] glusterfs process crashes,
causing "Transport endpoint not connected".
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662557
--- Comment #8 from robdewit ---
Original core file of the 1st backtrace:
https://www.dropbox.com/s/a8feic6hvho413o/core?dl=0
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 13:32:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 13:32:29 +0000
Subject: [Bugs] [Bug 1661887] Add monitoring of postgrey
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1661887
M. Scherer changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-01-03 13:32:29
--- Comment #1 from M. Scherer ---
So, notification was added, and I think it is also managed properly now.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 13:38:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 13:38:56 +0000
Subject: [Bugs] [Bug 1663223] New: profile info command is not displaying
information of bricks which are hosted on peers
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663223
Bug ID: 1663223
Summary: profile info command is not displaying information of
bricks which are hosted on peers
Product: GlusterFS
Version: mainline
Status: NEW
Component: glusterd
Assignee: bugs at gluster.org
Reporter: srakonde at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
When we run "gluster v profile info" from node n1, it shows
information only for the bricks on the local node. Information for bricks
hosted on peers is not shown in the output.
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. In a cluster of more than 1 node, create and start a volume
2. start profile for the volume
3. run gluster v profile volname info
Actual results:
Expected results:
It should display information for all the bricks of the volume.
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 13:48:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 13:48:12 +0000
Subject: [Bugs] [Bug 1663223] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663223
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/21988 (glusterd: aggregate rsp from peers)
posted (#1) for review on master by Sanju Rakonde
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 13:48:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 13:48:13 +0000
Subject: [Bugs] [Bug 1663223] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663223
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21988
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 14:16:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 14:16:23 +0000
Subject: [Bugs] [Bug 1663232] New: profile info command is not displaying
information of bricks which are hosted on peers
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663232
Bug ID: 1663232
Summary: profile info command is not displaying information of
bricks which are hosted on peers
Product: Red Hat Gluster Storage
Version: 3.4
Status: NEW
Component: glusterd
Severity: urgent
Assignee: amukherj at redhat.com
Reporter: srakonde at redhat.com
QA Contact: bmekala at redhat.com
CC: bugs at gluster.org, rhs-bugs at redhat.com,
sankarshan at redhat.com, storage-qa-internal at redhat.com,
vbellur at redhat.com
Depends On: 1663223
Target Milestone: ---
Classification: Red Hat
+++ This bug was initially created as a clone of Bug #1663223 +++
Description of problem:
When we run "gluster v profile info" from node n1, it shows
information only for the bricks on the local node. Information for bricks
hosted on peers is not shown in the output.
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. In a cluster of more than 1 node, create and start a volume
2. start profile for the volume
3. run gluster v profile volname info
Actual results:
Expected results:
It should display information for all the bricks of the volume.
Additional info:
--- Additional comment from Worker Ant on 2019-01-03 19:18:12 IST ---
REVIEW: https://review.gluster.org/21988 (glusterd: aggregate rsp from peers)
posted (#1) for review on master by Sanju Rakonde
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1663223
[Bug 1663223] profile info command is not displaying information of bricks
which are hosted on peers
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 14:16:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 14:16:23 +0000
Subject: [Bugs] [Bug 1663223] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663223
Sanju changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1663232
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1663232
[Bug 1663232] profile info command is not displaying information of bricks
which are hosted on peers
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 14:16:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 14:16:27 +0000
Subject: [Bugs] [Bug 1663232] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663232
Red Hat Bugzilla Rules Engine changed:
What |Removed |Added
----------------------------------------------------------------------------
Keywords| |ZStream
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 14:18:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 14:18:07 +0000
Subject: [Bugs] [Bug 1663232] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663232
Sanju changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
Assignee|amukherj at redhat.com |srakonde at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 14:36:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 14:36:20 +0000
Subject: [Bugs] [Bug 1663243] New: rebalance status does not display
localhost statistics when op-version is not bumped up
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663243
Bug ID: 1663243
Summary: rebalance status does not display localhost statistics
when op-version is not bumped up
Product: GlusterFS
Version: mainline
Status: NEW
Component: glusterd
Assignee: bugs at gluster.org
Reporter: srakonde at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The rebalance status command is not showing information for the local host
when the cluster is not running with the current maximum op-version.
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. set the cluster op-version less than the max op version
2. create and start a volume
3. start rebalance for volume and check for rebalance status
Actual results:
In the output of "rebalance status", information related to localhost is not
displayed.
Expected results:
It should display the information of localhost as well.
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 14:36:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 14:36:56 +0000
Subject: [Bugs] [Bug 1663244] New: rebalance status does not display
localhost statistics when op-version is not bumped up
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663244
Bug ID: 1663244
Summary: rebalance status does not display localhost statistics
when op-version is not bumped up
Product: GlusterFS
Version: mainline
Status: NEW
Component: glusterd
Assignee: bugs at gluster.org
Reporter: srakonde at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The rebalance status command is not showing information for the local host
when the cluster is not running with the current maximum op-version.
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. set the cluster op-version less than the max op version
2. create and start a volume
3. start rebalance for volume and check for rebalance status
Actual results:
In the output of "rebalance status", information related to localhost is not
displayed.
Expected results:
It should display the information of localhost as well.
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 14:41:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 14:41:37 +0000
Subject: [Bugs] [Bug 1663247] New: remove static memory allocations from code
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663247
Bug ID: 1663247
Summary: remove static memory allocations from code
Product: GlusterFS
Version: mainline
Status: NEW
Component: glusterd
Assignee: bugs at gluster.org
Reporter: srakonde at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Across the entire code base, many structures allocate memory statically.
Instead, we could allocate that memory dynamically.
One such structure is:
struct glusterd_brickinfo {
char hostname[NAME_MAX];
char path[VALID_GLUSTERD_PATHMAX];
char real_path[VALID_GLUSTERD_PATHMAX];
char device_path[VALID_GLUSTERD_PATHMAX];
char mount_dir[VALID_GLUSTERD_PATHMAX];
char brick_id[1024]; /*Client xlator name, AFR changelog name*/
char fstype[NAME_MAX]; /* Brick file-system type */
char mnt_opts[1024]; /* Brick mount options */
..
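As an illustration of the suggested direction, here is a minimal sketch of
what the same kind of fields could look like with dynamic allocation; it uses
plain strdup()/free() rather than the glusterfs memory-accounting wrappers,
and the struct and helper names (brickinfo_dyn, brickinfo_dyn_init,
brickinfo_dyn_fini) are invented for the example. Each string is then sized to
its actual contents instead of reserving NAME_MAX or VALID_GLUSTERD_PATHMAX
bytes up front.

#include <stdlib.h>
#include <string.h>

/* Hypothetical slimmed-down brickinfo where the fixed-size char arrays are
 * replaced by heap-allocated strings sized to their actual contents. */
struct brickinfo_dyn {
    char *hostname;
    char *path;
    char *fstype;
    char *mnt_opts;
};

static int
brickinfo_dyn_init(struct brickinfo_dyn *b, const char *hostname,
                   const char *path, const char *fstype, const char *mnt_opts)
{
    b->hostname = strdup(hostname);
    b->path = strdup(path);
    b->fstype = strdup(fstype);
    b->mnt_opts = strdup(mnt_opts);
    if (!b->hostname || !b->path || !b->fstype || !b->mnt_opts)
        return -1; /* caller releases partial state via brickinfo_dyn_fini() */
    return 0;
}

static void
brickinfo_dyn_fini(struct brickinfo_dyn *b)
{
    free(b->hostname);
    free(b->path);
    free(b->fstype);
    free(b->mnt_opts);
    memset(b, 0, sizeof(*b));
}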
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Jan 3 14:47:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 14:47:53 +0000
Subject: [Bugs] [Bug 1663232] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663232
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |urgent
CC| |amukherj at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Jan 3 14:50:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 03 Jan 2019 14:50:25 +0000
Subject: [Bugs] [Bug 1663232] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663232
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Keywords| |Regression
--- Comment #3 from Atin Mukherjee