From bugzilla at redhat.com Fri Feb 1 03:17:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 03:17:34 +0000
Subject: [Bugs] [Bug 1671603] New: flooding of "dict is NULL" logging
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671603
Bug ID: 1671603
Summary: flooding of "dict is NULL" logging
Product: GlusterFS
Version: 5
Status: NEW
Component: core
Keywords: Triaged, ZStream
Assignee: bugs at gluster.org
Reporter: atumball at redhat.com
CC: bugs at gluster.org
Depends On: 1313567
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1313567 +++
Description of problem:
following logs flood the log files
[2016-03-01 10:45:51.688339] W [dict.c:1282:dict_foreach_match]
(-->/usr/lib64/libglusterfs.so.0(dict_foreach_match+0x65) [0x7ff139e1e5d5]
-->/usr/lib64/glusterfs/3.7.8/xlator/features/index.so(+0x3950)
[0x7ff12de49950] -->/usr/lib64/libglusterfs.so.0(dict_foreach_match+0xe1)
[0x7ff139e1e651] ) 0-dict: dict|match|action is NULL [Invalid argument]
Version-Release number of selected component (if applicable):
glusterfs-3.7.8
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
I have seen the older one
https://bugzilla.redhat.com/show_bug.cgi?id=1289893
but since I am using the latest version (3.7.8), that fix should already be
included. Could this be related to another part of index.c?
--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-03-01
23:14:38 UTC ---
This bug is automatically being proposed for the current z-stream release of
Red Hat Gluster Storage 3 by setting the release flag 'rhgs?3.1.z' to '?'.
If this bug should be proposed for a different release, please manually change
the proposed release flag.
--- Additional comment from Nithya Balachandran on 2016-03-09 04:16:05 UTC ---
This looks like it refers to an upstream release (3.7.8). Changing the product
to reflect this.
--- Additional comment from evangelos on 2016-03-09 09:18:21 UTC ---
is there any update on this?
thank you very much!
--- Additional comment from Nithya Balachandran on 2016-03-09 10:43:09 UTC ---
Moving this to Anuradha who worked on the original patch.
--- Additional comment from evangelos on 2016-04-12 11:20:00 UTC ---
is there any update on this?
--- Additional comment from Anuradha on 2016-06-23 10:01:53 UTC ---
Hi evangelos,
That fix was made in 3.7.5. You say you have seen the old issue. Did you
upgrade from 3.7.5 to 3.7.8 and are still seeing the problem, or was this
volume freshly created on 3.7.8?
As far as I know, all the fixes for "dict is NULL" in the index translator
have been sent. But there is an issue when volfiles are not updated during an
upgrade.
If you had upgraded the volume, could you please provide the steps that you
used to upgrade?
Also, could you verify one thing for me from the brick volfiles of your volume?
The brick volfiles are supposed to have the following lines:
volume test-index
type features/index
option xattrop-pending-watchlist trusted.afr.test- <--------(1)
option xattrop-dirty-watchlist trusted.afr.dirty <--------(2)
option index-base /export/test/brick2/.glusterfs/indices
subvolumes test-barrier
end-volume
The two options mentioned above should exist. Otherwise you will see this
problem. You can find volfiles at /var/lib/glusterd/vols/.
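A quick way to check, assuming a volume named <volname> (placeholder for your
actual volume name):
# both watchlist options should show up in every brick volfile
grep -E 'xattrop-(pending|dirty)-watchlist' /var/lib/glusterd/vols/<volname>/*.vol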
Thanks.
--- Additional comment from evangelos on 2016-07-07 19:28:59 UTC ---
Hi Anuradha,
in the meantime, due to various issues we had, we decided to downgrade to the
3.6 branch, so currently I do not have a 3.7 deployment up and running. But
thanks for the hint; I will keep this in mind for the future.
In the meantime you can close this bugzilla.
thank you
--- Additional comment from Anuradha on 2016-07-11 09:09:11 UTC ---
Hi Evangelos,
Thanks for the update.
Closing this bug as per comment #7.
Thanks,
Anuradha.
--- Additional comment from Emerson Gomes on 2019-01-27 15:42:59 UTC ---
This error is still reproducible in 5.3 when upgrading from a 3.x volume.
I had to recreate the volume from scratch in 5.3 and copy the data back in
order to avoid it.
--- Additional comment from Artem Russakovskii on 2019-01-30 20:23:44 UTC ---
I just started seeing this error after upgrading from 4.1 to 5.3.
[2019-01-30 20:23:24.481581] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fd966fcd329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
And it floods like crazy with these.
--- Additional comment from Emerson Gomes on 2019-01-30 20:33:12 UTC ---
I "solved" the issue after upgrading to 5.3 by creating a new volume and moving
all data to it.
Apparently something is missing on the volumes after upgrade.
--- Additional comment from Artem Russakovskii on 2019-01-30 20:37:13 UTC ---
I just sent a message to the gluster mailing list about this because that's not
how this problem should be resolved. I'm curious to hear what they say.
--- Additional comment from Emerson Gomes on 2019-01-30 20:39:04 UTC ---
Absolutely. That's the second big issue I had after upgrading. The first one is
https://bugzilla.redhat.com/show_bug.cgi?id=1651246
Still unsolved (open for more than 2 months now)
--- Additional comment from Artem Russakovskii on 2019-01-30 20:40:29 UTC ---
You know, I was *just* going to comment in a follow-up reply about whether the
issue here is possibly related to the one you just linked. Seeing tons of those
too, though at least the dupes are suppressed.
==> mnt-SITE_data1.log <==
[2019-01-30 20:38:20.783713] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fd966fcd329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
==> mnt-SITE_data3.log <==
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
2-epoll: Failed to dispatch handler" repeated 413 times between [2019-01-30
20:36:23.881090] and [2019-01-30 20:38:20.015593]
The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk]
2-SITE_data3-replicate-0: selecting local read_child SITE_data3-client-0"
repeated 42 times between [2019-01-30 20:36:23.290287] and [2019-01-30
20:38:20.280306]
==> mnt-SITE_data1.log <==
The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk]
2-SITE_data1-replicate-0: selecting local read_child SITE_data1-client-0"
repeated 50 times between [2019-01-30 20:36:22.247367] and [2019-01-30
20:38:19.459789]
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
2-epoll: Failed to dispatch handler" repeated 2654 times between [2019-01-30
20:36:22.667327] and [2019-01-30 20:38:20.546355]
[2019-01-30 20:38:21.492319] I [MSGID: 108031]
[afr-common.c:2543:afr_local_discovery_cbk] 2-SITE_data1-replicate-0: selecting
local read_child SITE_data1-client-0
==> mnt-SITE_data3.log <==
[2019-01-30 20:38:22.349689] I [MSGID: 108031]
[afr-common.c:2543:afr_local_discovery_cbk] 2-SITE_data3-replicate-0: selecting
local read_child SITE_data3-client-0
==> mnt-SITE_data1.log <==
[2019-01-30 20:38:22.762941] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 2-epoll: Failed to dispatch
handler
--- Additional comment from Emerson Gomes on 2019-01-30 20:48:52 UTC ---
Yeah, both arose after upgrading from 3.x to 5.1, persisting in 5.2 and 5.3.
The other issue is even more critical.
It causes crashes, making the mount point inaccessible ("Transport endpoint
is not connected" error), requiring a manual umount/mount each time.
For now I have a crontab entry doing this, but I will have to downgrade if a
fix is not issued soon...
--- Additional comment from Artem Russakovskii on 2019-01-31 18:00:40 UTC ---
Damn, you weren't kidding, I wish I saw these bug reports before I updated from
rock solid 4.1.
Less than 24 hours after upgrading, I already got a crash that you referenced:
[2019-01-31 09:38:04.317604] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fcccafcd329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fcccb1deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fccd705b218] ) 2-dict: dict is NULL [Invalid argument]
[2019-01-31 09:38:04.319308] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fcccafcd329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fcccb1deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fccd705b218] ) 2-dict: dict is NULL [Invalid argument]
[2019-01-31 09:38:04.320047] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fcccafcd329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fcccb1deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fccd705b218] ) 2-dict: dict is NULL [Invalid argument]
[2019-01-31 09:38:04.320677] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fcccafcd329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fcccb1deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fccd705b218] ) 2-dict: dict is NULL [Invalid argument]
The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk]
2-SITE_data1-replicate-0: selecting local read_child SITE_data1-client-3"
repeated 5 times between [2019-01-31 09:37:54.751905] and [2019-01-31
09:38:03.958061]
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
2-epoll: Failed to dispatch handler" repeated 72 times between [2019-01-31
09:37:53.746741] and [2019-01-31 09:38:04.696993]
pending frames:
frame : type(1) op(READ)
frame : type(1) op(OPEN)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 6
time of crash:
2019-01-31 09:38:04
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.3
/usr/lib64/libglusterfs.so.0(+0x2764c)[0x7fccd706664c]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7fccd7070cb6]
/lib64/libc.so.6(+0x36160)[0x7fccd622d160]
/lib64/libc.so.6(gsignal+0x110)[0x7fccd622d0e0]
/lib64/libc.so.6(abort+0x151)[0x7fccd622e6c1]
/lib64/libc.so.6(+0x2e6fa)[0x7fccd62256fa]
/lib64/libc.so.6(+0x2e772)[0x7fccd6225772]
/lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7fccd65bb0b8]
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x32c4d)[0x7fcccbb01c4d]
/usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x65778)[0x7fcccbdd1778]
/usr/lib64/libgfrpc.so.0(+0xe820)[0x7fccd6e31820]
/usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7fccd6e31b6f]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fccd6e2e063]
/usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa0b2)[0x7fccd0b7e0b2]
/usr/lib64/libglusterfs.so.0(+0x854c3)[0x7fccd70c44c3]
/lib64/libpthread.so.0(+0x7559)[0x7fccd65b8559]
/lib64/libc.so.6(clone+0x3f)[0x7fccd62ef81f]
---------
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1313567
[Bug 1313567] flooding of "dict is NULL" logging
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 03:17:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 03:17:34 +0000
Subject: [Bugs] [Bug 1313567] flooding of "dict is NULL" logging
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1313567
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1671603
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1671603
[Bug 1671603] flooding of "dict is NULL" logging
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 03:18:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 03:18:55 +0000
Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1667103
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |atumball at redhat.com
Blocks| |1671603
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1671603
[Bug 1671603] flooding of "dict is NULL" logging
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 03:18:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 03:18:55 +0000
Subject: [Bugs] [Bug 1671603] flooding of "dict is NULL" logging
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671603
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1667103 (glusterfs-5.4)
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1667103
[Bug 1667103] GlusterFS 5.4 tracker
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 03:19:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 03:19:19 +0000
Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1667103
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks|1671603 |
Depends On| |1671603
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1671603
[Bug 1671603] flooding of "dict is NULL" logging
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 03:19:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 03:19:19 +0000
Subject: [Bugs] [Bug 1671603] flooding of "dict is NULL" logging
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671603
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1667103 (glusterfs-5.4)
Depends On|1667103 (glusterfs-5.4) |
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1667103
[Bug 1667103] GlusterFS 5.4 tracker
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 03:21:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 03:21:53 +0000
Subject: [Bugs] [Bug 1671603] flooding of "dict is NULL" logging & crash of
client process
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671603
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Summary|flooding of "dict is NULL" |flooding of "dict is NULL"
|logging |logging & crash of client
| |process
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 03:29:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 03:29:51 +0000
Subject: [Bugs] [Bug 1671213] core: move "dict is NULL" logs to DEBUG log
level
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671213
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-01 03:29:51
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22128 (core: move "dict is NULL" logs to
DEBUG log level) merged (#2) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 03:29:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 03:29:58 +0000
Subject: [Bugs] [Bug 1671217] core: move "dict is NULL" logs to DEBUG log
level
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671217
Bug 1671217 depends on bug 1671213, which changed state.
Bug 1671213 Summary: core: move "dict is NULL" logs to DEBUG log level
https://bugzilla.redhat.com/show_bug.cgi?id=1671213
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 04:44:21 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 04:44:21 +0000
Subject: [Bugs] [Bug 1667804] Unable to delete directories that contain
linkto files that point to itself.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1667804
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Version|4.1 |mainline
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 05:07:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:07:31 +0000
Subject: [Bugs] [Bug 1671611] New: Unable to delete directories that contain
linkto files that point to itself.
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671611
Bug ID: 1671611
Summary: Unable to delete directories that contain linkto files
that point to itself.
Product: GlusterFS
Version: 5
Status: NEW
Component: distribute
Assignee: bugs at gluster.org
Reporter: nbalacha at redhat.com
CC: bugs at gluster.org
Depends On: 1667804
Blocks: 1667556, 1668989
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1667804 +++
Description of problem:
A directory containing linkto files that point to itself cannot be deleted.
Version-Release number of selected component (if applicable):
How reproducible:
Consistently
Steps to Reproduce:
1. gluster v create tvol 192.168.122.7:/bricks/brick2/tvol-{1..2}
2. gluster v start tvol
3. mount -t glusterfs -s 192.168.122.7:/tvol /mnt/g1
4. cd /mnt/g1
5. mkdir -p dir0/dir1/dir2
6. cd dir0/dir1/dir2
7. for i in {1..100}; do echo "Test file" > tfile-$i; done
8. for i in {1..100}; do mv tfile-$i ntfile-$i; done
9. gluster v remove-brick tvol 192.168.122.7:/bricks/brick2/tvol-2 start
Once the remove-brick status shows "completed",
10. gluster v remove-brick tvol 192.168.122.7:/bricks/brick2/tvol-2 stop
You should now have only linkto files in
192.168.122.7:/bricks/brick2/tvol-2/dir0/dir1/dir2 and they should all be
pointing to
tvol-client-0.
Manually change the linkto xattr value for every file in brick2 to point to
itself, in this case "tvol-client-1" (make sure the string is null-terminated;
see the note after step 12 for how the hex value is derived).
11. setfattr -n trusted.glusterfs.dht.linkto -v 0x74766f6c2d636c69656e742d3100
/bricks/brick2/tvol-2/dir0/dir1/dir2/ntfile-*
12. Try to delete the directory from the mount point:
[root at myserver g1]# rm -rf *
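(For reference, the hex value in step 11 is just the target client name
followed by a NUL byte; a quick sketch to derive it, assuming xxd is
installed:
printf 'tvol-client-1\0' | xxd -p
which prints 74766f6c2d636c69656e742d3100.)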
Actual results:
[root at myserver g1]# rm -rf *
rm: cannot remove ‘dir0/dir1/dir2’: Directory not empty
Expected results:
The directory should be deleted as there are no data files inside.
Additional info:
--- Additional comment from Worker Ant on 2019-01-21 09:50:09 UTC ---
REVIEW: https://review.gluster.org/22066 (cluster/dht: Delete invalid linkto
files in rmdir) posted (#1) for review on master by N Balachandran
--- Additional comment from Worker Ant on 2019-01-22 05:23:04 UTC ---
REVIEW: https://review.gluster.org/22066 (cluster/dht: Delete invalid linkto
files in rmdir) merged (#2) on master by Amar Tumballi
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1667804
[Bug 1667804] Unable to delete directories that contain linkto files that point
to itself.
https://bugzilla.redhat.com/show_bug.cgi?id=1668989
[Bug 1668989] Unable to delete directories that contain linkto files that point
to itself.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 05:07:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:07:31 +0000
Subject: [Bugs] [Bug 1667804] Unable to delete directories that contain
linkto files that point to itself.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1667804
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1671611
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1671611
[Bug 1671611] Unable to delete directories that contain linkto files that point
to itself.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 05:07:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:07:31 +0000
Subject: [Bugs] [Bug 1668989] Unable to delete directories that contain
linkto files that point to itself.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1668989
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1671611
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1671611
[Bug 1671611] Unable to delete directories that contain linkto files that point
to itself.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 05:12:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:12:41 +0000
Subject: [Bugs] [Bug 1671611] Unable to delete directories that contain
linkto files that point to itself.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671611
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |nbalacha at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 05:14:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:14:00 +0000
Subject: [Bugs] [Bug 1671611] Unable to delete directories that contain
linkto files that point to itself.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671611
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22136
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 05:14:01 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:14:01 +0000
Subject: [Bugs] [Bug 1671611] Unable to delete directories that contain
linkto files that point to itself.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671611
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22136 (cluster/dht: Delete invalid linkto
files in rmdir) posted (#1) for review on release-5 by N Balachandran
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 05:30:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:30:22 +0000
Subject: [Bugs] [Bug 1669937] Rebalance : While rebalance is in progress ,
SGID and sticky bit which is set on the files while file migration
is in progress is seen on the mount point
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1669937
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Version|4.1 |mainline
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 05:30:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:30:39 +0000
Subject: [Bugs] [Bug 1669937] Rebalance : While rebalance is in progress ,
SGID and sticky bit which is set on the files while file migration
is in progress is seen on the mount point
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1669937
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-01 05:30:39
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22103 (cluster/dht: Remove internal
permission bits) merged (#2) on master by N Balachandran
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 05:45:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:45:32 +0000
Subject: [Bugs] [Bug 1662264] thin-arbiter: Check with thin-arbiter file
before marking new entry change log
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662264
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/21933 (cluster/thin-arbiter: Consider
thin-arbiter before marking new entry changelog) merged (#6) on master by Amar
Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 05:51:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 05:51:19 +0000
Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days
with 'Failed to dispatch handler'
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671556
--- Comment #3 from Ravishankar N ---
(In reply to David E. Smith from comment #2)
> Actually, I ran the cores through strings and grepped for a few things like
> passwords -- as you'd expect from a memory dump from a Web server, there's a
> lot of sensitive information in there. Is there a safe/acceptable way to
> send the cores only to developers that can use them, or otherwise not have
> to make them publicly available while still letting the Gluster devs benefit
> from analyzing them?
Perhaps you could upload it to a shared Dropbox folder with view/download
access to the Red Hat email IDs I've CC'ed on this email (including me) to
begin with.
Note: I upgraded a 1x2 replica volume with 1 fuse client from v4.1.7 to v5.3
and did some basic I/O (kernel untar and iozone) and did not observe any
crashes, so maybe this is something that is hit under extreme I/O or memory
pressure. :-(
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 06:58:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 06:58:39 +0000
Subject: [Bugs] [Bug 1671637] New: geo-rep: Issue with configparser import
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671637
Bug ID: 1671637
Summary: geo-rep: Issue with configparser import
Product: GlusterFS
Version: mainline
Status: NEW
Component: geo-replication
Assignee: bugs at gluster.org
Reporter: khiremat at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
'configparser' is backported to python2 and can
be installed using pip (pip install configparser).
So trying to import 'configparser' first and later
'ConfigParser' can cause issues w.r.t. unicode strings.
Solution:
Always try importing 'ConfigParser' first and then
'configparser'. This solves python2/python3 compat
issues.
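A minimal sketch of that import order (illustrative, not the actual geo-rep
patch):
try:
    import ConfigParser as configparser  # Python 2 stdlib; try this first
except ImportError:
    import configparser                  # Python 3 stdlib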
Version-Release number of selected component (if applicable):
mainline
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 06:58:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 06:58:50 +0000
Subject: [Bugs] [Bug 1671637] geo-rep: Issue with configparser import
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671637
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |khiremat at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 07:00:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 07:00:46 +0000
Subject: [Bugs] [Bug 1671637] geo-rep: Issue with configparser import
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671637
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22138
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 07:00:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 07:00:47 +0000
Subject: [Bugs] [Bug 1671637] geo-rep: Issue with configparser import
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671637
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22138 (geo-rep: Fix configparser import
issue) posted (#1) for review on master by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 07:22:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 07:22:33 +0000
Subject: [Bugs] [Bug 1671647] New: Anomalies in python-lint build job
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671647
Bug ID: 1671647
Summary: Anomalies in python-lint build job
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: spamecha at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 08:06:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 08:06:11 +0000
Subject: [Bugs] [Bug 1671647] Anomalies in python-lint build job
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671647
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Comment #0 is|1 |0
private| |
CC| |nigelb at redhat.com
--- Comment #1 from Nigel Babu ---
Can you also paste in a link of where this is happening?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Feb 1 08:32:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 08:32:55 +0000
Subject: [Bugs] [Bug 1665145] Writes on Gluster 5 volumes fail with EIO when
"cluster.consistent-metadata" is set
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665145
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22139
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 08:32:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 08:32:56 +0000
Subject: [Bugs] [Bug 1665145] Writes on Gluster 5 volumes fail with EIO when
"cluster.consistent-metadata" is set
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665145
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22139 (readdir-ahead: do not zero-out iatt
in fop cbk) posted (#1) for review on release-5 by Ravishankar N
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 09:54:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 09:54:35 +0000
Subject: [Bugs] [Bug 1626085] "glusterfs --process-name fuse" crashes and
leads to "Transport endpoint is not connected"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1626085
--- Comment #9 from GCth ---
Is there anything else I can do to help fix this issue?
We had to implement a monitoring and restarting solution for our glusterfs
clusters as they crash frequently, causing open files to become unavailable
and dependent applications to stop working correctly.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 10:13:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 10:13:05 +0000
Subject: [Bugs] [Bug 1626085] "glusterfs --process-name fuse" crashes and
leads to "Transport endpoint is not connected"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1626085
Ravishankar N changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(rhb1 at gcth.net)
--- Comment #10 from Ravishankar N ---
(In reply to GCth from comment #9)
> Is there anything else I can do to help fixing this issue?
> We had to implement monitoring and restarting solution for our glusterfs
> clusters as they crash frequently, causing open files to be unavailable
> and dependent applications to stop working correctly.
Are all crashes in AFR with the same backtrace as in comment #7? What workload
are you running on your 4.1 gluster volume? It would be great if you could give
a consistent reproducer which we can try on our setup.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 10:23:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 10:23:14 +0000
Subject: [Bugs] [Bug 1665216] Databases crashes on Gluster 5 with the option
performance.write-behind enabled
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665216
mhutter changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |bugzilla.redhat.com at mhutter
| |.net
--- Comment #5 from mhutter ---
Reproduction case: Exactly as described in the original Ticket.
# Prepare gluster volume
gluster volume set gluster-pv18 performance.write-behind off
# mount the volume
mount -t glusterfs :/gluster-pv18 /mnt/gluster-pv18
# start Postgres
docker run --name psql-test --rm -v /mnt/gluster-pv18:/var/lib/postgresql/data
docker.io/postgres:9.5
# this should work as expected
# clean up
docker stop psql-test
rm -rf /mnt/gluster-pv18/*
umount /mnt/gluster-pv18
# enable write-behind
gluster volume set gluster-pv18 performance.write-behind on
# mount the volume
mount -t glusterfs :/gluster-pv18 /mnt/gluster-pv18
# start Postgres
docker run --name psql-test --rm -v /mnt/gluster-pv18:/var/lib/postgresql/data
docker.io/postgres:9.5
# !!! this will now fail:
# creating template1 database in /var/lib/postgresql/data/base/1 ... ok
# initializing pg_authid ... LOG: invalid primary checkpoint record
# LOG: invalid secondary checkpoint record
# PANIC: could not locate a valid checkpoint record
# Aborted (core dumped)
# child process exited with exit code 134
# initdb: removing contents of data directory "/var/lib/postgresql/data"
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 10:41:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 10:41:10 +0000
Subject: [Bugs] [Bug 1665216] Databases crashes on Gluster 5 with the option
performance.write-behind enabled
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665216
mhutter changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(gabisoft at freesurf |
|.ch) |
--- Comment #6 from mhutter ---
Created attachment 1525793
--> https://bugzilla.redhat.com/attachment.cgi?id=1525793&action=edit
dump-fuse, gzipped
--- Comment #7 from mhutter ---
Created attachment 1525794
--> https://bugzilla.redhat.com/attachment.cgi?id=1525794&action=edit
strace of initdb (which crashed)
Also interesting: while creating the TGZ archive (not on the gluster volume) of
all strace files (which were on the gluster volume), a lot of messages like
this appeared:
tar: strace/initdb.42: file changed as we read it
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Feb 1 13:35:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 01 Feb 2019 13:35:31 +0000
Subject: [Bugs] [Bug 1671733] New: clang-format test is checking contrib
files, but rfc.sh skips them
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671733
Bug ID: 1671733
Summary: clang-format test is checking contrib files, but
rfc.sh skips them
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: jahernan at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The clang-format job is testing files inside the 'contrib' directory. I think
they shouldn't be checked, just as rfc.sh already skips them.
Example: https://build.gluster.org/job/clang-format/2868/console
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Execute the job on this patch:
https://review.gluster.org/c/glusterfs/+/20636
2.
3.
Actual results:
The test fails
Expected results:
The test shouldn't fail because of invalid formatting on files inside 'contrib'
directory.
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sat Feb 2 03:07:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 03:07:52 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #529 from Worker Ant ---
REVIEW: https://review.gluster.org/22094 (core: make gf_thread_create() easier
to use) merged (#5) on master by Xavi Hernandez
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sat Feb 2 03:08:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 03:08:22 +0000
Subject: [Bugs] [Bug 1664934] glusterfs-fuse client not benefiting from page
cache on read after write
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1664934
--- Comment #10 from Worker Ant ---
REVIEW: https://review.gluster.org/22109 (mount/fuse: expose auto-invalidation
as a mount option) merged (#13) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Sat Feb 2 03:09:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 03:09:24 +0000
Subject: [Bugs] [Bug 1658116] python2 to python3 compatibilty issues
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1658116
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-02 03:09:24
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/21845 (glusterfind: python2 to python3
compat) merged (#7) on master by Amar Tumballi
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Sat Feb 2 03:10:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 03:10:13 +0000
Subject: [Bugs] [Bug 1670259] New GFID file recreated in a replica set after
a GFID mismatch resolution
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670259
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22112
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Sat Feb 2 03:10:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 03:10:15 +0000
Subject: [Bugs] [Bug 1670259] New GFID file recreated in a replica set after
a GFID mismatch resolution
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670259
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-02 03:10:15
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22112 (cluster/dht: Do not use gfid-req in
fresh lookup) merged (#7) on master by Amar Tumballi
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Sat Feb 2 03:11:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 03:11:43 +0000
Subject: [Bugs] [Bug 1671647] Anomalies in python-lint build job
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671647
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |high
CC| |atumball at redhat.com
Severity|unspecified |high
--- Comment #2 from Amar Tumballi ---
https://build.gluster.org/job/python-lint/
All the latest builds are passing, but if you go into the console output and
look through it, there are some exceptions thrown.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sat Feb 2 20:15:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 20:15:09 +0000
Subject: [Bugs] [Bug 1671603] flooding of "dict is NULL" logging & crash of
client process
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671603
--- Comment #1 from Artem Russakovskii ---
The fuse crash happened again yesterday, to another volume. Are there any mount
options that could help mitigate this?
In the meantime, I set up a monit (https://mmonit.com/monit/) task to watch and
restart the mount, which works and recovers the mount point within a minute.
Not ideal, but a temporary workaround.
By the way, the way to reproduce this "Transport endpoint is not connected"
condition for testing purposes is to kill -9 the right "glusterfs
--process-name fuse" process.
monit check:
check filesystem glusterfs_data1 with path /mnt/glusterfs_data1
start program = "/bin/mount /mnt/glusterfs_data1"
stop program = "/bin/umount /mnt/glusterfs_data1"
if space usage > 90% for 5 times within 15 cycles
then alert else if succeeded for 10 cycles then alert
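A sketch of that kill -9 reproduction (the pgrep pattern is illustrative;
pick the PID of the mount you want to test):
# list fuse client processes with their full command lines
pgrep -af 'glusterfs.*--process-name fuse'
# kill one of them; its mount point then returns "Transport endpoint
# is not connected" until it is remounted
kill -9 <PID>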
stack trace:
[2019-02-01 23:22:00.312894] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fa0249e4329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
[2019-02-01 23:22:00.314051] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fa0249e4329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 26 times between [2019-02-01
23:21:20.857333] and [2019-02-01 23:21:56.164427]
The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk]
0-SITE_data3-replicate-0: selecting local read_child SITE_data3-client-3"
repeated 27 times between [2019-02-01 23:21:11.142467] and [2019-02-01
23:22:03.474036]
pending frames:
frame : type(1) op(LOOKUP)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 6
time of crash:
2019-02-01 23:22:03
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.3
/usr/lib64/libglusterfs.so.0(+0x2764c)[0x7fa02cf6664c]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7fa02cf70cb6]
/lib64/libc.so.6(+0x36160)[0x7fa02c12d160]
/lib64/libc.so.6(gsignal+0x110)[0x7fa02c12d0e0]
/lib64/libc.so.6(abort+0x151)[0x7fa02c12e6c1]
/lib64/libc.so.6(+0x2e6fa)[0x7fa02c1256fa]
/lib64/libc.so.6(+0x2e772)[0x7fa02c125772]
/lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7fa02c4bb0b8]
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x5dc9d)[0x7fa025543c9d]
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x70ba1)[0x7fa025556ba1]
/usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x58f3f)[0x7fa0257dbf3f]
/usr/lib64/libgfrpc.so.0(+0xe820)[0x7fa02cd31820]
/usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7fa02cd31b6f]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fa02cd2e063]
/usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa0b2)[0x7fa02694e0b2]
/usr/lib64/libglusterfs.so.0(+0x854c3)[0x7fa02cfc44c3]
/lib64/libpthread.so.0(+0x7559)[0x7fa02c4b8559]
/lib64/libc.so.6(clone+0x3f)[0x7fa02c1ef81f]
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sat Feb 2 20:16:21 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 20:16:21 +0000
Subject: [Bugs] [Bug 1313567] flooding of "dict is NULL" logging
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1313567
--- Comment #17 from Artem Russakovskii ---
The fuse crash happened again yesterday, to another volume. Are there any mount
options that could help mitigate this?
In the meantime, I set up a monit (https://mmonit.com/monit/) task to watch and
restart the mount, which works and recovers the mount point within a minute.
Not ideal, but a temporary workaround.
By the way, the way to reproduce this "Transport endpoint is not connected"
condition for testing purposes is to kill -9 the right "glusterfs
--process-name fuse" process.
monit check:
check filesystem glusterfs_data1 with path /mnt/glusterfs_data1
start program = "/bin/mount /mnt/glusterfs_data1"
stop program = "/bin/umount /mnt/glusterfs_data1"
if space usage > 90% for 5 times within 15 cycles
then alert else if succeeded for 10 cycles then alert
stack trace:
[2019-02-01 23:22:00.312894] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fa0249e4329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
[2019-02-01 23:22:00.314051] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fa0249e4329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 26 times between [2019-02-01
23:21:20.857333] and [2019-02-01 23:21:56.164427]
The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk]
0-SITE_data3-replicate-0: selecting local read_child SITE_data3-client-3"
repeated 27 times between [2019-02-01 23:21:11.142467] and [2019-02-01
23:22:03.474036]
pending frames:
frame : type(1) op(LOOKUP)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 6
time of crash:
2019-02-01 23:22:03
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.3
/usr/lib64/libglusterfs.so.0(+0x2764c)[0x7fa02cf6664c]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7fa02cf70cb6]
/lib64/libc.so.6(+0x36160)[0x7fa02c12d160]
/lib64/libc.so.6(gsignal+0x110)[0x7fa02c12d0e0]
/lib64/libc.so.6(abort+0x151)[0x7fa02c12e6c1]
/lib64/libc.so.6(+0x2e6fa)[0x7fa02c1256fa]
/lib64/libc.so.6(+0x2e772)[0x7fa02c125772]
/lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7fa02c4bb0b8]
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x5dc9d)[0x7fa025543c9d]
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x70ba1)[0x7fa025556ba1]
/usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x58f3f)[0x7fa0257dbf3f]
/usr/lib64/libgfrpc.so.0(+0xe820)[0x7fa02cd31820]
/usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7fa02cd31b6f]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fa02cd2e063]
/usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa0b2)[0x7fa02694e0b2]
/usr/lib64/libglusterfs.so.0(+0x854c3)[0x7fa02cfc44c3]
/lib64/libpthread.so.0(+0x7559)[0x7fa02c4b8559]
/lib64/libc.so.6(clone+0x3f)[0x7fa02c1ef81f]
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Sat Feb 2 20:16:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 20:16:52 +0000
Subject: [Bugs] [Bug 1651246] Failed to dispatch handler
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1651246
--- Comment #29 from Artem Russakovskii ---
The fuse crash happened again yesterday, to another volume. Are there any mount
options that could help mitigate this?
In the meantime, I set up a monit (https://mmonit.com/monit/) task to watch and
restart the mount, which works and recovers the mount point within a minute.
Not ideal, but a temporary workaround.
By the way, the way to reproduce this "Transport endpoint is not connected"
condition for testing purposes is to kill -9 the right "glusterfs
--process-name fuse" process.
monit check:
check filesystem glusterfs_data1 with path /mnt/glusterfs_data1
start program = "/bin/mount /mnt/glusterfs_data1"
stop program = "/bin/umount /mnt/glusterfs_data1"
if space usage > 90% for 5 times within 15 cycles
then alert else if succeeded for 10 cycles then alert
stack trace:
[2019-02-01 23:22:00.312894] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fa0249e4329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
[2019-02-01 23:22:00.314051] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fa0249e4329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 26 times between [2019-02-01
23:21:20.857333] and [2019-02-01 23:21:56.164427]
The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk]
0-SITE_data3-replicate-0: selecting local read_child SITE_data3-client-3"
repeated 27 times between [2019-02-01 23:21:11.142467] and [2019-02-01
23:22:03.474036]
pending frames:
frame : type(1) op(LOOKUP)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 6
time of crash:
2019-02-01 23:22:03
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.3
/usr/lib64/libglusterfs.so.0(+0x2764c)[0x7fa02cf6664c]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7fa02cf70cb6]
/lib64/libc.so.6(+0x36160)[0x7fa02c12d160]
/lib64/libc.so.6(gsignal+0x110)[0x7fa02c12d0e0]
/lib64/libc.so.6(abort+0x151)[0x7fa02c12e6c1]
/lib64/libc.so.6(+0x2e6fa)[0x7fa02c1256fa]
/lib64/libc.so.6(+0x2e772)[0x7fa02c125772]
/lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7fa02c4bb0b8]
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x5dc9d)[0x7fa025543c9d]
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x70ba1)[0x7fa025556ba1]
/usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x58f3f)[0x7fa0257dbf3f]
/usr/lib64/libgfrpc.so.0(+0xe820)[0x7fa02cd31820]
/usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7fa02cd31b6f]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fa02cd2e063]
/usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa0b2)[0x7fa02694e0b2]
/usr/lib64/libglusterfs.so.0(+0x854c3)[0x7fa02cfc44c3]
/lib64/libpthread.so.0(+0x7559)[0x7fa02c4b8559]
/lib64/libc.so.6(clone+0x3f)[0x7fa02c1ef81f]
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Sat Feb 2 20:17:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sat, 02 Feb 2019 20:17:15 +0000
Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days
with 'Failed to dispatch handler'
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671556
--- Comment #4 from Artem Russakovskii ---
The fuse crash happened again yesterday, to another volume. Are there any mount
options that could help mitigate this?
In the meantime, I set up a monit (https://mmonit.com/monit/) task to watch and
restart the mount, which works and recovers the mount point within a minute.
Not ideal, but a temporary workaround.
By the way, the way to reproduce this "Transport endpoint is not connected"
condition for testing purposes is to kill -9 the right "glusterfs
--process-name fuse" process.
monit check:
check filesystem glusterfs_data1 with path /mnt/glusterfs_data1
start program = "/bin/mount /mnt/glusterfs_data1"
stop program = "/bin/umount /mnt/glusterfs_data1"
if space usage > 90% for 5 times within 15 cycles
then alert else if succeeded for 10 cycles then alert
stack trace:
[2019-02-01 23:22:00.312894] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fa0249e4329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
[2019-02-01 23:22:00.314051] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
[0x7fa0249e4329]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
[0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
[0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 26 times between [2019-02-01
23:21:20.857333] and [2019-02-01 23:21:56.164427]
The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk]
0-SITE_data3-replicate-0: selecting local read_child SITE_data3-client-3"
repeated 27 times between [2019-02-01 23:21:11.142467] and [2019-02-01
23:22:03.474036]
pending frames:
frame : type(1) op(LOOKUP)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 6
time of crash:
2019-02-01 23:22:03
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.3
/usr/lib64/libglusterfs.so.0(+0x2764c)[0x7fa02cf6664c]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7fa02cf70cb6]
/lib64/libc.so.6(+0x36160)[0x7fa02c12d160]
/lib64/libc.so.6(gsignal+0x110)[0x7fa02c12d0e0]
/lib64/libc.so.6(abort+0x151)[0x7fa02c12e6c1]
/lib64/libc.so.6(+0x2e6fa)[0x7fa02c1256fa]
/lib64/libc.so.6(+0x2e772)[0x7fa02c125772]
/lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7fa02c4bb0b8]
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x5dc9d)[0x7fa025543c9d]
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x70ba1)[0x7fa025556ba1]
/usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x58f3f)[0x7fa0257dbf3f]
/usr/lib64/libgfrpc.so.0(+0xe820)[0x7fa02cd31820]
/usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7fa02cd31b6f]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fa02cd2e063]
/usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa0b2)[0x7fa02694e0b2]
/usr/lib64/libglusterfs.so.0(+0x854c3)[0x7fa02cfc44c3]
/lib64/libpthread.so.0(+0x7559)[0x7fa02c4b8559]
/lib64/libc.so.6(clone+0x3f)[0x7fa02c1ef81f]
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sun Feb 3 03:07:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 03 Feb 2019 03:07:11 +0000
Subject: [Bugs] [Bug 1651246] Failed to dispatch handler
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1651246
--- Comment #30 from Milind Changire ---
The following line in the backtrace, which is the topmost frame pointing to
gluster bits:
/usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x5dc9d)[0x7fa025543c9d]
resolves to:
afr-common.c:2203
intersection = alloca0(priv->child_count);
-----
NOTE:
print-backtrace.sh isn't helping here because the naming convention of the
rpms has changed
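For context, alloca0() in glusterfs appears to be (per libglusterfs's
common-utils.h; worth verifying there) a macro that zero-fills a stack
allocation, roughly:

    /* Approximation of glusterfs's alloca0(): allocate size bytes on the
     * stack and zero them (GCC statement-expression extension). Verify
     * against libglusterfs's common-utils.h before relying on this. */
    #include <alloca.h>
    #include <string.h>

    #define alloca0(size)                                                 \
        ({                                                                \
            void *__ptr = alloca(size);                                   \
            memset(__ptr, 0, (size));                                     \
        })

If that frame resolution is right, the only dereference on that line is
priv->child_count, so an invalid priv would be the immediate suspect (an
assumption, not a confirmed root cause).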
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Sun Feb 3 05:57:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 03 Feb 2019 05:57:12 +0000
Subject: [Bugs] [Bug 1671733] clang-format test is checking contrib files,
but rfc.sh skips them
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671733
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
CC| |nigelb at redhat.com
Assignee|bugs at gluster.org |nigelb at redhat.com
--- Comment #1 from Nigel Babu ---
Fixing.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sun Feb 3 11:22:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 03 Feb 2019 11:22:51 +0000
Subject: [Bugs] [Bug 1671733] clang-format test is checking contrib files,
but rfc.sh skips them
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671733
--- Comment #2 from Nigel Babu ---
Pushed https://review.gluster.org/#/c/build-jobs/+/22143 for review.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Sun Feb 3 15:12:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 03 Feb 2019 15:12:43 +0000
Subject: [Bugs] [Bug 1672076] New: chrome / chromium crash on gluster,
sqlite issue?
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672076
Bug ID: 1672076
Summary: chrome / chromium crash on gluster, sqlite issue?
Product: GlusterFS
Version: 5
Status: NEW
Component: glusterd
Assignee: bugs at gluster.org
Reporter: mjc at avtechpulse.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
I run Fedora 29 clients and servers, with user home folders mounted on gluster.
This worked fine with Fedora 27 clients, but on F29 clients the chrome and
chromium browsers crash. The backtrace info (see below) suggests problems with
sqlite.
Firefox runs just fine, even though it also uses sqlite.
Chromium works fine on clients whose home folders are on local drives.
- Mike
clients: glusterfs-5.3-1.fc29.x86_64,
chromium-71.0.3578.98-1.fc29.x86_64
server: glusterfs-server-5.3-1.fc29.x86_64
[root at gluster1 ~]# gluster volume info
Volume Name: volume1
Type: Replicate
Volume ID: 91ef5aed-94be-44ff-a19d-c41682808159
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster/brick1/data
Brick2: gluster2:/gluster/brick2/data
Options Reconfigured:
nfs.disable: on
server.allow-insecure: on
cluster.favorite-child-policy: mtime
[mjc at daisy ~]$ chromium-browser
[18826:18826:0130/094436.431828:ERROR:sandbox_linux.cc(364)]
InitializeSandbox() called with multiple threads in process gpu-process.
[18785:18785:0130/094440.905900:ERROR:x11_input_method_context_impl_gtk.cc(144)]
Not implemented reached in virtual void
libgtkui::X11InputMethodContextImplGtk::SetSurroundingText(const string16&,
const gfx::Range&)
Received signal 7 BUS_ADRERR 7fc30e9bd000
#0 0x7fc34b008261 base::debug::StackTrace::StackTrace()
#1 0x7fc34b00869b base::debug::(anonymous namespace)::StackDumpSignalHandler()
#2 0x7fc34b008cb7 base::debug::(anonymous namespace)::StackDumpSignalHandler()
#3 0x7fc3401fe030
#4 0x7fc33f5820f0 __memmove_avx_unaligned_erms
#5 0x7fc346099491 unixRead
#6 0x7fc3460d2784 readDbPage
#7 0x7fc3460d5e4f getPageNormal
#8 0x7fc3460d5f01 getPageMMap
#9 0x7fc3460958f5 btreeGetPage
#10 0x7fc3460ec47b sqlite3BtreeBeginTrans
#11 0x7fc3460fd1e8 sqlite3VdbeExec
#12 0x7fc3461056af chrome_sqlite3_step
#13 0x7fc3464071c7 sql::Statement::StepInternal()
#14 0x7fc3464072de sql::Statement::Step()
#15 0x555fd21699d7 autofill::AutofillTable::GetAutofillProfiles()
#16 0x555fd2160808
autofill::AutofillProfileSyncableService::MergeDataAndStartSyncing()
#17 0x555fd1d25207 syncer::SharedChangeProcessor::StartAssociation()
#18 0x555fd1d09652
_ZN4base8internal7InvokerINS0_9BindStateIMN6syncer21SharedChangeProcessorEFvNS_17RepeatingCallbackIFvNS3_18DataTypeController15ConfigureResultERKNS3_15SyncMergeResultESA_EEEPNS3_10SyncClientEPNS3_29GenericChangeProcessorFactoryEPNS3_9UserShareESt10unique_ptrINS3_20DataTypeErrorHandlerESt14default_deleteISK_EEEJ13scoped_refptrIS4_ESC_SE_SG_SI_NS0_13PassedWrapperISN_EEEEEFvvEE3RunEPNS0_13BindStateBaseE
#19 0x7fc34af4309d base::debug::TaskAnnotator::RunTask()
#20 0x7fc34afcda86 base::internal::TaskTracker::RunOrSkipTask()
#21 0x7fc34b01b6a2 base::internal::TaskTrackerPosix::RunOrSkipTask()
#22 0x7fc34afd07d6 base::internal::TaskTracker::RunAndPopNextTask()
#23 0x7fc34afca5e7 base::internal::SchedulerWorker::RunWorker()
#24 0x7fc34afcac84 base::internal::SchedulerWorker::RunSharedWorker()
#25 0x7fc34b01aa09 base::(anonymous namespace)::ThreadFunc()
#26 0x7fc3401f358e start_thread
#27 0x7fc33f51d6a3 __GI___clone
r8: 00000cbfd93d4a00 r9: 00000000cbfd93d4 r10: 000000000000011c r11: 0000000000000000
r12: 00000cbfd940eb00 r13: 0000000000000000 r14: 0000000000000000 r15: 00000cbfd9336c00
di: 00000cbfd93d4a00 si: 00007fc30e9bd000 bp: 00007fc30faff7e0 bx: 0000000000000800
dx: 0000000000000800 ax: 00000cbfd93d4a00 cx: 0000000000000800 sp: 00007fc30faff788
ip: 00007fc33f5820f0 efl: 0000000000010287 cgf: 002b000000000033 erf: 0000000000000004
trp: 000000000000000e msk: 0000000000000000 cr2: 00007fc30e9bd000
[end of stack trace]
Calling _exit(1). Core file will not be generated.
And a client mount log is below, although the full log contains megabytes of:
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 20178 times between [2019-01-31
13:44:14.962950] and [2019-01-31 13:46:00.013310]
and
[2019-01-31 13:46:07.470163] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
so I've just shown the start of the log. I guess that's related to
https://bugzilla.redhat.com/show_bug.cgi?id=1651246.
- Mike
Mount log:
[2019-01-31 13:44:00.775353] I [MSGID: 100030] [glusterfsd.c:2715:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.3 (args:
/usr/sbin/glusterfs --process-name fuse --volfile-server=gluster1
--volfile-server=gluster2 --volfile-id=/volume1 /fileserver2)
[2019-01-31 13:44:00.817140] I [MSGID: 101190]
[event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2019-01-31 13:44:00.926491] I [MSGID: 101190]
[event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 2
[2019-01-31 13:44:00.928102] I [MSGID: 114020] [client.c:2354:notify]
0-volume1-client-0: parent translators are ready, attempting connect on
transport
[2019-01-31 13:44:00.931063] I [MSGID: 114020] [client.c:2354:notify]
0-volume1-client-1: parent translators are ready, attempting connect on
transport
[2019-01-31 13:44:00.932144] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-volume1-client-0: changing port to 49152 (from 0)
Final graph:
+------------------------------------------------------------------------------+
1: volume volume1-client-0
2: type protocol/client
3: option ping-timeout 42
4: option remote-host gluster1
5: option remote-subvolume /gluster/brick1/data
6: option transport-type socket
7: option transport.tcp-user-timeout 0
8: option transport.socket.keepalive-time 20
9: option transport.socket.keepalive-interval 2
10: option transport.socket.keepalive-count 9
11: option send-gids true
12: end-volume
13:
14: volume volume1-client-1
15: type protocol/client
16: option ping-timeout 42
17: option remote-host gluster2
18: option remote-subvolume /gluster/brick2/data
19: option transport-type socket
20: option transport.tcp-user-timeout 0
21: option transport.socket.keepalive-time 20
22: option transport.socket.keepalive-interval 2
23: option transport.socket.keepalive-count 9
24: option send-gids true
25: end-volume
26:
27: volume volume1-replicate-0
28: type cluster/replicate
29: option afr-pending-xattr volume1-client-0,volume1-client-1
30: option favorite-child-policy mtime
31: option use-compound-fops off
32: subvolumes volume1-client-0 volume1-client-1
33: end-volume
34:
35: volume volume1-dht
36: type cluster/distribute
[2019-01-31 13:44:00.932495] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
37: option lock-migration off
38: option force-migration off
39: subvolumes volume1-replicate-0
40: end-volume
41:
42: volume volume1-write-behind
43: type performance/write-behind
44: subvolumes volume1-dht
45: end-volume
46:
47: volume volume1-read-ahead
48: type performance/read-ahead
49: subvolumes volume1-write-behind
50: end-volume
51:
52: volume volume1-readdir-ahead
53: type performance/readdir-ahead
54: option parallel-readdir off
55: option rda-request-size 131072
56: option rda-cache-limit 10MB
57: subvolumes volume1-read-ahead
58: end-volume
59:
60: volume volume1-io-cache
61: type performance/io-cache
62: subvolumes volume1-readdir-ahead
63: end-volume
64:
65: volume volume1-quick-read
66: type performance/quick-read
67: subvolumes volume1-io-cache
68: end-volume
69:
70: volume volume1-open-behind
71: type performance/open-behind
72: subvolumes volume1-quick-read
73: end-volume
74:
75: volume volume1-md-cache
76: type performance/md-cache
77: subvolumes volume1-open-behind
78: end-volume
79:
80: volume volume1
81: type debug/io-stats
82: option log-level INFO
83: option latency-measurement off
84: option count-fop-hits off
85: subvolumes volume1-md-cache
86: end-volume
87:
88: volume meta-autoload
89: type meta
90: subvolumes volume1
91: end-volume
92:
+------------------------------------------------------------------------------+
[2019-01-31 13:44:00.933375] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-volume1-client-1: changing port to 49152 (from 0)
[2019-01-31 13:44:00.933549] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-01-31 13:44:00.934170] I [MSGID: 114046]
[client-handshake.c:1107:client_setvolume_cbk] 0-volume1-client-0: Connected to
volume1-client-0, attached to remote volume '/gluster/brick1/data'.
[2019-01-31 13:44:00.934210] I [MSGID: 108005]
[afr-common.c:5237:__afr_handle_child_up_event] 0-volume1-replicate-0:
Subvolume 'volume1-client-0' came back up; going online.
[2019-01-31 13:44:00.935291] I [MSGID: 114046]
[client-handshake.c:1107:client_setvolume_cbk] 0-volume1-client-1: Connected to
volume1-client-1, attached to remote volume '/gluster/brick2/data'.
[2019-01-31 13:44:00.937661] I [fuse-bridge.c:4267:fuse_init] 0-glusterfs-fuse:
FUSE inited with protocol versions: glusterfs 7.24 kernel 7.28
[2019-01-31 13:44:00.937691] I [fuse-bridge.c:4878:fuse_graph_sync] 0-fuse:
switched to graph 0
[2019-01-31 13:44:14.852144] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:14.962950] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-01-31 13:44:15.038615] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.040956] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.041044] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.041467] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.471018] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.477003] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.482380] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.487047] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.603624] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.607726] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.607906] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 02:11:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 02:11:48 +0000
Subject: [Bugs] [Bug 1449773] Finish the installation and freebsd10.3.rht
and clean password in jenkins
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1449773
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed|2018-08-29 03:53:37 |2019-02-04 02:11:48
--- Comment #3 from Nigel Babu ---
This is now fixed. We build on the internal freebsd builder now.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 02:12:40 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 02:12:40 +0000
Subject: [Bugs] [Bug 1498151] Move download server to the community cage
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1498151
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-02-04 02:12:40
--- Comment #11 from Nigel Babu ---
This is now complete.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 02:59:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 02:59:25 +0000
Subject: [Bugs] [Bug 1564451] The abandon job for patches should post info
in bugzilla that some patch is abandon'd.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1564451
--- Comment #2 from Nigel Babu ---
The code for this is written in the bugzilla script, but this needs a Jenkins
job to actually call the script.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 03:47:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 03:47:49 +0000
Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported)
components in the build
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1635688
--- Comment #18 from Worker Ant ---
REVIEW: https://review.gluster.org/21877 (glusterd: manage upgrade to current
master) merged (#3) on master by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 03:15:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 03:15:43 +0000
Subject: [Bugs] [Bug 1659394] Maintainer permissions on gluster-mixins
project for Ankush
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659394
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
CC| |nigelb at redhat.com
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-02-04 03:15:43
--- Comment #1 from Nigel Babu ---
This has been done for some time.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 04:50:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 04:50:56 +0000
Subject: [Bugs] [Bug 1636246] [GSS] SMBD crashes when streams_xattr VFS is
used with Gluster VFS
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1636246
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|VERIFIED |RELEASE_PENDING
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 05:14:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 05:14:44 +0000
Subject: [Bugs] [Bug 1670031] performance regression seen with smallfile
workload tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670031
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22120
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 05:14:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 05:14:45 +0000
Subject: [Bugs] [Bug 1670031] performance regression seen with smallfile
workload tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670031
--- Comment #4 from Worker Ant ---
REVIEW: https://review.gluster.org/22120 (inode: Reduce work load of
inode_table->lock section) posted (#6) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 05:41:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 05:41:45 +0000
Subject: [Bugs] [Bug 1672155] New: looks like devrpm-fedora jobs are failing
due to lack of storage
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672155
Bug ID: 1672155
Summary: looks like devrpm-fedora jobs are failing due to lack
of storage
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Severity: urgent
Priority: high
Assignee: bugs at gluster.org
Reporter: atumball at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The recent failures of https://build.gluster.org/job/devrpm-fedora/ seem to be
due to a lack of storage on the system. For example, check the failure at
https://build.gluster.org/job/devrpm-fedora/14647/console
Version-Release number of selected component (if applicable):
master
How reproducible:
100%
Steps to Reproduce:
1. submit a patch to glusterfs.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 05:52:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 05:52:45 +0000
Subject: [Bugs] [Bug 1362129] rename of a file can cause data loss in an
replica/arbiter volume configuration
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1362129
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|VERIFIED |RELEASE_PENDING
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 05:53:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 05:53:05 +0000
Subject: [Bugs] [Bug 1654103] Invalid memory read after freed in
dht_rmdir_readdirp_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654103
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|VERIFIED |RELEASE_PENDING
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 05:53:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 05:53:07 +0000
Subject: [Bugs] [Bug 1655578] Incorrect usage of local->fd in
afr_open_ftruncate_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1655578
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|VERIFIED |RELEASE_PENDING
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 05:53:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 05:53:09 +0000
Subject: [Bugs] [Bug 1659439] Memory leak: dict_t leak in rda_opendir
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659439
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|VERIFIED |RELEASE_PENDING
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 05:53:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 05:53:10 +0000
Subject: [Bugs] [Bug 1663232] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663232
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|VERIFIED |RELEASE_PENDING
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 06:27:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 06:27:53 +0000
Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and
files listing
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670382
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |locbus at gmail.com,
| |nbalacha at redhat.com
Flags| |needinfo?(locbus at gmail.com)
--- Comment #2 from Nithya Balachandran ---
Can you clarify that you are doing the following:
1. The files/directories are being created from one gluster client (not
directly on the bricks)
2. The files/directories cannot be listed from another client which has mounted
the same volume
3. Are the files/directories visible on the client from which they were
created?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:21:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:21:30 +0000
Subject: [Bugs] [Bug 1672155] looks like devrpm-fedora jobs are failing due
to lack of storage
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672155
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
CC| |nigelb at redhat.com
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-02-04 07:21:30
--- Comment #1 from Nigel Babu ---
This is now fixed. The /home/jenkins/.local folder was consuming a bunch of
space as was the mock cache. I've cleared them both out now.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:36:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:36:24 +0000
Subject: [Bugs] [Bug 1636246] [GSS] SMBD crashes when streams_xattr VFS is
used with Gluster VFS
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1636246
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
Last Closed|2018-10-31 03:11:36 |2019-02-04 07:36:24
--- Comment #45 from errata-xmlrpc ---
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2019:0261
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:36:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:36:28 +0000
Subject: [Bugs] [Bug 1636246] [GSS] SMBD crashes when streams_xattr VFS is
used with Gluster VFS
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1636246
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Red Hat Product Errata
| |RHBA-2019:0261
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:25 +0000
Subject: [Bugs] [Bug 1362129] rename of a file can cause data loss in an
replica/arbiter volume configuration
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1362129
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
Last Closed| |2019-02-04 07:41:25
--- Comment #27 from errata-xmlrpc ---
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:31 +0000
Subject: [Bugs] [Bug 1646892] Portmap entries showing stale brick entries
when bricks are down
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1646892
Bug 1646892 depends on bug 1637379, which changed state.
Bug 1637379 Summary: Portmap entries showing stale brick entries when bricks are down
https://bugzilla.redhat.com/show_bug.cgi?id=1637379
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:33 +0000
Subject: [Bugs] [Bug 1642448] EC volume getting created without any
redundant brick
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1642448
Bug 1642448 depends on bug 1597252, which changed state.
Bug 1597252 Summary: EC volume getting created without any redundant brick
https://bugzilla.redhat.com/show_bug.cgi?id=1597252
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:34 +0000
Subject: [Bugs] [Bug 1654181] glusterd segmentation fault:
glusterd_op_ac_brick_op_failed (event=0x7f44e0e63f40,
ctx=0x0) at glusterd-op-sm.c:5606
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654181
Bug 1654181 depends on bug 1639476, which changed state.
Bug 1639476 Summary: glusterd segmentation fault: glusterd_op_ac_brick_op_failed (event=0x7f44e0e63f40, ctx=0x0) at glusterd-op-sm.c:5606
https://bugzilla.redhat.com/show_bug.cgi?id=1639476
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:35 +0000
Subject: [Bugs] [Bug 1630922] glusterd crashed and core generated at
gd_mgmt_v3_unlock_timer_cbk after huge number of volumes were created
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1630922
Bug 1630922 depends on bug 1599220, which changed state.
Bug 1599220 Summary: glusterd crashed and core generated at gd_mgmt_v3_unlock_timer_cbk after huge number of volumes were created
https://bugzilla.redhat.com/show_bug.cgi?id=1599220
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:35 +0000
Subject: [Bugs] [Bug 1655827] [Glusterd]: Glusterd crash while expanding
volumes using heketi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1655827
Bug 1655827 depends on bug 1652466, which changed state.
Bug 1652466 Summary: [Glusterd]: Glusterd crash while expanding volumes using heketi
https://bugzilla.redhat.com/show_bug.cgi?id=1652466
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:35 +0000
Subject: [Bugs] [Bug 1647074] when peer detach is issued,
throw a warning to remount volumes using other cluster IPs before
proceeding
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1647074
Bug 1647074 depends on bug 1639568, which changed state.
Bug 1639568 Summary: when peer detach is issued, throw a warning to remount volumes using other cluster IPs before proceeding
https://bugzilla.redhat.com/show_bug.cgi?id=1639568
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:36 +0000
Subject: [Bugs] [Bug 1615385] glusterd segfault - memcpy () at
/usr/include/bits/string3.h:51
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1615385
Bug 1615385 depends on bug 1608507, which changed state.
Bug 1608507 Summary: glusterd segfault - memcpy () at /usr/include/bits/string3.h:51
https://bugzilla.redhat.com/show_bug.cgi?id=1608507
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:36 +0000
Subject: [Bugs] [Bug 1654187] [geo-rep]: RFE - Make slave volume read-only
while setting up geo-rep (by default)
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654187
Bug 1654187 depends on bug 1643370, which changed state.
Bug 1643370 Summary: [geo-rep]: RFE - Make slave volume read-only while setting up geo-rep (by default)
https://bugzilla.redhat.com/show_bug.cgi?id=1643370
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:38 +0000
Subject: [Bugs] [Bug 1362129] rename of a file can cause data loss in an
replica/arbiter volume configuration
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1362129
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Red Hat Product Errata
| |RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:44 +0000
Subject: [Bugs] [Bug 1663232] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663232
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
Last Closed| |2019-02-04 07:41:44
--- Comment #10 from errata-xmlrpc ---
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:46 +0000
Subject: [Bugs] [Bug 1665826] [geo-rep]: Directory renames not synced to
slave in Hybrid Crawl
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665826
Bug 1665826 depends on bug 1664235, which changed state.
Bug 1664235 Summary: [geo-rep]: Directory renames not synced to slave in Hybrid Crawl
https://bugzilla.redhat.com/show_bug.cgi?id=1664235
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:46 +0000
Subject: [Bugs] [Bug 1654138] Optimize for virt store fails with distribute
volume type
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654138
Bug 1654138 depends on bug 1653613, which changed state.
Bug 1653613 Summary: [Dalton] Optimize for virt store fails with distribute volume type
https://bugzilla.redhat.com/show_bug.cgi?id=1653613
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:44 +0000
Subject: [Bugs] [Bug 1654103] Invalid memory read after freed in
dht_rmdir_readdirp_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654103
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
Last Closed| |2019-02-04 07:41:44
--- Comment #11 from errata-xmlrpc ---
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:47 +0000
Subject: [Bugs] [Bug 1667779] glusterd leaks about 1GB memory per day on
single machine of storage pool
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1667779
Bug 1667779 depends on bug 1667169, which changed state.
Bug 1667169 Summary: glusterd leaks about 1GB memory per day on single machine of storage pool
https://bugzilla.redhat.com/show_bug.cgi?id=1667169
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:48 +0000
Subject: [Bugs] [Bug 1654270] glusterd crashed with seg fault possibly
during node reboot while volume creates and deletes were happening
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654270
Bug 1654270 depends on bug 1654161, which changed state.
Bug 1654161 Summary: glusterd crashed with seg fault possibly during node reboot while volume creates and deletes were happening
https://bugzilla.redhat.com/show_bug.cgi?id=1654161
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:49 +0000
Subject: [Bugs] [Bug 1669382] [ovirt-gluster] Fuse mount crashed while
creating the preallocated image
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1669382
Bug 1669382 depends on bug 1668304, which changed state.
Bug 1668304 Summary: [RHHI-V] Fuse mount crashed while creating the preallocated image
https://bugzilla.redhat.com/show_bug.cgi?id=1668304
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:44 +0000
Subject: [Bugs] [Bug 1655578] Incorrect usage of local->fd in
afr_open_ftruncate_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1655578
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
Last Closed| |2019-02-04 07:41:44
--- Comment #17 from errata-xmlrpc ---
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:49 +0000
Subject: [Bugs] [Bug 1669077] [ovirt-gluster] Fuse mount crashed while
creating the preallocated image
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1669077
Bug 1669077 depends on bug 1668304, which changed state.
Bug 1668304 Summary: [RHHI-V] Fuse mount crashed while creating the preallocated image
https://bugzilla.redhat.com/show_bug.cgi?id=1668304
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:49 +0000
Subject: [Bugs] [Bug 1651322] Incorrect usage of local->fd in
afr_open_ftruncate_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1651322
Bug 1651322 depends on bug 1655578, which changed state.
Bug 1655578 Summary: Incorrect usage of local->fd in afr_open_ftruncate_cbk
https://bugzilla.redhat.com/show_bug.cgi?id=1655578
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:50 +0000
Subject: [Bugs] [Bug 1655527] Incorrect usage of local->fd in
afr_open_ftruncate_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1655527
Bug 1655527 depends on bug 1655578, which changed state.
Bug 1655578 Summary: Incorrect usage of local->fd in afr_open_ftruncate_cbk
https://bugzilla.redhat.com/show_bug.cgi?id=1655578
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:50 +0000
Subject: [Bugs] [Bug 1663232] profile info command is not displaying
information of bricks which are hosted on peers
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663232
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Red Hat Product Errata
| |RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:44 +0000
Subject: [Bugs] [Bug 1659439] Memory leak: dict_t leak in rda_opendir
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659439
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
Last Closed| |2019-02-04 07:41:44
--- Comment #13 from errata-xmlrpc ---
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:53 +0000
Subject: [Bugs] [Bug 1662906] Longevity: glusterfsd(brick process) crashed
when we do volume creates and deletes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662906
Bug 1662906 depends on bug 1662828, which changed state.
Bug 1662828 Summary: Longevity: glusterfsd(brick process) crashed when we do volume creates and deletes
https://bugzilla.redhat.com/show_bug.cgi?id=1662828
What |Removed |Added
----------------------------------------------------------------------------
Status|RELEASE_PENDING |CLOSED
Resolution|--- |ERRATA
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:54 +0000
Subject: [Bugs] [Bug 1654103] Invalid memory read after freed in
dht_rmdir_readdirp_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1654103
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Red Hat Product Errata
| |RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:54 +0000
Subject: [Bugs] [Bug 1655578] Incorrect usage of local->fd in
afr_open_ftruncate_cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1655578
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Red Hat Product Errata
| |RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 07:41:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 07:41:54 +0000
Subject: [Bugs] [Bug 1659439] Memory leak: dict_t leak in rda_opendir
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659439
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Red Hat Product Errata
| |RHBA-2019:0263
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 08:48:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 08:48:15 +0000
Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and
files listing
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670382
Marcin changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(locbus at gmail.com) |
--- Comment #3 from Marcin ---
Hello Nithya,
1. Yes, the files/directories are being created from Windows 2012 R2 (a samba
client).
2. No, the files/directories cannot be listed by another client which has
mounted the same volume.
3. No, the files/directories aren't visible on the client from which they were
created. In addition, I can confirm that they aren't visible even directly on
the brick of the host to which they write data (a workaround is, for example,
restarting the host).
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 09:32:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 09:32:44 +0000
Subject: [Bugs] [Bug 1672205] New: [GSS] 'gluster get-state' command fails
if volume brick doesn't exist.
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672205
Bug ID: 1672205
Summary: [GSS] 'gluster get-state' command fails if volume
brick doesn't exist.
Product: GlusterFS
Version: mainline
Status: NEW
Component: glusterd
Keywords: Improvement
Severity: medium
Priority: medium
Assignee: bugs at gluster.org
Reporter: srakonde at redhat.com
Depends On: 1669970
Target Milestone: ---
Group: private
Classification: Community
Description of problem:
The 'gluster get-state' command fails when any brick of a volume is missing or
has been deleted. Instead, the command output should report the brick failure.
When any brick of a volume is unavailable or has been removed, 'gluster
get-state' fails with the following error:
'Failed to get daemon state. Check glusterd log file for more details'
The requirement is that 'gluster get-state' should not fail, and should
instead report the brick's state in the generated output.
For example:
cat /var/run/gluster/glusterd_state_XYZ
...
Volume3.name: v02
Volume3.id: c194e70d-6738-4ba3-9502-ec5603aab679
Volume3.type: Distributed-Replicate
...
## HERE #
Volume3.Brick1.port: N/A or 0 or empty?
Volume3.Brick1.rdma_port: 0
Volume3.Brick1.port_registered: N/A or 0 or empty?
Volume3.Brick1.status: Failed
Volume3.Brick1.spacefree: N/A or 0 or empty?
Volume3.Brick1.spacetotal: N/A or 0 or empty?
...
This situation can happen in production when local storage on a node is
'broken', or when using heketi with gluster: volumes are present but bricks
are missing.
How reproducible:
Always
Version-Release number of selected component (if applicable): RHGS 3.X
Steps to Reproduce:
1. Delete a brick
2. Run command 'gluster get-state'
Actual results:
Command fails with the below message
'Failed to get daemon state. Check glusterd log file for more details'
Expected results:
The 'gluster get-state' command should not fail. It should report the faulty
brick's state in the output, so one can easily identify the problem with the
volume; i.e. the command should return a message regarding that 'faulty
brick'.
--- Additional comment from Atin Mukherjee on 2019-01-28 15:10:36 IST ---
Root cause:
from glusterd_get_state ()
    ret = sys_statvfs(brickinfo->path, &brickstat);
    if (ret) {
        gf_msg(this->name, GF_LOG_ERROR, errno, GD_MSG_FILE_OP_FAILED,
               "statfs error: %s ", strerror(errno));
        goto out;
    }

    memfree = brickstat.f_bfree * brickstat.f_bsize;
    memtotal = brickstat.f_blocks * brickstat.f_bsize;

    fprintf(fp, "Volume%d.Brick%d.spacefree: %" PRIu64 "Bytes\n",
            count_bkp, count, memfree);
    fprintf(fp, "Volume%d.Brick%d.spacetotal: %" PRIu64 "Bytes\n",
            count_bkp, count, memtotal);
A statfs call is made on the brick path of every brick of every volume to
calculate the total versus free space. We shouldn't error out on a statfs
failure here; instead, spacefree and spacetotal should be reported as
unavailable or 0 bytes.
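A minimal sketch of that direction (an illustration of the root cause above,
not the actual patch; the real change is tracked via the Gerrit review linked
on this bug):

    /* Sketch: make the statfs failure non-fatal and report 0 bytes, so
     * get-state keeps generating output for the remaining bricks. Names
     * follow the snippet quoted above. */
    uint64_t memfree = 0;
    uint64_t memtotal = 0;

    ret = sys_statvfs(brickinfo->path, &brickstat);
    if (ret) {
        gf_msg(this->name, GF_LOG_ERROR, errno, GD_MSG_FILE_OP_FAILED,
               "statfs error: %s ", strerror(errno));
        /* do NOT goto out: continue writing the state file */
    } else {
        memfree = brickstat.f_bfree * brickstat.f_bsize;
        memtotal = brickstat.f_blocks * brickstat.f_bsize;
    }

    fprintf(fp, "Volume%d.Brick%d.spacefree: %" PRIu64 "Bytes\n",
            count_bkp, count, memfree);
    fprintf(fp, "Volume%d.Brick%d.spacetotal: %" PRIu64 "Bytes\n",
            count_bkp, count, memtotal);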
--- Additional comment from Atin Mukherjee on 2019-02-04 07:59:34 IST ---
We need test coverage to ensure that the get-state command generates output
successfully even if underlying brick(s) of volume(s) in the cluster go bad.
--- Additional comment from sankarshan on 2019-02-04 14:48:30 IST ---
(In reply to Atin Mukherjee from comment #4)
> We need test coverage to ensure that the get-state command generates output
> successfully even if underlying brick(s) of volume(s) in the cluster go bad.
The test coverage flag needs to be set
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 09:33:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 09:33:20 +0000
Subject: [Bugs] [Bug 1672205] [GSS] 'gluster get-state' command fails if
volume brick doesn't exist.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672205
Sanju changed:
What |Removed |Added
----------------------------------------------------------------------------
Group|private |
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 09:33:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 09:33:37 +0000
Subject: [Bugs] [Bug 1672205] 'gluster get-state' command fails if volume
brick doesn't exist.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672205
Sanju changed:
What |Removed |Added
----------------------------------------------------------------------------
Summary|[GSS] 'gluster get-state' |'gluster get-state' command
|command fails if volume |fails if volume brick
|brick doesn't exist. |doesn't exist.
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 09:53:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 09:53:18 +0000
Subject: [Bugs] [Bug 1664934] glusterfs-fuse client not benefiting from page
cache on read after write
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1664934
Miklos Szeredi changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(mszeredi at redhat.c |
|om) |
--- Comment #11 from Miklos Szeredi ---
The underlying problem is that auto invalidate cannot differentiate local and
remote modification based on mtime alone.
What NFS apparently does is refresh attributes immediately after a write (not
sure how often it does this, I guess not after each individual write).
FUSE should maybe do this if auto invalidation is enabled, but if the
filesystem can do its own invalidation, possibly based on better information
than c/mtime, then that seems to be the better option.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 10:03:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 10:03:16 +0000
Subject: [Bugs] [Bug 1672205] 'gluster get-state' command fails if volume
brick doesn't exist.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672205
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22147
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 10:03:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 10:03:17 +0000
Subject: [Bugs] [Bug 1672205] 'gluster get-state' command fails if volume
brick doesn't exist.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672205
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22147 (glusterd: get-state command should
not fail if any brick is gone bad) posted (#1) for review on master by Sanju
Rakonde
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 10:46:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 10:46:14 +0000
Subject: [Bugs] [Bug 1657744] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1657744
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22148
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 10:46:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 10:46:15 +0000
Subject: [Bugs] [Bug 1657744] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1657744
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |POST
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22148 (libglusterfs/common-utils.c: Fix
buffer size for checksum computation) posted (#1) for review on release-5 by
Varsha Rao
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:07:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:07:09 +0000
Subject: [Bugs] [Bug 1657744] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1657744
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22149
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:17:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:17:03 +0000
Subject: [Bugs] [Bug 1672248] New: quorum count not updated in nfs-server
vol file
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672248
Bug ID: 1672248
Summary: quorum count not updated in nfs-server vol file
Product: GlusterFS
Version: 5
Status: NEW
Component: replicate
Assignee: bugs at gluster.org
Reporter: varao at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Check the original bug 1657744.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:19:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:19:35 +0000
Subject: [Bugs] [Bug 1672249] New: quorum count value not updated in
nfs-server vol file
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672249
Bug ID: 1672249
Summary: quorum count value not updated in nfs-server vol file
Product: GlusterFS
Version: 4.1
Status: NEW
Component: replicate
Assignee: bugs at gluster.org
Reporter: varao at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Check the original bug 1657744
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:22:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:22:27 +0000
Subject: [Bugs] [Bug 1657744] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1657744
--- Comment #4 from Worker Ant ---
REVIEW: https://review.gluster.org/22149 (libglusterfs/common-utils.c: Fix
buffer size for checksum computation) posted (#1) for review on release-4.1 by
Varsha Rao
--- Comment #5 from Worker Ant ---
REVISION POSTED: https://review.gluster.org/22149 (libglusterfs/common-utils.c:
Fix buffer size for checksum computation) posted (#2) for review on release-4.1
by Varsha Rao
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:22:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:22:28 +0000
Subject: [Bugs] [Bug 1657744] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1657744
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID|Gluster.org Gerrit 22149 |
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:22:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:22:29 +0000
Subject: [Bugs] [Bug 1672249] quorum count value not updated in nfs-server
vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672249
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22149
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:22:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:22:30 +0000
Subject: [Bugs] [Bug 1672249] quorum count value not updated in nfs-server
vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672249
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22149 (libglusterfs/common-utils.c: Fix
buffer size for checksum computation) posted (#2) for review on release-4.1 by
Varsha Rao
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:33:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:33:38 +0000
Subject: [Bugs] [Bug 1657744] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1657744
--- Comment #6 from Worker Ant ---
REVISION POSTED: https://review.gluster.org/22148 (libglusterfs/common-utils.c:
Fix buffer size for checksum computation) posted (#2) for review on release-5
by Varsha Rao
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:33:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:33:39 +0000
Subject: [Bugs] [Bug 1657744] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1657744
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID|Gluster.org Gerrit 22148 |
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:33:40 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:33:40 +0000
Subject: [Bugs] [Bug 1672248] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672248
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22148
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:33:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:33:41 +0000
Subject: [Bugs] [Bug 1672248] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672248
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22148 (libglusterfs/common-utils.c: Fix
buffer size for checksum computation) posted (#2) for review on release-5 by
Varsha Rao
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:48:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:48:56 +0000
Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal
deamon (shd) when a volume is stopped unless it is the last volume
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659708
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22150
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 11:48:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:48:57 +0000
Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal
deamon (shd) when a volume is stopped unless it is the last volume
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659708
--- Comment #7 from Worker Ant ---
REVIEW: https://review.gluster.org/22150 (afr/shd: Cleanup self heal daemon
resources during afr fini) posted (#1) for review on master by mohammed rafi
kc
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 11:53:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:53:05 +0000
Subject: [Bugs] [Bug 1672258] New: fuse takes memory and doesn't free
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672258
Bug ID: 1672258
Summary: fuse takes memory and doesn't free
Product: GlusterFS
Version: 4.1
Hardware: x86_64
OS: Linux
Status: NEW
Component: fuse
Assignee: bugs at gluster.org
Reporter: redhat at core.ch
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Created attachment 1526739
--> https://bugzilla.redhat.com/attachment.cgi?id=1526739&action=edit
statedump 20190204
Description of problem:
The fuse process takes more and more memory every day until the swap is full;
then the system no longer works properly.
We upgraded to 4.1 at the end of December 2018, and since then we have had to
restart the gluster cluster and all nodes roughly every two weeks because the
memory is exhausted. We see this situation on all of our various gluster
clusters.
We have gluster 4.1.7 installed on Ubuntu 16.04.5 LTS (xenial).
System-Checks:
---
Memory and Swap:
free
total used free shared buff/cache available
Mem: 32834992 31387196 243932 9148 1203864 897800
Swap: 31999996 25951268 6048728
---
Ran top to find the offending process, then got the status of the service:
systemctl status data_net.mount
● data_net.mount - Mount System glusterfs on path /data_net from source
localhost:/ctgv0 with
Loaded: loaded (/etc/systemd/system/data_net.mount; static; vendor preset:
enabled)
Active: active (mounted) since Fri 2019-02-01 07:51:32 CET; 3 days ago
Where: /data_net
What: localhost:/ctgv0
Docs: https://oguya.ch/posts/2015-09-01-systemd-mount-partition/
Process: 11256 ExecUnmount=/bin/umount /data_net (code=exited,
status=0/SUCCESS)
Process: 11257 ExecMount=/bin/mount localhost:/ctgv0 /data_net -t glusterfs
-o defaults,_netdev (code=
Tasks: 20
Memory: 28.0G
CPU: 12h 43min 23.929s
CGroup: /system.slice/data_net.mount
├─ 7825 /usr/sbin/glusterfs --process-name fuse
--volfile-server=localhost --volfile-id=/ctgv
└─11337 /usr/sbin/glusterfs --process-name fuse
--volfile-server=localhost --volfile-id=/ctgv
Feb 01 07:51:32 nucprdstk112 systemd[1]: Mounting coretech: Mount System
glusterfs on path /data_net fro
Feb 01 07:51:32 nucprdstk112 systemd[1]: Mounted coretech: Mount System
glusterfs on path /data_net from
Feb 01 08:02:08 nucprdstk112 data_net[7825]: [2019-02-01 07:02:08.392799] C
[rpc-clnt-ping.c:166:rpc_cln
---
uptime:
12:03:44 up 17 days, 16 min, 1 user, load average: 1.30, 1.03, 1.01
---
Followed this description:
https://docs.gluster.org/en/v3/Troubleshooting/troubleshooting-memory/
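(Side note: a statedump of a fuse client is typically triggered by sending
SIGUSR1 to the glusterfs process, e.g. kill -USR1 <pid>; the dump is written
under /var/run/gluster by default. The pid here is a placeholder.)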
gluster volume info
Volume Name: ctgv0
Type: Replicate
Volume ID: 0e70a1ba-2c70-494a-8a85-f757fe77901a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: nucprdstk112:/var/glusterfs/ctgv0/brick1
Brick2: nucprdstk113:/var/glusterfs/ctgv0/brick2
Brick3: nucprdstk114:/var/glusterfs/ctgv0/brick2
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
Version-Release number of selected component (if applicable):
gluster --version
glusterfs 4.1.7
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.5 LTS
Release: 16.04
Codename: xenial
How reproducible:
Steps to Reproduce:
1. Restart server and wait for 1 or 2 weeks
2.
3.
Actual results:
Memory usage still grows every day.
Expected results:
Memory should be freed.
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 11:55:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 11:55:10 +0000
Subject: [Bugs] [Bug 1672258] fuse takes memory and doesn't free
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672258
Ritzo changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |redhat at core.ch
--- Comment #1 from Ritzo ---
Created attachment 1526740
--> https://bugzilla.redhat.com/attachment.cgi?id=1526740&action=edit
statedump 20190201
another statedump file from 1st February
Thanks a lot for your advice / support
Ritzo
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 12:07:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 12:07:02 +0000
Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal
deamon (shd) when a volume is stopped unless it is the last volume
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659708
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 22151
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 12:07:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 12:07:03 +0000
Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal
deamon (shd) when a volume is stopped unless it is the last volume
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1659708
--- Comment #8 from Worker Ant ---
REVIEW: https://review.gluster.org/22151 (afr/shd: Cleanup self heal daemon
resources during afr fini) posted (#1) for review on master by mohammed rafi
kc
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 12:56:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 12:56:36 +0000
Subject: [Bugs] [Bug 1243991] "gluster volume set group "
is not in the help text
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1243991
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-04 12:56:36
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22067 (cli: Added the group option for
volume set) merged (#4) on master by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 14:20:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:20:53 +0000
Subject: [Bugs] [Bug 1672314] New: thin-arbiter: Check with thin-arbiter
file before marking new entry change log
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672314
Bug ID: 1672314
Summary: thin-arbiter: Check with thin-arbiter file before
marking new entry change log
Product: GlusterFS
Version: 5
Status: NEW
Component: replicate
Assignee: bugs at gluster.org
Reporter: aspandey at redhat.com
CC: bugs at gluster.org
Depends On: 1662264
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1662264 +++
Description of problem:
In case of creating an entry, if a fop fails on any one of the data bricks,
we mark the changelog for that entry on the brick which was successful.
For a thin-arbiter volume, before marking this changelog we should check
whether the brick on which the fop succeeded was the good brick or not. If the
brick was bad according to the thin-arbiter file information, we should just
continue with the post-op changelog process. If the brick was good, we should
mark the new-entry changelog and then continue with the post-op changelog.
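A hypothetical sketch of that decision flow (the function names here are
illustrative placeholders, not actual AFR symbols):

    /* an entry fop failed on one data brick and succeeded on the other */
    if (ta_brick_is_bad(success_brick)) {
        /* the surviving brick is already marked bad in the thin-arbiter
         * file: skip the new-entry mark and go straight to post-op */
        do_postop_changelog();
    } else {
        /* the surviving brick is good: mark the new-entry changelog,
         * then continue with the post-op changelog */
        mark_new_entry_changelog(success_brick);
        do_postop_changelog();
    }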
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--- Additional comment from Worker Ant on 2018-12-27 09:19:25 UTC ---
REVIEW: https://review.gluster.org/21933 (cluster/thin-arbiter: Consider
thin-arbiter before marking new entry changelog) posted (#1) for review on
master by Ashish Pandey
--- Additional comment from Worker Ant on 2019-02-01 05:45:32 UTC ---
REVIEW: https://review.gluster.org/21933 (cluster/thin-arbiter: Consider
thin-arbiter before marking new entry changelog) merged (#6) on master by Amar
Tumballi
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1662264
[Bug 1662264] thin-arbiter: Check with thin-arbiter file before marking new
entry change log
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 14:20:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:20:53 +0000
Subject: [Bugs] [Bug 1662264] thin-arbiter: Check with thin-arbiter file
before marking new entry change log
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662264
Ashish Pandey changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1672314
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1672314
[Bug 1672314] thin-arbiter: Check with thin-arbiter file before marking new
entry change log
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 14:21:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:21:14 +0000
Subject: [Bugs] [Bug 1672314] thin-arbiter: Check with thin-arbiter file
before marking new entry change log
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672314
Ashish Pandey changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 14:44:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:44:49 +0000
Subject: [Bugs] [Bug 1670303] api: bad GFAPI_4.1.6 block
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670303
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-04 14:44:49
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22116 (api: bad GFAPI_4.1.6 block) merged
(#2) on release-4.1 by Kaleb KEITHLEY
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 14:44:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:44:49 +0000
Subject: [Bugs] [Bug 1667099] GlusterFS 4.1.8 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1667099
Bug 1667099 depends on bug 1670303, which changed state.
Bug 1670303 Summary: api: bad GFAPI_4.1.6 block
https://bugzilla.redhat.com/show_bug.cgi?id=1670303
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 14:44:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:44:49 +0000
Subject: [Bugs] [Bug 1670307] api: bad GFAPI_4.1.6 block
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670307
Bug 1670307 depends on bug 1670303, which changed state.
Bug 1670303 Summary: api: bad GFAPI_4.1.6 block
https://bugzilla.redhat.com/show_bug.cgi?id=1670303
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 14:45:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:45:10 +0000
Subject: [Bugs] [Bug 1670307] api: bad GFAPI_4.1.6 block
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670307
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-04 14:45:10
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22117 (api: bad GFAPI_4.1.6 block) merged
(#1) on release-5 by Kaleb KEITHLEY
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 14:45:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:45:10 +0000
Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1667103
Bug 1667103 depends on bug 1670307, which changed state.
Bug 1670307 Summary: api: bad GFAPI_4.1.6 block
https://bugzilla.redhat.com/show_bug.cgi?id=1670307
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 14:47:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:47:44 +0000
Subject: [Bugs] [Bug 1671217] core: move "dict is NULL" logs to DEBUG log
level
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671217
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-04 14:47:44
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22129 (core: move "dict is NULL" logs to
DEBUG log level) merged (#2) on release-5 by Shyamsundar Ranganathan
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 14:48:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:48:49 +0000
Subject: [Bugs] [Bug 1651246] Failed to dispatch handler
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1651246
--- Comment #31 from Worker Ant ---
REVIEW: https://review.gluster.org/22135 (socket: don't pass return value from
protocol handler to event handler) merged (#2) on release-5 by Shyamsundar
Ranganathan
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 14:50:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:50:30 +0000
Subject: [Bugs] [Bug 1651246] Failed to dispatch handler
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1651246
--- Comment #32 from Worker Ant ---
REVIEW: https://review.gluster.org/22134 (socket: fix issue when socket write
return with EAGAIN) merged (#2) on release-5 by Shyamsundar Ranganathan
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 14:51:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:51:38 +0000
Subject: [Bugs] [Bug 1671611] Unable to delete directories that contain
linkto files that point to itself.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671611
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-04 14:51:38
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22136 (cluster/dht: Delete invalid linkto
files in rmdir) merged (#2) on release-5 by Shyamsundar Ranganathan
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 14:51:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:51:39 +0000
Subject: [Bugs] [Bug 1668989] Unable to delete directories that contain
linkto files that point to itself.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1668989
Bug 1668989 depends on bug 1671611, which changed state.
Bug 1671611 Summary: Unable to delete directories that contain linkto files that point to itself.
https://bugzilla.redhat.com/show_bug.cgi?id=1671611
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 14:53:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:53:29 +0000
Subject: [Bugs] [Bug 1665145] Writes on Gluster 5 volumes fail with EIO when
"cluster.consistent-metadata" is set
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1665145
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-04 14:53:29
--- Comment #4 from Worker Ant ---
REVIEW: https://review.gluster.org/22139 (readdir-ahead: do not zero-out iatt
in fop cbk) merged (#2) on release-5 by Shyamsundar Ranganathan
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 14:53:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 14:53:29 +0000
Subject: [Bugs] [Bug 1670253] Writes on Gluster 5 volumes fail with EIO when
"cluster.consistent-metadata" is set
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670253
Bug 1670253 depends on bug 1665145, which changed state.
Bug 1665145 Summary: Writes on Gluster 5 volumes fail with EIO when "cluster.consistent-metadata" is set
https://bugzilla.redhat.com/show_bug.cgi?id=1665145
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 15:08:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 15:08:15 +0000
Subject: [Bugs] [Bug 1669382] [ovirt-gluster] Fuse mount crashed while
creating the preallocated image
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1669382
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-04 15:08:15
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22095 (features/shard: Ref shard inode while
adding to fsync list) merged (#2) on release-5 by Shyamsundar Ranganathan
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Feb 4 15:14:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 15:14:57 +0000
Subject: [Bugs] [Bug 1626085] "glusterfs --process-name fuse" crashes and
leads to "Transport endpoint is not connected"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1626085
GCth changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(ravishankar at redha |
|t.com) |
|needinfo?(rhb1 at gcth.net) |
--- Comment #11 from GCth ---
Up until frame #17 they are the same; here's another example:
Core was generated by `/usr/sbin/glusterfs --process-name fuse
--volfile-server=xxxx --'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f8d5e877560 in __gf_free (free_ptr=0x7f8d49a25378) at mem-pool.c:330
330     mem-pool.c: No such file or directory.
[Current thread is 1 (Thread 0x7f8d521bf700 (LWP 2217))]
(gdb) bt
#0  0x00007f8d5e877560 in __gf_free (free_ptr=0x7f8d49a25378) at mem-pool.c:330
#1  0x00007f8d5e842e1e in dict_destroy (this=0x7f8d4994f708) at dict.c:701
#2  0x00007f8d5e842f25 in dict_unref (this=<optimized out>) at dict.c:753
#3  0x00007f8d584330d4 in afr_local_cleanup (local=0x7f8d49a56cc8,
    this=<optimized out>) at afr-common.c:2091
#4  0x00007f8d5840d584 in afr_transaction_done (frame=<optimized out>,
    this=<optimized out>) at afr-transaction.c:369
#5  0x00007f8d5841483a in afr_unlock (frame=frame@entry=0x7f8d4995ec08,
    this=this@entry=0x7f8d54019d40) at afr-lk-common.c:1085
#6  0x00007f8d5840aeca in afr_changelog_post_op_done
    (frame=frame@entry=0x7f8d4995ec08, this=this@entry=0x7f8d54019d40) at
    afr-transaction.c:778
#7  0x00007f8d5840e105 in afr_changelog_post_op_do (frame=0x7f8d4995ec08,
    this=0x7f8d54019d40) at afr-transaction.c:1442
#8  0x00007f8d5840edcf in afr_changelog_post_op_now (frame=0x7f8d4995ec08,
    this=0x7f8d54019d40) at afr-transaction.c:1512
#9  0x00007f8d5840ef4c in afr_delayed_changelog_wake_up_cbk
    (data=<optimized out>) at afr-transaction.c:2444
#10 0x00007f8d58410866 in afr_transaction_start
    (local=local@entry=0x7f8d4cd6ed18, this=this@entry=0x7f8d54019d40) at
    afr-transaction.c:2847
#11 0x00007f8d58410c89 in afr_transaction (frame=frame@entry=0x7f8d4e643068,
    this=this@entry=0x7f8d54019d40, type=type@entry=AFR_DATA_TRANSACTION) at
    afr-transaction.c:2918
#12 0x00007f8d583fcb70 in afr_do_writev (frame=frame@entry=0x7f8d4e245608,
    this=this@entry=0x7f8d54019d40) at afr-inode-write.c:477
#13 0x00007f8d583fd81d in afr_writev (frame=frame@entry=0x7f8d4e245608,
    this=this@entry=0x7f8d54019d40, fd=fd@entry=0x7f8d499f3758,
    vector=0x7f8d4e932b40, count=1, offset=1024, flags=32769,
    iobref=0x7f8d488cb3b0, xdata=0x0) at afr-inode-write.c:555
#14 0x00007f8d5818cbef in dht_writev (frame=frame@entry=0x7f8d4e29c598,
    this=<optimized out>, fd=0x7f8d499f3758, vector=vector@entry=0x7f8d521be5c0,
    count=count@entry=1, off=<optimized out>, flags=32769,
    iobref=0x7f8d488cb3b0, xdata=0x0) at dht-inode-write.c:223
#15 0x00007f8d53df0b77 in wb_fulfill_head
    (wb_inode=wb_inode@entry=0x7f8d49a25310, head=0x7f8d49bbcb40) at
    write-behind.c:1156
#16 0x00007f8d53df0dfb in wb_fulfill (wb_inode=wb_inode@entry=0x7f8d49a25310,
    liabilities=liabilities@entry=0x7f8d521be720) at write-behind.c:1233
#17 0x00007f8d53df21b6 in wb_process_queue
    (wb_inode=wb_inode@entry=0x7f8d49a25310) at write-behind.c:1784
#18 0x00007f8d53df233f in wb_fulfill_cbk (frame=frame@entry=0x7f8d49cc15a8,
    cookie=<optimized out>, this=<optimized out>, op_ret=op_ret@entry=1024,
    op_errno=op_errno@entry=0, prebuf=prebuf@entry=0x7f8d49c7f8c0,
    postbuf=<optimized out>, xdata=<optimized out>) at write-behind.c:1105
#19 0x00007f8d5818b31e in dht_writev_cbk (frame=0x7f8d498dfa48,
    cookie=<optimized out>, this=<optimized out>, op_ret=1024, op_errno=0,
    prebuf=0x7f8d49c7f8c0, postbuf=0x7f8d49c7f958, xdata=0x7f8d4e65a7e8) at
    dht-inode-write.c:140
#20 0x00007f8d583fc2b7 in afr_writev_unwind (frame=frame@entry=0x7f8d48374db8,
    this=this@entry=0x7f8d54019d40) at afr-inode-write.c:234
#21 0x00007f8d583fc83e in afr_writev_wind_cbk (frame=0x7f8d4995ec08,
    cookie=<optimized out>, this=0x7f8d54019d40, op_ret=<optimized out>,
    op_errno=<optimized out>, prebuf=<optimized out>, postbuf=0x7f8d521be9d0,
    xdata=0x7f8d4e6e30d8) at afr-inode-write.c:388
#22 0x00007f8d586c4865 in client4_0_writev_cbk (req=<optimized out>,
    iov=<optimized out>, count=<optimized out>, myframe=0x7f8d4e621578) at
    client-rpc-fops_v2.c:685
#23 0x00007f8d5e61c130 in rpc_clnt_handle_reply
    (clnt=clnt@entry=0x7f8d54085540, pollin=pollin@entry=0x7f8d4ea47850) at
    rpc-clnt.c:755
#24 0x00007f8d5e61c48f in rpc_clnt_notify (trans=0x7f8d54085800,
    mydata=0x7f8d54085570, event=<optimized out>, data=0x7f8d4ea47850) at
    rpc-clnt.c:923
#25 0x00007f8d5e618893 in rpc_transport_notify (this=this@entry=0x7f8d54085800,
    event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f8d4ea47850)
    at rpc-transport.c:525
#26 0x00007f8d59401671 in socket_event_poll_in (notify_handled=true,
    this=0x7f8d54085800) at socket.c:2504
#27 socket_event_handler (fd=<optimized out>, idx=idx@entry=2, gen=4,
    data=data@entry=0x7f8d54085800, poll_in=<optimized out>,
    poll_out=<optimized out>, poll_err=<optimized out>) at socket.c:2905
#28 0x00007f8d5e8ab945 in event_dispatch_epoll_handler (event=0x7f8d521bee8c,
    event_pool=0x56110317e0b0) at event-epoll.c:591
#29 event_dispatch_epoll_worker (data=0x7f8d5406f7e0) at event-epoll.c:668
#30 0x00007f8d5dacb494 in start_thread (arg=0x7f8d521bf700) at
    pthread_create.c:333
#31 0x00007f8d5d374acf in clone () at
    ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
All the gluster instances look similar to the following setup:
Type: Distributed-Replicate
Volume ID: e9dd963c...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.10.11.1:/export/data1
Brick2: 10.10.11.2:/export/data1
Brick3: 10.10.11.3:/export/data1
Brick4: 10.10.11.4:/export/data1
Options Reconfigured:
cluster.favorite-child-policy: mtime
cluster.self-heal-daemon: enable
performance.cache-size: 1GB
performance.quick-read: on
performance.stat-prefetch: on
performance.read-ahead: on
performance.readdir-ahead: on
auth.allow: 10.*.*.*
transport.address-family: inet
nfs.disable: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 50000
I do not have a reproducer; the gluster instance holds 2-5TB of files, mostly
small ones, with lots of directories.
They reach up to 10M inodes used, as reported by df -hi; brick storage is on
XFS as recommended.
The crash of an individual glusterfs process happens once every several days.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 15:16:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 15:16:09 +0000
Subject: [Bugs] [Bug 1626085] "glusterfs --process-name fuse" crashes and
leads to "Transport endpoint is not connected"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1626085
--- Comment #12 from GCth ---
One more - it's currently:
glusterfs 5.3
installed from
https://download.gluster.org/pub/gluster/glusterfs/5/LATEST/Debian/stretch/amd64/apt
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Feb 4 16:08:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Feb 2019 16:08:43 +0000
Subject: [Bugs] [Bug 1672248] quorum count not updated in nfs-server vol file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672248
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-04 16:08:43
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22148 (libglusterfs/common-utils.c: Fix
buffer size for checksum computation) merged (#3) on release-5 by Shyamsundar
Ranganathan
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 02:59:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 02:59:24 +0000
Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days
with 'Failed to dispatch handler'
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671556
--- Comment #5 from David E. Smith ---
I've added the five of you to our org's Box account; all of you should have
invitations to a shared folder, and I'm uploading a few of the cores now. I
hope they're of value to you.
The binaries are all from the CentOS Storage SIG repo at
https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-5/ . They're all
current as of a few days ago:
[davidsmith at wuit-s-10882 ~]$ rpm -qa | grep gluster
glusterfs-5.3-1.el7.x86_64
glusterfs-client-xlators-5.3-1.el7.x86_64
glusterfs-fuse-5.3-1.el7.x86_64
glusterfs-libs-5.3-1.el7.x86_64
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 04:20:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 04:20:45 +0000
Subject: [Bugs] [Bug 1671637] geo-rep: Issue with configparser import
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671637
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |MODIFIED
--- Comment #2 from Worker Ant ---
COMMIT: https://review.gluster.org/22138 committed in master by "Amar Tumballi"
with a commit message - geo-rep: Fix configparser import issue
'configparser' is backported to python2 and can
be installed using pip (pip install configparser).
So trying to import 'configparser' first and later
'ConfigParser' can cause issues w.r.t unicode strings.
Always try importing 'ConfigParser' first and then
'configparser'. This solves python2/python3 compat
issues.
Change-Id: I2a87c3fc46476296b8cb547338f35723518751cc
fixes: bz#1671637
Signed-off-by: Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Feb 5 05:17:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 05:17:23 +0000
Subject: [Bugs] [Bug 1672480] New: Bugs Test Module tests failing on s390x
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672480
Bug ID: 1672480
Summary: Bugs Test Module tests failing on s390x
Product: GlusterFS
Version: 4.1
Hardware: s390x
OS: Linux
Status: NEW
Component: tests
Severity: urgent
Assignee: bugs at gluster.org
Reporter: abhaysingh1722 at yahoo.in
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Observing test failures for the following test cases:
./tests/bugs/glusterfs/bug-902610.t
./tests/bugs/posix/bug-1619720.t
./tests/bitrot/bug-1207627-bitrot-scrub-status.t
After analyzing the above test failures, we have observed that the hash values
for the bricks and files are calculated differently on s390x systems compared
to x86 (s390x is big-endian, so an endianness dependence in the hash
computation is a plausible cause).
As per the documentation given at
https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/dht/ ,
to place a file in a directory, a hash is calculated for the file using both
the (containing) directory's unique GFID and the file's name. This hash is
then matched to one of the layout assignments to yield the hashed location.
However, on s390x, certain files have hash values that are beyond the hash
range of the available bricks. Therefore, these files don't get placed in
their respective hashed locations.
This has been observed in other test cases too.
For example, ./tests/bugs/distribute/bug-1161311.t,
./tests/bugs/distribute/bug-1193636.t, ./tests/basic/namespace.t.
Is there any workaround to get the correct hashed locations for the files?
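For context, a conceptual sketch of the hashed-location lookup described in
that documentation (the names are illustrative placeholders, not the actual
DHT symbols):

    /* hash the file name, seeded by the parent directory's GFID */
    uint32_t hash = compute_hash(parent_gfid, filename);

    /* pick the subvolume whose layout range contains the hash */
    for (int i = 0; i < subvol_count; i++) {
        if (hash >= layout[i].start && hash <= layout[i].stop)
            return subvols[i]; /* the file's hashed location */
    }
    /* a hash outside every range is the s390x symptom reported above:
     * the file then has no valid hashed location */
    return NULL;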
Version-Release number of selected component (if applicable):
v4.1.5
How reproducible:
Build GlusterFS v4.1.5 and run the test cases with ./run-tests.sh (prove -vf)
Steps to Reproduce:
1.
2.
3.
Actual results:
Tests FAIL
Expected results:
Tests should PASS
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 07:05:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 07:05:24 +0000
Subject: [Bugs] [Bug 1668989] Unable to delete directories that contain
linkto files that point to itself.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1668989
Red Hat Bugzilla Rules Engine changed:
What |Removed |Added
----------------------------------------------------------------------------
Target Release|--- |RHGS 3.4.z Batch Update 4
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Feb 5 11:00:04 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 11:00:04 +0000
Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days
with 'Failed to dispatch handler'
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671556
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |nbalacha at redhat.com
--- Comment #6 from Nithya Balachandran ---
(In reply to David E. Smith from comment #5)
> I've added the five of you to our org's Box account; all of you should have
> invitations to a shared folder, and I'm uploading a few of the cores now. I
> hope they're of value to you.
>
> The binaries are all from the CentOS Storage SIG repo at
> https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-5/ . They're
> all current as of a few days ago:
>
> [davidsmith at wuit-s-10882 ~]$ rpm -qa | grep gluster
> glusterfs-5.3-1.el7.x86_64
> glusterfs-client-xlators-5.3-1.el7.x86_64
> glusterfs-fuse-5.3-1.el7.x86_64
> glusterfs-libs-5.3-1.el7.x86_64
Thanks. We will take a look and get back to you.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 12:23:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 12:23:41 +0000
Subject: [Bugs] [Bug 1671647] Anomalies in python-lint build job
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671647
Nigel Babu changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-02-05 12:23:41
--- Comment #3 from Nigel Babu ---
Ah, this was because we were trying to lint the virtualenv. I've fixed this up
in this review and now it should fail correctly:
https://review.gluster.org/#/c/build-jobs/+/22155/
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 14:31:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 14:31:52 +0000
Subject: [Bugs] [Bug 1672656] New: glustereventsd: crash,
ABRT report for package glusterfs has reached 100 occurrences
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672656
Bug ID: 1672656
Summary: glustereventsd: crash, ABRT report for package
glusterfs has reached 100 occurrences
Product: GlusterFS
Version: 5
OS: Linux
Status: NEW
Component: eventsapi
Assignee: bugs at gluster.org
Reporter: kkeithle at redhat.com
Target Milestone: ---
Classification: Community
Description of problem:
https://retrace.fedoraproject.org/faf/reports/bthash/ee9831c192f230a223ebdbecc7ea915aaf92636f/
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 14:41:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 14:41:05 +0000
Subject: [Bugs] [Bug 1672205] 'gluster get-state' command fails if volume
brick doesn't exist.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672205
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |MODIFIED
--- Comment #2 from Worker Ant ---
COMMIT: https://review.gluster.org/22147 committed in master by "Atin
Mukherjee" with a commit message - glusterd: get-state command should not
fail if any brick is gone bad
Problem: the get-state command errors out if any of the underlying
brick(s) of volume(s) in the cluster go bad.
It is expected that the get-state command should not error out, but
should generate its output successfully.
Solution: In glusterd_get_state(), a statfs call is made on the
brick path of every brick of the volumes to calculate the total
and free space available. If a statfs call fails on any brick, we
should not error out, and should report the total and free space
of that brick as 0.
This patch also handles a statfs failure scenario in
glusterd_store_retrieve_bricks().
fixes: bz#1672205
Change-Id: Ia9e8a1d8843b65949d72fd6809bd21d39b31ad83
Signed-off-by: Sanju Rakonde
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 15:16:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 15:16:51 +0000
Subject: [Bugs] [Bug 1670031] performance regression seen with smallfile
workload tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670031
--- Comment #5 from Worker Ant ---
REVIEW: https://review.gluster.org/22156 (inode: granular locking) posted (#1)
for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 16:05:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 16:05:06 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #530 from Worker Ant ---
REVIEW: https://review.gluster.org/22157 (fuse: correctly handle setxattr
values) posted (#1) for review on master by Xavi Hernandez
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 16:34:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 16:34:47 +0000
Subject: [Bugs] [Bug 1672711] New: Upgrade from glusterfs 3.12 to gluster
4/5 broken
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672711
Bug ID: 1672711
Summary: Upgrade from glusterfs 3.12 to gluster 4/5 broken
Product: GlusterFS
Version: mainline
Status: NEW
Component: packaging
Severity: urgent
Priority: urgent
Assignee: bugs at gluster.org
Reporter: sabose at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
While updating from glusterfs 3.12, we run into the below error:
Error: Package: glusterfs-gnfs-3.12.15-1.el7.x86_64
(@ovirt-4.2-centos-gluster312)
Requires: glusterfs(x86-64) = 3.12.15-1.el7
Removing: glusterfs-3.12.15-1.el7.x86_64
(@ovirt-4.2-centos-gluster312)
glusterfs(x86-64) = 3.12.15-1.el7
Updated By: glusterfs-5.3-1.el7.x86_64 (ovirt-4.3-centos-gluster5)
glusterfs(x86-64) = 5.3-1.el7
Version-Release number of selected component (if applicable):
3.12
How reproducible:
Always
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 16:43:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 16:43:45 +0000
Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days
with 'Failed to dispatch handler'
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671556
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(desmith at wustl.edu
| |)
--- Comment #7 from Nithya Balachandran ---
David,
Can you try mounting the volume with the option lru-limit=0 and let us know if
you still see the crashes?
Regards,
Nithya
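(For reference, a minimal sketch of such a mount; the server, volume and
mount point here are placeholders:
mount -t glusterfs -o lru-limit=0 <server>:/<volname> /mnt/<mountpoint> )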
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Feb 5 17:18:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 17:18:35 +0000
Subject: [Bugs] [Bug 1672727] New: Fix timeouts so the tests pass on AWS
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672727
Bug ID: 1672727
Summary: Fix timeouts so the tests pass on AWS
Product: GlusterFS
Version: mainline
Status: NEW
Component: tests
Assignee: bugs at gluster.org
Reporter: nigelb at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Some test timeouts need a bump on AWS
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 17:21:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 17:21:47 +0000
Subject: [Bugs] [Bug 1672727] Fix timeouts so the tests pass on AWS
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672727
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22065 (Bump up timeout for tests on AWS)
posted (#5) for review on master by Nigel Babu
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Feb 5 17:48:01 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 05 Feb 2019 17:48:01 +0000
Subject: [Bugs] [Bug 1663102] Change default value for client side heal to
off for replicate volumes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663102
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-02-05 17:48:01
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 00:56:04 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 00:56:04 +0000
Subject: [Bugs] [Bug 1672818] New: GlusterFS 6.0 tracker
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
Bug ID: 1672818
Summary: GlusterFS 6.0 tracker
Product: GlusterFS
Version: 6
Status: NEW
Component: core
Keywords: Tracking, Triaged
Assignee: bugs at gluster.org
Reporter: srangana at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Tracker for the release 6.0
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 01:07:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 01:07:47 +0000
Subject: [Bugs] [Bug 1672826] New: Request gerrit dashboard addition for
release 6
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672826
Bug ID: 1672826
Summary: Request gerrit dashboard addition for release 6
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: srangana at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Request a gerrit dashboard for release 6 like the following:
-
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:4-1-dashboard
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 01:08:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 01:08:54 +0000
Subject: [Bugs] [Bug 1672828] New: Restrict gerrit merge permissions for
branch release-6 to release owners
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672828
Bug ID: 1672828
Summary: Restrict gerrit merge permissions for branch release-6
to release owners
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: srangana at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community
Release owners:
- srangana at redhat.com
Additionally, add Amar Tumballi to have merge rights.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 01:18:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 01:18:26 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #531 from Worker Ant ---
REVIEW: https://review.gluster.org/22158 (glusterd: Update op-version for
release 7) posted (#1) for review on master by Shyamsundar Ranganathan
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 01:46:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 01:46:52 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #532 from Worker Ant ---
REVIEW: https://review.gluster.org/22159 (api: Update all future API versions
to rel-6) posted (#1) for review on master by Shyamsundar Ranganathan
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 03:08:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 03:08:10 +0000
Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
Raghavendra G changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |rgowdapp at redhat.com
Depends On| |1664934
--- Comment #1 from Raghavendra G ---
Bug 1664934 - glusterfs-fuse client not benefiting from page cache on read
after write
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1664934
[Bug 1664934] glusterfs-fuse client not benefiting from page cache on read
after write
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 03:08:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 03:08:10 +0000
Subject: [Bugs] [Bug 1664934] glusterfs-fuse client not benefiting from page
cache on read after write
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1664934
Raghavendra G changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1672818 (glusterfs-6.0)
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
[Bug 1672818] GlusterFS 6.0 tracker
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 03:09:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 03:09:33 +0000
Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
Raghavendra G changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1670718
--- Comment #2 from Raghavendra G ---
Bug 1670718 - md-cache should be loaded at a position in graph where it sees
stats in write cbk
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1670718
[Bug 1670718] md-cache should be loaded at a position in graph where it sees
stats in write cbk
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 03:09:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 03:09:33 +0000
Subject: [Bugs] [Bug 1670718] md-cache should be loaded at a position in
graph where it sees stats in write cbk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1670718
Raghavendra G changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1672818 (glusterfs-6.0)
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
[Bug 1672818] GlusterFS 6.0 tracker
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 03:46:04 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 03:46:04 +0000
Subject: [Bugs] [Bug 1672851] New: With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672851
Bug ID: 1672851
Summary: With parallel-readdir enabled, deleting a directory
containing stale linkto files fails with "Directory
not empty"
Product: GlusterFS
Version: 4.1
Status: NEW
Component: distribute
Assignee: bugs at gluster.org
Reporter: nbalacha at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
If parallel-readdir is enabled on a volume, rm -rf fails with "Directory
not empty" if the directory contains stale linkto files.
Version-Release number of selected component (if applicable):
How reproducible:
Consistently
Steps to Reproduce:
1. Create a 3 brick distribute volume
2. Enable parallel-readdir and readdir-ahead on the volume
3. Fuse mount the volume and mkdir dir0
4. Create some files inside dir0 and rename them so linkto files are created on
the bricks
5. Check the bricks to see which files have linkto files. Delete the data files
directly on the bricks, leaving the linkto files behind. These are now stale
linkto files.
6. Remount the volume
7. rm -rf dir0
Actual results:
[root at rhgs313-6 fuse1]# rm -rf dir0/
rm: cannot remove ‘dir0/’: Directory not empty
Expected results:
dir0 should be deleted without errors
Additional info:
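A condensed shell sketch of the reproduction steps above, with placeholder
host and brick paths (the brick-side deletion in step 5 is summarised as a
comment; this is illustrative, not a tested script):
    gluster volume create testvol server1:/bricks/b{1,2,3} force
    gluster volume set testvol performance.readdir-ahead on
    gluster volume set testvol performance.parallel-readdir on
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/fuse1
    mkdir /mnt/fuse1/dir0
    for i in $(seq 1 20); do
        touch /mnt/fuse1/dir0/f$i
        mv /mnt/fuse1/dir0/f$i /mnt/fuse1/dir0/g$i  # renames create linkto files
    done
    # On the bricks: remove the data files that have linkto counterparts,
    # leaving the trusted.glusterfs.dht.linkto files behind (now stale).
    umount /mnt/fuse1 && mount -t glusterfs server1:/testvol /mnt/fuse1
    rm -rf /mnt/fuse1/dir0   # fails with "Directory not empty"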
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 03:46:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 03:46:18 +0000
Subject: [Bugs] [Bug 1672851] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672851
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Version|4.1 |mainline
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 03:46:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 03:46:30 +0000
Subject: [Bugs] [Bug 1672851] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672851
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |nbalacha at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 04:10:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 04:10:11 +0000
Subject: [Bugs] [Bug 1672851] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672851
--- Comment #1 from Nithya Balachandran ---
RCA:
rm -rf works by first listing and unlinking all entries in the directory
and then calling rmdir on the directory itself.
As DHT readdirp does not return linkto files in the listing, they are not
unlinked as part of the rm -rf itself. dht_rmdir handles this by performing a
readdirp internally on the directory and deleting all stale linkto files
before proceeding with the actual rmdir operation.
When parallel-readdir is enabled, the rda xlator is loaded below dht in the
graph and proactively lists and caches entries when an opendir is performed.
Entries are returned from this cache for any subsequent readdirp calls on the
directory that was opened.
DHT uses the presence of the trusted.glusterfs.dht.linkto xattr to determine
whether a file is a linkto file. As this call to opendir does not set
trusted.glusterfs.dht.linkto in the list of requested xattrs for the opendir
call, the cached entries do not contain this xattr value. As none of the
entries returned will have the xattr, DHT believes they are all data files and
fails the rmdir with ENOTEMPTY.
Turning off parallel-readdir allows the rm -rf to succeed.
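For illustration, a fix along the lines described above (requesting the
linkto xattr in the xdata of dht_rmdir's opendir) might look roughly like
this hedged C fragment. It is a sketch, not the actual change posted for
review; the surrounding variables (frame, this, subvol, local, fd) and the
key name held in conf->link_xattr_name are assumed from the DHT code base:
    dict_t *xdata = dict_new();
    if (xdata) {
        /* Ask that each entry carry the linkto xattr, so listings
         * cached by readdir-ahead still let dht_rmdir recognise
         * stale linkto files. */
        if (dict_set_uint32(xdata, conf->link_xattr_name, 256))
            gf_msg_debug(this->name, 0, "failed to set linkto xattr key");
        STACK_WIND(frame, dht_rmdir_opendir_cbk, subvol,
                   subvol->fops->opendir, &local->loc, fd, xdata);
        dict_unref(xdata);
    }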
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 04:37:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 04:37:57 +0000
Subject: [Bugs] [Bug 1672851] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672851
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22160 (cluster/dht: Request linkto xattrs in
dht_rmdir opendir) posted (#1) for review on master by N Balachandran
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 04:38:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 04:38:46 +0000
Subject: [Bugs] [Bug 1672869] New: With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672869
Bug ID: 1672869
Summary: With parallel-readdir enabled, deleting a directory
containing stale linkto files fails with "Directory
not empty"
Product: Red Hat Gluster Storage
Version: 3.4
Status: NEW
Component: distribute
Assignee: nbalacha at redhat.com
Reporter: nbalacha at redhat.com
QA Contact: tdesala at redhat.com
CC: bugs at gluster.org, rhs-bugs at redhat.com,
sankarshan at redhat.com, storage-qa-internal at redhat.com
Depends On: 1672851
Target Milestone: ---
Classification: Red Hat
+++ This bug was initially created as a clone of Bug #1672851 +++
Description of problem:
If parallel-readdir is enabled on a volume, rm -rf fails with "Directory
not empty" if the directory contains stale linkto files.
Version-Release number of selected component (if applicable):
How reproducible:
Consistently
Steps to Reproduce:
1. Create a 3 brick distribute volume
2. Enable parallel-readdir and readdir-ahead on the volume
3. Fuse mount the volume and mkdir dir0
4. Create some files inside dir0 and rename them so linkto files are created on
the bricks
5. Check the bricks to see which files have linkto files. Delete the data files
directly on the bricks, leaving the linkto files behind. These are now stale
linkto files.
6. Remount the volume
7. rm -rf dir0
Actual results:
[root at rhgs313-6 fuse1]# rm -rf dir0/
rm: cannot remove ‘dir0/’: Directory not empty
Expected results:
dir0 should be deleted without errors
Additional info:
--- Additional comment from Nithya Balachandran on 2019-02-06 04:10:11 UTC ---
RCA:
rm -rf works by first listing and unlinking all entries in the directory
and then calling rmdir on the directory itself.
As DHT readdirp does not return linkto files in the listing, they are not
unlinked as part of the rm -rf itself. dht_rmdir handles this by performing a
readdirp internally on the directory and deleting all stale linkto files
before proceeding with the actual rmdir operation.
When parallel-readdir is enabled, the rda xlator is loaded below dht in the
graph and proactively lists and caches entries when an opendir is performed.
Entries are returned from this cache for any subsequent readdirp calls on the
directory that was opened.
DHT uses the presence of the trusted.glusterfs.dht.linkto xattr to determine
whether a file is a linkto file. As this call to opendir does not set
trusted.glusterfs.dht.linkto in the list of requested xattrs for the opendir
call, the cached entries do not contain this xattr value. As none of the
entries returned will have the xattr, DHT believes they are all data files and
fails the rmdir with ENOTEMPTY.
Turning off parallel-readdir allows the rm -rf to succeed.
--- Additional comment from Worker Ant on 2019-02-06 04:37:57 UTC ---
REVIEW: https://review.gluster.org/22160 (cluster/dht: Request linkto xattrs in
dht_rmdir opendir) posted (#1) for review on master by N Balachandran
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1672851
[Bug 1672851] With parallel-readdir enabled, deleting a directory containing
stale linkto files fails with "Directory not empty"
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 04:38:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 04:38:46 +0000
Subject: [Bugs] [Bug 1672851] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672851
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1672869
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1672869
[Bug 1672869] With parallel-readdir enabled, deleting a directory containing
stale linkto files fails with "Directory not empty"
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 04:38:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 04:38:47 +0000
Subject: [Bugs] [Bug 1672869] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672869
Red Hat Bugzilla Rules Engine changed:
What |Removed |Added
----------------------------------------------------------------------------
Keywords| |ZStream
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 04:39:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 04:39:44 +0000
Subject: [Bugs] [Bug 1672869] With parallel-readdir enabled,
deleting a directory containing stale linkto files fails with
"Directory not empty"
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672869
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 05:43:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 05:43:58 +0000
Subject: [Bugs] [Bug 1672711] Upgrade from glusterfs 3.12 to gluster 4/5
broken
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672711
Sahina Bose changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |jthottan at redhat.com
Flags| |needinfo?(jthottan at redhat.c
| |om)
--- Comment #1 from Sahina Bose ---
Jiffin, can you help with this?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 06:33:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 06:33:52 +0000
Subject: [Bugs] [Bug 1672314] thin-arbiter: Check with thin-arbiter file
before marking new entry change log
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1672314
Ashish Pandey changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs at gluster.org |aspandey at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 06:36:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 06:36:58 +0000
Subject: [Bugs] [Bug 1662264] thin-arbiter: Check with thin-arbiter file
before marking new entry change log
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1662264
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/22161 (cluster/thin-arbiter: Consider
thin-arbiter before marking new entry changelog) posted (#1) for review on
release-5 by Ashish Pandey
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 07:17:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 07:17:16 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |MODIFIED
--- Comment #533 from Worker Ant ---
COMMIT: https://review.gluster.org/22158 committed in master by "Shyamsundar
Ranganathan" with a commit message- glusterd: Update
op-version for release 7
Change-Id: I0f3978d7e603e6e767dc7aa2a23ef35b1f2b43f7
Updates: bz#1193929
Signed-off-by: ShyamsundarR
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 07:23:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 07:23:49 +0000
Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days
with 'Failed to dispatch handler'
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1671556
--- Comment #8 from Nithya Balachandran ---
Initial analysis of one of the cores:
[root at rhgs313-7 gluster-5.3]# gdb -c core.6014 /usr/sbin/glusterfs
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterfs --direct-io-mode=disable
--fuse-mountopts=noatime,context="'.
Program terminated with signal 11, Segmentation fault.
#0 __inode_ctx_free (inode=inode at entry=0x7fa0d0349af8) at inode.c:410
410 if (!xl->call_cleanup && xl->cbks->forget)
(gdb) bt
#0 __inode_ctx_free (inode=inode at entry=0x7fa0d0349af8) at inode.c:410
#1 0x00007fa1809e90a2 in __inode_destroy (inode=0x7fa0d0349af8) at inode.c:432
#2 inode_table_prune (table=table at entry=0x7fa15800c3c0) at inode.c:1696
#3 0x00007fa1809e9f96 in inode_forget_with_unref (inode=0x7fa0d0349af8,
nlookup=128) at inode.c:1273
#4 0x00007fa177dae4e1 in do_forget (this=<optimized out>, unique=<optimized
out>, nodeid=<optimized out>, nlookup=<optimized out>) at fuse-bridge.c:726
#5 0x00007fa177dae5bd in fuse_forget (this=<optimized out>,
finh=0x7fa0a41da500, msg=<optimized out>, iobuf=<optimized out>) at
fuse-bridge.c:741
#6 0x00007fa177dc5d7a in fuse_thread_proc (data=0x557a0e8ffe20) at
fuse-bridge.c:5125
#7 0x00007fa17f83bdd5 in start_thread () from /lib64/libpthread.so.0
#8 0x00007fa17f103ead in msync () from /lib64/libc.so.6
#9 0x0000000000000000 in ?? ()
(gdb) f 0
#0 __inode_ctx_free (inode=inode at entry=0x7fa0d0349af8) at inode.c:410
410 if (!xl->call_cleanup && xl->cbks->forget)
(gdb) l
405 for (index = 0; index < inode->table->xl->graph->xl_count; index++)
{
406 if (inode->_ctx[index].value1 || inode->_ctx[index].value2) {
407 xl = (xlator_t *)(long)inode->_ctx[index].xl_key;
408 old_THIS = THIS;
409 THIS = xl;
410 if (!xl->call_cleanup && xl->cbks->forget)
411 xl->cbks->forget(xl, inode);
412 THIS = old_THIS;
413 }
414 }
(gdb) p *xl
Cannot access memory at address 0x0
(gdb) p index
$1 = 6
(gdb) p inode->table->xl->graph->xl_count
$3 = 13
(gdb) p inode->_ctx[index].value1
$4 = 0
(gdb) p inode->_ctx[index].value2
$5 = 140327960119304
(gdb) p/x inode->_ctx[index].value2
$6 = 0x7fa0a6370808
Based on the graph, the xlator with index = 6 is
(gdb) p ((xlator_t*)
inode->table->xl->graph->top)->next->next->next->next->next->next->next->name
$31 = 0x7fa16c0122e0 "web-content-read-ahead"
(gdb) p ((xlator_t*)
inode->table->xl->graph->top)->next->next->next->next->next->next->next->xl_id
$32 = 6
But read-ahead does not update the inode_ctx at all. There seems to be some
sort of memory corruption happening here but that needs further analysis.
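To make the failing dereference concrete, here is a hedged sketch of the
inode.c loop quoted above with a defensive guard added. It only illustrates
the corrupted state seen in this core (xl == 0x0 while value2 is set) and is
not a proposed upstream fix:
    for (index = 0; index < inode->table->xl->graph->xl_count; index++) {
        if (inode->_ctx[index].value1 || inode->_ctx[index].value2) {
            xl = (xlator_t *)(long)inode->_ctx[index].xl_key;
            if (!xl)  /* slot has a value but no xlator: the crashing case */
                continue;
            old_THIS = THIS;
            THIS = xl;
            if (!xl->call_cleanup && xl->cbks->forget)
                xl->cbks->forget(xl, inode);
            THIS = old_THIS;
        }
    }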
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 08:18:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:18:36 +0000
Subject: [Bugs] [Bug 1313567] flooding of "dict is NULL" logging
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1313567
Nithya Balachandran changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(archon810 at gmail.c
| |om)
--- Comment #18 from Nithya Balachandran ---
(In reply to Artem Russakovskii from comment #17)
> The fuse crash happened again yesterday, to another volume. Are there any
> mount options that could help mitigate this?
>
> In the meantime, I set up a monit (https://mmonit.com/monit/) task to watch
> and restart the mount, which works and recovers the mount point within a
> minute. Not ideal, but a temporary workaround.
>
> By the way, the way to reproduce this "Transport endpoint is not connected"
> condition for testing purposes is to kill -9 the right "glusterfs
> --process-name fuse" process.
>
>
> monit check:
> check filesystem glusterfs_data1 with path /mnt/glusterfs_data1
> start program = "/bin/mount /mnt/glusterfs_data1"
> stop program = "/bin/umount /mnt/glusterfs_data1"
> if space usage > 90% for 5 times within 15 cycles
> then alert else if succeeded for 10 cycles then alert
>
>
> stack trace:
> [2019-02-01 23:22:00.312894] W [dict.c:761:dict_ref]
> (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
> [0x7fa0249e4329]
> -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
> [0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
> [0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
> [2019-02-01 23:22:00.314051] W [dict.c:761:dict_ref]
> (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329)
> [0x7fa0249e4329]
> -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5)
> [0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58)
> [0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument]
> The message "E [MSGID: 101191]
> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
> handler" repeated 26 times between [2019-02-01 23:21:20.857333] and
> [2019-02-01 23:21:56.164427]
> The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk]
> 0-SITE_data3-replicate-0: selecting local read_child SITE_data3-client-3"
> repeated 27 times between [2019-02-01 23:21:11.142467] and [2019-02-01
> 23:22:03.474036]
> pending frames:
> frame : type(1) op(LOOKUP)
> frame : type(0) op(0)
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 6
> time of crash:
> 2019-02-01 23:22:03
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 5.3
> /usr/lib64/libglusterfs.so.0(+0x2764c)[0x7fa02cf6664c]
> /usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7fa02cf70cb6]
> /lib64/libc.so.6(+0x36160)[0x7fa02c12d160]
> /lib64/libc.so.6(gsignal+0x110)[0x7fa02c12d0e0]
> /lib64/libc.so.6(abort+0x151)[0x7fa02c12e6c1]
> /lib64/libc.so.6(+0x2e6fa)[0x7fa02c1256fa]
> /lib64/libc.so.6(+0x2e772)[0x7fa02c125772]
> /lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7fa02c4bb0b8]
> /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.
> so(+0x5dc9d)[0x7fa025543c9d]
> /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.
> so(+0x70ba1)[0x7fa025556ba1]
> /usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x58f3f)[0x7fa0257dbf3f]
> /usr/lib64/libgfrpc.so.0(+0xe820)[0x7fa02cd31820]
> /usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7fa02cd31b6f]
> /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fa02cd2e063]
> /usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa0b2)[0x7fa02694e0b2]
> /usr/lib64/libglusterfs.so.0(+0x854c3)[0x7fa02cfc44c3]
> /lib64/libpthread.so.0(+0x7559)[0x7fa02c4b8559]
> /lib64/libc.so.6(clone+0x3f)[0x7fa02c1ef81f]
Please mount the volume using the option lru-limit=0 and see if the crashes go
away. We are currently working on analysing some coredumps and will update once
we have a fix.
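For anyone following along, the suggested option can be passed at mount time;
a hedged example with placeholder server and volume names:
    mount -t glusterfs -o lru-limit=0 server1:/SITE_data1 /mnt/glusterfs_data1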
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 08:53:01 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:53:01 +0000
Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=789278
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 20755
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 08:53:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:53:25 +0000
Subject: [Bugs] [Bug 1542072] Syntactical errors in hook scripts for
managing SELinux context on bricks #2 (S10selinux-label-brick.sh +
S10selinux-del-fcontext.sh)
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1542072
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 19502
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 08:53:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:53:54 +0000
Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=789278
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 20860
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 08:55:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:55:45 +0000
Subject: [Bugs] [Bug 1626543] dht/tests: Create a .t to test all possible
combinations for file rename
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1626543
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21121
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 08:56:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:56:32 +0000
Subject: [Bugs] [Bug 1512691] PostgreSQL DB Restore: unexpected data beyond
EOF
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1512691
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 20981
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 08:56:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:56:57 +0000
Subject: [Bugs] [Bug 1512691] PostgreSQL DB Restore: unexpected data beyond
EOF
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1512691
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 20737
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 08:57:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:57:23 +0000
Subject: [Bugs] [Bug 1512691] PostgreSQL DB Restore: unexpected data beyond
EOF
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1512691
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 20980
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 08:57:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:57:50 +0000
Subject: [Bugs] [Bug 1512691] PostgreSQL DB Restore: unexpected data beyond
EOF
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1512691
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21006
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Wed Feb 6 08:58:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:58:13 +0000
Subject: [Bugs] [Bug 1624701] error-out {inode,
entry}lk fops with all-zero lk-owner
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1624701
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 21058
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 08:58:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:58:41 +0000
Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=789278
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 20754
--
You are receiving this mail because:
You are the assignee for the bug.
From bugzilla at redhat.com Wed Feb 6 08:59:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Wed, 06 Feb 2019 08:59:46 +0000
Subject: [Bugs] [Bug 1602282] tests/bugs/bug-1371806_acl.t fails for
distributed regression framework
In-Reply-To: