[Bugs] [Bug 1395517] Seeing error messages [snapview-client.c:283:gf_svc_lookup_cbk] and [dht-helper.c:1666:dht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c)
bugzilla at redhat.com
Wed Nov 16 05:27:07 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1395517
Nithya Balachandran <nbalacha at redhat.com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
--- Comment #1 from Nithya Balachandran <nbalacha at redhat.com> ---
On my systemic setup, I am seeing a lot of error messages like the ones below
on my clients:
[2016-11-14 02:43:50.274000] E [snapview-client.c:283:gf_svc_lookup_cbk]
0-sysvol-snapview-client: Lookup failed on normal graph with error Transport
endpoint is not connected
[2016-11-14 02:43:50.275390] E [dht-helper.c:1666:dht_inode_ctx_time_update]
(-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c)
[0x7f2a4ee4175c]
-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/distribute.so(+0x4623c)
[0x7f2a4eba023c]
-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/distribute.so(+0x99b0)
[0x7f2a4eb639b0] ) 0-sysvol-dht: invalid argument: inode [Invalid argument]
I see these messages repeating in bulk about every 20 minutes.
--- Additional comment from Nithya Balachandran on 2016-11-15 09:40:06 EST ---
Steps to see the issue:
1. Create a pure distribute volume with 2 bricks.
2. Fuse mount the volume.
3. Create a directory dir1 and cd into it.
4. Kill one of the brick processes (kill -9).
5. Try to create directories inside dir1.
The mount log shows the following messages:
[2016-11-15 14:06:39.738234] E [dht-helper.c:1666:dht_inode_ctx_time_update]
(-->/usr/local/lib/glusterfs/3.10dev/xlator/protocol/client.so(+0x2f750)
[0x7f1fa58a8750]
-->/usr/local/lib/glusterfs/3.10dev/xlator/cluster/distribute.so(+0x39ffe)
[0x7f1fa5604ffe]
-->/usr/local/lib/glusterfs/3.10dev/xlator/cluster/distribute.so(+0xde27)
[0x7f1fa55d8e27] ) 0-time-dht: invalid argument: inode [Invalid argument]
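The trailing "[Invalid argument]" appears to come from the argument-validation
guard at the top of dht_inode_ctx_time_update(), which bails out when handed a
NULL inode. A minimal, self-contained sketch of that style of guard (the macro
and names below are simplified stand-ins for gluster's GF_VALIDATE_OR_GOTO,
not the actual code):

#include <stdio.h>
#include <errno.h>
#include <string.h>

/* Simplified validate-or-bail guard: log EINVAL and jump to the
 * error label when a required argument is NULL. */
#define VALIDATE_OR_GOTO(name, arg, label)                          \
        do {                                                        \
                if (!(arg)) {                                       \
                        fprintf (stderr,                            \
                                 "%s: invalid argument: %s [%s]\n", \
                                 (name), #arg, strerror (EINVAL));  \
                        goto label;                                 \
                }                                                   \
        } while (0)

struct inode;                   /* opaque stand-in for gluster's inode_t */

static int
inode_ctx_time_update (struct inode *inode)
{
        VALIDATE_OR_GOTO ("sysvol-dht", inode, out);
        /* ... update the cached times in the inode ctx ... */
        return 0;
out:
        return -1;
}

int
main (void)
{
        /* Passing NULL reproduces the logged message. */
        return inode_ctx_time_update (NULL) ? 1 : 0;
}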
RCA:
The function dht_lookup_dir_cbk() does not check whether the lookup succeeded
on at least one subvolume before attempting to set the inode ctx:
        this_call_cnt = dht_frame_return (frame);
        if (is_last_call (this_call_cnt)) {
                if (local->need_selfheal) {
                        local->need_selfheal = 0;
                        dht_lookup_everywhere (frame, this, &local->loc);
                        return 0;
                }

                if (local->op_ret == 0) {
                        ret = dht_layout_normalize (this, &local->loc, layout);
                        if (ret != 0) {
                                gf_msg_debug (this->name, 0,
                                              "fixing assignment on %s",
                                              local->loc.path);
                                goto selfheal;
                        }
                        dht_layout_set (this, local->inode, layout);
                }

                /* local->inode is NULL here, as the directory was not
                 * found on any brick. */
                dht_inode_ctx_time_update (local->inode, this,
                                           &local->stbuf, 1);
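For reference, a minimal, self-contained sketch of the guard the fix
introduces: skip the ctx update entirely when the lookup produced no inode.
The types and helper below are simplified stand-ins, not the real gluster
signatures; the actual patch is in the review linked in the next comment.

#include <stddef.h>

struct inode;                          /* stand-in for gluster's inode_t */

struct dht_local {                     /* stand-in for dht_local_t */
        struct inode *inode;
};

static void
inode_ctx_time_update (struct inode *inode)
{
        (void) inode;
        /* ... update the cached times in the inode ctx ... */
}

static void
lookup_dir_cbk_tail (struct dht_local *local)
{
        /* The fix: only update the time ctx if the lookup actually
         * found the directory on at least one subvolume. */
        if (local->inode != NULL)
                inode_ctx_time_update (local->inode);
}

int
main (void)
{
        struct dht_local local = { .inode = NULL }; /* found on no brick */

        lookup_dir_cbk_tail (&local);  /* safely skipped: inode is NULL */
        return 0;
}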
--- Additional comment from Worker Ant on 2016-11-15 10:15:55 EST ---
REVIEW: http://review.gluster.org/15847 (cluster/dht: Check for null inode)
posted (#1) for review on master by N Balachandran (nbalacha at redhat.com)
--- Additional comment from Worker Ant on 2016-11-15 22:42:37 EST ---
COMMIT: http://review.gluster.org/15847 committed in master by Atin Mukherjee
(amukherj at redhat.com)
------
commit 8313d53accaa22feb14d284fb91245be0a32e16e
Author: N Balachandran <nbalacha at redhat.com>
Date: Tue Nov 15 20:40:08 2016 +0530
cluster/dht: Check for null inode
Check for NULL inode before attempting to
set dht inode ctx.
Change-Id: I7693c18445f138221d8417df5e95b118cedb818a
BUG: 1395261
Signed-off-by: N Balachandran <nbalacha at redhat.com>
Reviewed-on: http://review.gluster.org/15847
Smoke: Gluster Build System <jenkins at build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana at redhat.com>
NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
CentOS-regression: Gluster Build System <jenkins at build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj at redhat.com>
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.