[Bugs] [Bug 1365511] New: inode leak in brick process
bugzilla at redhat.com
Tue Aug 9 12:16:46 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1365511
Bug ID: 1365511
Summary: inode leak in brick process
Product: GlusterFS
Version: 3.8.1
Component: unclassified
Keywords: Triaged
Assignee: bugs at gluster.org
Reporter: ndevos at redhat.com
CC: bugs at gluster.org, rgowdapp at redhat.com
Depends On: 1344885
+++ This bug was initially created as a clone of Bug #1344885 +++
+++ This bug was initially created as a clone of Bug #1344843 +++
Description of problem:
There is a leak of inodes on the brick process.
[root@unused ~]# gluster volume info
Volume Name: ra
Type: Distribute
Volume ID: 258a8e92-678b-41db-ba8e-b273a360297d
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: booradley:/home/export-2/ra
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
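(For anyone setting up the same environment: a single-brick distribute volume with these options can be created roughly as below. The hostname and brick path are taken from the volume info above; the /mnt mount point is assumed from the script and the /mnt/etc reference further down, and 'force' may be needed at create time if the brick directory sits on the root filesystem.)
# setup sketch -- assumed commands, not the reporter's exact ones
gluster volume create ra booradley:/home/export-2/ra
gluster volume set ra diagnostics.brick-log-level DEBUG
gluster volume set ra nfs.disable on
gluster volume start ra
mount -t glusterfs booradley:/ra /mnt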
Script:
[root@unused mnt]# for i in {1..150}; do echo $i; cp -rf /etc . && rm -rf *; done
After completion of the script, I can see active inodes in the brick itable:
[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069
conn.0.bound_xl./home/export-2/ra.active_size=149
[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069 | wc -l
150
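(The file above is a brick statedump. For anyone reproducing this, such a dump can be generated with the statedump CLI; by default the files land in /var/run/gluster and are named after the brick path, pid and timestamp.)
# triggers a statedump of the volume's brick process(es);
# look for home-export-2-ra.<pid>.dump.<timestamp> under /var/run/gluster
gluster volume statedump ra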
But the fuse client mount does not hold these inodes; its itable shows only a single active inode:
[root@unused ~]# grep active /var/run/gluster/glusterdump.20612.dump.1465629006 | grep itable
xlator.mount.fuse.itable.active_size=1
[xlator.mount.fuse.itable.active.1]
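(The glusterdump.<pid>.dump.<timestamp> file is the fuse client's statedump; it can be triggered by sending SIGUSR1 to the glusterfs client process, for example:)
# assumes a single glusterfs fuse client (and no other glusterfs daemons) on this node
kill -USR1 $(pidof glusterfs)
ls -t /var/run/gluster/glusterdump.*.dump.* | head -1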
I have not done a detailed RCA yet, but my initial gut feeling is that there is one inode leak for every iteration of the loop; the leaked inode most likely corresponds to /mnt/etc.
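(A rough way to test the one-leak-per-iteration theory, reusing the commands above; dump file names will differ per run, and the 10-iteration count is arbitrary:)
# note the current active_size, run N more iterations, dump again and compare;
# an increase of roughly N would support the theory
gluster volume statedump ra
grep active_size "$(ls -t /var/run/gluster/home-export-2-ra.*.dump.* | head -1)"
(cd /mnt && for i in {1..10}; do cp -rf /etc . && rm -rf *; done)
gluster volume statedump ra
grep active_size "$(ls -t /var/run/gluster/home-export-2-ra.*.dump.* | head -1)"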
Version-Release number of selected component (if applicable):
RHGS-3.1.3 git repo; the bug is seen on upstream master too.
How reproducible:
Quite consistently
Steps to Reproduce:
1. Create and start a single-brick distribute volume and mount it with the fuse client (here on /mnt).
2. On the mount point, run the cp/rm loop shown above.
3. Take a statedump of the brick process and of the fuse client, and compare the active inode counts in their itables.
Actual results:
The brick itable retains roughly one active inode per iteration (active_size=149 after 150 iterations), while the fuse client itable has only a single active inode.
Expected results:
Once the files are removed and the client holds no references, the brick itable's active inode count should drop back to match the client's (essentially just the root inode).
Additional info:
--- Additional comment from Vijay Bellur on 2016-06-12 09:42:06 CEST ---
REVIEW: http://review.gluster.org/14704 (libglusterfs/client_t: Dump the 0th
client too) posted (#1) for review on master by Raghavendra G
(rgowdapp at redhat.com)
--- Additional comment from Raghavendra G on 2016-06-14 06:11:48 CEST ---
RCA is not complete and we have not found the leak yet. Hence moving the bug back to ASSIGNED.
--- Additional comment from Vijay Bellur on 2016-06-16 08:34:30 CEST ---
REVIEW: http://review.gluster.org/14704 (libglusterfs/client_t: Dump the 0th
client too) posted (#2) for review on master by Raghavendra G
(rgowdapp at redhat.com)
--- Additional comment from Vijay Bellur on 2016-06-16 08:34:33 CEST ---
REVIEW: http://review.gluster.org/14739 (storage/posix: fix inode leaks) posted
(#1) for review on master by Raghavendra G (rgowdapp at redhat.com)
--- Additional comment from Vijay Bellur on 2016-06-28 22:58:22 CEST ---
COMMIT: http://review.gluster.org/14704 committed in master by Jeff Darcy
(jdarcy at redhat.com)
------
commit 60cc8ddaf6105b89e5ce3222c5c5a014deda6a15
Author: Raghavendra G <rgowdapp at redhat.com>
Date: Sun Jun 12 13:02:05 2016 +0530
libglusterfs/client_t: Dump the 0th client too
Change-Id: I565e81944b6670d26ed1962689dcfd147181b61e
BUG: 1344885
Signed-off-by: Raghavendra G <rgowdapp at redhat.com>
Reviewed-on: http://review.gluster.org/14704
Smoke: Gluster Build System <jenkins at build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
CentOS-regression: Gluster Build System <jenkins at build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy at redhat.com>
--- Additional comment from Vijay Bellur on 2016-07-05 14:46:10 CEST ---
COMMIT: http://review.gluster.org/14739 committed in master by Jeff Darcy
(jdarcy at redhat.com)
------
commit 8680261cbb7cacdc565feb578d6afd3fac50cec4
Author: Raghavendra G <rgowdapp at redhat.com>
Date: Thu Jun 16 12:03:19 2016 +0530
storage/posix: fix inode leaks
Change-Id: Ibd221ba62af4db17bea5c52d37f5c0ba30b60a7d
BUG: 1344885
Signed-off-by: Raghavendra G <rgowdapp at redhat.com>
Reviewed-on: http://review.gluster.org/14739
Smoke: Gluster Build System <jenkins at build.gluster.org>
Reviewed-by: N Balachandran <nbalacha at redhat.com>
CentOS-regression: Gluster Build System <jenkins at build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu at redhat.com>
Reviewed-by: Krutika Dhananjay <kdhananj at redhat.com>
NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1344885
[Bug 1344885] inode leak in brick process