[Bugs] [Bug 1344885] inode leak in brick process

bugzilla at redhat.com
Sun Jun 12 07:31:30 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1344885



--- Comment #1 from Raghavendra G <rgowdapp at redhat.com> ---
Description of problem:
There is a leak of inodes on the brick process.

[root@unused ~]# gluster volume info

Volume Name: ra
Type: Distribute
Volume ID: 258a8e92-678b-41db-ba8e-b273a360297d
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: booradley:/home/export-2/ra
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet

Script:
[root@unused mnt]# for i in {1..150}; do echo $i; cp -rf /etc . && rm -rf *; done

After the script completes, I can see active inodes in the brick itable:

[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069
conn.0.bound_xl./home/export-2/ra.active_size=149

[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069 | wc -l
150
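
For reference, the brick statedump above was presumably produced with the statedump CLI (the volume name and dump path below match this report; this is a sketch, not necessarily the exact commands used):

# Ask glusterd to dump the brick process state for volume "ra";
# the dump file is written under /var/run/gluster by default.
gluster volume statedump ra

# Count the active-inode lines in the resulting brick dump
# (same effect as the grep | wc -l above).
grep -c ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069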

But the client fuse mount's itable has almost no active inodes (just one, presumably the root):

[root@unused ~]# grep active /var/run/gluster/glusterdump.20612.dump.1465629006 | grep itable
xlator.mount.fuse.itable.active_size=1
[xlator.mount.fuse.itable.active.1]
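
The client-side dump is presumably obtained by sending SIGUSR1 to the fuse client process (pid 20612 here, going by the dump file name); a sketch:

# Gluster processes write a statedump on SIGUSR1 to
# /var/run/gluster/glusterdump.<pid>.dump.<timestamp>.
kill -USR1 20612

# Inspect the fuse itable section of the client dump.
grep itable.active /var/run/gluster/glusterdump.20612.dump.*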

I've not done a detailed RCA, but my initial gut feeling is that there is one
inode leak for every iteration of the loop. The leaked inode most likely
corresponds to /mnt/etc.
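
A rough way to check that hypothesis would be to record the gfid of the copied directory before it is deleted and compare it against the active entries in a fresh brick dump. A sketch along those lines (the backend path and the trusted.gfid xattr are standard, but the exact dump keys may differ by version):

# One iteration of the reproducer, noting the gfid of etc on the brick
# backend before the tree is removed.
cp -rf /etc /mnt/
getfattr -e hex -n trusted.gfid /home/export-2/ra/etc
rm -rf /mnt/*

# Fresh brick statedump: active_size should grow by one per iteration if
# the gut feeling is right, and the new active entry's gfid should match
# the value printed above.
gluster volume statedump ra
grep ra.active_size /var/run/gluster/home-export-2-ra.*.dump.*
grep -A3 'ra.active\.' /var/run/gluster/home-export-2-ra.*.dump.* | grep gfid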

Version-Release number of selected component (if applicable):
Bug seen on upstream master.

How reproducible:
Quite consistently
