[Bugs] [Bug 1663519] New: Memory leak when smb.conf has "store dos attributes = yes"
bugzilla at redhat.com
Fri Jan 4 16:49:02 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1663519
Bug ID: 1663519
Summary: Memory leak when smb.conf has "store dos attributes = yes"
Product: GlusterFS
Version: 3.12
Hardware: x86_64
OS: Linux
Status: NEW
Component: gluster-smb
Severity: urgent
Assignee: bugs at gluster.org
Reporter: ryan at magenta.tv
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Created attachment 1518442
--> https://bugzilla.redhat.com/attachment.cgi?id=1518442&action=edit
Python 3 script to replicate issue
---------------------------------------------------------------------------
Description of problem:
If the glusterfs VFS module is used with Samba and the global option "store dos
attributes = yes" is set, the RSS memory usage of the smbd process balloons.
If a FUSE mount is used with Samba instead, with the same option set, the RSS
memory usage of the Gluster FUSE mount process balloons.
With "store dos attributes = yes", Samba keeps DOS attributes in the
user.DOSATTRIB extended attribute, so file operations generate extra
getxattr/setxattr traffic through Gluster.
---------------------------------------------------------------------------
Version-Release number of selected component (if applicable):
Samba 4.9.4
Gluster 4.1
How reproducible:
Can reproduce every time with the attached Python script
---------------------------------------------------------------------------
Gluster volume options:
Volume Name: mcv02
Type: Distribute
Volume ID: 5debe2f4-16c4-457c-8496-fcf32b298ccf
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: mcn01:/mnt/h1a/test_data
Brick2: mcn02:/mnt/h1b/test_data
Brick3: mcn01:/mnt/h2a/test_data
Brick4: mcn02:/mnt/h2b/test_data
Options Reconfigured:
network.ping-timeout: 5
storage.batch-fsync-delay-usec: 0
performance.cache-size: 1000MB
performance.stat-prefetch: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
performance.md-cache-timeout: 600
performance.io-thread-count: 32
performance.parallel-readdir: on
performance.nl-cache: on
performance.nl-cache-timeout: 600
cluster.lookup-optimize: on
performance.write-behind-window-size: 1MB
performance.client-io-threads: on
client.event-threads: 4
server.event-threads: 4
auth.allow: 172.30.30.*
transport.address-family: inet
features.quota: on
features.inode-quota: on
nfs.disable: on
features.quota-deem-statfs: on
cluster.brick-multiplex: off
cluster.server-quorum-ratio: 50%
---------------------------------------------------------------------------
smb.conf file:
[global]
security = user
netbios name = NAS01
clustering = no
server signing = no
max log size = 10000
log file = /var/log/samba/log-%M-test.smbd
logging = file@1
log level = 1
passdb backend = tdbsam
guest account = nobody
map to guest = bad user
force directory mode = 0777
force create mode = 0777
create mask = 0777
directory mask = 0777
store dos attributes = yes
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
glusterfs:volfile_server = localhost
kernel share modes = No
[VFS]
vfs objects = glusterfs
glusterfs:volume = mcv02
path = /
read only = no
guest ok = yes
valid users = "nobody"
[FUSE]
read only = no
guest ok = yes
path = "/mnt/mcv02"
valid users = "nobody"
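A quick way to confirm both shares are reachable from a client before running
the load test, assuming the third-party Python package smbprotocol
(pip install smbprotocol); the server name NAS01 and the VFS/FUSE share names
come from the config above, and the guest credentials are illustrative:

# Hypothetical connectivity probe using the smbprotocol package.
import smbclient

smbclient.register_session("NAS01", username="nobody", password="")

for share in ("VFS", "FUSE"):
    probe = rf"\\NAS01\{share}\probe.txt"
    with smbclient.open_file(probe, mode="w") as fh:
        fh.write("probe")
    smbclient.remove(probe)
    print(share, "ok")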
-------------------------------------------------------------------------
Steps to Reproduce:
1. Install/compile Samba (tested with 4.8.4, 4.8.6, 4.9.4). Install htop.
2. Add 'store dos attributes = yes' to the [global] section of the
/etc/samba/smb.conf file.
3. Restart the SMB service.
4. Map the share to a drive letter in Windows.
5. Download the attached Python script and change line 41 to the mapped drive
letter.
6. Run the attached Python script from a Windows OS (tested with Win 10 and
Python 3.7.1); a rough sketch of such a script is shown after these steps.
7. Run 'htop' or otherwise watch the RSS memory usage of the smbd process.
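The attachment is the authoritative reproducer. As a rough illustration of the
kind of load involved (not the actual attachment; the Z: drive letter and file
count are assumptions), a minimal Python 3 sketch run on the Windows client:

# Hypothetical approximation of the attached reproducer. Creates files
# over the mapped share and flips DOS attributes, which smbd must
# persist when "store dos attributes = yes" is set.
import ctypes
import os

DRIVE = r"Z:\leaktest"  # assumed mapped drive letter
FILE_ATTRIBUTE_HIDDEN = 0x02
FILE_ATTRIBUTE_NORMAL = 0x80

os.makedirs(DRIVE, exist_ok=True)
for i in range(50000):
    path = os.path.join(DRIVE, f"file_{i}.dat")
    with open(path, "wb") as fh:
        fh.write(b"x" * 4096)
    # Each attribute change forces Samba to rewrite user.DOSATTRIB.
    ctypes.windll.kernel32.SetFileAttributesW(path, FILE_ATTRIBUTE_HIDDEN)
    ctypes.windll.kernel32.SetFileAttributesW(path, FILE_ATTRIBUTE_NORMAL)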
Actual results:
RSS memory of the smbd process (VFS case) or the Gluster FUSE mount process
balloons to 2-4 GB and does not decrease even after I/O has finished.
Expected results:
Memory usage of the smbd and FUSE mount processes increases slightly but then
stabilises, rarely going over 200 MB.
Additional info:
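To capture the RSS numbers above over time, a small sketch that samples VmRSS
from /proc once per second (process names assumed: smbd for the VFS case,
glusterfs for the FUSE mount):

# Sketch: sample VmRSS of smbd/glusterfs processes while the load runs.
import os
import time

WATCH = ("smbd", "glusterfs")

while True:
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as fh:
                comm = fh.read().strip()
            if comm not in WATCH:
                continue
            with open(f"/proc/{pid}/status") as fh:
                for line in fh:
                    if line.startswith("VmRSS:"):
                        print(pid, comm, line.split()[1], "kB")
        except OSError:
            pass  # process exited between listdir and open
    time.sleep(1)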
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.