[Gluster-devel] [rhs-smb] Fwd: Re: [Gluster-users] Possible memory leak ?

Lalatendu Mohanty lmohanty at redhat.com
Fri Sep 13 13:50:06 UTC 2013


On 09/13/2013 05:51 PM, Poornima Gurusiddaiah wrote:
> Hi,
>
> We ran Valgrind against the smbd process (with the gluster VFS plugin) and performed I/O, gluster graph changes, and other operations, but we did not hit an 'out of memory' issue.
> Could you tell us what operations you were performing when the process was OOM-killed? Were extensive gluster graph changes being made?
> Also, how long had the process been running before it was OOM-killed?
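> A minimal sketch of such a Valgrind run (the exact flags here are an assumption, not necessarily the invocation used above):
>
>     # -i runs a single smbd in the foreground, serving one connection,
>     # so Valgrind can report leaks when the process exits
>     valgrind --leak-check=full --show-reachable=yes smbd -i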
>
> Please set the following options, because it looks like the Samba server has run out of trusted (privileged) ports.
>
> Set the option "server.allow-insecure" to 'on' on your gluster volume.
> Edit the glusterd.vol file and add the line below:
> option rpc-auth-allow-insecure on
Just wanted to add that, to use insecure ports for the Samba server:

  * You need to edit /etc/glusterfs/glusterd.vol on each of your nodes
    and add "option rpc-auth-allow-insecure on", then restart the
    glusterd service on all nodes.
  * The exact command for setting server.allow-insecure on is "gluster
    volume set <volname> server.allow-insecure on". Both steps are
    sketched below.
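A minimal sketch of both steps, assuming a volume named <volname> and a SysV-init system (adjust the restart command for your distribution):

    # 1. On every node, add this inside the "volume management" block
    #    of /etc/glusterfs/glusterd.vol:
    #       option rpc-auth-allow-insecure on
    #    then restart glusterd:
    service glusterd restart

    # 2. Once, from any node, allow insecure ports on the volume:
    gluster volume set <volname> server.allow-insecure on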

> Regards,
> Poornima
>
> -------- Original Message --------
> Subject: 	Re: [Gluster-devel] [Gluster-users] Possible memory leak ?
> Date: 	Fri, 13 Sep 2013 09:58:22 +0800
> From: 	haiwei.xie-soulinfo <haiwei.xie at soulinfo.com>
> To: 	Lalatendu Mohanty <lmohanty at redhat.com>
> CC: 	gluster-devel at nongnu.org
>
>
>
> On Fri, 13 Sep 2013 02:14:05 +0530
> Lalatendu Mohanty <lmohanty at redhat.com> wrote:
>
>> On 09/12/2013 07:01 AM, haiwei.xie-soulinfo wrote:
>>> hi,
>>>      We are seeing a memory leak with 3.4.0 and the Samba VFS plugin.
>>>      With a CIFS mount, while our application runs, the 'VIRT' of the smbd process keeps increasing until it is OOM-killed.
>>> Running 'fsync'/'sync' or 'echo 1 > /proc/sys/vm/drop_caches' does not help. With a FUSE mount there is no memory leak.
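>>> A simple way to watch this growth (a sketch; <smbd-pid> is a placeholder for the smbd worker's PID):
>>>
>>>     # sample the VSZ (shown as VIRT in top) and RSS of the smbd
>>>     # worker once a minute
>>>     while true; do ps -o pid,vsz,rss,comm -p <smbd-pid>; sleep 60; done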
>>>
>>>      I suspect a bug in the FUSE API or in samba-gluster-vfs. Any advice?
>>>      Thanks,
>> Do you have a backtrace for the crash, log messages, or anything else that
>> would help us debug the issue?
>>
> Thanks for your response.
>
> I only have the OOM-kill dmesg output and /var/log/samba/log.client; there are no smbd core files.
>
> --terrs.
>
> ------------------------------------logs------------------------------------------------
> # dmesg
> smbd invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0, oom_score_adj=0
> smbd cpuset=/ mems_allowed=0
> Pid: 24583, comm: smbd Not tainted 2.6.32-358.el6.x86_64 #1
> Call Trace:
>    [<ffffffff810cb5d1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
>    [<ffffffff8111cd10>] ? dump_header+0x90/0x1b0
>    [<ffffffff810e91ee>] ? __delayacct_freepages_end+0x2e/0x30
>    [<ffffffff8121d0bc>] ? security_real_capable_noaudit+0x3c/0x70
>    [<ffffffff8111d192>] ? oom_kill_process+0x82/0x2a0
>    [<ffffffff8111d0d1>] ? select_bad_process+0xe1/0x120
>    [<ffffffff8111d5d0>] ? out_of_memory+0x220/0x3c0
>    [<ffffffff8112c27c>] ? __alloc_pages_nodemask+0x8ac/0x8d0
>    [<ffffffff8116087a>] ? alloc_pages_current+0xaa/0x110
>    [<ffffffff8148cde7>] ? tcp_sendmsg+0x677/0xa20
>    [<ffffffff81437b9b>] ? sock_aio_write+0x19b/0x1c0
>    [<ffffffff81437a00>] ? sock_aio_write+0x0/0x1c0
>    [<ffffffff81180b5b>] ? do_sync_readv_writev+0xfb/0x140
>    [<ffffffff81096c80>] ? autoremove_wake_function+0x0/0x40
>    [<ffffffff81180dda>] ? do_sync_read+0xfa/0x140
>    [<ffffffff8121baf6>] ? security_file_permission+0x16/0x20
>    [<ffffffff81181ae6>] ? do_readv_writev+0xd6/0x1f0
>    [<ffffffff81181c46>] ? vfs_writev+0x46/0x60
>    [<ffffffff81181d71>] ? sys_writev+0x51/0xb0
>    [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
> Mem-Info:
> Node 0 DMA per-cpu:
> CPU    0: hi:    0, btch:   1 usd:   0
> CPU    1: hi:    0, btch:   1 usd:   0
> CPU    2: hi:    0, btch:   1 usd:   0
> CPU    3: hi:    0, btch:   1 usd:   0
> CPU    4: hi:    0, btch:   1 usd:   0
> CPU    5: hi:    0, btch:   1 usd:   0
> CPU    6: hi:    0, btch:   1 usd:   0
> CPU    7: hi:    0, btch:   1 usd:   0
> Node 0 DMA32 per-cpu:
> CPU    0: hi:  186, btch:  31 usd:   0
> CPU    1: hi:  186, btch:  31 usd:   0
> CPU    2: hi:  186, btch:  31 usd:   0
> CPU    3: hi:  186, btch:  31 usd:   0
> CPU    4: hi:  186, btch:  31 usd:   0
> CPU    5: hi:  186, btch:  31 usd:   0
> CPU    6: hi:  186, btch:  31 usd:   0
> CPU    7: hi:  186, btch:  31 usd:   0
> Node 0 Normal per-cpu:
> CPU    0: hi:  186, btch:  31 usd:   0
> CPU    1: hi:  186, btch:  31 usd:   0
> CPU    2: hi:  186, btch:  31 usd:   0
> CPU    3: hi:  186, btch:  31 usd:   0
> CPU    4: hi:  186, btch:  31 usd:   0
> CPU    5: hi:  186, btch:  31 usd:   0
> CPU    6: hi:  186, btch:  31 usd:   0
> CPU    7: hi:  186, btch:  31 usd:   0
> active_anon:3480428 inactive_anon:425284 isolated_anon:0
>    active_file:2145 inactive_file:9315 isolated_file:0
>    unevictable:0 dirty:885 writeback:8363 unstable:0
>    free:84530 slab_reclaimable:4581 slab_unreclaimable:11159
>    mapped:1281 shmem:44 pagetables:19832 bounce:0
> Node 0 DMA free:15624kB min:248kB low:308kB high:372kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15204kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
> lowmem_reserve[]: 0 2978 16108 16108
> Node 0 DMA32 free:102396kB min:49936kB low:62420kB high:74904kB active_anon:2034048kB inactive_anon:506720kB active_file:1516kB inactive_file:10704kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3049892kB mlocked:0kB dirty:484kB writeback:9780kB mapped:668kB shmem:0kB slab_reclaimable:956kB slab_unreclaimable:852kB kernel_stack:24kB pagetables:4292kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:8672 all_unreclaimable? no
> lowmem_reserve[]: 0 0 13130 13130
> Node 0 Normal free:220100kB min:220148kB low:275184kB high:330220kB active_anon:11887664kB inactive_anon:1194416kB active_file:7064kB inactive_file:26556kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13445120kB mlocked:0kB dirty:2668kB writeback:24060kB mapped:4456kB shmem:176kB slab_reclaimable:17368kB slab_unreclaimable:43784kB kernel_stack:4360kB pagetables:75036kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:13056 all_unreclaimable? no
> lowmem_reserve[]: 0 0 0 0
> Node 0 DMA: 2*4kB 0*8kB 0*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15624kB
> Node 0 DMA32: 478*4kB 354*8kB 102*16kB 77*32kB 84*64kB 76*128kB 57*256kB 46*512kB 39*1024kB 1*2048kB 0*4096kB = 104072kB
> Node 0 Normal: 1361*4kB 2478*8kB 1138*16kB 322*32kB 214*64kB 151*128kB 90*256kB 78*512kB 69*1024kB 0*2048kB 0*4096kB = 220436kB
> 13403 total pagecache pages
> 1847 pages in swap cache
> Swap cache stats: add 10430145, delete 10428298, find 2130220/2140426
> Free swap  = 0kB
> Total swap = 8224760kB
> 4194288 pages RAM
> 115862 pages reserved
> 18332 pages shared
> 3976531 pages non-shared
> [ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name
> [  577]     0   577     2790       36   4     -17         -1000 udevd
> [ 1874]     0  1874     1539       35   1       0             0 portreserve
> [ 1881]     0  1881    62321      470   0       0             0 rsyslogd
> [ 1936]     0  1936     2720       95   0       0             0 irqbalance
> [ 1955]    32  1955     4759       58   0       0             0 rpcbind
> [ 1969]     0  1969     3727       43   0       0             0 cgdcbxd
> [ 2069]     0  2069     3387      112   4       0             0 lldpad
> [ 2119]     0  2119     2088       67   4       0             0 fcoemon
> [ 2174]    81  2174     5562      258   4       0             0 dbus-daemon
> [ 2240]     0  2240     6290       19   5       0             0 rpc.idmapd
> [ 2256]     0  2256    47335       47   0       0             0 cupsd
> [ 2308]     0  2308     1019       38   4       0             0 acpid
> [ 2318]    68  2318     6544      270   0       0             0 hald
> [ 2319]     0  2319     4526       44   0       0             0 hald-runner
> [ 2367]     0  2367     5062       44   0       0             0 hald-addon-inpu
> [ 2368]    68  2368     4451       45   0       0             0 hald-addon-acpi
> [ 2387]     0  2387     1177       36   0       0             0 hv_kvp_daemon
> [ 2408]     0  2408    61241       86   0       0             0 pcscd
> [ 2424]     0  2424    96425      113   0       0             0 automount
> [ 2445]     0  2445     1692       24   5       0             0 mcelog
> [ 2457]     0  2457    16029       61   0     -17         -1000 sshd
> [ 2465]     0  2465     5523       40   0       0             0 xinetd
> [ 2545]     0  2545    19680       75   4       0             0 master
> [ 2554]    89  2554    19742       69   0       0             0 qmgr
> [ 2569]     0  2569    27544       41   4       0             0 abrtd
> [ 2583]     0  2583    27147      251   0       0             0 ksmtuned
> [ 2603]     0  2603     5363       32   0       0             0 atd
> [ 2631]     0  2631   108928       51   0       0             0 libvirtd
> [ 2689]     0  2689    15480       32   0       0             0 certmonger
> [ 2779]    99  2779     3222       36   0       0             0 dnsmasq
> [ 2814]     0  2814  1553973    18125   0       0             0 java
> [ 2815]     0  2815    58623      122   0       0             0 linux_webservic
> [ 2816]     0  2816    35590      100   0       0             0 dcs_col.py
> [ 2818]     0  2818    30367      191   0       0             0 gdm-binary
> [ 2823]     0  2823    19276       47   0       0             0 login
> [ 2825]     0  2825     1015       31   0       0             0 mingetty
> [ 2827]     0  2827     1015       31   6       0             0 mingetty
> [ 2829]     0  2829     1015       31   5       0             0 mingetty
> [ 2831]     0  2831     1015       31   4       0             0 mingetty
> [ 2887]     0  2887    38063       51   0       0             0 gdm-simple-slav
> [ 2889]     0  2889    39228     1952   0       0             0 Xorg
> [ 2913]     0  2913    59473      184   6       0             0 linux_webservic
> [ 2921]     0  2921  1045393      232   0       0             0 console-kit-dae
> [ 2991]    42  2991     5009       39   0       0             0 dbus-launch
> [ 2996]     0  2996    11268      200   0       0             0 devkit-power-da
> [ 3035]     0  3035    12486      272   0       0             0 polkitd
> [ 3047]   499  3047    42113       61   5       0             0 rtkit-daemon
> [ 3414]     0  3414    44226       49   1       0             0 gdm-session-wor
> [ 3418]     0  3418    37631       91   4       0             0 gnome-keyring-d
> [ 3427]     0  3427    74112      330   0       0             0 gnome-session
> [ 3435]     0  3435     5009       39   0       0             0 dbus-launch
> [ 3436]     0  3436     5483      201   1       0             0 dbus-daemon
> [ 3512]     0  3512    33284      669   0       0             0 gconfd-2
> [ 3521]     0  3521   130649      341   0       0             0 gnome-settings-
> [ 3522]     0  3522    73198       50   0       0             0 seahorse-daemon
> [ 3524]     0  3524    33642       50   1       0             0 gvfsd
> [ 3530]     0  3530    67997       78   5       0             0 gvfs-fuse-daemo
> [ 3547]     0  3547    68545      194   3       0             0 metacity
> [ 3550]     0  3550    82494      274   4       0             0 gnome-panel
> [ 3552]     0  3552   134857      477   0       0             0 nautilus
> [ 3554]     0  3554   157650       52   1       0             0 bonobo-activati
> [ 3561]     0  3561    35937      205   0       0             0 gvfs-gdu-volume
> [ 3562]     0  3562    75969      172   4       0             0 wnck-applet
> [ 3564]     0  3564    78943       51   3       0             0 trashapplet
> [ 3568]     0  3568    10176      268   0       0             0 udisks-daemon
> [ 3570]     0  3570    35665      160   0       0             0 gvfsd-trash
> [ 3571]     0  3571    10084       31   0       0             0 udisks-daemon
> [ 3574]     0  3574    37077       49   4       0             0 gvfs-gphoto2-vo
> [ 3576]     0  3576    57983       58   0       0             0 gvfs-afc-volume
> [ 3583]     0  3583   100987       51   3       0             0 gnote
> [ 3585]     0  3585    72509      185   4       0             0 notification-ar
> [ 3587]     0  3587    98257      118   1       0             0 gdm-user-switch
> [ 3589]     0  3589   135218      660   0       0             0 clock-applet
> [ 3602]     0  3602    64305      231   4       0             0 gnome-power-man
> [ 3609]     0  3609   113796       52   0       0             0 gnome-volume-co
> [ 3614]     0  3614    92243       49   4       0             0 pulseaudio
> [ 3615]     0  3615    80376      242   0       0             0 python
> [ 3618]     0  3618    65768      184   0       0             0 abrt-applet
> [ 3621]     0  3621    57299       50   5       0             0 polkit-gnome-au
> [ 3622]     0  3622    65349      268   0       0             0 bluetooth-apple
> [ 3624]     0  3624    28623       48   2       0             0 im-settings-dae
> [ 3625]     0  3625   117405      608   0       0             0 gpk-update-icon
> [ 3631]     0  3631    76750      277   0       0             0 nm-applet
> [ 3634]     0  3634    63708      222   0       0             0 gdu-notificatio
> [ 3646]     0  3646    69565      136   0       0             0 notification-da
> [ 3757]     0  3757    35925      107   1       0             0 escd
> [ 3758]     0  3758    66006      287   0       0             0 gnome-screensav
> [ 3797]     0  3797     9564       51   3       0             0 gconf-im-settin
> [ 3801]     0  3801    23747       50   5       0             0 gconf-helper
> [ 3825]     0  3825    33649       51   0       0             0 gvfsd-burn
> [ 3839]     0  3839    27116       41   4       0             0 bash
> [20202]     0 20202     2789       37   4     -17         -1000 udevd
> [20203]     0 20203     2789       36   2     -17         -1000 udevd
> [32090]     0 32090   118830      923   3       0             0 glusterd
> [32118]     0 32118   119899      118   1       0             0 glusterfsd
> [32405]     0 32405   442505     4410   3       0             0 glusterfsd
> [32415]     0 32415    81285      114   5       0             0 glusterfs
> [32421]    29 32421     6621       48   0       0             0 rpc.statd
> [  825]     0   825    73101      194   5       0             0 glusterfs
> [  893]     0   893    52976       78   4       0             0 smbd
> [  896]     0   896    53105      132   0       0             0 smbd
> [21884]     0 21884    24475      100   4       0             0 sshd
> [21893]     0 21893    27116      138   0       0             0 bash
> [24132]     0 24132    29303      106   4       0             0 crond
> [24583]     0 24583  6251402  3872767   7       0             0 smbd
> [ 2533]     0  2533    24475      112   0       0             0 sshd
> [ 2535]     0  2535    27117      139   0       0             0 bash
> [ 4839]     0  4839    24469      301   4       0             0 sshd
> [ 4841]     0  4841    27117      181   0       0             0 bash
> [ 5092]    89  5092    19700      274   0       0             0 pickup
> [ 5201]     0  5201    25226      128   6       0             0 sleep
> Out of memory: Kill process 24583 (smbd) score 949 or sacrifice child
> Killed process 24583, UID 0, (smbd) total-vm:25005608kB, anon-rss:15489076kB, file-rss:1992kB
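> (For scale: total-vm 25005608 kB is roughly 24 GB of address space, and anon-rss 15489076 kB is roughly 14.8 GB resident.)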
>
>
> $ cat log.192.168.101.11
> [2013/09/09 14:18:56.304225,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 14:21:59.000773,  0] src/vfs_glusterfs.c:280(vfs_gluster_connect)
>     soul: Initialized volume from server localhost
> [2013/09/09 14:26:40.441479,  0] src/vfs_glusterfs.c:280(vfs_gluster_connect)
>     soul: Initialized volume from server localhost
> [2013/09/09 14:26:43.652750,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:41:32.970777,  0] src/vfs_glusterfs.c:280(vfs_gluster_connect)
>     soul: Initialized volume from server localhost
> [2013/09/09 16:41:36.234711,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:47:15.297889,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:47:15.997834,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:48:40.010243,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:49:58.963741,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:50:04.310460,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:50:04.319571,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:50:05.157431,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:50:05.166221,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 16:50:11.917344,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 17:05:55.788424,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 17:10:11.079053,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 17:10:12.536755,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 17:10:59.782712,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 17:11:00.168956,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 17:11:01.368425,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 17:44:12.541311,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 17:44:13.309838,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/09 18:11:08.740653,  0] smbd/process.c:244(read_packet_remainder)
>     read_fd_with_timeout failed for client 0.0.0.0 read error = NT_STATUS_CONNECTION_RESET.
> [2013/09/10 10:24:12.250876,  0] src/vfs_glusterfs.c:280(vfs_gluster_connect)
>     soul: Initialized volume from server localhost
> [2013/09/10 10:24:15.479869,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/10 10:45:27.369512,  0] src/vfs_glusterfs.c:280(vfs_gluster_connect)
>     soul: Initialized volume from server localhost
> [2013/09/10 10:45:30.587403,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/10 10:47:27.227002,  0] src/vfs_glusterfs.c:280(vfs_gluster_connect)
>     soul: Initialized volume from server localhost
> [2013/09/10 10:47:30.443215,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/10 11:30:23.651192,  0] src/vfs_glusterfs.c:280(vfs_gluster_connect)
>     soul: Initialized volume from server localhost
> [2013/09/10 11:30:26.874680,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/10 13:54:19.110108,  0] src/vfs_glusterfs.c:280(vfs_gluster_connect)
>     soul: Initialized volume from server localhost
> [2013/09/10 13:54:22.338610,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/10 14:28:54.946258,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/10 15:37:04.295560,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/10 15:37:09.951261,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/10 15:38:44.649823,  0] src/vfs_glusterfs.c:280(vfs_gluster_connect)
>     soul: Initialized volume from server localhost
> [2013/09/10 15:38:47.858986,  0] src/vfs_glusterfs.c:613(vfs_gluster_lstat)
>     glfs_lstat(./..) failed: No data available
> [2013/09/11 11:00:24.471961,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.472132,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.472199,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.472259,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.498544,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.498652,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.498726,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.498786,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.513090,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.513172,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.513232,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.513291,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.527237,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.527316,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.527375,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.527434,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.541366,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.541446,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.541507,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.541567,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.555781,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.555883,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.555949,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:00:24.556010,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.067119,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.067301,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.067390,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.067491,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.084023,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.084148,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.084212,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.084268,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.098863,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.098997,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.099062,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.099118,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.112658,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.112769,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.112831,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.112886,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.126860,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.126978,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.127040,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.127094,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.141103,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.141211,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.141272,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:04:50.141326,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:06:19.696160,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:06:19.696304,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:06:19.696389,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:06:19.696448,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:06:23.551288,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:06:23.551421,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:06:23.551497,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/11 11:06:23.551554,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/12 09:57:21.251942,  0] smbd/process.c:244(read_packet_remainder)
>     read_fd_with_timeout failed for client 0.0.0.0 read error = NT_STATUS_END_OF_FILE.
> [2013/09/12 09:57:32.010914,  0] smbd/process.c:244(read_packet_remainder)
>     read_fd_with_timeout failed for client 0.0.0.0 read error = NT_STATUS_END_OF_FILE.
> [2013/09/12 09:57:42.015573,  0] smbd/process.c:497(init_smb_request)
>     init_smb_request: invalid request size 4
> [2013/09/12 09:57:52.025797,  0] smbd/process.c:244(read_packet_remainder)
>     read_fd_with_timeout failed for client 0.0.0.0 read error = NT_STATUS_END_OF_FILE.
> [2013/09/12 09:58:02.025405,  0] smbd/process.c:497(init_smb_request)
>     init_smb_request: invalid request size 4
> [2013/09/12 09:59:33.157460,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/12 09:59:33.180977,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/12 09:59:33.181019,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
> [2013/09/12 09:59:33.181057,  0] smbd/trans2.c:1253(unix_filetype)
>     unix_filetype: unknown filetype 0
>
>> -Lala
>>> -terrs
>>>
>>>> I'm aware of two different kinds of memory leaks in 3.3.1: one is in
>>>> geo-replication and the other is a native-client-side leak.
>>>> Sadly, both got mixed together in https://bugzilla.redhat.com/show_bug.cgi?id=841617
>>>>
>>>> I can tell you that the geo-replication leak is still present in 3.4.0 and
>>>> the native client leak isn't, but I don't know which patch you need to backport :(
>>>>
>>>>
>>>> On Wed, Sep 11, 2013 at 1:16 PM, John Ewing <johnewing1 at gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am using gluster 3.3.1 on CentOS 6, installed from
>>>>> the glusterfs-3.3.1-1.el6.x86_64 RPMs.
>>>>> I am seeing the Committed_AS memory continually increasing, and the
>>>>> processes using the memory are glusterfsd instances.
>>>>>
>>>>> See http://imgur.com/K3dalTW for a graph.
>>>>>
>>>>> Both nodes are exhibiting the same behaviour. I have tried the suggested
>>>>>
>>>>> echo 2 > /proc/sys/vm/drop_caches
>>>>>
>>>>> but it made no difference. Is there a known issue with 3.3.1?
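>>>>> The counter graphed above can be sampled directly from /proc (a minimal sketch):
>>>>>
>>>>>     # print the current overcommit figure from /proc/meminfo
>>>>>     grep Committed_AS /proc/meminfo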
>>>>>
>>>>> Thanks
>>>>>
>>>>> John
>>>>>
>>>>>
>>>>>
