[Gluster-users] GlusterD uses 50% of RAM

Atin Mukherjee amukherj at redhat.com
Wed Mar 25 05:17:10 UTC 2015


Can you share the recent cmd_log_history again? Have you triggered lots
of volume set commands? In the recent past we have discovered that
volume set has a potential memory leak.
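
For a quick check before sharing it, you can count the volume set invocations in the command history (assuming the default location /var/log/glusterfs/cmd_history.log on 3.6):

  grep -c 'volume set' /var/log/glusterfs/cmd_history.log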

~Atin

On 03/24/2015 07:49 PM, RASTELLI Alessandro wrote:
> Hi,
> today the issue happened once again.
> The glusterd process was using 80% of RAM and its log was filling up /var/log.
> One month ago, when the issue last happened, you suggested installing a patch, so I did this:
> git fetch git://review.gluster.org/glusterfs refs/changes/28/9328/4
> git fetch http://review.gluster.org/glusterfs refs/changes/28/9328/4 && git checkout FETCH_HEAD
> Is this enough to install the patch, or did I miss something?
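>
> For reference, checking out FETCH_HEAD only updates the source tree; the rebuild and restart still have to follow. A minimal sketch, assuming a standard autotools build of the glusterfs source:
>
>   git fetch http://review.gluster.org/glusterfs refs/changes/28/9328/4 && git checkout FETCH_HEAD
>   ./autogen.sh && ./configure
>   make && make install
>   service glusterd restart    # or your init system's equivalent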
> 
> Thank you
> Alessandro
> 
> 
> -----Original Message-----
> From: RASTELLI Alessandro 
> Sent: Tuesday, 24 February 2015 10:28
> To: 'Atin Mukherjee'
> Cc: gluster-users at gluster.org
> Subject: RE: [Gluster-users] GlusterD uses 50% of RAM
> 
> Hi Atin,
> I managed to install the patch; it fixed the issue. Thank you, A.
> 
> -----Original Message-----
> From: Atin Mukherjee [mailto:amukherj at redhat.com]
> Sent: Tuesday, 24 February 2015 08:03
> To: RASTELLI Alessandro
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] GlusterD uses 50% of RAM
> 
> 
> 
> On 02/20/2015 07:20 PM, RASTELLI Alessandro wrote:
>> I get this:
>>
>> [root at gluster03-mi glusterfs]# git fetch git://review.gluster.org/glusterfs refs/changes/28/9328/4 && git checkout FETCH_HEAD
>> fatal: Couldn't find remote ref refs/changes/28/9328/4
>>
>> What's wrong with that?
> Is your current branch at 3.6?
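> A quick way to check, from the clone you are fetching into:
>
>   git branch            # the current (starred) branch should be release-3.6
>   git describe --tags   # should report a v3.6.x tag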
>>
>> A.
>>
>> -----Original Message-----
>> From: Atin Mukherjee [mailto:amukherj at redhat.com]
>> Sent: Friday, 20 February 2015 12:54
>> To: RASTELLI Alessandro
>> Cc: gluster-users at gluster.org
>> Subject: Re: [Gluster-users] GlusterD uses 50% of RAM
>>
>> From the cmd log history I could see that lots of volume status commands were triggered in parallel. This is a known issue in 3.6 and it can cause a memory leak. http://review.gluster.org/#/c/9328/ should solve it.
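>>
>> To confirm the leak on a running node, watching glusterd's resident memory while the status commands fire is usually enough; a minimal sketch with standard Linux tools:
>>
>>   pid=$(pidof glusterd)
>>   while sleep 60; do grep VmRSS /proc/$pid/status; done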
>>
>> ~Atin
>>
>> On 02/20/2015 04:36 PM, RASTELLI Alessandro wrote:
>>> 10MB log
>>> sorry :)
>>>
>>> -----Original Message-----
>>> From: Atin Mukherjee [mailto:amukherj at redhat.com]
>>> Sent: Friday, 20 February 2015 10:49
>>> To: RASTELLI Alessandro; gluster-users at gluster.org
>>> Subject: Re: [Gluster-users] GlusterD uses 50% of RAM
>>>
>>> Could you please share the cmd_history.log and the glusterd log file so we can analyze this high memory usage?
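>>>
>>> A glusterd statedump would also help pin down where the memory goes; a sketch (on most installs the dump lands under /var/run/gluster):
>>>
>>>   kill -USR1 $(pidof glusterd)
>>>   ls /var/run/gluster/glusterdump.*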
>>>
>>> ~Atin
>>>
>>> On 02/20/2015 03:10 PM, RASTELLI Alessandro wrote:
>>>> Hi,
>>>> I've noticed that one of our 6 gluster 3.6.2 nodes has the "glusterd"
>>>> process using 50% of RAM, while on the other nodes usage is about 5%. Could this be a bug?
>>>> Should I restart the glusterd daemon?
>>>> Thank you
>>>> A
>>>>
>>>> From: Volnei Puttini [mailto:volnei at vcplinux.com.br]
>>>> Sent: Monday, 9 February 2015 18:06
>>>> To: RASTELLI Alessandro; gluster-users at gluster.org
>>>> Subject: Re: [Gluster-users] cannot access to CIFS export
>>>>
>>>> Hi Alessandro,
>>>>
>>>> My system:
>>>>
>>>> CentOS 7
>>>>
>>>> samba-vfs-glusterfs-4.1.1-37.el7_0.x86_64
>>>> samba-winbind-4.1.1-37.el7_0.x86_64
>>>> samba-libs-4.1.1-37.el7_0.x86_64
>>>> samba-common-4.1.1-37.el7_0.x86_64
>>>> samba-winbind-modules-4.1.1-37.el7_0.x86_64
>>>> samba-winbind-clients-4.1.1-37.el7_0.x86_64
>>>> samba-4.1.1-37.el7_0.x86_64
>>>> samba-client-4.1.1-37.el7_0.x86_64
>>>>
>>>> glusterfs 3.6.2 built on Jan 22 2015 12:59:57
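>>>>
>>>> The vfs_init_custom error in your log usually means Samba cannot load the glusterfs vfs module at all; before touching the config it is worth verifying the package and the module file are present (paths as on my CentOS 7 x86_64 box):
>>>>
>>>>   rpm -q samba-vfs-glusterfs
>>>>   ls /usr/lib64/samba/vfs/glusterfs.so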
>>>>
>>>> Try this; it works fine for me:
>>>>
>>>> [GFSVOL]
>>>>     browseable = No
>>>>     comment = Gluster share of volume gfsvol
>>>>     path = /
>>>>     read only = No
>>>>     guest ok = Yes
>>>>     kernel share modes = No
>>>>     posix locking = No
>>>>     vfs objects = glusterfs
>>>>     glusterfs:loglevel = 7
>>>>     glusterfs:logfile = /var/log/samba/glusterfs-gfstest.log
>>>>     glusterfs:volume = vgtest
>>>>     glusterfs:volfile_server = 192.168.2.21
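>>>>
>>>> After reloading smb, a quick sanity check from the gluster node itself (anonymous access, since the share sets guest ok = Yes):
>>>>
>>>>   testparm -s                            # does the config parse cleanly?
>>>>   smbclient -N -L localhost              # is the share listed?
>>>>   smbclient -N //localhost/GFSVOL -c ls  # can the vfs module open the volume?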
>>>>
>>>> On 09-02-2015 14:45, RASTELLI Alessandro wrote:
>>>> Hi,
>>>> I've created and started a new replica volume "downloadstat" with CIFS export enabled on GlusterFS 3.6.2.
>>>> I can see the following section has been added automatically to smb.conf:
>>>> [gluster-downloadstat]
>>>> comment = For samba share of volume downloadstat
>>>> vfs objects = glusterfs
>>>> glusterfs:volume = downloadstat
>>>> glusterfs:logfile = /var/log/samba/glusterfs-downloadstat.%M.log
>>>> glusterfs:loglevel = 7
>>>> path = /
>>>> read only = no
>>>> guest ok = yes
>>>>
>>>> I restarted the smb service without errors.
>>>> When I try to access "\\gluster01-mi\gluster-downloadstat" from a Win7 client, it asks me for a login (which user should I use?) and then gives the error "The network path was not found",
>>>> and on Gluster smb.log I see:
>>>> [2015/02/09 17:21:13.111639,  0] smbd/vfs.c:173(vfs_init_custom)
>>>>   error probing vfs module 'glusterfs': NT_STATUS_UNSUCCESSFUL
>>>> [2015/02/09 17:21:13.111709,  0] smbd/vfs.c:315(smbd_vfs_init)
>>>>   smbd_vfs_init: vfs_init_custom failed for glusterfs
>>>> [2015/02/09 17:21:13.111741,  0] smbd/service.c:902(make_connection_snum)
>>>>   vfs_init failed for service gluster-downloadstat
>>>>
>>>> Can you explain how to fix this?
>>>> Thanks
>>>>
>>>> Alessandro
>>>>
>>>> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of David F. Robinson
>>>> Sent: Sunday, 8 February 2015 18:19
>>>> To: Gluster Devel; gluster-users at gluster.org
>>>> Subject: [Gluster-users] cannot delete non-empty directory
>>>>
>>>> I am seeing these messages after I delete large amounts of data using gluster 3.6.2:
>>>> cannot delete non-empty directory: old_shelf4/Aegis/!!!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final
>>>>
>>>> From the FUSE mount (as root), the directory shows up as empty:
>>>>
>>>> # pwd
>>>> /backup/homegfs/backup.0/old_shelf4/Aegis/!!!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final
>>>>
>>>> # ls -al
>>>> total 5
>>>> d--------- 2 root root    4106 Feb  6 13:55 .
>>>> drwxrws--- 3  601 dmiller   72 Feb  6 13:55 ..
>>>>
>>>> However, when you look at the bricks, the files are still there (none on brick01bkp; all of the files are on brick02bkp). All of the files are 0-length and have ---------T permissions.
>>>> Any suggestions on how to fix this and how to prevent it from happening?
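>>>> Those 0-length mode ---------T entries look like DHT link-to files; a quick way to confirm on the brick (assuming the attr tools are installed) is to dump the xattrs and look for trusted.glusterfs.dht.linkto:
>>>>
>>>>   getfattr -d -m . -e hex '/data/brick02bkp/homegfs_bkp/backup.0/old_shelf4/Aegis/!!!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final/readbc.f.gz'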
>>>>
>>>> # ls -al /data/brick*/homegfs_bkp/backup.0/old_shelf4/Aegis/\!\!\!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final
>>>> /data/brick01bkp/homegfs_bkp/backup.0/old_shelf4/Aegis/!!!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final:
>>>> total 4
>>>> d---------+ 2 root root  10 Feb  6 13:55 .
>>>> drwxrws---+ 3  601 raven 36 Feb  6 13:55 ..
>>>>
>>>> /data/brick02bkp/homegfs_bkp/backup.0/old_shelf4/Aegis/!!!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final:
>>>> total 8
>>>> d---------+ 3 root root  4096 Dec 31  1969 .
>>>> drwxrws---+ 3  601 raven   36 Feb  6 13:55 ..
>>>> ---------T  5  601 raven    0 Nov 20 00:08 read_inset.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 readbc.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 readcn.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 readinp.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 readinp_v1_2.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 readinp_v1_3.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 rotatept.f.gz
>>>> d---------+ 2 root root   118 Feb  6 13:54 save1
>>>> ---------T  5  601 raven    0 Nov 20 00:08 sepvec.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 shadow.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 snksrc.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 source.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 step.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 stoprog.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 summer6.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 totforc.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 tritet.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 wallrsd.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 wheat.f.gz
>>>> ---------T  5  601 raven    0 Nov 20 00:08 write_inset.f.gz
>>>>
>>>>
>>>> This is using gluster 3.6.2 on a distributed gluster volume that resides on a single machine.  Both of the bricks are on one machine consisting of 2x RAID-6 arrays.
>>>>
>>>> df -h | grep brick
>>>> /dev/mapper/vg01-lvol1                       88T   22T   66T  25% /data/brick01bkp
>>>> /dev/mapper/vg02-lvol1                       88T   22T   66T  26% /data/brick02bkp
>>>>
>>>> # gluster volume info homegfs_bkp
>>>> Volume Name: homegfs_bkp
>>>> Type: Distribute
>>>> Volume ID: 96de8872-d957-4205-bf5a-076e3f35b294
>>>> Status: Started
>>>> Number of Bricks: 2
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: gfsib01bkp.corvidtec.com:/data/brick01bkp/homegfs_bkp
>>>> Brick2: gfsib01bkp.corvidtec.com:/data/brick02bkp/homegfs_bkp
>>>> Options Reconfigured:
>>>> storage.owner-gid: 100
>>>> performance.io-thread-count: 32
>>>> server.allow-insecure: on
>>>> network.ping-timeout: 10
>>>> performance.cache-size: 128MB
>>>> performance.write-behind-window-size: 128MB
>>>> server.manage-gids: on
>>>> changelog.rollover-time: 15
>>>> changelog.fsync-interval: 3
>>>>
>>>>
>>>>
>>>> ===============================
>>>> David F. Robinson, Ph.D.
>>>> President - Corvid Technologies
>>>> 704.799.6944 x101 [office]
>>>> 704.252.1310 [cell]
>>>> 704.799.7974 [fax]
>>>> David.Robinson at corvidtec.com
>>>> http://www.corvidtechnologies.com
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>> --
>>> ~Atin
>>>
>>
>> --
>> ~Atin
>>
> 
> --
> ~Atin
> 

-- 
~Atin

