[Gluster-devel] Excessive memory usage with 1.3.12

Anand Avati avati at zresearch.com
Wed Nov 5 09:58:04 UTC 2008


Thomas,
 thanks for reporting the observation with fcntl locks. There was a memory
leak in posix-locks which was fixed in the 1.4 tree but had somehow missed
the backport to the 1.3 tree. Please find the memory leak fix in the latest
revision of glusterfs--mainline--2.5.

thanks,
avati

2008/11/4 Thomas Conway-Poulsen <tecp at conwayit.dk>

> Hi Krishna
>
> Running with fuse-2.7.3glfs10.
>
> The client setup is as follows:
>
>
> ------------------------------------------------------------------------------------
> mounts to /mnt/gluster/index:
>
> ------------------------------------------------------------------------------------
> volume gluster01
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.1
> option remote-subvolume gluster-storage-index
> end-volume
>
> volume gluster02
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.2
> option remote-subvolume gluster-storage-index
> end-volume
>
> volume gluster03
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.3
> option remote-subvolume gluster-storage-index
> end-volume
>
> volume gluster04
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.4
> option remote-subvolume gluster-storage-index
> end-volume
>
> volume gluster05
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.5
> option remote-subvolume gluster-storage-index
> end-volume
>
> volume gluster06
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.6
> option remote-subvolume gluster-storage-index
> end-volume
>
> volume gluster-storage-index-namespace
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.1
> option remote-subvolume gluster-storage-index-namespace
> end-volume
>
> volume gluster-storage-index
> type cluster/unify
> option scheduler rr
> option namespace gluster-storage-index-namespace
> subvolumes gluster01 gluster02 gluster03 gluster04 gluster05 gluster06
> end-volume
>
> volume gluster-storage-index-iothreads
> type performance/io-threads
> subvolumes gluster-storage-index
> option cache-size 64MB
> option thread-count 8
> end-volume
>
> volume gluster-storage-index-readahead
>  type performance/read-ahead
>  subvolumes gluster-storage-index-iothreads
> end-volume
>
> volume gluster-storage-index-writebehind
>  type performance/write-behind
>  subvolumes gluster-storage-index-readahead
> end-volume
>
> volume gluster-storage-index-iocache
>  type performance/io-cache
>  option cache-size 256MB
>  subvolumes gluster-storage-index-writebehind
> end-volume
>
>
> ------------------------------------------------------------------------------------
> mounts to /mnt/gluster/data:
>
> ------------------------------------------------------------------------------------
> volume cindex01
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.1
> option remote-subvolume gluster-storage-data
> end-volume
>
> volume cindex02
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.2
> option remote-subvolume gluster-storage-data
> end-volume
>
> volume cindex03
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.3
> option remote-subvolume gluster-storage-data
> end-volume
>
> volume cindex04
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.4
> option remote-subvolume gluster-storage-data
> end-volume
>
> volume cindex05
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.5
> option remote-subvolume gluster-storage-data
> end-volume
>
> volume cindex06
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.6
> option remote-subvolume gluster-storage-data
> end-volume
>
> volume gluster-storage-data-namespace
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.11.253.1
> option remote-subvolume gluster-storage-data-namespace
> end-volume
>
> volume gluster-storage-data
> type cluster/unify
> option scheduler rr
> option namespace gluster-storage-data-namespace
> subvolumes cindex01 cindex02 cindex03 cindex04 cindex05 cindex06
> end-volume
>
> volume gluster-storage-data-iothreads
> type performance/io-threads
> subvolumes gluster-storage-data
> option cache-size 64MB
> option thread-count 8
> end-volume
>
> volume gluster-storage-data-writebehind
> type performance/write-behind
> subvolumes gluster-storage-data-iothreads
> end-volume
>
> volume gluster-storage-data-iocache
> type performance/io-cache
> option cache-size 128MB
> subvolumes gluster-storage-data-writebehind
> end-volume
>
> We can only see the memory leak when running multiple clients under heavy
> file load across many files.
>
> The memory seems stable when we don't use fcntl locks. For reference, these
> are the locking helpers we use:
>
> #include <fcntl.h>   /* struct flock, fcntl(), F_SETLK / F_SETLKW */
>
> /* Take a non-blocking (F_SETLK) shared read lock on the whole file. */
> bool lock(int file_fd) {
>  struct flock lock_info;
>  lock_info.l_type = F_RDLCK;
>  lock_info.l_whence = SEEK_SET;
>  lock_info.l_start = 0;
>  lock_info.l_len = 0;           /* l_len == 0 locks to end of file */
>
>  if(fcntl(file_fd, F_SETLK, &lock_info) == 0) return true;
>  else return false;
> }
>
> /* Release the whole-file lock (unlocking never blocks, so F_SETLKW and
>    F_SETLK behave the same here). */
> bool unlock(int file_fd) {
>  struct flock lock_info;
>  lock_info.l_type = F_UNLCK;
>  lock_info.l_whence = SEEK_SET;
>  lock_info.l_start = 0;
>  lock_info.l_len = 0;
>
>  if(fcntl(file_fd, F_SETLKW, &lock_info) == 0) return true;
>  else return false;
> }
>
> /* Take a non-blocking shared read lock on a byte range. */
> bool lock(int file_fd, int start, int length) {
>  struct flock lock_info;
>  lock_info.l_type = F_RDLCK;
>  lock_info.l_whence = SEEK_SET;
>  lock_info.l_start = start;
>  lock_info.l_len = length;
>
>  if(fcntl(file_fd, F_SETLK, &lock_info) == 0) return true;
>  else return false;
> }
>
> /* Release the lock on a byte range. */
> bool unlock(int file_fd, int start, int length) {
>  struct flock lock_info;
>  lock_info.l_type = F_UNLCK;
>  lock_info.l_whence = SEEK_SET;
>  lock_info.l_start = start;
>  lock_info.l_len = length;
>
>  if(fcntl(file_fd, F_SETLKW, &lock_info) == 0) return true;
>  else return false;
> }
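>
> For reference, here is a minimal stand-alone sketch of the kind of access
> pattern that shows the growth. It is only an illustration, not our exact
> workload: the mount path, file count and pass count below are made up, and
> it reuses the lock()/unlock() helpers above.
>
> #include <fcntl.h>
> #include <unistd.h>
> #include <stdio.h>
>
> int main() {
>  const int files  = 1000;      /* illustrative numbers only */
>  const int passes = 100;
>
>  for (int p = 0; p < passes; ++p) {
>    for (int f = 0; f < files; ++f) {
>      char path[64];
>      snprintf(path, sizeof(path), "/mnt/gluster/index/lock-test-%d", f);
>
>      int fd = open(path, O_RDWR | O_CREAT, 0644);
>      if (fd < 0) continue;
>
>      /* Take and release one whole-file read lock per file per pass. */
>      if (lock(fd)) unlock(fd);
>      close(fd);
>    }
>  }
>  return 0;
> }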
>
>
> Hope this helps,
>
> Thomas Conway.
>
>
> On Nov 4, 2008, at 8:07 AM, Krishna Srinivas wrote:
>
>> Thomas,
>> We want to reproduce the leak in our setup to fix it. What is your
>> setup on the client side? How many servers do you have? What are the
>> applications you run on the mount point? Do you observe the leak only when
>> "certain" operations are done? (I am just looking for more clues)
>>
>> Thanks
>> Krishna
>>
>> On Sun, Nov 2, 2008 at 5:08 PM, Thomas Conway-Poulsen <tecp at conwayit.dk>
>> wrote:
>>
>>> Hi devel,
>>>
>>> The glusterfsd process is using much more memory than we expected it to
>>> use; maybe there is a memory leak somewhere?
>>>
>>> It keeps eating memory until the process dies.
>>>
>>> Is there any way to set the maximum memory usage?
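>>>
>>> (The only workaround we can think of on our side is an OS-level cap, e.g.
>>> a tiny wrapper that sets RLIMIT_AS before exec'ing the glusterfs daemon.
>>> A sketch is below; the 512 MB limit is just an example, and it only makes
>>> allocations fail earlier rather than fixing anything.)
>>>
>>> #include <sys/resource.h>
>>> #include <unistd.h>
>>> #include <stdio.h>
>>>
>>> int main(int argc, char *argv[]) {
>>>  if (argc < 2) {
>>>    fprintf(stderr, "usage: %s <command> [args...]\n", argv[0]);
>>>    return 1;
>>>  }
>>>
>>>  /* Illustrative cap only: limit the address space to 512 MB, then exec
>>>     the command given on the command line. */
>>>  struct rlimit rl;
>>>  rl.rlim_cur = 512UL * 1024 * 1024;
>>>  rl.rlim_max = 512UL * 1024 * 1024;
>>>  if (setrlimit(RLIMIT_AS, &rl) != 0) {
>>>    perror("setrlimit");
>>>    return 1;
>>>  }
>>>
>>>  execvp(argv[1], &argv[1]);
>>>  perror("execvp");
>>>  return 1;
>>> }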
>>>
>>> root     15436  1.5 85.5 2180580 1764980 ?     Ssl  Oct28 105:20
>>> [glusterfs]
>>>
>>> Our server configuration:
>>> ------------------------------------------------
>>> volume gluster-storage-data-export
>>> type storage/posix
>>> option directory /mnt/gluster-storage-server/data/export
>>> end-volume
>>>
>>> volume gluster-storage-data-namespace
>>> type storage/posix
>>> option directory /mnt/gluster-storage-server/data/namespace
>>> end-volume
>>>
>>> volume gluster-storage-data-iothreads
>>> type performance/io-threads
>>> option thread-count 2
>>> option cache-size 32MB
>>> subvolumes gluster-storage-data-export
>>> end-volume
>>>
>>> volume gluster-storage-data-locks
>>> type features/posix-locks
>>> subvolumes gluster-storage-data-iothreads
>>> end-volume
>>>
>>> volume gluster-storage-data-readahead
>>> type performance/read-ahead
>>> subvolumes gluster-storage-data-locks
>>> end-volume
>>>
>>> volume gluster-storage-data-writebehind
>>> type performance/write-behind
>>> subvolumes gluster-storage-data-readahead
>>> end-volume
>>>
>>> volume gluster-storage-data
>>> type performance/io-cache
>>> option cache-size 128MB
>>> subvolumes gluster-storage-data-writebehind
>>> end-volume
>>>
>>> volume gluster-storage-index-export
>>> type storage/posix
>>> option directory /mnt/gluster-storage-server/index/export
>>> end-volume
>>>
>>> volume gluster-storage-index-namespace
>>> type storage/posix
>>> option directory /mnt/gluster-storage-server/index/namespace
>>> end-volume
>>>
>>> volume gluster-storage-index-iothreads
>>> type performance/io-threads
>>> option thread-count 2
>>> option cache-size 32MB
>>> subvolumes gluster-storage-index-export
>>> end-volume
>>>
>>> volume gluster-storage-index-locks
>>> type features/posix-locks
>>> subvolumes gluster-storage-index-export
>>> end-volume
>>>
>>> volume gluster-storage-index-readahead
>>> type performance/read-ahead
>>> subvolumes gluster-storage-index-locks
>>> end-volume
>>>
>>> volume gluster-storage-index-writebehind
>>> type performance/write-behind
>>> subvolumes gluster-storage-index-readahead
>>> end-volume
>>>
>>> volume gluster-storage-index
>>> type performance/io-cache
>>> option cache-size 128MB
>>> subvolumes gluster-storage-index-writebehind
>>> end-volume
>>>
>>> volume gluster-server
>>> type protocol/server
>>> subvolumes gluster-storage-index gluster-storage-index-namespace
>>> gluster-storage-data gluster-storage-data-namespace
>>> option transport-type tcp/server
>>> option auth.ip.gluster-storage-index.allow *
>>> option auth.ip.gluster-storage-index-namespace.allow *
>>> option auth.ip.gluster-storage-data.allow *
>>> option auth.ip.gluster-storage-data-namespace.allow *
>>> end-volume
>>>
>>>
>>> Here is the pmap dump; the growth is in the large anonymous (heap) mappings:
>>> -----------------------------------------------------
>>> 15436:   [glusterfs]
>>> Address           Kbytes Mode  Offset           Device    Mapping
>>> 0000000000400000      16 r-x-- 0000000000000000 008:00001 glusterfs
>>> 0000000000504000       4 rw--- 0000000000004000 008:00001 glusterfs
>>> 0000000000505000  497276 rw--- 0000000000505000 000:00000   [ anon ]
>>> 0000000040000000       4 ----- 0000000040000000 000:00000   [ anon ]
>>> 0000000040001000    8192 rw--- 0000000040001000 000:00000   [ anon ]
>>> 0000000040801000       4 ----- 0000000040801000 000:00000   [ anon ]
>>> 0000000040802000    8192 rw--- 0000000040802000 000:00000   [ anon ]
>>> 0000000041002000       4 ----- 0000000041002000 000:00000   [ anon ]
>>> 0000000041003000    8192 rw--- 0000000041003000 000:00000   [ anon ]
>>>
>>
>>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>


-- 
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.