[Gluster-devel] Excessive memory usage with 1.3.12

Thomas Conway-Poulsen tecp at conwayit.dk
Tue Nov 4 12:16:48 UTC 2008


Hi Krishna

Running with fuse-2.7.3glfs10.

The client setup is as follows:

------------------------------------------------------------------------------------
client volfile, mounted at /mnt/gluster/index:
------------------------------------------------------------------------------------
volume gluster01
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.1
option remote-subvolume gluster-storage-index
end-volume

volume gluster02
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.2
option remote-subvolume gluster-storage-index
end-volume

volume gluster03
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.3
option remote-subvolume gluster-storage-index
end-volume

volume gluster04
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.4
option remote-subvolume gluster-storage-index
end-volume

volume gluster05
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.5
option remote-subvolume gluster-storage-index
end-volume

volume gluster06
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.6
option remote-subvolume gluster-storage-index
end-volume

volume gluster-storage-index-namespace
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.1
option remote-subvolume gluster-storage-index-namespace
end-volume

volume gluster-storage-index
type cluster/unify
option scheduler rr
option namespace gluster-storage-index-namespace
subvolumes gluster01 gluster02 gluster03 gluster04 gluster05 gluster06
end-volume

volume gluster-storage-index-iothreads
type performance/io-threads
subvolumes gluster-storage-index
option cache-size 64MB
option thread-count 8
end-volume

volume gluster-storage-index-readahead
type performance/read-ahead
subvolumes gluster-storage-index-iothreads
end-volume

volume gluster-storage-index-writebehind
type performance/write-behind
subvolumes gluster-storage-index-readahead
end-volume

volume gluster-storage-index-iocache
type performance/io-cache
option cache-size 256MB
subvolumes gluster-storage-index-writebehind
end-volume

------------------------------------------------------------------------------------
client volfile, mounted at /mnt/gluster/data:
------------------------------------------------------------------------------------
volume cindex01
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.1
option remote-subvolume gluster-storage-data
end-volume

volume cindex02
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.2
option remote-subvolume gluster-storage-data
end-volume

volume cindex03
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.3
option remote-subvolume gluster-storage-data
end-volume

volume cindex04
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.4
option remote-subvolume gluster-storage-data
end-volume

volume cindex05
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.5
option remote-subvolume gluster-storage-data
end-volume

volume cindex06
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.6
option remote-subvolume gluster-storage-data
end-volume

volume gluster-storage-data-namespace
type protocol/client
option transport-type tcp/client
option remote-host 10.11.253.1
option remote-subvolume gluster-storage-data-namespace
end-volume

volume gluster-storage-data
type cluster/unify
option scheduler rr
option namespace gluster-storage-data-namespace
subvolumes cindex01 cindex02 cindex03 cindex04 cindex05 cindex06
end-volume

volume gluster-storage-data-iothreads
type performance/io-threads
subvolumes gluster-storage-data
option cache-size 64MB
option thread-count 8
end-volume

volume gluster-storage-data-writebehind
type performance/write-behind
subvolumes gluster-storage-data-iothreads
end-volume

volume gluster-storage-data-iocache
type performance/io-cache
option cache-size 128MB
subvolumes gluster-storage-data-writebehind
end-volume
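
Both volfiles are mounted with the usual client invocation; the volfile
filenames below are placeholders for our actual paths:

glusterfs -f /etc/glusterfs/client-index.vol /mnt/gluster/index
glusterfs -f /etc/glusterfs/client-data.vol /mnt/gluster/data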

We only see the memory leak when running multiple clients under heavy
file load across many files.
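
For reference, we track the client's resident size with a plain ps loop:

watch -n 60 'ps -C glusterfs -o pid,rss,vsz,cmd'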

Memory usage seems stable when we don't use fcntl locks. These are the
locking helpers we use:

#include <fcntl.h>     /* fcntl(), struct flock, F_SETLK/F_SETLKW */
#include <unistd.h>    /* SEEK_SET */

/* Take a non-blocking read lock on the whole file. */
bool lock(int file_fd) {
  struct flock lock_info = {};
  lock_info.l_type = F_RDLCK;
  lock_info.l_whence = SEEK_SET;
  lock_info.l_start = 0;
  lock_info.l_len = 0;    /* 0 = to end of file */

  return fcntl(file_fd, F_SETLK, &lock_info) == 0;
}

/* Release the whole-file lock. */
bool unlock(int file_fd) {
  struct flock lock_info = {};
  lock_info.l_type = F_UNLCK;
  lock_info.l_whence = SEEK_SET;
  lock_info.l_start = 0;
  lock_info.l_len = 0;

  return fcntl(file_fd, F_SETLKW, &lock_info) == 0;
}

/* Take a non-blocking read lock on a byte range. */
bool lock(int file_fd, int start, int length) {
  struct flock lock_info = {};
  lock_info.l_type = F_RDLCK;
  lock_info.l_whence = SEEK_SET;
  lock_info.l_start = start;
  lock_info.l_len = length;

  return fcntl(file_fd, F_SETLK, &lock_info) == 0;
}

/* Release the lock on a byte range. */
bool unlock(int file_fd, int start, int length) {
  struct flock lock_info = {};
  lock_info.l_type = F_UNLCK;
  lock_info.l_whence = SEEK_SET;
  lock_info.l_start = start;
  lock_info.l_len = length;

  return fcntl(file_fd, F_SETLKW, &lock_info) == 0;
}
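
To reproduce, we run several clients in parallel, each hammering many
files through these helpers. A minimal sketch of that loop follows; the
path pattern, file count, and record size are illustrative rather than
our exact workload, and it is compiled together with the helpers above:

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

bool lock(int file_fd, int start, int length);    // helpers above
bool unlock(int file_fd, int start, int length);

int main() {
  char path[256];
  char buf[4096];

  for (int pass = 0; pass < 1000; ++pass) {   // repeat under load
    for (int i = 0; i < 10000; ++i) {         // many files per pass
      snprintf(path, sizeof(path), "/mnt/gluster/index/file.%d", i);

      int fd = open(path, O_RDONLY);
      if (fd < 0) continue;                   // skip missing files

      // Lock a small record, read it, unlock: the pattern under
      // which glusterfsd's memory keeps growing on our setup.
      if (lock(fd, 0, (int)sizeof(buf))) {
        ssize_t n = read(fd, buf, sizeof(buf));
        (void)n;
        unlock(fd, 0, (int)sizeof(buf));
      }
      close(fd);
    }
  }
  return 0;
}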


Hope this helps,

Thomas Conway.

On Nov 4, 2008, at 8:07 AM, Krishna Srinivas wrote:

> Thomas,
> We want to reproduce the leak in our setup to fix it. What is your
> setup on the client side? How many servers do you have? What are the
> applications you run on the mount point? Do you observe leak only when
> "certain" operations are done? (I am just looking for more clues)
>
> Thanks
> Krishna
>
> On Sun, Nov 2, 2008 at 5:08 PM, Thomas Conway-Poulsen <tecp at conwayit.dk 
> > wrote:
>> Hi devel,
>>
>> The glusterfsd process is using much more memory than we expected;
>> maybe there is a memory leak?
>>
>> It keeps eating memory until the process dies.
>>
>> Is there any way to set a maximum memory usage?
>>
>> root     15436  1.5 85.5 2180580 1764980 ?     Ssl  Oct28 105:20  
>> [glusterfs]
>>
>> Our server configuration:
>> ------------------------------------------------
>> volume gluster-storage-data-export
>> type storage/posix
>> option directory /mnt/gluster-storage-server/data/export
>> end-volume
>>
>> volume gluster-storage-data-namespace
>> type storage/posix
>> option directory /mnt/gluster-storage-server/data/namespace
>> end-volume
>>
>> volume gluster-storage-data-iothreads
>> type performance/io-threads
>> option thread-count 2
>> option cache-size 32MB
>> subvolumes gluster-storage-data-export
>> end-volume
>>
>> volume gluster-storage-data-locks
>> type features/posix-locks
>> subvolumes gluster-storage-data-iothreads
>> end-volume
>>
>> volume gluster-storage-data-readahead
>> type performance/read-ahead
>> subvolumes gluster-storage-data-locks
>> end-volume
>>
>> volume gluster-storage-data-writebehind
>> type performance/write-behind
>> subvolumes gluster-storage-data-readahead
>> end-volume
>>
>> volume gluster-storage-data
>> type performance/io-cache
>> option cache-size 128MB
>> subvolumes gluster-storage-data-writebehind
>> end-volume
>>
>> volume gluster-storage-index-export
>> type storage/posix
>> option directory /mnt/gluster-storage-server/index/export
>> end-volume
>>
>> volume gluster-storage-index-namespace
>> type storage/posix
>> option directory /mnt/gluster-storage-server/index/namespace
>> end-volume
>>
>> volume gluster-storage-index-iothreads
>> type performance/io-threads
>> option thread-count 2
>> option cache-size 32MB
>> subvolumes gluster-storage-index-export
>> end-volume
>>
>> volume gluster-storage-index-locks
>> type features/posix-locks
>> subvolumes gluster-storage-index-iothreads
>> end-volume
>>
>> volume gluster-storage-index-readahead
>> type performance/read-ahead
>> subvolumes gluster-storage-index-locks
>> end-volume
>>
>> volume gluster-storage-index-writebehind
>> type performance/write-behind
>> subvolumes gluster-storage-index-readahead
>> end-volume
>>
>> volume gluster-storage-index
>> type performance/io-cache
>> option cache-size 128MB
>> subvolumes gluster-storage-index-writebehind
>> end-volume
>>
>> volume gluster-server
>> type protocol/server
>> subvolumes gluster-storage-index gluster-storage-index-namespace
>> gluster-storage-data gluster-storage-data-namespace
>> option transport-type tcp/server
>> option auth.ip.gluster-storage-index.allow *
>> option auth.ip.gluster-storage-index-namespace.allow *
>> option auth.ip.gluster-storage-data.allow *
>> option auth.ip.gluster-storage-data-namespace.allow *
>> end-volume
>>
>>
>> Here is the pmap dump:
>> -----------------------------------------------------
>> 15436:   [glusterfs]
>> Address           Kbytes Mode  Offset           Device    Mapping
>> 0000000000400000      16 r-x-- 0000000000000000 008:00001 glusterfs
>> 0000000000504000       4 rw--- 0000000000004000 008:00001 glusterfs
>> 0000000000505000  497276 rw--- 0000000000505000 000:00000   [ anon ]
>> 0000000040000000       4 ----- 0000000040000000 000:00000   [ anon ]
>> 0000000040001000    8192 rw--- 0000000040001000 000:00000   [ anon ]
>> 0000000040801000       4 ----- 0000000040801000 000:00000   [ anon ]
>> 0000000040802000    8192 rw--- 0000000040802000 000:00000   [ anon ]
>> 0000000041002000       4 ----- 0000000041002000 000:00000   [ anon ]
>> 0000000041003000    8192 rw--- 0000000041003000 000:00000   [ anon ]
>
