[Gluster-devel] Excessive memory usage with 1.3.12

Krishna Srinivas krishna at zresearch.com
Tue Nov 4 07:07:03 UTC 2008


Thomas,
We want to reproduce the leak in our setup so we can fix it. What is your
setup on the client side? How many servers do you have? Which
applications do you run on the mount point? Do you observe the leak only
when certain operations are done? (I am just looking for more clues.)

Thanks
Krishna

On Sun, Nov 2, 2008 at 5:08 PM, Thomas Conway-Poulsen <tecp at conwayit.dk> wrote:
> Hi devel,
>
> The glusterfsd process is using much more memory than we expected;
> perhaps there is a memory leak?
>
> It keeps consuming memory until the process dies.
>
> Is there any way to set a maximum memory usage?
>
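As far as I know there is no built-in option to cap memory usage in the 1.3.x series, but a generic workaround is to start the daemon under an OS-level resource limit so a runaway leak fails allocations instead of exhausting the machine. A minimal sketch (the 2 GB figure and the glusterfsd paths are illustrative assumptions):

```shell
#!/bin/sh
# Cap the virtual address space (in KB) before starting the daemon.
# Once the limit is hit, further allocations fail (malloc returns NULL)
# rather than growing without bound; a daemon that does not handle
# allocation failure will typically exit instead of taking the box down.
ulimit -v 2097152   # 2 GB cap; value is an arbitrary example

# Illustrative start line (paths are assumptions for this sketch):
# exec /usr/sbin/glusterfsd -f /etc/glusterfs/glusterfs-server.vol
```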
> root     15436  1.5 85.5 2180580 1764980 ?     Ssl  Oct28 105:20 [glusterfs]
>
> Our server configuration:
> ------------------------------------------------
> volume gluster-storage-data-export
> type storage/posix
> option directory /mnt/gluster-storage-server/data/export
> end-volume
>
> volume gluster-storage-data-namespace
> type storage/posix
> option directory /mnt/gluster-storage-server/data/namespace
> end-volume
>
> volume gluster-storage-data-iothreads
> type performance/io-threads
> option thread-count 2
> option cache-size 32MB
> subvolumes gluster-storage-data-export
> end-volume
>
> volume gluster-storage-data-locks
> type features/posix-locks
> subvolumes gluster-storage-data-iothreads
> end-volume
>
> volume gluster-storage-data-readahead
> type performance/read-ahead
> subvolumes gluster-storage-data-locks
> end-volume
>
> volume gluster-storage-data-writebehind
> type performance/write-behind
> subvolumes gluster-storage-data-readahead
> end-volume
>
> volume gluster-storage-data
> type performance/io-cache
> option cache-size 128MB
> subvolumes gluster-storage-data-writebehind
> end-volume
>
> volume gluster-storage-index-export
> type storage/posix
> option directory /mnt/gluster-storage-server/index/export
> end-volume
>
> volume gluster-storage-index-namespace
> type storage/posix
> option directory /mnt/gluster-storage-server/index/namespace
> end-volume
>
> volume gluster-storage-index-iothreads
> type performance/io-threads
> option thread-count 2
> option cache-size 32MB
> subvolumes gluster-storage-index-export
> end-volume
>
> volume gluster-storage-index-locks
> type features/posix-locks
> subvolumes gluster-storage-index-iothreads
> end-volume
>
> volume gluster-storage-index-readahead
> type performance/read-ahead
> subvolumes gluster-storage-index-locks
> end-volume
>
> volume gluster-storage-index-writebehind
> type performance/write-behind
> subvolumes gluster-storage-index-readahead
> end-volume
>
> volume gluster-storage-index
> type performance/io-cache
> option cache-size 128MB
> subvolumes gluster-storage-index-writebehind
> end-volume
>
> volume gluster-server
> type protocol/server
> subvolumes gluster-storage-index gluster-storage-index-namespace
> gluster-storage-data gluster-storage-data-namespace
> option transport-type tcp/server
> option auth.ip.gluster-storage-index.allow *
> option auth.ip.gluster-storage-index-namespace.allow *
> option auth.ip.gluster-storage-data.allow *
> option auth.ip.gluster-storage-data-namespace.allow *
> end-volume
>
>
> Here is the pmap dump:
> -----------------------------------------------------
> 15436:   [glusterfs]
> Address           Kbytes Mode  Offset           Device    Mapping
> 0000000000400000      16 r-x-- 0000000000000000 008:00001 glusterfs
> 0000000000504000       4 rw--- 0000000000004000 008:00001 glusterfs
> 0000000000505000  497276 rw--- 0000000000505000 000:00000   [ anon ]
> 0000000040000000       4 ----- 0000000040000000 000:00000   [ anon ]
> 0000000040001000    8192 rw--- 0000000040001000 000:00000   [ anon ]
> 0000000040801000       4 ----- 0000000040801000 000:00000   [ anon ]
> 0000000040802000    8192 rw--- 0000000040802000 000:00000   [ anon ]
> 0000000041002000       4 ----- 0000000041002000 000:00000   [ anon ]
> 0000000041003000    8192 rw--- 0000000041003000 000:00000   [ anon ]
