[Gluster-devel] GlusterFS Process Growing

Gordan Bobic gordan at bobich.net
Tue Feb 10 14:21:34 UTC 2009


On Tue, 10 Feb 2009 19:36:38 +0530, Anand Avati <avati at zresearch.com>
wrote:
> On Tue, Feb 10, 2009 at 1:47 PM, Gordan Bobic <gordan at bobich.net> wrote:
>> Is there any reason why the GlusterFS process would grow over time if no
>> performance translators are used? I'm seeing the GlusterFS process go
>> from about 2.5MB when it starts, up to hundreds of MB. At least
>> initially, the growth seems to be between 4 and 8KB/s. After a few days
>> it seems to crash out. Again, this is the rootfs gluster process, so it
>> is a bit hard to debug in detail, seeing as it is the rootfs that goes
>> away at that point. Is this a memory leak, or is there a more
>> reasonable explanation?
> 
> It is possible that it could be because of the large dcache. We have a
> recent enhancement which uses a more memory-efficient data structure
> for inode-specific data. You can give it a try and see if it helps
> your memory usage (there is other active development happening, so
> watch out if it is a production system and wait for rc2).

Is there a parameter to limit this cache? And how does this compare to the
normal page cache on the underlying FS?

The thing that seems particularly odd is that although both the root and
the /home glusterfs daemons grow over time, the growth of the root one is
vastly greater. The rootfs only contains about 1.2GB of data, but I have
seen its daemon process get to nearly 300MB. The /home FS contains several
hundred gigabytes of data, but I haven't seen its daemon grow to more than
about 60MB. Since the /home FS gets a lot more r/w access, it seems to be
something specific to the rootfs usage that causes the process bloat.
Perhaps something like shared libraries (all of which are on the rootfs)?
Maybe some kind of a leak in mmap()-related code?
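For what it's worth, the growth rate above can be quantified by sampling the
daemon's resident set size from /proc over time. A minimal sketch (the helper
below is illustrative, not part of GlusterFS; on a live system you would read
/proc/<pid>/status for the glusterfs PID):

```python
import re

def vmrss_kb(status_text):
    # Extract the resident set size (VmRSS, in kB) from the text of
    # /proc/<pid>/status. Returns None if the field is absent.
    m = re.search(r"^VmRSS:\s*(\d+)\s*kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else None

# On a live system:
#   with open(f"/proc/{pid}/status") as f:
#       rss = vmrss_kb(f.read())
# Sampling this every few seconds and diffing gives the growth rate in KB/s.
sample = "Name:\tglusterfs\nVmPeak:\t  310000 kB\nVmRSS:\t  307200 kB\n"
print(vmrss_kb(sample))  # 307200, i.e. roughly 300 MB
```

Comparing the per-second delta against the reported 4-8KB/s figure while
varying workload (shared-library reads vs. /home traffic) would help narrow
down whether the leak really is specific to rootfs access patterns.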

Gordan
