[Gluster-devel] statedump support for the new mem-pool implementation

Jeff Darcy jeff at pl.atyp.us
Wed Jul 19 22:00:37 UTC 2017



On Wed, Jul 19, 2017, at 10:40 AM, Niels de Vos wrote:
> Soumya mentioned that you have been looking at, or planning to do so,
> adding support for state-dumps for mem-pools. Could you point me to the
> BZ or GitHub Issue that has been filed for this already? I'd like to
> follow progress and review any changes that are sent.
> 
> In case you have not had time to look into the details, I might be able
> to put something together. I've been working on memory leaks for a while
> now, and just started to improve the initialization and cleanup of the
> mem-pools.

Since I wrote the current mem-pool implementation, here are the
statistics I'd consider useful.

 * How many times the sweeper has run (sweep_times)

 * How long the sweeper has run, in microseconds (sweep_usecs)

 * Number of per-thread structures in use (length of pool_threads)

 * Number of per-thread structures on the free list (length of
   pool_free_threads)

 * For each object size:

   * Allocations that came from the hot list (allocs_hot)

   * Allocations that came from the cold list (allocs_cold)

   * Allocations that went all the way to malloc (allocs_stdc)

   * Frees by the user back to the mem-pool subsystem (frees_to_list)

   * Frees by the mem-pool subsystem to the OS (frees_to_system)
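To make that concrete, the per-size counters could be grouped into one
struct that the statedump code walks. This is only a sketch: the struct
layout, the `per_size_stats_t` name, and the dump function are mine, not
the existing gluster code; only the counter names come from the list
above.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-object-size counters for statedump output.
 * Field names follow the list above; the struct itself is a sketch. */
typedef struct {
    uint64_t allocs_hot;      /* served from the hot list */
    uint64_t allocs_cold;     /* served from the cold list */
    uint64_t allocs_stdc;     /* fell through to malloc() */
    uint64_t frees_to_list;   /* user frees returned to the pool */
    uint64_t frees_to_system; /* swept from the pool back to the OS */
} per_size_stats_t;

/* Emit one statedump-style section for a given object size. */
static void dump_per_size_stats(FILE *fp, size_t obj_size,
                                const per_size_stats_t *s)
{
    fprintf(fp, "pool-%zu.allocs_hot=%llu\n", obj_size,
            (unsigned long long)s->allocs_hot);
    fprintf(fp, "pool-%zu.allocs_cold=%llu\n", obj_size,
            (unsigned long long)s->allocs_cold);
    fprintf(fp, "pool-%zu.allocs_stdc=%llu\n", obj_size,
            (unsigned long long)s->allocs_stdc);
    fprintf(fp, "pool-%zu.frees_to_list=%llu\n", obj_size,
            (unsigned long long)s->frees_to_list);
    fprintf(fp, "pool-%zu.frees_to_system=%llu\n", obj_size,
            (unsigned long long)s->frees_to_system);
}
```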

It might also be feasible to add instantaneous counts for items
allocated, on hot lists, and on cold lists (all per size).  These can
mostly be derived from the alloc/free totals we already have, but not in
obvious ways, so it might be nice to have the "cooked" values readily
available.  If the items-allocated count remains high on an idle system,
that probably represents a leak in a caller.  If the on-hot-list and/or
on-cold-list values remain non-zero on an idle system, that probably
represents a leak in the mem-pool subsystem itself (or the sweeper's not
running).
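For reference, here is the non-obvious bookkeeping spelled out, assuming
the counter semantics from the list above (allocations leave a list or
fall through to malloc, user frees enter the list, the sweeper moves
items from the list to the system). The helper names are mine, purely
for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Items currently held by callers: everything handed out, minus
 * everything the callers have given back. */
static uint64_t items_allocated(uint64_t allocs_hot, uint64_t allocs_cold,
                                uint64_t allocs_stdc, uint64_t frees_to_list)
{
    return allocs_hot + allocs_cold + allocs_stdc - frees_to_list;
}

/* Items currently sitting on the hot+cold lists: everything freed into
 * the lists, minus what was re-allocated from a list or swept back to
 * the system. */
static uint64_t items_on_lists(uint64_t allocs_hot, uint64_t allocs_cold,
                               uint64_t frees_to_list,
                               uint64_t frees_to_system)
{
    return frees_to_list - allocs_hot - allocs_cold - frees_to_system;
}
```

On an idle system one would expect items_on_lists() to drop to zero once
the sweeper has caught up, and items_allocated() to settle at whatever
the callers legitimately hold.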
