[Bugs] [Bug 1418091] [RFE] Support multiple bricks in one process (multiplexing)

bugzilla at redhat.com bugzilla at redhat.com
Fri Feb 3 00:44:13 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1418091



--- Comment #10 from Worker Ant <bugzilla-bot at gluster.org> ---
COMMIT: https://review.gluster.org/16531 committed in release-3.10 by
Shyamsundar Ranganathan (srangana at redhat.com) 
------
commit 1ed73ffa16cb7fe4415acbdb095da6a4628f711a
Author: Jeff Darcy <jdarcy at redhat.com>
Date:   Fri Oct 14 10:04:07 2016 -0400

    libglusterfs: make memory pools more thread-friendly

    Early multiplexing tests revealed *massive* contention on certain
    pools' global locks - especially for dictionaries and secondarily for
    call stubs.  For the thread counts that multiplexing can create, a
    more lock-free solution is clearly needed.  Also, the current mem-pool
    implementation does a poor job releasing memory back to the system,
    artificially inflating memory usage to match whatever the worst case
    was since the process started.  This is bad in general, but especially
    so for multiplexing where there are more pools and a major point of
    the whole exercise is to reduce memory consumption.

    The basic ideas for the new design are these (a rough code sketch
    follows the list):

      There is one pool, globally, for each power-of-two size range.
      Every attempt to create a new pool within this range will instead
      add a reference to the existing pool.

      Instead of adding pools for each translator within each multiplexed
      brick (potentially infinite and quite possibly thousands), we
      allocate one set of size-based pools per *thread* (hundreds at
      worst).

      Each per-thread pool is divided into hot and cold lists.  Every
      allocation first attempts to use the hot list, then the cold list.
      When objects are freed, they always go on the hot list.

      There is one global "pool sweeper" thread, which periodically
      reclaims everything in each pool's cold list and then "demotes" the
      current hot list to be the new cold list.

      For normal allocation activity, only a per-thread lock need be
      taken, and even that only to guard against very rare contention from
      the pool sweeper.  When threads start and stop, a global lock must
      be taken to add them to the pool sweeper's list.  Lock contention is
      therefore extremely low, and the hot/cold lists also provide good
      locality.
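
    A minimal sketch of these ideas in C, purely for illustration.  This
    is not the actual libglusterfs mem-pool code; every name here
    (thread_pool_t, sketch_alloc, sketch_free, sketch_sweeper), the size
    classes, and the sweep interval are made up.  The "one global pool
    per power-of-two size range" idea collapses in this sketch into the
    shared NPOOLS size classes; creating a named pool would simply hand
    back a refcounted index into one of those classes.

      #include <pthread.h>
      #include <stdlib.h>
      #include <unistd.h>

      #define NPOOLS 8   /* power-of-two size classes: 32 .. 4096 bytes */

      typedef struct free_obj { struct free_obj *next; } free_obj_t;

      /* One hot/cold list pair per size class, owned by a single thread. */
      typedef struct thread_pool {
              pthread_mutex_t     lock;      /* contended only by the sweeper */
              free_obj_t         *hot[NPOOLS];
              free_obj_t         *cold[NPOOLS];
              struct thread_pool *next;      /* global registration list */
      } thread_pool_t;

      /* Global state; reg_lock is taken only when a thread starts or stops. */
      static pthread_mutex_t reg_lock = PTHREAD_MUTEX_INITIALIZER;
      static thread_pool_t  *all_pools;
      static __thread thread_pool_t *my_pool;

      static thread_pool_t *get_my_pool (void)
      {
              if (!my_pool) {
                      my_pool = calloc (1, sizeof (*my_pool));
                      pthread_mutex_init (&my_pool->lock, NULL);
                      pthread_mutex_lock (&reg_lock);     /* rare: thread start */
                      my_pool->next = all_pools;
                      all_pools = my_pool;
                      pthread_mutex_unlock (&reg_lock);
              }
              return my_pool;
      }

      static int size_class (size_t size)    /* smallest 32<<idx that fits */
      {
              int idx = 0;
              while (((size_t)32 << idx) < size && idx < NPOOLS - 1)
                      idx++;
              return idx;
      }

      /* Allocation: try the hot list, then the cold list, then malloc. */
      void *sketch_alloc (size_t size)
      {
              thread_pool_t *pool = get_my_pool ();
              int            idx  = size_class (size);
              free_obj_t    *obj;

              pthread_mutex_lock (&pool->lock);
              if ((obj = pool->hot[idx]))
                      pool->hot[idx] = obj->next;
              else if ((obj = pool->cold[idx]))
                      pool->cold[idx] = obj->next;
              pthread_mutex_unlock (&pool->lock);
              return obj ? (void *)obj : malloc ((size_t)32 << idx);
      }

      /* Frees always go on the hot list of the calling thread's pool.  (A
       * real implementation would record the size class in an object
       * header rather than trusting the caller to pass the size back.) */
      void sketch_free (void *ptr, size_t size)
      {
              thread_pool_t *pool = get_my_pool ();
              int            idx  = size_class (size);
              free_obj_t    *obj  = ptr;

              pthread_mutex_lock (&pool->lock);
              obj->next = pool->hot[idx];
              pool->hot[idx] = obj;
              pthread_mutex_unlock (&pool->lock);
      }

      /* The single "pool sweeper": free everything on each cold list, then
       * demote the current hot list to be the new cold list. */
      void *sketch_sweeper (void *unused)
      {
              for (;;) {
                      sleep (30);
                      pthread_mutex_lock (&reg_lock);
                      for (thread_pool_t *p = all_pools; p; p = p->next) {
                              for (int i = 0; i < NPOOLS; i++) {
                                      pthread_mutex_lock (&p->lock);
                                      free_obj_t *dead = p->cold[i];
                                      p->cold[i] = p->hot[i];
                                      p->hot[i]  = NULL;
                                      pthread_mutex_unlock (&p->lock);
                                      for (free_obj_t *n; dead; dead = n) {
                                              n = dead->next;
                                              free (dead);
                                      }
                              }
                      }
                      pthread_mutex_unlock (&reg_lock);
              }
              return NULL;
      }

    In this shape the hot path touches only the calling thread's own pool
    and its lock, and anything that stays unused across two consecutive
    sweeps is handed back to the system instead of being cached forever.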

    A more complete explanation (of a similar earlier design) can be found
    here:

     http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html

    Backport of:
    > Change-Id: I5bc8a1ba57cfb553998f979a498886e0d006e665
    > BUG: 1385758
    > Reviewed-on: https://review.gluster.org/15645

    BUG: 1418091
    Change-Id: Id09bbea41f65fcd245822607bc204f3a34904dc2
    Signed-off-by: Jeff Darcy <jdarcy at redhat.com>
    Reviewed-on: https://review.gluster.org/16531
    Smoke: Gluster Build System <jenkins at build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins at build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana at redhat.com>

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
