[Gluster-devel] brick multiplexing and memory consumption

Raghavendra Talur rtalur at redhat.com
Wed Jun 21 04:23:08 UTC 2017


On 21-Jun-2017 9:45 AM, "Jeff Darcy" <jeff at pl.atyp.us> wrote:




On Tue, Jun 20, 2017, at 03:38 PM, Raghavendra Talur wrote:

Each process takes 795MB of virtual memory; resident memory is 10MB each.


Wow, that's even better than I thought.  I was seeing about a 3x difference
per brick (plus the fixed cost of a brick process) during development.
Your numbers suggest more than 10x.  Almost makes it seem worth the effort. ;)


:)


Just to be clear, I am not saying that brick multiplexing isn't working.
The aim is to prevent the glusterfsd process from getting OOM killed,
because 200 bricks multiplexed into a single process consume 20GB of
virtual memory.


Yes, the OOM killer is more dangerous with multiplexing.  It likes to take
out the process that is the whole machine's reason for existence, which is
pretty darn dumb.  Perhaps we should use oom_adj/OOM_DISABLE to make it a
bit less dumb?


This is not so easy for container deployment models.
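
For context, the mechanism being suggested looks roughly like the sketch
below (illustrative only, not anything glusterd does today; the pid and
adjustment value are placeholders). Note that writing a negative
oom_score_adj usually needs CAP_SYS_RESOURCE, which unprivileged
containers typically don't have, which is part of why this is hard in
container deployments:

/* Illustrative only: lower a process's OOM score so the kernel's
 * OOM killer is less likely (or, at -1000, unable) to pick it.
 * oom_score_adj is the current replacement for the legacy oom_adj. */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static int
set_oom_score_adj (pid_t pid, int adj)
{
        char path[64];
        FILE *fp;

        snprintf (path, sizeof (path), "/proc/%d/oom_score_adj", (int) pid);
        fp = fopen (path, "w");
        if (!fp)
                return -1;  /* negative values need root/CAP_SYS_RESOURCE */
        if (fprintf (fp, "%d\n", adj) < 0) {
                fclose (fp);
                return -1;
        }
        return fclose (fp);
}

int
main (void)
{
        /* -1000 disables OOM killing for the process entirely;
         * something like -500 only makes it a less preferred target. */
        return set_oom_score_adj (getpid (), -1000) == 0 ? 0 : 1;
}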


If it is found that the additional usage of 75MB of virtual memory per
brick attach can't be removed or reduced, then the only solution would be
to fix issue 151 [1] by limiting the number of multiplexed bricks.
[1] https://github.com/gluster/glusterfs/issues/151


This is another reason why limiting the number of brick processes is
preferable to limiting the number of bricks per process.  When we limit
bricks per process and wait until one is "full" before starting another,
then that first brick process remains a prime target for the OOM killer.
By "striping" bricks across N processes (where N ~= number of cores), none
of them become targets until we're approaching our system-wide brick limit
anyway.


+1, I now understand the reasoning behind limiting the number of
processes. I was in favor of limiting bricks per process before.
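
To restate the striping idea in code terms, it would amount to something
like the following sketch (illustrative only; NUM_BRICK_PROCS and the
plain round-robin pick are my assumptions, not the actual glusterd
placement logic):

/* Sketch of striping bricks across N brick processes (round-robin),
 * so no single process accumulates all the bricks and becomes the
 * obvious OOM-killer target. Not actual glusterd code. */
#include <stdio.h>

#define NUM_BRICK_PROCS 8   /* assumption: N ~= number of cores */

static int
pick_brick_process (int brick_index)
{
        return brick_index % NUM_BRICK_PROCS;
}

int
main (void)
{
        int i;

        for (i = 0; i < 20; i++)
                printf ("brick %2d -> brick process %d\n",
                        i, pick_brick_process (i));
        return 0;
}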

Thanks,
Raghavendra Talur

