[Gluster-devel] brick multiplexing and memory consumption
Amar Tumballi
atumball at redhat.com
Wed Jun 21 04:54:41 UTC 2017
On Wed, Jun 21, 2017 at 9:53 AM, Raghavendra Talur <rtalur at redhat.com>
wrote:
>
>
> On 21-Jun-2017 9:45 AM, "Jeff Darcy" <jeff at pl.atyp.us> wrote:
>
> On Tue, Jun 20, 2017, at 03:38 PM, Raghavendra Talur wrote:
>
> Each process takes 795MB of virtual memory and 10MB of resident memory.
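
For reference, per-brick numbers like these can be read straight out of /proc.
A rough sketch (mine, just for illustration; not the script used for the
numbers above) that prints VmSize/VmRSS of every running glusterfsd:

    #!/usr/bin/env python3
    # Print virtual and resident memory for every glusterfsd process
    # by reading /proc/<pid>/status (Linux only).
    import os

    def proc_mem(pid):
        """Return (VmSize, VmRSS) in kB, or None if the process vanished."""
        try:
            with open('/proc/%s/status' % pid) as f:
                fields = dict(line.split(':', 1) for line in f if ':' in line)
            return (int(fields['VmSize'].split()[0]),
                    int(fields['VmRSS'].split()[0]))
        except (IOError, KeyError):
            return None

    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/comm' % pid) as f:
                comm = f.read().strip()
        except IOError:
            continue
        if comm == 'glusterfsd':
            mem = proc_mem(pid)
            if mem:
                print('pid %s: VmSize %d kB, VmRSS %d kB' % (pid, mem[0], mem[1]))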
>
>
> Wow, that's even better than I thought. I was seeing about a 3x
> difference per brick (plus the fixed cost of a brick process) during
> development. Your numbers suggest more than 10x. Almost makes it seem
> worth the effort. ;)
>
>
> :)
>
>
> Just to be clear, I am not saying that brick multiplexing isn't working.
> The aim is to prevent the glusterfsd process from getting OOM killed,
> because 200 bricks, when multiplexed into one process, consume 20GB of
> virtual memory.
>
>
> Yes, the OOM killer is more dangerous with multiplexing. It likes to take
> out the process that is the whole machine's reason for existence, which is
> pretty darn dumb. Perhaps we should use oom_adj/OOM_DISABLE to make it a
> bit less dumb?
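
For what it's worth, oom_adj is deprecated on newer kernels in favour of
/proc/<pid>/oom_score_adj, where -1000 is the equivalent of OOM_DISABLE.
A minimal sketch of what such an adjustment could look like, as a
hypothetical helper rather than existing glusterd code:

    #!/usr/bin/env python3
    # Lower a brick process's OOM score so the kernel prefers other victims.
    # A score of -1000 disables OOM killing for the pid entirely (the
    # oom_score_adj equivalent of the old OOM_DISABLE); lowering it requires
    # root or CAP_SYS_RESOURCE.
    import sys

    def protect_from_oom(pid, score=-1000):
        with open('/proc/%d/oom_score_adj' % pid, 'w') as f:
            f.write('%d\n' % score)

    if __name__ == '__main__':
        protect_from_oom(int(sys.argv[1]))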
>
>
> This is not so easy for container deployment models.
>
>
> If it is found that the additional 75MB of virtual memory used per brick
> attach can't be removed or reduced, then the only solution would be to fix
> issue 151 [1] by limiting the number of multiplexed bricks.
> [1] https://github.com/gluster/glusterfs/issues/151
>
>
> This is another reason why limiting the number of brick processes is
> preferable to limiting the number of bricks per process. When we limit
> bricks per process and wait until one is "full" before starting another,
> then that first brick process remains a prime target for the OOM killer.
> By "striping" bricks across N processes (where N ~= number of cores), none
> of them become targets until we're approaching our system-wide brick limit
> anyway.
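
To make that placement policy concrete, a small illustrative sketch (my own
names, not glusterd's actual attach logic): cap the number of brick processes
at the core count and attach each new brick to the process hosting the fewest
bricks:

    # Illustrative only: spread bricks across up to cpu_count() processes,
    # always picking the least-loaded one, instead of filling one process
    # completely before starting the next.
    import multiprocessing

    MAX_PROCS = multiprocessing.cpu_count()

    def pick_process(procs, brick):
        """procs: list of per-process brick lists; returns the chosen index."""
        if len(procs) < MAX_PROCS:
            procs.append([])          # start a new process while under the cap
        idx = min(range(len(procs)), key=lambda i: len(procs[i]))
        procs[idx].append(brick)      # attach the brick to the least-loaded process
        return idx

    procs = []
    for brick in ['vol1-brick%d' % i for i in range(8)]:
        print('%s -> process %d' % (brick, pick_process(procs, brick)))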
>
>
> +1, I now understand the reasoning behind limiting the number of processes.
> I was in favor of limiting bricks per process before.
>
>
Makes sense. +1 on this approach from me too. Let's get going with this, IMO.
-Amar
> Thanks,
> Raghavendra Talur
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
--
Amar Tumballi (amarts)