<div dir="auto"><div><br><div class="gmail_extra"><br><div class="gmail_quote">On 21-Jun-2017 9:45 AM, "Jeff Darcy" <<a href="mailto:jeff@pl.atyp.us">jeff@pl.atyp.us</a>> wrote:<br type="attribution"><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><u></u>
<div><div class="quoted-text"><div style="font-family:Arial"><br></div>
<div><br></div>
<div><br></div>
<div>On Tue, Jun 20, 2017, at 03:38 PM, Raghavendra Talur wrote:<br></div>
<blockquote type="cite"><div dir="ltr"><div><div><div>Each process takes 795MB of virtual memory and resident memory is 10MB each.<br></div>
>
> Wow, that's even better than I thought. I was seeing about a 3x
> difference per brick (plus the fixed cost of a brick process) during
> development. Your numbers suggest more than 10x. Almost makes it seem
> worth the effort. ;)

:)

<div style="font-family:Arial"><br></div>
<blockquote type="cite"><div dir="ltr"><div><div><div>Just to be clear, I am not saying that brick multiplexing isn't working. The aim is to prevent the glusterfsd process from getting OOM killed because 200 bricks when multiplexed consume 20GB of virtual memory.<br></div>
> Yes, the OOM killer is more dangerous with multiplexing. It likes to
> take out the process that is the whole machine's reason for existence,
> which is pretty darn dumb. Perhaps we should use oom_adj/OOM_DISABLE
> to make it a bit less dumb?
<div style="font-family:Arial"></div></div></div></blockquote></div></div></div><div dir="auto"><br></div><div dir="auto">This is not so easy for container deployment models. </div><div dir="auto"><br></div><div dir="auto"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="quoted-text"><div style="font-family:Arial"><br></div>
<blockquote type="cite"><div dir="ltr"><div><div><div>If it is found that the additional usage of 75MB of virtual memory per every brick attach can't be removed/reduced, then the only solution would be to fix issue 151 [1] by limiting multiplexed bricks.<br></div>
<div>[1] <a href="https://github.com/gluster/glusterfs/issues/151" target="_blank">https://github.com/<wbr>gluster/glusterfs/issues/151</a><br></div>
>
> This is another reason why limiting the number of brick processes is
> preferable to limiting the number of bricks per process. When we limit
> bricks per process and wait until one is "full" before starting
> another, then that first brick process remains a prime target for the
> OOM killer. By "striping" bricks across N processes (where N ~= number
> of cores), none of them become targets until we're approaching our
> system-wide brick limit anyway.
<div style="font-family:Arial"></div></div></blockquote></div></div></div><div dir="auto"><br></div><div dir="auto">+1, I now understand the reasoning behind limiting number of processes. I was in the favor of limiting bricks per process before. </div><div dir="auto"><br></div><div dir="auto">Thanks, </div><div dir="auto">Raghavendra Talur </div><div dir="auto"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:Arial"><br></div>