[Gluster-devel] brick multiplexing and memory consumption
Raghavendra Talur
rtalur at redhat.com
Tue Jun 20 19:38:56 UTC 2017
On Tue, Jun 20, 2017 at 8:13 PM, Jeff Darcy <jeff at pl.atyp.us> wrote:
>
>
>
> On Tue, Jun 20, 2017, at 08:45 AM, Raghavendra Talur wrote:
>
> Here is the data I gathered while debugging the considerable increase in
> memory consumption by the brick process when brick multiplexing is on.
>
> before adding the 14th brick: 3163 MB
> before glusterfs_graph_init is called: 3171 MB (8 MB increase)
> io-stats init: 3180 MB (9 MB increase)
> index init: 3181 MB (1 MB increase)
> bitrot-stub init: 3182 MB (1 MB increase)
> changelog init: 3206 MB (24 MB increase)
> posix init: 3230 MB (24 MB increase)
> glusterfs_autoscale_threads: 3238 MB (8 MB increase)
> end of glusterfs_handle_attach
>
> Every brick attach consistently takes about 75 MB of virtual memory. I
> need help from the respective xlator owners to confirm whether init in
> those xlators really needs that much memory.
>
> This is all virtual memory data; resident memory is very nicely at 40 MB
> after 14 bricks.
>
>
> Do you have the equivalent numbers for memory consumption of 14 bricks
> *without* multiplexing?
>
Without multiplexing, each brick process takes 795 MB of virtual memory and
10 MB of resident memory, so 14 standalone bricks add up to roughly 11 GB
of virtual memory versus the 3238 MB seen above with multiplexing.
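
(For anyone who wants to reproduce these figures: VmSize and VmRSS can be
sampled directly from /proc/<pid>/status before and after each attach. A
minimal sampler in C is below; this is an illustration of the approach,
not the exact tooling used for the numbers above.)

    /* vmsnap.c: print VmSize/VmRSS for a pid; run before and after each
     * brick attach and diff the values. Illustration only. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char path[64], line[256];
        FILE *fp;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);
        fp = fopen(path, "r");
        if (!fp) {
            perror(path);
            return 1;
        }
        while (fgets(line, sizeof(line), fp)) {
            /* VmSize = total virtual memory, VmRSS = resident memory */
            if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
                fputs(line, stdout);
        }
        fclose(fp);
        return 0;
    }

Something like "./vmsnap $(pidof glusterfsd)" around each volume start is
enough to get per-attach deltas like the ones quoted above.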
Just to be clear, I am not saying that brick multiplexing isn't working.
The aim is to prevent the glusterfsd process from getting OOM killed,
because 200 bricks, when multiplexed, consume 20 GB of virtual memory.
If it turns out that the additional 75 MB of virtual memory per brick
attach can't be removed or reduced, then the only solution would be to fix
issue 151 [1] by limiting the number of bricks multiplexed into one
process, roughly as sketched below.
[1] https://github.com/gluster/glusterfs/issues/151
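
A rough sketch of what such a limit could look like at attach time. All of
the names and the cap value here are hypothetical, not code from the tree;
the real shape would be decided under issue 151:

    /* Hypothetical per-process brick cap; illustration only. */
    #include <stdbool.h>

    #define MAX_BRICKS_PER_PROCESS 100  /* would likely be configurable */

    static int mux_brick_count;         /* bricks attached to this process */

    /* Called before accepting an attach request. If it returns false,
     * glusterd would route the new brick to a fresh (or less loaded)
     * brick process instead of this one. */
    static bool
    can_attach_one_more_brick(void)
    {
        if (mux_brick_count >= MAX_BRICKS_PER_PROCESS)
            return false;
        mux_brick_count++;
        return true;
    }

    int main(void)
    {
        /* At ~75 MB of virtual memory per attach, a cap of 100 bricks
         * bounds one process to roughly 7.5 GB of growth, instead of the
         * ~20 GB observed with 200 bricks in a single process. */
        return can_attach_one_more_brick() ? 0 : 1;
    }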