[Bugs] [Bug 1628219] New: High memory consumption depending on volume bricks count
bugzilla at redhat.com
Wed Sep 12 13:24:04 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1628219
Bug ID: 1628219
Summary: High memory consumption depending on volume bricks count
Product: GlusterFS
Version: 3.12
Component: libgfapi
Assignee: bugs at gluster.org
Reporter: vladyslav.reutskyi at globallogic.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org
We've observed very high memory usage by gfapi.Volume when mounting a
large volume (one with a large brick count). Below are a few experiment
results showing the memory used by a Python process mounted to different
environments:
Before mount (VSZ / RSS, kB): 212376 / 8932
12-brick volume (2 nodes):    631644 / 21440
384-brick volume (6 nodes):   861648 / 276516
600-brick volume (10 nodes):  987116 / 432028
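The figures above can be collected by reading /proc/self/status before and after the mount. This is a minimal sketch (Linux only); the "gluster" package is the libgfapi-python bindings, and the host and volume names are hypothetical placeholders, not values from this report:

```python
def vsz_rss_kb():
    """Return (VmSize, VmRSS) of the current process in kB from /proc."""
    sizes = {}
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":")
                sizes[key] = int(value.split()[0])  # value looks like "212376 kB"
    return sizes["VmSize"], sizes["VmRSS"]

if __name__ == "__main__":
    print("before mount (VSZ/RSS kB):", vsz_rss_kb())
    try:
        # libgfapi-python bindings; requires a reachable Gluster server
        # (hypothetical host/volume below -- adjust for your environment).
        from gluster import gfapi
        vol = gfapi.Volume("gluster-host.example", "bigvol")
        vol.mount()
        print("after mount  (VSZ/RSS kB):", vsz_rss_kb())
        vol.umount()
    except ImportError:
        print("libgfapi-python not installed; skipping mount step")
```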
That is almost half a GB per process just at start, and even more under
active use. Since we plan to run around 100 client nodes, each with 50
processes, the total memory required becomes enormous.
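A back-of-the-envelope calculation, using the RSS figure for the 600-brick volume reported above (values in kB), shows why the planned deployment is a problem:

```python
# RSS after mounting the 600-brick volume, in kB (from the table above).
RSS_KB_600_BRICKS = 432028
PROCS_PER_NODE = 50   # planned processes per client node
NODES = 100           # planned client nodes

per_proc_gb = RSS_KB_600_BRICKS / (1024 * 1024)  # kB -> GB
per_node_gb = per_proc_gb * PROCS_PER_NODE
total_gb = per_node_gb * NODES

print(f"per process: {per_proc_gb:.2f} GB")  # ~0.41 GB
print(f"per node:    {per_node_gb:.1f} GB")  # ~20.6 GB
print(f"all clients: {total_gb:.0f} GB")     # ~2060 GB
```

So at the reported baseline, each client node would need roughly 20 GB of RAM just for idle mounts.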
Is there any reason for gfapi to use this much memory just to mount the
volume? Does this mean that scaling up the server side requires a
corresponding scale-up on the client side?