[Bugs] [Bug 1649037] Translators allocate too much memory in their xlator_mem_acct_init()
bugzilla at redhat.com
Thu Nov 15 09:20:13 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1649037
Vijay Bellur <vbellur at redhat.com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |vbellur at redhat.com
--- Comment #1 from Vijay Bellur <vbellur at redhat.com> ---
(In reply to Yaniv Kaul from comment #0)
> Description of problem:
> Looking at most xlators, they call xlator_mem_acct_init() with something
> like (random example):
> gf_mt_jbr_end
>
> which is defined in xlators/experimental/jbr-server/src/jbr-internal.h as:
> enum {
>     gf_mt_jbr_private_t = gf_common_mt_end + 1,
>     gf_mt_jbr_fd_ctx_t,
>     gf_mt_jbr_inode_ctx_t,
>     gf_mt_jbr_dirty_t,
>     gf_mt_jbr_end
> };
>
>
> What is the value of gf_common_mt_end? Some number, defined in
> libglusterfs/src/mem-types.h as 150 or so (I did not count exactly, but it
> is quite a large list).
>
> So gf_mt_jbr_end ends up being 155 or so.
> Then in xlator_mem_acct_init(), I see this:
>     xl->mem_acct = MALLOC(sizeof(struct mem_acct) +
>                           sizeof(struct mem_acct_rec) * num_types);
>
> It seems to me that we are allocating plenty of mem_acct_rec structs, even
> though we only need 5 or so. We are clearly allocating and memsetting far
> more mem_acct_rec records than we ever use.
>
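To make the numbers being quoted concrete, here is a minimal, compilable
sketch of the pattern described above. The constants, struct layouts and
main() below are simplified stand-ins for illustration only, not the actual
GlusterFS definitions:

/* Sketch of the per-xlator mem-type numbering and the resulting allocation.
 * Values and struct layouts are simplified stand-ins, not GlusterFS code. */
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for libglusterfs/src/mem-types.h: assume ~150 common types. */
enum { gf_common_mt_end = 150 };

/* Stand-in for jbr-internal.h: the xlator's own types start right after
 * the common ones, so the sentinel ends up around 155. */
enum {
    gf_mt_jbr_private_t = gf_common_mt_end + 1,
    gf_mt_jbr_fd_ctx_t,
    gf_mt_jbr_inode_ctx_t,
    gf_mt_jbr_dirty_t,
    gf_mt_jbr_end
};

/* Simplified accounting record; the real struct carries more counters. */
struct mem_acct_rec { size_t size; size_t num_allocs; };
struct mem_acct     { int num_types; struct mem_acct_rec rec[]; };

int
main(void)
{
    /* Roughly what the xlator passes to xlator_mem_acct_init(). */
    int num_types = gf_mt_jbr_end;

    /* Same shape as the MALLOC quoted above: one record per memory type,
     * common types included, even though this xlator defines only a few
     * of its own. */
    struct mem_acct *acct = calloc(1, sizeof(*acct) +
                                      sizeof(struct mem_acct_rec) * num_types);
    if (!acct)
        return 1;
    acct->num_types = num_types;

    printf("records allocated: %d (xlator-specific types: %d)\n",
           num_types, gf_mt_jbr_end - gf_common_mt_end - 1);

    free(acct);
    return 0;
}
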
All memory accounting happens per xlator. When an xlator invokes a
libglusterfs function, any memory allocation happening there uses a common
memory type, and the accounting is recorded in xl->mem_acct[common_mt]. Hence
it is not easy to determine which memory type records would go unused, and
the allocation in init() looks OK to me.
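As a rough illustration of why the common-type slots cannot simply be
dropped, here is a simplified sketch. acct_alloc(), common_strdup() and the
constants are hypothetical placeholders, not the real libglusterfs API:

/* Sketch: per-xlator accounting indexed by memory type. All names and
 * values here are placeholders, not the real GlusterFS/libglusterfs API. */
#include <stdlib.h>
#include <string.h>

enum { gf_common_mt_char = 3, gf_common_mt_end = 150 };   /* assumed values */
enum { gf_mt_jbr_private_t = gf_common_mt_end + 1, gf_mt_jbr_end = 155 };

struct mem_acct_rec { size_t size; size_t num_allocs; };

struct xlator {
    struct mem_acct_rec *rec;   /* one slot per memory type, 0..num_types-1 */
    int num_types;
};

/* Book-keeping done for every allocation made on behalf of an xlator. */
static void *
acct_alloc(struct xlator *xl, size_t size, int type)
{
    void *ptr;

    if (type >= xl->num_types)
        return NULL;            /* would index past the end of rec[] */

    ptr = malloc(size);
    if (ptr) {
        xl->rec[type].size += size;
        xl->rec[type].num_allocs++;
    }
    return ptr;
}

/* A "libglusterfs-style" helper: it allocates with a *common* memory type,
 * but the record still lands in the calling xlator's table. This is why the
 * table must cover all common types, not just the xlator's own handful. */
static char *
common_strdup(struct xlator *xl, const char *s)
{
    char *copy = acct_alloc(xl, strlen(s) + 1, gf_common_mt_char);
    if (copy)
        strcpy(copy, s);
    return copy;
}

int
main(void)
{
    struct xlator jbr = {
        .num_types = gf_mt_jbr_end,
        .rec = calloc(gf_mt_jbr_end, sizeof(struct mem_acct_rec)),
    };
    if (!jbr.rec)
        return 1;

    /* The jbr xlator calls into a common helper: the accounting is charged
     * to jbr.rec[gf_common_mt_char], a slot it could not have omitted. */
    char *name = common_strdup(&jbr, "jbr-server");

    free(name);
    free(jbr.rec);
    return 0;
}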
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.