[mythtv-users] mythbackend still eats memory: the current status
foceni at gmail.com
Thu May 21 11:27:38 UTC 2009
Udo van den Heuvel wrote:
> David Lister wrote:
>>> Enjoy your freedom,
>> I sure as hell will. Good luck to you with finding out how GNU libc
>> memory allocator works. :)))
> Care to share your knowledge with us? This might shed some new light
> on the memory leak issue in the backend that I wrote about.
Not that I feel like it after your uncalled-for interjection, but here
goes. I'm used to working with Valgrind and to memory/performance
tuning; although I code in C and not C++, each and every project of
mine is thoroughly and regularly screened. I'll talk about these graphs:
If we're talking about a leak, it's over an insanely long period of
time for one that could have any adverse effect on the application, or
rather on the OS.
First, to the question of whether it's HW-related or not. It actually
might be, speaking purely hypothetically without considering this
particular case. It is almost impossible if we're talking about the
same architecture as the mainstream, but if your CPU is not x86, the
leak could be caused by different memory alignment and optimization
assumptions either in MythTV or even inside libc.
Anyway, those graph spikes are quite normal - if the application's VSZ
was growing like this and never dropped, you'd have a leak, but not in
this case. The libc allocator - simply put - works by allocating a
contiguous block of memory behind the scenes via the low-level
brk/sbrk syscalls. Each of the application's malloc requests is
assigned a piece of this memory region. The allocator's job is to
optimize its usage, and to do this effectively it has to grow this
block every time a malloc asks for more memory than can be satisfied
from it. On the other hand, and depending heavily on the allocator
implementation, the allocator will return parts of this block to the
OS when the unused space reaches a given threshold (tunable via
mallopt(M_TRIM_THRESHOLD, val)). The problem is, the block can only be
grown or freed at/from "the top". If there is a chunk allocated at the
top and eventually all the space "below" it is freed, the allocator
cannot shrink the block, just as if all the memory were still in use.
The difference between VSZ and RSS is hopefully clear to all of you -
if not, look it up - but you can believe me, it's normal. Just the
basics: VSZ includes RSS; VSZ counts all the mapped shared libraries,
RSS only their resident pages; VSZ is the "allocated" memory (not
actually given by the OS - only "promised"), RSS is the memory
actually used; the rest of the difference is swapped out (RSS drops,
VSZ stays). Putting this together, you can see for yourself that to
find a memory leak you look at VSZ (extremely simply put, "the app
asking for memory which it doesn't use"). If you look at RSS (when
there is no significant swapping), you see more-or-less the actual
memory usage.
The fact that mythbackend (but most probably libc) regularly frees
what is not used means it isn't critically inefficient with memory
usage. If it were, there would be a "cork" remaining at the top and
the memory below it would never be freed. VSZ & RSS having a similar
shape doesn't say much about the app (nothing unequivocal, anyway) -
rather about the efficiency of libc and the kernel VM subsystem. :)
If there were a real leak (i.e. memory allocated without ever being
freed), the lowest BASE of the VSZ would be rising steadily in a
linear manner. This is especially noticeable at the sbrk trim drops.
As you can see on , during the whole 6 months there was NO rise
whatsoever. The bottom line stays at 330k the whole time, i.e. no leak.
Graph  - is that one process without interruption? If so, then again,
no leak - perhaps there's the slightest 1k rise towards the end, but
that's just too little, over too short a time, to call it a leak. If
you mean the big sawtooth, it's properly freed back to the OS
eventually, so that is not what you'd call a leak. If the graph begins
with mythbackend's startup, this peak could even be associated with
some initial data collection, a (custom?) allocator trim kicking in
for the first time after the initial estimations, or whatever - it's
irrelevant, because it's freed.
Hope this helps.