[Rd] Issue with memory deallocation/fragmentation on systems which use glibc

Dmitriy Selivanov selivanov.dmitriy at gmail.com
Sat Jun 17 21:58:54 CEST 2017

Hello mailing list. I'm writing to discuss an issue which was already
discussed here - https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=14611
- the OS doesn't shrink the memory of the R process. Thanks to Simon
Urbanek for the digging and explanation.

However, it was quite hard to find that topic after I discovered the same
problem on my Ubuntu machine (and I scratched my head because there was no
such problem on my laptop running OS X!).

So I'm not sure this problem can be ignored.
Consider the following example - we will create a large list with many
small objects (code which contains a fully reproducible example here -

large_list = lapply(1:1e6, function(i) {
  runif(10)  # body reconstructed for illustration: many small allocations
})
rm(large_list)
gc()

After that, on OS X *resident memory* successfully shrinks to ~100mb, but
on Ubuntu it remains at ~1gb. I understand that this 1gb of RAM can be (and
will be) reused for allocations of new small objects, so if I run the same
lapply again, process memory will stay at ~1gb. However, if after that I
try to create a large numeric vector, R will allocate another contiguous
chunk of memory. I have had situations where my long-running R process
created and deleted many small objects, so the R heap continued to grow,
and eventually the Linux OOM killer killed the R process. On the other
hand, the same code worked fine on OS X, or if I manually called
`mallinfo::malloc.trim()` from time to time.

My question is whether it is possible to call `malloc_trim()` with each
garbage collection on systems which use glibc. Calling it manually doesn't
look like a good approach to me. What are the potential drawbacks of
triggering `malloc_trim()` on each gc() call? I've made some tests, and
such calls take no longer than single-digit milliseconds even for a very
fragmented heap.

Thanks in advance.

Dmitriy Selivanov

