When the per-thread cache is enabled, __libc_malloc uses request2size (which
does not perform an overflow check) to calculate the chunk size from the
requested allocation size. The resulting integer overflow causes malloc
to incorrectly return the last successfully allocated block when it is
called with a very large size argument (close to SIZE_MAX).
This commit uses checked_request2size instead, removing the overflow.
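To make the failure mode concrete, here is a small standalone illustration
(not the glibc macros themselves; the names and constants below are invented
for this sketch) of how an unchecked request-to-chunk-size computation wraps
for requests near SIZE_MAX, and how a checked variant rejects them:

    #include <stdint.h>
    #include <stdio.h>

    #define MALLOC_ALIGN 16                   /* assumed alignment for the sketch */
    #define OVERHEAD (2 * sizeof (size_t))    /* assumed per-chunk overhead */

    static size_t
    unchecked_req2size (size_t req)
    {
      /* No overflow check: a huge request wraps to a tiny chunk size.  */
      return (req + OVERHEAD + MALLOC_ALIGN - 1) & ~(size_t) (MALLOC_ALIGN - 1);
    }

    static int
    checked_req2size (size_t req, size_t *out)
    {
      if (req > SIZE_MAX - OVERHEAD - MALLOC_ALIGN)
        return 0;                             /* reject: would overflow */
      *out = unchecked_req2size (req);
      return 1;
    }

    int
    main (void)
    {
      size_t req = SIZE_MAX - 4;
      printf ("unchecked: %zu bytes requested -> chunk size %zu\n",
              req, unchecked_req2size (req)); /* wraps to a tiny value */
      size_t sz;
      printf ("checked:   rejected = %s\n",
              checked_req2size (req, &sz) ? "no" : "yes");
      return 0;
    }
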
It does not make sense to register separate cleanup functions for arena
and tcache since they're always going to be called together. Call the
tcache cleanup function from within arena_thread_freeres since it at
least makes the order of those cleanups clear in the code.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
This commit adds a "subheaps" field to the malloc_info output that
shows the number of heaps that were allocated to extend a non-main
arena.
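For reference, the statistics can be dumped from a test program through the
existing malloc_info interface; the exact XML element that carries the new
subheaps count is best checked against the installed glibc version:

    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main (void)
    {
      void *p = malloc (1024);   /* make sure at least one arena/heap exists */
      malloc_info (0, stdout);   /* dump the XML statistics, including the
                                    per-arena heap information */
      free (p);
      return 0;
    }
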
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
This patch adds a single-threaded fast path to malloc, realloc,
calloc and memalign. When we're single-threaded, we can bypass
arena_get (which always locks the arena it returns) and just use
the main arena. Also avoid retrying a different arena since
there is just the main arena.
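The change has roughly the following shape inside __libc_malloc (a simplified
sketch, not the verbatim code; the other entry points gain a similar branch):

    if (SINGLE_THREAD_P)
      {
        /* No arena_get, no locking, no retry with another arena.  */
        void *victim = _int_malloc (&main_arena, bytes);
        assert (!victim || chunk_is_mmapped (mem2chunk (victim))
                || &main_arena == arena_for_chunk (mem2chunk (victim)));
        return victim;
      }

    arena_get (ar_ptr, bytes);   /* multi-threaded path as before */
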
* malloc/malloc.c (__libc_malloc): Add SINGLE_THREAD_P path.
(__libc_realloc): Likewise.
(_mid_memalign): Likewise.
(__libc_calloc): Likewise.
This patch adds single-threaded fast paths to _int_free.
Bypass the explicit locking for larger allocations.
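For the non-mmapped path the idea is simply to pretend the arena lock is
already held when single-threaded, so the explicit locking is skipped
(simplified sketch, not the verbatim code):

    if (SINGLE_THREAD_P)
      have_lock = true;

    if (!have_lock)
      __libc_lock_lock (av->mutex);
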
* malloc/malloc.c (_int_free): Add SINGLE_THREAD_P fast paths.
This patch fixes a deadlock in the fastbin consistency check.
If we fail the fast check due to concurrent modifications to
the next chunk or system_mem, we should not lock if we already
have the arena lock. Simplify the check to make it obviously
correct.
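After the fix the check has roughly this shape (simplified sketch):

    if (chunksize_nomask (chunk_at_offset (p, size)) <= 2 * SIZE_SZ
        || chunksize (chunk_at_offset (p, size)) >= av->system_mem)
      {
        bool fail = true;
        /* Concurrent modifications may cause a false positive, so redo
           the test under the arena lock, but only if we do not already
           hold it (locking again would deadlock).  */
        if (!have_lock)
          {
            __libc_lock_lock (av->mutex);
            fail = (chunksize_nomask (chunk_at_offset (p, size)) <= 2 * SIZE_SZ
                    || chunksize (chunk_at_offset (p, size)) >= av->system_mem);
            __libc_lock_unlock (av->mutex);
          }

        if (fail)
          malloc_printerr ("free(): invalid next size (fast)");
      }
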
* malloc/malloc.c (_int_free): Fix deadlock bug in consistency check.
The current malloc initialization is quite convoluted. Instead of
sometimes calling malloc_consolidate from ptmalloc_init, call
malloc_init_state early so that the main_arena is always initialized.
The special initialization can now be removed from malloc_consolidate.
This also fixes BZ #22159.
Check all calls to malloc_consolidate and remove calls that are
redundant initialization after ptmalloc_init, like in int_mallinfo
and __libc_mallopt (but keep the latter as consolidation is required for
set_max_fast). Update comments to improve clarity.
Remove impossible initialization check from _int_malloc, fix assert
in do_check_malloc_state to ensure arena->top != 0. Fix the obvious bugs
in do_check_free_chunk and do_check_remalloced_chunk to enable
single-threaded malloc debugging (do_check_malloc_state is not thread safe!).
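The new initialization flow looks roughly like this (simplified sketch, not
the verbatim code):

    static void
    ptmalloc_init (void)
    {
      if (__malloc_initialized >= 0)
        return;
      __malloc_initialized = 0;

      /* Initialize main_arena eagerly here instead of relying on a
         later call to malloc_consolidate to do it lazily.  */
      malloc_init_state (&main_arena);

      /* ... tunable/environment handling as before ... */
    }
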
[BZ #22159]
* malloc/arena.c (ptmalloc_init): Call malloc_init_state.
* malloc/malloc.c (do_check_free_chunk): Fix build bug.
(do_check_remalloced_chunk): Fix build bug.
(do_check_malloc_state): Add assert that checks arena->top.
(malloc_consolidate): Remove initialization.
(int_mallinfo): Remove call to malloc_consolidate.
(__libc_mallopt): Clarify why malloc_consolidate is needed.
Currently free typically uses 2 atomic operations per call. The have_fastchunks
flag indicates whether there are recently freed blocks in the fastbins. This
is purely an optimization: it avoids calling malloc_consolidate too often and
spares the overhead of walking all the fast bins when they are all empty
during a sequence of allocations. However, using catomic_or to update the flag is
completely unnecessary since it can be changed into a simple boolean and
accessed using relaxed atomics. There is no change in multi-threaded behaviour
given the flag is already approximate (it may be set when there are no blocks in
any fast bins, or it may be clear when there are free blocks that could be
consolidated).
Performance of malloc/free improves by 27% on a simple benchmark on AArch64
(both single and multithreaded). The number of load/store exclusive instructions
is reduced by 33%. Bench-malloc-thread speeds up by ~3% in all cases.
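As a standalone illustration of the pattern (not glibc code; the fake_* names
are invented for this sketch), such a hint flag only needs plain relaxed loads
and stores rather than an atomic read-modify-write:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool have_fastchunks;

    static void
    fake_free (void)
    {
      /* A relaxed store is enough: the flag is only a hint.  */
      atomic_store_explicit (&have_fastchunks, true, memory_order_relaxed);
    }

    static void
    fake_malloc_consolidate (void)
    {
      atomic_store_explicit (&have_fastchunks, false, memory_order_relaxed);
      /* ... walk and consolidate the fast bins here ... */
    }

    static void
    fake_malloc_large (void)
    {
      /* Only consolidate when the hint says it may be worthwhile.  */
      if (atomic_load_explicit (&have_fastchunks, memory_order_relaxed))
        fake_malloc_consolidate ();
    }

    int
    main (void)
    {
      fake_free ();
      fake_malloc_large ();
      puts ("ok");
      return 0;
    }
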
* malloc/malloc.c (FASTCHUNKS_BIT): Remove.
(have_fastchunks): Remove.
(clear_fastchunks): Remove.
(set_fastchunks): Remove.
(malloc_state): Add have_fastchunks.
(malloc_init_state): Use have_fastchunks.
(do_check_malloc_state): Remove incorrect invariant checks.
(_int_malloc): Use have_fastchunks.
(_int_free): Likewise.
(malloc_consolidate): Likewise.
The functions tcache_get and tcache_put show up in profiles as they
are a critical part of the tcache code. Inline them to give tcache
a 16% performance gain. Since this improves multi-threaded cases
as well, it helps offset any potential performance loss due to adding
single-threaded fast paths.
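The helpers are small enough that forcing inlining is cheap; roughly
(simplified sketch, asserts elided):

    static __always_inline void
    tcache_put (mchunkptr chunk, size_t tc_idx)
    {
      tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
      e->next = tcache->entries[tc_idx];
      tcache->entries[tc_idx] = e;
      ++(tcache->counts[tc_idx]);
    }

    static __always_inline void *
    tcache_get (size_t tc_idx)
    {
      tcache_entry *e = tcache->entries[tc_idx];
      tcache->entries[tc_idx] = e->next;
      --(tcache->counts[tc_idx]);
      return (void *) e;
    }
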
* malloc/malloc.c (tcache_put): Inline.
(tcache_get): Inline.
The malloc tcache added in 2.26 will leak all of the elements remaining
in the cache and the cache structure itself when a thread exits. The
defect is that we do not set tcache_shutting_down early enough, so the
thread simply recreates the tcache and places the elements back onto a
new tcache, which is then lost when the thread exits (the memory is
never freed). The fix is relatively simple: move the setting of
tcache_shutting_down earlier in tcache_thread_freeres. We add a test
case which uses mallinfo and some heuristics to look for unaccounted-for
memory usage between the start and end of a thread start/join loop. It
is very reliable at detecting that there is a leak given the number of
iterations. Without the fix the test will consume 122MiB of leaked
memory.
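The fixed teardown looks roughly like this (simplified sketch):

    static void
    tcache_thread_freeres (void)
    {
      tcache_perthread_struct *tcache_tmp = tcache;

      if (!tcache)
        return;

      /* Key part of the fix: disable the cache and mark shutdown
         *before* freeing anything, so the frees below cannot
         re-create a new tcache.  */
      tcache = NULL;
      tcache_shutting_down = true;

      for (int i = 0; i < TCACHE_MAX_BINS; ++i)
        while (tcache_tmp->entries[i])
          {
            tcache_entry *e = tcache_tmp->entries[i];
            tcache_tmp->entries[i] = e->next;
            __libc_free (e);
          }

      __libc_free (tcache_tmp);
    }
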
Clean up calls to malloc_printerr and trim its argument list.
This also removes a few bits of work done before calling
malloc_printerr (such as unlocking operations).
The tunable/environment variable still enables the lightweight
additional malloc checking, but mallopt (M_CHECK_ACTION)
no longer has any effect.
__stack_chk_fail is called when the stack has been corrupted, and generating
a backtrace from a corrupted stack is very unreliable. __libc_message is
changed to accept enum __libc_message_action and to call BEFORE_ABORT only
if the action includes do_backtrace. __fortify_fail_abort is added so that
__stack_chk_fail can abort without attempting a backtrace.
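A simplified sketch of the idea (not the literal glibc declarations):

    enum __libc_message_action
    {
      do_message   = 0,        /* print the message                 */
      do_abort     = 1 << 0,   /* abort afterwards                  */
      do_backtrace = 1 << 1    /* attempt a backtrace before abort  */
    };

    void
    __stack_chk_fail (void)
    {
      /* The stack cannot be trusted: abort without a backtrace.  */
      __fortify_fail_abort (false, "stack smashing detected");
    }
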
[BZ #12189]
* debug/Makefile (CFLAGS-tst-ssp-1.c): New.
(tests): Add tst-ssp-1 if -fstack-protector works.
* debug/fortify_fail.c: Include <stdbool.h>.
(__fortify_fail_abort): New function.
(__fortify_fail): Call __fortify_fail_abort.
(__fortify_fail_abort): Add a hidden definition.
* debug/stack_chk_fail.c: Include <stdbool.h>.
(__stack_chk_fail): Call __fortify_fail_abort, instead of
__fortify_fail.
* debug/tst-ssp-1.c: New file.
* include/stdio.h (__libc_message_action): New enum.
(__libc_message): Replace int with enum __libc_message_action.
(__fortify_fail_abort): New hidden prototype.
* malloc/malloc.c (malloc_printerr): Update __libc_message calls.
* sysdeps/posix/libc_fatal.c (__libc_message): Replace int
with enum __libc_message_action. Call BEFORE_ABORT only if
action includes do_backtrace.
(__libc_fatal): Update __libc_message call.
MMap'd memory isn't shrunk without MREMAP, but IIRC this is intentional for
performance reasons. Regardless, this patch tweaks the existing comment to
be more accurate wrt the existing code.
[BZ #21411]
* malloc/malloc.c: Tweak realloc/MREMAP comment to be more accurate.
Fixes a typo introduced in commit
be7991c070. This caused
mallopt(M_ARENA_MAX) as well as the environment variable
MALLOC_ARENA_MAX to not work as intended because it set the
wrong internal parameter.
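The corrected dispatch in __libc_mallopt is simply (sketch):

    case M_ARENA_MAX:
      if (value > 0)
        do_set_arena_max (value);   /* was: do_set_arena_test (value) */
      break;
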
[BZ #21338]
* malloc/malloc.c: Call do_set_arena_max for M_ARENA_MAX
instead of the incorrect do_set_arena_test.
Additional check for chunk_size == next->prev->chunk_size in unlink()
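The added check looks roughly like this inside the unlink macro
(simplified sketch):

    if (__builtin_expect (chunksize (P) != prev_size (next_chunk (P)), 0))
      malloc_printerr (check_action, "corrupted size vs. prev_size", P, AV);
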
2017-03-17 Chris Evans <scarybeasts@gmail.com>
* malloc/malloc.c (unlink): Add consistency check between size and
next->prev->size, to further harden against 1-byte overflows.
posix/wordexp-test.c used libc-internal.h for PTR_ALIGN_DOWN; similar
to what was done with libc-diag.h, I have split the definitions of
cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and PTR_ALIGN_DOWN
to a new header, libc-pointer-arith.h.
It then occurred to me that the remaining declarations in libc-internal.h
are mostly to do with early initialization, and probably most of the
files including it, even in the core code, don't need it anymore. Indeed,
only 19 files actually need what remains of libc-internal.h. 23 others
need libc-diag.h instead, and 12 need libc-pointer-arith.h instead.
No file needs more than one of them, and 16 don't need any of them!
So, with this patch, libc-internal.h stops including libc-diag.h as
well as losing the pointer arithmetic macros, and all including files
are adjusted.
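As a standalone reference for what moves into the new header, the alignment
helpers behave like this (the macro bodies below mirror the glibc
definitions; cast_to_integer is omitted):

    #include <stdint.h>
    #include <stdio.h>

    #define ALIGN_DOWN(base, size)  ((base) & -((__typeof__ (base)) (size)))
    #define ALIGN_UP(base, size)    ALIGN_DOWN ((base) + (size) - 1, (size))
    #define PTR_ALIGN_DOWN(base, size) \
      ((__typeof__ (base)) ALIGN_DOWN ((uintptr_t) (base), (size)))
    #define PTR_ALIGN_UP(base, size) \
      ((__typeof__ (base)) ALIGN_UP ((uintptr_t) (base), (size)))

    int
    main (void)
    {
      printf ("%zu %zu\n",
              (size_t) ALIGN_DOWN ((size_t) 4097, 4096),   /* 4096 */
              (size_t) ALIGN_UP ((size_t) 4097, 4096));    /* 8192 */
      char buf[64];
      char *p = PTR_ALIGN_UP (buf + 1, 16);                /* 16-byte aligned */
      printf ("%p -> %p\n", (void *) (buf + 1), (void *) p);
      return 0;
    }
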
* include/libc-pointer-arith.h: New file. Define
cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and
PTR_ALIGN_DOWN here.
* include/libc-internal.h: Definitions of above macros
moved from here. Don't include libc-diag.h anymore either.
* posix/wordexp-test.c: Include stdint.h and libc-pointer-arith.h.
Don't include libc-internal.h.
* debug/pcprofile.c, elf/dl-tunables.c, elf/soinit.c, io/openat.c
* io/openat64.c, misc/ptrace.c, nptl/pthread_clock_gettime.c
* nptl/pthread_clock_settime.c, nptl/pthread_cond_common.c
* string/strcoll_l.c, sysdeps/nacl/brk.c
* sysdeps/unix/clock_settime.c
* sysdeps/unix/sysv/linux/i386/get_clockfreq.c
* sysdeps/unix/sysv/linux/ia64/get_clockfreq.c
* sysdeps/unix/sysv/linux/powerpc/get_clockfreq.c
* sysdeps/unix/sysv/linux/sparc/sparc64/get_clockfreq.c:
Don't include libc-internal.h.
* elf/get-dynamic-info.h, iconv/loop.c
* iconvdata/iso-2022-cn-ext.c, locale/weight.h, locale/weightwc.h
* misc/reboot.c, nis/nis_table.c, nptl_db/thread_dbP.h
* nscd/connections.c, resolv/res_send.c, soft-fp/fmadf4.c
* soft-fp/fmasf4.c, soft-fp/fmatf4.c, stdio-common/vfscanf.c
* sysdeps/ieee754/dbl-64/e_lgamma_r.c
* sysdeps/ieee754/dbl-64/k_rem_pio2.c
* sysdeps/ieee754/flt-32/e_lgammaf_r.c
* sysdeps/ieee754/flt-32/k_rem_pio2f.c
* sysdeps/ieee754/ldbl-128/k_tanl.c
* sysdeps/ieee754/ldbl-128ibm/k_tanl.c
* sysdeps/ieee754/ldbl-96/e_lgammal_r.c
* sysdeps/ieee754/ldbl-96/k_tanl.c, sysdeps/nptl/futex-internal.h:
Include libc-diag.h instead of libc-internal.h.
* elf/dl-load.c, elf/dl-reloc.c, locale/programs/locarchive.c
* nptl/nptl-init.c, string/strcspn.c, string/strspn.c
* malloc/malloc.c, sysdeps/i386/nptl/tls.h
* sysdeps/nacl/dl-map-segments.h, sysdeps/x86_64/atomic-machine.h
* sysdeps/unix/sysv/linux/spawni.c
* sysdeps/x86_64/nptl/tls.h:
Include libc-pointer-arith.h instead of libc-internal.h.
* elf/get-dynamic-info.h, sysdeps/nacl/dl-map-segments.h
* sysdeps/x86_64/atomic-machine.h:
Add multiple include guard.
Make mallopt helper functions for each mallopt parameter so that it
can be called consistently in other areas, like setting tunables.
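The shape of the change (simplified sketch; the real helpers also run
LIBC_PROBE hooks and validate their arguments):

    static __always_inline int
    do_set_mmap_threshold (size_t value)
    {
      mp_.mmap_threshold = value;
      mp_.no_dyn_threshold = 1;
      return 1;
    }

and __libc_mallopt (as well as the tunables code) dispatches to them:

    case M_MMAP_THRESHOLD:
      res = do_set_mmap_threshold (value);
      break;
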
* malloc/malloc.c (do_set_mallopt_check): New function.
(do_set_mmap_threshold): Likewise.
(do_set_mmaps_max): Likewise.
(do_set_top_pad): Likewise.
(do_set_perturb_byte): Likewise.
(do_set_trim_threshold): Likewise.
(do_set_arena_max): Likewise.
(do_set_arena_test): Likewise.
(__libc_mallopt): Use them.
After the removal of __malloc_initialize_hook, newly compiled
Emacs binaries are no longer able to use these interfaces.
malloc_get_state is only used during the Emacs build process,
so we provide a stub implementation only. Existing Emacs binaries
will not call this stub function, but still reference the symbol.
The rewritten tst-mallocstate test constructs a dumped heap
which should approximate what existing Emacs binaries pass
to glibc malloc.
The M_ARENA_MAX and M_ARENA_TEST macros are defined in malloc.c as
well as malloc.h, and the former is unnecessary. This patch removes
the duplicate. Tested on x86_64 to verify that the generated code
remains unchanged, apart from the changed line numbers passed to
__malloc_assert.
* malloc/malloc.c (M_ARENA_TEST, M_ARENA_MAX): Remove.
The M_ARENA_* mallopt parameters are in wide use in production to
control the number of arenas that a long lived process creates and
hence there is no point in stating that this interface is non-public.
Document this interface and remove the obsolete comment.
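A minimal usage example of the now-documented interface:

    #include <malloc.h>

    int
    main (void)
    {
      /* Cap the process at (for example) two malloc arenas; the
         MALLOC_ARENA_MAX environment variable has the same effect.  */
      mallopt (M_ARENA_MAX, 2);

      /* M_ARENA_TEST sets how many arenas may be created before the
         CPU-count-based limit is applied.  */
      mallopt (M_ARENA_TEST, 1);
      return 0;
    }
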
* manual/memory.texi (M_ARENA_TEST): Add documentation.
(M_ARENA_MAX): Likewise.
* malloc/malloc.c: Remove obsolete comment.
The dynamic linker currently uses __libc_memalign for TLS-related
allocations. The goal is to switch to malloc instead. If the minimal
malloc follows the ABI fundamental alignment, we can assume that malloc
provides this alignment, and thus skip explicit alignment in a few
cases as an optimization.
It was requested on libc-alpha that MALLOC_ALIGNMENT should be used,
although this results in wasted space if MALLOC_ALIGNMENT is larger
than the fundamental alignment. (The dynamic linker cannot assume
that the non-minimal malloc will provide an alignment of
MALLOC_ALIGNMENT; the ABI provides _Alignof (max_align_t) only.)
__malloc_initialize_hook is interposed by application code, so
the usual approach to define a compatibility symbol does not work.
This commit adds a new mechanism based on #pragma GCC poison in
<stdc-predef.h>.
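The mechanism itself is the standard GCC pragma; as an illustration (not the
exact glibc wording or placement within the headers):

    #pragma GCC poison __malloc_initialize_hook

Once the pragma has been seen, any later use of the identifier in application
code is rejected by the compiler.
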
For regular mmapped chunks there are two size fields (hence a reduction
by 2 * SIZE_SZ bytes), but for fake chunks, we only have one size field,
so we need to subtract SIZE_SZ bytes.
This was initially reported as Emacs bug 23726.
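The accounting difference in the usable-size computation looks roughly like
this (simplified sketch):

    if (chunk_is_mmapped (p))
      {
        if (DUMPED_MAIN_ARENA_CHUNK (p))
          /* Fake mmapped chunk from a dumped heap: one size field.  */
          return chunksize (p) - SIZE_SZ;
        else
          /* Real mmapped chunk: prev_size and size fields.  */
          return chunksize (p) - 2 * SIZE_SZ;
      }
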
After the heap rewriting added in commit
4cf6c72fd2 (malloc: Rewrite dumped heap
for compatibility in __malloc_set_state), we can change malloc alignment
for new allocations because the alignment of old allocations no longer
matters.
We need to increase the malloc state version number, so that binaries
containing dumped heaps of the new layout will not try to run on
previous versions of glibc, resulting in obscure crashes.
This commit addresses a failure of tst-malloc-thread-fail on the
affected architectures (32-bit ppc and mips) because the test checks
pointer alignment.
This will allow us to change many aspects of the malloc implementation
while preserving compatibility with existing Emacs binaries.
As a result, existing Emacs binaries will have a larger RSS, and Emacs
needs a few more milliseconds to start. This overhead is specific
to Emacs (and will go away once Emacs switches to its internal malloc).
The new checks to make free and realloc compatible with the dumped heap
are confined to the mmap paths, which are already quite slow due to the
munmap overhead.
This commit weakens some security checks, but only for heap pointers
in the dumped main arena. By default, this area is empty, so those
checks are as effective as before.
The fork handler now runs so late that there is no longer a risk that
other fork handlers in the same thread use malloc, so it is no
longer necessary to install malloc hooks that made a subset
of malloc functionality available to the thread that called fork.
Previously, a thread M invoking fork would acquire locks in this order:
(M1) malloc arena locks (in the registered fork handler)
(M2) libio list lock
A thread F invoking fflush (NULL) would acquire locks in this order:
(F1) libio list lock
(F2) individual _IO_FILE locks
A thread G running getdelim would use this order:
(G1) _IO_FILE lock
(G2) malloc arena lock
After executing (M1), (F1), (G1), none of the threads can make progress.
This commit changes the fork lock order to:
(M'1) libio list lock
(M'2) malloc arena locks
It explicitly encodes the lock order in the implementations of fork,
and does not rely on the registration order, thus avoiding the deadlock.
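The parent-side locking in fork therefore has roughly this shape (simplified
sketch; the real code also runs the registered atfork handlers first and
handles the child path separately):

    _IO_list_lock ();               /* (M'1) libio list lock     */
    __malloc_fork_lock_parent ();   /* (M'2) malloc arena locks  */

    /* ... perform the fork system call ... */

    /* In the parent, release in the opposite order.  */
    __malloc_fork_unlock_parent ();
    _IO_list_unlock ();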