commit 524a8ef2ad
Author: Nick Alcock <nick.alcock@oracle.com>
Date: Mon Dec 26 10:08:57 2016 +0100
PLT avoidance for __stack_chk_fail [BZ #7065]
Add a hidden __stack_chk_fail_local alias to libc.so,
and make sure that on targets which use __stack_chk_fail,
this does not introduce a local PLT reference into libc.so.
The quoted commit unconditionally added stack_chk_fail_local.o to libc.a. However,
strong_alias (__stack_chk_fail, __stack_chk_fail_local)
already defines __stack_chk_fail_local as an alias of __stack_chk_fail in libc.a.
There is no need to add stack_chk_fail_local.o to libc.a. We only need
to add stack_chk_fail_local.oS to libc_nonshared.a.
Tested on x86-64:
[hjl@gnu-skl-1 build-x86_64-linux]$ nm libc.a | grep __stack_chk_fail
0000000000000000 T __stack_chk_fail
0000000000000000 T __stack_chk_fail_local
[hjl@gnu-skl-1 build-x86_64-linux]$ nm libc_nonshared.a | grep __stack_chk_fail_local
0000000000000000 T __stack_chk_fail_local
[hjl@gnu-skl-1 build-x86_64-linux]$
[BZ #21740]
* debug/Makefile (elide-routines.o): New.
The patch proposed by Peter Bergner [1] to libgcc in order to fix
[BZ #21707] adds a dependency on a symbol provided by the loader,
forcing the loader to be linked into tests after libgcc is linked.
It also requires reading the thread pointer during IRELA relocations.
Tested on powerpc, powerpc64, powerpc64le, s390x and x86_64.
[1] https://sourceware.org/ml/libc-alpha/2017-06/msg01383.html
[BZ #21707]
* csu/libc-start.c (LIBC_START_MAIN): Perform IREL{,A}
relocations before or after initializing the TCB on statically
linked executables, depending on the architecture.
* elf/rtld.c (dl_main): Add a comment about thread-local
variables initialization.
* sysdeps/generic/libc-start.h: New file. Define
ARCH_APPLY_IREL and ARCH_SETUP_IREL.
* sysdeps/powerpc/Makefile:
[$(subdir) = elf && $(multi-arch) != no] (tests-static-internal): Add tst-tlsifunc-static.
[$(subdir) = elf && $(multi-arch) != no && $(build-shared) == yes]
(tests-internal): Add tst-tlsifunc.
* sysdeps/powerpc/tst-tlsifunc.c: New file.
* sysdeps/powerpc/tst-tlsifunc-static.c: Likewise.
* sysdeps/powerpc/powerpc64le/Makefile (f128-loader-link): New
variable.
[$(subdir) = math] (test-float128% test-ifloat128%): Force
linking to the loader after linking to libgcc.
[$(subdir) = wcsmbs || $(subdir) = stdlib] (bug-strtod bug-strtod2)
(bug-strtod2 tst-strtod-round tst-wcstod-round tst-strtod6 tst-strfrom)
(tst-strfrom-locale strfrom-skeleton): Likewise.
* sysdeps/unix/sysv/linux/powerpc/libc-start.h: New file. Define
ARCH_APPLY_IREL and ARCH_SETUP_IREL.
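To illustrate why the relocation order matters here, the following standalone
sketch (loosely modelled on the idea behind tst-tlsifunc, not the actual glibc
test) uses an IFUNC resolver that reads a TLS variable; such a resolver is only
safe if the TCB is already set up when the IRELATIVE relocation is applied. It
assumes a GCC target with IFUNC support, and all names in it are invented for
illustration.

  #include <stdio.h>

  static __thread int tls_counter = 42;

  static int impl_a (void) { return 1; }
  static int impl_b (void) { return 2; }

  /* The resolver runs while relocations are applied; reading a TLS
     variable here is only safe if the TCB has already been set up.  */
  static int (*resolve_foo (void)) (void)
  {
    return tls_counter == 42 ? impl_a : impl_b;
  }

  static int foo (void) __attribute__ ((ifunc ("resolve_foo")));

  int
  main (void)
  {
    printf ("foo () = %d\n", foo ());
    return 0;
  }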
This patch fixes the argument passing to the exit syscall after
the clone function returns on hppa. This fixes misc/tst-clone2
on hppa-linux-gnu.
Checked misc/tst-clone2 on hppa-linux-gnu.
[BZ #21512]
* sysdeps/unix/sysv/linux/hppa/clone.S (__clone): Fix argument
passing to syscall exit.
This patch adds the HWCAP_JSCVT, HWCAP_FCMA and HWCAP_LRCPC macros
from Linux 4.12 to the AArch64 bits/hwcap.h.
* sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h (HWCAP_FCMA): New macro.
(HWCAP_JSCVT, HWCAP_LRCPC): Likewise.
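For reference, a small hedged example of how an application can test the new
bits at run time with getauxval; the fallback bit values below mirror the
Linux 4.12 aarch64 hwcaps and are only assumptions for hosts whose headers do
not define them.

  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP_JSCVT
  # define HWCAP_JSCVT (1 << 13)   /* assumed fallback value */
  #endif
  #ifndef HWCAP_FCMA
  # define HWCAP_FCMA (1 << 14)    /* assumed fallback value */
  #endif
  #ifndef HWCAP_LRCPC
  # define HWCAP_LRCPC (1 << 15)   /* assumed fallback value */
  #endif

  int
  main (void)
  {
    unsigned long hwcap = getauxval (AT_HWCAP);
    printf ("JSCVT: %d  FCMA: %d  LRCPC: %d\n",
            !!(hwcap & HWCAP_JSCVT),
            !!(hwcap & HWCAP_FCMA),
            !!(hwcap & HWCAP_LRCPC));
    return 0;
  }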
The single-thread optimization is valid only if the optimization can be
disabled at thread creation time. This is in principle true for all
stream objects that user code can access (and that thus need locking),
using the same internal list that fflush(0) uses. However, in glibc,
open_memstream streams are not on that list (BZ 21735), so the
optimization has to be disabled for them.
* libio/memstream.c (__open_memstream): Set _IO_FLAGS2_NEED_LOCK.
* libio/wmemstream.c (open_wmemstream): Likewise.
* nptl/tst-memstream.c: New.
There is a bug report that ld.so in GLIBC 2.24 built with Binutils 2.29
crashes on arm-linux-gnueabihf. This is confirmed, and the details are at:
https://sourceware.org/bugzilla/show_bug.cgi?id=21725.
As analyzed in the PR, the old code assumed that the assembler would not
set bit 0 of a Thumb function address if the address comes from PC-relative
instructions and the calculation can be finished during assembling. This
assumption no longer holds after PR gas/21458.
* sysdeps/arm/dl-machine.h (elf_machine_load_address): Also strip bit 0
of pcrel_address under Thumb mode.
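A minimal standalone sketch of the masking the fix performs (illustrative
only; in dl-machine.h this is applied to pcrel_address and only when glibc is
compiled in Thumb mode):

  #include <stdint.h>
  #include <stdio.h>

  /* Clear the Thumb bit from an address computed via PC-relative data.  */
  static uintptr_t
  strip_thumb_bit (uintptr_t pcrel_address)
  {
    return pcrel_address & ~(uintptr_t) 1;
  }

  int
  main (void)
  {
    uintptr_t addr = 0x10001;   /* bit 0 set by the assembler */
    printf ("%#lx -> %#lx\n", (unsigned long) addr,
            (unsigned long) strip_thumb_bit (addr));
    return 0;
  }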
Compile tst-ssp-1.c with -fstack-protector-all in case the stack
protector heuristics do not instrument a thirty-byte array.
* debug/Makefile (CFLAGS-tst-ssp-1.c): Set to
-fstack-protector-all.
On powerpc64le, the compilation of the files related to float128 support
requires the option -mfloat128 to be passed to gcc. However, not all
possible object suffixes were covered in the Makefile. This patch uses
$(all-object-suffixes) in all remaining rules.
Tested for powerpc64le.
* sysdeps/powerpc/powerpc64le/Makefile: Use $(all-object-suffixes)
to iterate over all possible object suffixes. Add a comment
explaining the use of sysdep-CFLAGS instead of CFLAGS.
__stack_chk_fail is called on a corrupted stack, and generating a stack
backtrace is very unreliable when the stack is corrupted. __libc_message
is changed to accept an enum __libc_message_action and to call
BEFORE_ABORT only if the action includes do_backtrace.
__fortify_fail_abort is added to avoid the backtrace from
__stack_chk_fail.
[BZ #12189]
* debug/Makefile (CFLAGS-tst-ssp-1.c): New.
(tests): Add tst-ssp-1 if -fstack-protector works.
* debug/fortify_fail.c: Include <stdbool.h>.
(__fortify_fail_abort): New function.
(__fortify_fail): Call __fortify_fail_abort.
(__fortify_fail_abort): Add a hidden definition.
* debug/stack_chk_fail.c: Include <stdbool.h>.
(__stack_chk_fail): Call __fortify_fail_abort, instead of
__fortify_fail.
* debug/tst-ssp-1.c: New file.
* include/stdio.h (__libc_message_action): New enum.
(__libc_message): Replace int with enum __libc_message_action.
(__fortify_fail_abort): New hidden prototype.
* malloc/malloc.c (malloc_printerr): Update __libc_message calls.
* sysdeps/posix/libc_fatal.c (__libc_message): Replace int
with enum __libc_message_action. Call BEFORE_ABORT only if
action includes do_backtrace.
(__libc_fatal): Update __libc_message call.
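A self-contained sketch of the flag-enum idea, with invented my_-prefixed
names standing in for __libc_message, __fortify_fail_abort and the do_*
action bits (the bodies are illustrative, not glibc's):

  #include <stdarg.h>
  #include <stdio.h>
  #include <stdlib.h>

  enum my_message_action
  {
    my_do_message   = 1 << 0,   /* print the message */
    my_do_abort     = 1 << 1,   /* call abort () afterwards */
    my_do_backtrace = 1 << 2,   /* also dump a backtrace */
  };

  static void
  my_message (enum my_message_action action, const char *fmt, ...)
  {
    if (action & my_do_message)
      {
        va_list ap;
        va_start (ap, fmt);
        vfprintf (stderr, fmt, ap);
        va_end (ap);
      }
    if (action & my_do_backtrace)
      fputs ("(a backtrace would be printed here)\n", stderr);
    if (action & my_do_abort)
      abort ();
  }

  static void
  my_fortify_fail_abort (int need_backtrace, const char *msg)
  {
    /* The stack-check path passes false: the stack cannot be trusted,
       so no backtrace is attempted.  */
    my_message ((need_backtrace ? my_do_backtrace : 0)
                | my_do_message | my_do_abort,
                "*** %s ***: terminated\n", msg);
  }

  int
  main (void)
  {
    my_fortify_fail_abort (0, "stack smashing detected");
    return 0;
  }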
Linux 4.12 (b745fafaf70c0a98a2e1e7ac8cb14542889ceb0e) adds a new
p{read,write}v2 flag RWF_NOWAIT. This patch adds it to the Linux
bits/uio-ext.h header.
Checked on x86_64-linux-gnu (on a 4.10 kernel).
[BZ #21738]
* manual/llio.texi (RWF_NOWAIT): New item.
* misc/tst-preadvwritev2-common.c (do_test_with_invalid_flags):
Add RWF_NOWAIT check.
* sysdeps/unix/sysv/linux/bits/uio-ext.h (RWF_NOWAIT): New flag.
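A hedged usage sketch of the new flag with preadv2, falling back to a plain
read when the data is not immediately available or the kernel does not
support RWF_NOWAIT:

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/uio.h>
  #include <unistd.h>

  int
  main (void)
  {
    char buf[4096];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof buf };
    int fd = open ("/etc/passwd", O_RDONLY);
    if (fd < 0)
      return 1;

    ssize_t n = preadv2 (fd, &iov, 1, 0, RWF_NOWAIT);
    if (n < 0 && (errno == EAGAIN || errno == EOPNOTSUPP))
      /* Data not in the page cache, or the kernel does not support the
         flag: fall back to a normal (possibly blocking) read.  */
      n = preadv2 (fd, &iov, 1, 0, 0);

    printf ("read %zd bytes: %s\n", n, n < 0 ? strerror (errno) : "ok");
    close (fd);
    return 0;
  }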
The request PTRACE_SINGLEBLOCK was introduced in Linux 3.15, so the
ptrace call fails on older kernels.
The test therefore now issues PTRACE_SINGLEBLOCK with the data argument
pointing to a buffer on the stack, a request that is assumed to fail. If
the request were interpreted as PTRACE_GETREGS instead, the ptrace call
would not fail and the registers would be written to buf.
On a kernel with support for PTRACE_SINGLEBLOCK, a ptrace call with
data=NULL returns zero with no error. On a kernel without support for
PTRACE_SINGLEBLOCK, a ptrace call with data=NULL reports an error.
In the latter case, the test just continues with PTRACE_CONT.
ChangeLog:
* sysdeps/unix/sysv/linux/s390/tst-ptrace-singleblock.c:
Support running on kernels without PTRACE_SINGLEBLOCK.
Since _dl_resolve_conflicts is only used in elf/rtld.c, don't include
it in libc.a.
[BZ #21742]
* elf/Makefile (dl-routines): Move dl-conflict to ...
(rtld-routines): Here.
Since there are no multiarch versions of memmove_chk and memset_chk,
test multiarch versions of memmove_chk and memset_chk only in libc.so.
[BZ #21741]
* sysdeps/x86_64/multiarch/ifunc-impl-list.c
(__libc_ifunc_impl_list): Test memmove_chk and memset_chk only
in libc.so.
This patch increases the timeouts for some tests that I've seen timing
out on slow systems in my 2.26 release testing. (In the case of
tst-tsearch.c, increasing the timeout means removing a setting of 10
that was put there before the default timeout was increased to 20
seconds, so putting the default into effect.)
* iconvdata/tst-loading.c (TIMEOUT): Define to 30.
* misc/tst-tsearch.c (TIMEOUT): Remove.
* nptl/tst-create-detached.c (TIMEOUT): Define to 100.
* nptl/tst-robust-fork.c (TIMEOUT): Likewise.
* nptl/tst-rwlock19.c (TIMEOUT): Likewise.
* string/tst-cmp.c (TIMEOUT): Define to 600.
This patch fixes some build issues when including types/sigevent_t.h
along with bits/pthreadtypes.h.
Checked on x86_64-linux-gnu and on a build on supported major ABIs.
[BZ #21715]
* sysdeps/nptl/bits/pthreadtypes.h (__have_pthread_attr_t): Fix typo
in definition.
This change forces realignment of the stack pointer in __tls_get_addr, so
that binaries compiled by GCCs older than GCC 4.9:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58066
continue to work even if vector instructions are used in glibc which
require the ABI stack realignment.
__tls_get_addr_slow is added to handle the slow paths in the default
implementation of __tls_get_addr in elf/dl-tls.c. The new __tls_get_addr
calls __tls_get_addr_slow after realigning the stack. Internal calls
within ld.so go directly to the default implementation of __tls_get_addr
because they do not need stack realignment.
[BZ #21609]
* sysdeps/x86_64/Makefile (sysdep-dl-routines): Add tls_get_addr.
(gen-as-const-headers): Add rtld-offsets.sym.
* sysdeps/x86_64/dl-tls.c: New file.
* sysdeps/x86_64/rtld-offsets.sym: Likewise.
* sysdeps/x86_64/tls_get_addr.S: Likewise.
* sysdeps/x86_64/dl-tls.h: Add multiple inclusion guards.
* sysdeps/x86_64/tlsdesc.sym (TI_MODULE_OFFSET): New.
(TI_OFFSET_OFFSET): Likewise.
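An illustrative C sketch of the realignment idea (glibc does this in assembly
in tls_get_addr.S; force_align_arg_pointer is used here only to show the
effect on x86, and all names are invented):

  #include <stdio.h>

  struct tls_index;                    /* opaque for this sketch */

  static void *
  my_tls_get_addr_slow (struct tls_index *ti)
  {
    /* Stand-in for the default implementation in elf/dl-tls.c.  */
    (void) ti;
    return NULL;
  }

  /* Realign the stack on entry, then take the slow path; callers built
     by pre-4.9 GCC with a misaligned stack stay safe even if vector
     instructions are used past this point.  */
  __attribute__ ((force_align_arg_pointer))
  static void *
  my_tls_get_addr (struct tls_index *ti)
  {
    return my_tls_get_addr_slow (ti);
  }

  int
  main (void)
  {
    printf ("%p\n", my_tls_get_addr (NULL));
    return 0;
  }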
This patch fixes the return value for error conditions in the default
posix_spawn implementation (where an errno value is expected). It also
avoids clobbering errno around the fork call.
Checked on x86_64 (with the Linux implementation removed).
[BZ #21697]
* sysdeps/posix/spawni.c (__spawni_child): Fix return value.
(__spawnix): Do not clobber errno.
Locking overhead can be significant in some stdio operations
that are common in single threaded applications.
This patch adds the _IO_FLAGS2_NEED_LOCK flag to indicate if
an _IO_FILE object needs to be locked and some of the stdio
functions just jump to their _unlocked variant when not. The
flag is set on all _IO_FILE objects when the first thread is
created. A new GLIBC_PRIVATE libc symbol, _IO_enable_locks,
was added to do this from libpthread.
The optimization can be applied to more stdio functions;
currently it is only applied to single-flag checks and single
non-wide-char standard operations. The flag should probably
never be set for files with _IO_USER_LOCK, but that is just a
further optimization, not a correctness requirement.
The optimization is valid in a single thread because stdio
operations are not async-signal-safe (so lock state is not observable
from a signal handler) and stdio locks are recursive (so lock
state is not observable via deadlock). The optimization is not
valid if a thread may be created while an stdio lock is taken
and thus it should be disabled if any user code may run during
an stdio operation (interposed malloc, printf hooks, etc).
This makes the optimization more complicated for some stdio
operations (e.g. printf), but those are bigger and thus less
important to optimize so this patch does not try to do that.
* libio/libio.h (_IO_FLAGS2_NEED_LOCK, _IO_need_lock): Define.
* libio/libioP.h (_IO_enable_locks): Declare.
* libio/Versions (_IO_enable_locks): New symbol.
* libio/genops.c (_IO_enable_locks): Define.
(_IO_old_init): Initialize flags2.
* libio/feof.c (_IO_feof): Avoid locking when not needed.
* libio/ferror.c (_IO_ferror): Likewise.
* libio/fputc.c (fputc): Likewise.
* libio/putc.c (_IO_putc): Likewise.
* libio/getc.c (_IO_getc): Likewise.
* libio/getchar.c (getchar): Likewise.
* libio/ioungetc.c (_IO_ungetc): Likewise.
* nptl/pthread_create.c (__pthread_create_2_1): Enable stdio locks.
* libio/iofopncook.c (_IO_fopencookie): Enable locking for the file.
* sysdeps/pthread/flockfile.c (__flockfile): Likewise.
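A self-contained sketch of the fast-path idea, with invented my_-prefixed
names standing in for the _IO_FILE flag, the _unlocked jump and
_IO_enable_locks (not glibc's actual internals):

  #include <pthread.h>
  #include <stdio.h>

  #define MY_NEED_LOCK 0x1          /* stand-in for _IO_FLAGS2_NEED_LOCK */

  struct my_file
  {
    int flags2;
    pthread_mutex_t lock;
    int next_char;                  /* toy buffer state */
  };

  static int
  my_getc_unlocked (struct my_file *fp)
  {
    return fp->next_char++;
  }

  static int
  my_getc (struct my_file *fp)
  {
    if ((fp->flags2 & MY_NEED_LOCK) == 0)
      return my_getc_unlocked (fp); /* single-threaded fast path */
    pthread_mutex_lock (&fp->lock);
    int r = my_getc_unlocked (fp);
    pthread_mutex_unlock (&fp->lock);
    return r;
  }

  /* Called once, when the first thread is created, to flip every
     existing stream to the locked path (the role of _IO_enable_locks).  */
  static void
  my_enable_locks (struct my_file **streams, size_t n)
  {
    for (size_t i = 0; i < n; i++)
      streams[i]->flags2 |= MY_NEED_LOCK;
  }

  int
  main (void)
  {
    struct my_file f = { 0, PTHREAD_MUTEX_INITIALIZER, 'a' };
    printf ("%c\n", my_getc (&f));  /* takes the unlocked path */
    struct my_file *all[] = { &f };
    my_enable_locks (all, 1);
    printf ("%c\n", my_getc (&f));  /* now takes the locked path */
    return 0;
  }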
A dot-less host name without an /etc/resolv.conf file caused an
assertion failure in update_from_conf because the function would not
deal correctly with the empty search list case.
Thanks to Andreas Schwab for debugging assistance.
This patch updates build-many-glibcs.py to use the current release
branch of binutils and current releases of GMP and the Linux kernel.
* scripts/build-many-glibcs.py (Context.checkout): Default
binutils version to 2.29 branch, GMP version to 6.1.2 and Linux
kernel version to 4.12.
This commit enhances the stub resolver to reload the configuration
in the per-thread _res object if the /etc/resolv.conf file has
changed. The resolver checks whether the application has modified
_res and will not overwrite the _res object in that case.
The struct resolv_context mechanism is used to check the
configuration file only once per name lookup.
This commit adds the remaining unchanging members (which are loaded
from /etc/resolv.conf) to struct resolv_conf.
The extended name server list is currently not used by the stub
resolver. The switch depends on a cleanup: The _u._ext.nssocks
array stores just a single socket, and needs to be replaced with
a single socket value.
(The compatibility gethostbyname implementation does not use the
extended address sort list, either. Updating the compat code is
not worthwhile.)
This change uses the extended resolver state in struct resolv_conf to
store the search list. If applications have not patched the _res
object directly, this extended search list will be used by the stub
resolver during name resolution.
This change provides additional resolver configuration state which
is not exposed through the _res ABI. It reuses the existing
initstamp field in the supposedly-private part of _res. Some effort
is undertaken to avoid memory safety issues introduced by applications
which directly patch the _res object.
With this commit, only the initstamp field is moved into struct
resolv_conf. Additional members will be added later, eventually
migrating the entire resolver configuration.
struct resolv_context objects provide a temporary resolver context
which does not change during a name lookup operation. Only when the
outermost context is created is the stub resolver configuration
verified to be current (at present, only against previous res_init
calls). Subsequent attempts to obtain the context will reuse the
result of the initial verification operation.
struct resolv_context can also be extended in the future to store
data which needs to be deallocated during thread cancellation.
posix/sched_cpucount.c assumes that sizeof (__cpu_mask) == sizeof (long),
which is incorrect for x32. This patch uses __builtin_popcount, which
is available in GCC 4.9, in posix/sched_cpucount.c.
Tested on i686, x86-64 and x32 with multi-arch disabled.
[BZ #21696]
* posix/sched_cpucount.c: Don't include <limits.h>.
(__sched_cpucount): Use __builtin_popcount.
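A standalone sketch of the width-independent counting idea; it peeks at the
cpu_set_t __bits member, which is glibc's internal layout and is used here
only for illustration:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stddef.h>
  #include <stdio.h>

  static int
  my_cpucount (const cpu_set_t *set)
  {
    int count = 0;
    /* Count each mask word with a popcount of its own width instead of
       assuming the word has the size of a long.  */
    for (size_t i = 0;
         i < sizeof (set->__bits) / sizeof (set->__bits[0]); i++)
      count += __builtin_popcountll (set->__bits[i]);
    return count;
  }

  int
  main (void)
  {
    cpu_set_t set;
    CPU_ZERO (&set);
    CPU_SET (0, &set);
    CPU_SET (3, &set);
    printf ("%d (CPU_COUNT: %d)\n", my_cpucount (&set), CPU_COUNT (&set));
    return 0;
  }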
In math/math.h, __MATH_TG will expand signbit to __builtin_signbit*,
e.g.: __builtin_signbitf128, before GCC 6. However, there has never
been a __builtin_signbitf128 in GCC and the type-generic builtin is
only available since GCC 6. For older GCC, this patch defines
__builtin_signbitf128 to __signbitf128, so that the internal function
is used instead of the non-existent builtin.
This patch also changes the implementation of __signbitf128, because
it was reusing the implementation of __signbitl from ldbl-128, which
calls __builtin_signbitl. Using the long double version of the
builtin is not correct on machines where _Float128 is ABI-distinct
from long double (i.e.: ia64, powerpc64le, x86, x86_64). The new
implementation does not rely on builtins when being built with GCC
versions older than 6.0.
The new code does not currently affect powerpc64le builds, because
only GCC 6.2 fulfills the requirements from configure. It might
affect powerpc64le builds if those requirements are backported to
older versions of the compiler. The new code affects x86_64 builds,
since glibc is supposed to build correctly with older versions of GCC.
Tested for powerpc64le and x86_64.
* include/math.h (__signbitf128): Define as hidden.
* sysdeps/ieee754/float128/s_signbitf128.c (__signbitf128):
Reimplement without builtins.
* sysdeps/ia64/bits/floatn.h [!__GNUC_PREREQ (6, 0)]
(__builtin_signbitf128): Define to __signbitf128.
* sysdeps/powerpc/bits/floatn.h: Likewise.
* sysdeps/x86/bits/floatn.h: Likewise.
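An illustrative, builtin-free signbit for a binary128 type, assuming GCC's
__float128 and an IEEE binary128 representation (a sketch of the approach,
not glibc's s_signbitf128.c):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  static int
  my_signbitf128 (__float128 x)
  {
    uint64_t words[2];
    memcpy (words, &x, sizeof words);
  #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    uint64_t hi = words[1];
  #else
    uint64_t hi = words[0];
  #endif
    return (int) (hi >> 63);          /* most significant bit is the sign */
  }

  int
  main (void)
  {
    printf ("%d %d\n",
            my_signbitf128 ((__float128) -1.0),
            my_signbitf128 ((__float128) 2.5));
    return 0;
  }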
Add a new tunable (glibc.tune.cpu) to override CPU identification on
aarch64. This is useful in two cases: one where it is desirable to
pretend to be another CPU for purposes of testing or because routines
written for that CPU are beneficial for specific workloads and second
where the underlying kernel does not support emulation of MRS to get
the MIDR of the CPU.
* elf/dl-tunables.h (tunable_is_name): Move from...
* elf/dl-tunables.c (is_name): ... here.
(parse_tunables, __tunables_init): Adjust.
* manual/tunables.texi: Document glibc.tune.cpu.
* sysdeps/aarch64/dl-tunables.list: New file.
* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (struct
cpu_list): New type.
(cpu_list): New list of CPU names and their MIDR.
(get_midr_from_mcpu): New function.
(init_cpu_features): Override MIDR if necessary.
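A standalone sketch of the name-to-MIDR mapping idea behind glibc.tune.cpu;
the names and MIDR values in the table are illustrative assumptions, not the
definitive cpu-features.c contents:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  struct cpu_list
  {
    const char *name;
    uint64_t midr;
  };

  static const struct cpu_list cpu_list[] =
  {
    { "thunderxt88", 0x430F0A10 },    /* assumed example value */
    { "generic", 0x0 },
  };

  /* If the tunable names a known CPU, return its MIDR; otherwise keep
     the MIDR read from the hardware/kernel.  */
  static uint64_t
  get_midr_from_mcpu (const char *mcpu, uint64_t hw_midr)
  {
    for (size_t i = 0; i < sizeof cpu_list / sizeof cpu_list[0]; i++)
      if (strcmp (mcpu, cpu_list[i].name) == 0)
        return cpu_list[i].midr;
    return hw_midr;
  }

  int
  main (void)
  {
    printf ("%#llx\n", (unsigned long long)
            get_midr_from_mcpu ("thunderxt88", 0x411FD070));
    return 0;
  }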
The string function implementations added so far do not use any
instructions that may deviate from standard aarch64, so it is possible
for all routines to run on all armv8 hardware. Select all
implementations in the benchmarks and tests.
* sysdeps/aarch64/multiarch/ifunc-impl-list.c
(__libc_ifunc_impl_list): Unconditionally select thunderx
routine for testing.
GCC 7 changed the definition of max_align_t on i386:
https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=9b5c49ef97e63cc63f1ffa13baf771368105ebe2
As a result, glibc malloc no longer returns memory blocks which are as
aligned as max_align_t requires.
This causes malloc/tst-malloc-thread-fail to fail with an error like this
one:
error: allocation function 0, size 144 not aligned to 16
This patch moves the MALLOC_ALIGNMENT definition to <malloc-alignment.h>
and increases the malloc alignment to 16 for i386.
[BZ #21120]
* malloc/malloc-internal.h (MALLOC_ALIGNMENT): Moved to ...
* sysdeps/generic/malloc-alignment.h: Here. New file.
* sysdeps/i386/malloc-alignment.h: Likewise.
* sysdeps/generic/malloc-machine.h: Include <malloc-alignment.h>.
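A quick standalone check of the property the bug is about, namely that malloc
results are aligned to _Alignof (max_align_t) (illustrative test, not glibc
code):

  #include <stdalign.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    size_t align = alignof (max_align_t);
    for (size_t size = 1; size <= 160; size += 7)
      {
        void *p = malloc (size);
        if ((uintptr_t) p % align != 0)
          printf ("size %zu not aligned to %zu\n", size, align);
        free (p);
      }
    printf ("required alignment: %zu\n", align);
    return 0;
  }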
This patch improves the default posix implementation of posix_spawn{p}
and aligns it with the Linux one. The main idea is to fix some issues
already fixed in the Linux code and to stop using vfork internally
(a source of various bug reports). In short:
- It makes POSIX_SPAWN_USEVFORK a no-op. Since the process that
  actually spawns the new process does not share memory with the
  parent (unlike with vfork), this fixes BZ#14750 for this
  implementation.
- It uses a pipe to correctly obtain the error status upon failure
  of execution (BZ#18433).
- It correctly enables/disables asynchronous cancellation (checked
  with nptl/tst-exec5.c).
- It correctly disables/enables signal handling.
Using this version instead of the Linux one shows only one regression,
posix/tst-spawn3, because the use of pipe2 increases the total
number of file descriptors.
* sysdeps/posix/spawni.c (__spawni_child): New function.
(__spawni): Rename to __spawnix.
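A self-contained sketch of the pipe trick described above: a close-on-exec
pipe lets the parent learn whether exec failed in the child. This illustrates
the mechanism only; it is not glibc's __spawnix.

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int
  main (void)
  {
    int pfd[2];
    if (pipe2 (pfd, O_CLOEXEC) != 0)
      return 1;

    pid_t pid = fork ();
    if (pid == 0)
      {
        char *const argv[] = { (char *) "/nonexistent", NULL };
        execv (argv[0], argv);
        /* Only reached if execv failed: report errno to the parent.
           The pipe end is close-on-exec, so a successful exec simply
           closes it.  */
        int err = errno;
        while (write (pfd[1], &err, sizeof err) < 0 && errno == EINTR)
          continue;
        _exit (127);
      }

    close (pfd[1]);
    int err = 0;
    if (read (pfd[0], &err, sizeof err) == (ssize_t) sizeof err)
      printf ("exec failed in the child: %s\n", strerror (err));
    else
      printf ("exec succeeded\n");
    close (pfd[0]);
    waitpid (pid, NULL, 0);
    return 0;
  }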
This patch adds tgmath.h support for _Float128, so eliminating the
awkward caveat in NEWS about the type not being supported there. This
does inevitably increase the size of macro expansions (which grows
particularly fast when you have nested calls to tgmath.h macros), but
only when _Float128 is supported and the declarations of _Float128
interfaces are visible; otherwise the expansions are unchanged.
Tested for x86_64 and arm.
* math/tgmath.h: Include <bits/libc-header-start.h> and
<bits/floatn.h>.
(__TGMATH_F128): New macro.
(__TGMATH_CF128): Likewise.
(__TGMATH_UNARY_REAL_ONLY): Use __TGMATH_F128.
(__TGMATH_UNARY_REAL_RET_ONLY): Likewise.
(__TGMATH_BINARY_FIRST_REAL_ONLY): Likewise.
(__TGMATH_BINARY_FIRST_REAL_STD_ONLY): New macro.
(__TGMATH_BINARY_REAL_ONLY): Use __TGMATH_F128.
(__TGMATH_BINARY_REAL_STD_ONLY): New macro.
(__TGMATH_BINARY_REAL_RET_ONLY): Use __TGMATH_F128.
(__TGMATH_TERNARY_FIRST_SECOND_REAL_ONLY): Likewise.
(__TGMATH_TERNARY_REAL_ONLY): Likewise.
(__TGMATH_TERNARY_FIRST_REAL_RET_ONLY): Likewise.
(__TGMATH_UNARY_REAL_IMAG): Use __TGMATH_CF128.
(__TGMATH_UNARY_IMAG): Use __TGMATH_F128.
(__TGMATH_UNARY_REAL_IMAG_RET_REAL): Use __TGMATH_CF128.
(__TGMATH_BINARY_REAL_IMAG): Likewise.
(nexttoward): Use __TGMATH_BINARY_FIRST_REAL_STD_ONLY.
[__USE_MISC] (scalb): Use __TGMATH_BINARY_REAL_STD_ONLY.
* math/gen-tgmath-tests.py (Type.init_types): Enable _FloatN and
_FloatNx types if the corresponding HUGE_VAL macros are defined.
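As a reminder of what these macros provide, a small usage example: <tgmath.h>
picks the function variant from the argument type, and with this patch
_Float128 arguments (where the type and its interfaces are available)
dispatch as well.

  #include <complex.h>
  #include <stdio.h>
  #include <tgmath.h>

  int
  main (void)
  {
    float f = 0.5f;
    double d = 0.5;
    long double ld = 0.5L;
    double complex z = 0.5 + 0.5 * I;

    /* These calls expand to sinf, sin, sinl and csin respectively.  */
    printf ("%f %f %Lf %f\n",
            (double) sin (f), sin (d), sin (ld), creal (sin (z)));
    return 0;
  }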
As a GNU extension, for _GNU_SOURCE glibc's complex.h provides a
clog10 function and tgmath.h supports complex arguments to the log10
macro. However, tgmath.h uses __clog10 not clog10 in defining the
macro.
There is no namespace reason (ignoring the block-scope namespace
issues that would apply equally to *every* function called by tgmath.h
macros) for using __clog10 here, since this is only for _GNU_SOURCE so
clog10 is always visible when this macro definition is used.
Furthermore, __clog10f128 is not exported, so supporting _Float128 in
tgmath.h implies using clog10 not __clog10 there. (__clog10 and
clog10 aren't used in libstdc++ either, although that library would
have a good case for using the __clog10 reserved-namespace export: the
standard C++ library includes log10 of a complex number.) This patch
duly changes the header to use clog10, and enables tests of the macro
for complex arguments.
Tested for x86_64.
* math/tgmath.h [__USE_GNU] (log10): Use clog10 not __clog10.
* math/gen-tgmath-tests.py (Tests.add_all_tests): Test log10 for
complex arguments.
The tgmath.h totalorder and totalordermag macros wrongly return a
floating-point type. They should return int, like the underlying
functions. This patch fixes them accordingly, updating tests
including enabling tests of those functions from gen-tgmath-tests.py.
Tested for x86_64.
[BZ #21687]
* math/tgmath.h (__TGMATH_BINARY_REAL_RET_ONLY): New macro.
(totalorder): Use it.
(totalordermag): Likewise.
* math/gen-tgmath-tests.py (Tests.add_all_tests): Enable tests of
totalorder and totalordermag.
* math/test-tgmath.c (F(compile_test)): Do not call totalorder or
totalordermag in arguments of calls to those functions.
(NCALLS): Change to 134.
The tgmath.h macros for functions with integer return types generate
unnecessary casts to the return type. Since in those cases the return
type does not depend on the argument type, all the cases in the
conditional expressions already have the right type, and no casts are
needed; this patch removes them.
Tested for x86_64.
* math/tgmath.h (__TGMATH_UNARY_REAL_RET_ONLY): Do not take or
cast to return type argument.
(__TGMATH_TERNARY_FIRST_REAL_RET_ONLY): Likewise.
(lrint): Update call to __TGMATH_UNARY_REAL_RET_ONLY.
(llrint): Likewise.
(lround): Likewise.
(llround): Likewise.
(ilogb): Likewise.
(llogb): Likewise.
(fromfp): Update call to __TGMATH_TERNARY_FIRST_REAL_RET_ONLY.
(ufromfp): Likewise.
(fromfpx): Likewise.
(ufromfpx): Likewise.
As noted in bug 21607, NO_LONG_DOUBLE conditionals in libm tests are
no longer effective. For most tests this is harmless: the conditionals
were only present because long double functions were not declared with
_LIBC defined, and _LIBC is no longer defined for building most tests. For
the few where this is actually relevant to the test, testing
LDBL_MANT_DIG > DBL_MANT_DIG is more appropriate as that limits the
test to public APIs. This patch fixes the tests accordingly.
Tested for x86_64 and arm.
[BZ #21607]
* math/basic-test.c [!NO_LONG_DOUBLE]: Change conditionals to
[LDBL_MANT_DIG > DBL_MANT_DIG].
* math/bug-nextafter.c [!NO_LONG_DOUBLE]: Remove conditionals.
* math/bug-nexttoward.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-math-isinff.cc [!NO_LONG_DOUBLE]: Likewise.
* math/test-math-iszero.cc [!NO_LONG_DOUBLE]: Likewise.
* math/test-nan-overflow.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-nan-payload.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-nearbyint-except-2.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-nearbyint-except.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-powl.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-signgam-finite-c99.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-signgam-finite.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-signgam-main.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-snan.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-tgmath-ret.c [!NO_LONG_DOUBLE]: Likewise.
* math/test-tgmath.c: Include <float.h>.
[!NO_LONG_DOUBLE]: Change conditionals to [LDBL_MANT_DIG >
DBL_MANT_DIG].
* math/test-tgmath2.c: Include <float.h>.
[!NO_LONG_DOUBLE]: Change conditionals to [LDBL_MANT_DIG >
DBL_MANT_DIG].
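The new guard is a public-API-only check; a trivial example of the pattern
the tests now use:

  #include <float.h>
  #include <stdio.h>

  int
  main (void)
  {
  #if LDBL_MANT_DIG > DBL_MANT_DIG
    puts ("long double is wider than double: long double cases apply");
  #else
    puts ("long double == double: long double-only cases are skipped");
  #endif
    return 0;
  }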
This patch adds a more thorough test of tgmath.h macros, verifying
both the return type and the function called for all the cases of
valid argument types. (Cases with current problems - I've just filed
four bugs - are disabled or omitted pending fixing those problems.)
The test uses a Python generator (works with both Python 2 and 3) to
generate a C file which is then built and run as a test in the usual
way (and that C file includes its own dummy definitions of libm
functions similar to existing tgmath.h tests). The motivation is to
make it easier to add tests of tgmath.h for _Float128 when adding
tgmath.h support for that type; the _FloatN / _FloatNx support is
present in the script, but disabled until the tgmath.h support is
written.
Tested for x86_64, and for arm to check things in the long double =
double case. (In that case, it's OK to call either double or long
double functions when the selected type is double or long double, as
long as the return type of the macro is exactly correct.)
* math/gen-tgmath-tests.py: New file.
* math/Makefile [PYTHON] (tests): Add test-tgmath3.
[PYTHON] (generated): Add test-tgmath3.c.
[PYTHON] (CFLAGS-test-tgmath3.c): New variable.
[PYTHON] ($(objpfx)test-tgmath3.c): New rule.