In 1e5834c38a ("Refactor Linux ipc_priv header") a different
approach to passing __IPC_64 as zero was introduced. The hppa kernel ABI
requires passing __IPC_64 as zero since it does not set
CONFIG_ARCH_WANT_IPC_PARSE_VERSION in the kernel.
Checked on hppa-linux-gnu with some adjustments to avoid BZ#21016
(basically by removing hppa compat implementations and adjusting
required headers).
* sysdeps/unix/sysv/linux/hppa/ipc_priv.h: New file.
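A rough sketch of what such a minimal header looks like (not necessarily the
exact file contents) is simply an include plus the override:

  /* Sketch of a minimal sysdeps/unix/sysv/linux/hppa/ipc_priv.h: the
     hppa kernel does not parse IPC version bits, so __IPC_64 must be
     passed as zero.  */
  #include <sys/ipc.h>	/* For __key_t.  */
  #define __IPC_64	0x0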
[BZ #20098]
* sysdeps/hppa/dl-fptr.c (_dl_read_access_allowed): New.
(_dl_lookup_address): Return address if it is not consistent with
being a linker defined function pointer. Likewise, return address
if address and function descriptor addresses are not accessible.
On AVX machines with XGETBV (ECX == 1) like Skylake processors,
(gdb) disass _dl_runtime_resolve_avx_opt
Dump of assembler code for function _dl_runtime_resolve_avx_opt:
0x0000000000015890 <+0>: push %rax
0x0000000000015891 <+1>: push %rcx
0x0000000000015892 <+2>: push %rdx
0x0000000000015893 <+3>: mov $0x1,%ecx
0x0000000000015898 <+8>: xgetbv
0x000000000001589b <+11>: mov %eax,%r11d
0x000000000001589e <+14>: pop %rdx
0x000000000001589f <+15>: pop %rcx
0x00000000000158a0 <+16>: pop %rax
0x00000000000158a1 <+17>: and $0x4,%r11d
0x00000000000158a5 <+21>: bnd je 0x16200 <_dl_runtime_resolve_sse_vex>
End of assembler dump.
is slower than:
(gdb) disass _dl_runtime_resolve_avx_slow
Dump of assembler code for function _dl_runtime_resolve_avx_slow:
0x0000000000015850 <+0>: vorpd %ymm0,%ymm1,%ymm8
0x0000000000015854 <+4>: vorpd %ymm2,%ymm3,%ymm9
0x0000000000015858 <+8>: vorpd %ymm4,%ymm5,%ymm10
0x000000000001585c <+12>: vorpd %ymm6,%ymm7,%ymm11
0x0000000000015860 <+16>: vorpd %ymm8,%ymm9,%ymm9
0x0000000000015865 <+21>: vorpd %ymm10,%ymm11,%ymm10
0x000000000001586a <+26>: vpcmpeqd %xmm8,%xmm8,%xmm8
0x000000000001586f <+31>: vorpd %ymm9,%ymm10,%ymm10
0x0000000000015874 <+36>: vptest %ymm10,%ymm8
0x0000000000015879 <+41>: bnd jae 0x158b0 <_dl_runtime_resolve_avx>
0x000000000001587c <+44>: vzeroupper
0x000000000001587f <+47>: bnd jmpq 0x16200 <_dl_runtime_resolve_sse_vex>
End of assembler dump.
(gdb)
since xgetbv takes many more cycles than single-cycle operations like
vorpd/vpcmpeqd/vptest. _dl_runtime_resolve_opt should be used only with
AVX512 where AVX512 instructions lead to lower CPU frequency on Skylake
server.
[BZ #21871]
* sysdeps/x86/cpu-features.c (init_cpu_features): Set
bit_arch_Use_dl_runtime_resolve_opt only with AVX512F.
(cherry picked from commit d2cf37c0a2)
1. Fix the results for negative subnormals by ignoring the sign when
normalizing the value.
2. Fix the output when the high part is a power of 2 and the low part
is a nonzero number with opposite sign. This fix is based on commit
380bd0fd24.
After applying this patch, logbl() tests pass cleanly on POWER >= 7.
Tested on powerpc, powerpc64 and powerpc64le.
[BZ #21280]
* sysdeps/powerpc/power7/fpu/s_logbl.c (__logbl): Ignore the
sign of subnormals and adjust the exponent of a power of 2 down
when the low part has the opposite sign.
(cherry picked from commit c064f6a613)
This change forces realignment of the stack pointer in __tls_get_addr, so
that binaries compiled by GCCs older than GCC 4.9:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58066
continue to work even when vector instructions that require the ABI stack
realignment are used in glibc.
__tls_get_addr_slow is added to handle the slow paths in the default
implementation of __tls_get_addr in elf/dl-tls.c. The new __tls_get_addr
calls __tls_get_addr_slow after realigning the stack. Internal calls
within ld.so go directly to the default implementation of __tls_get_addr
because they do not need stack realignment.
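In C terms the wrapper idea can be sketched as follows (illustrative only;
glibc's actual wrapper is the new assembly file, and the tls_index layout
shown here is an assumption):

  /* Sketch: realign the stack on entry before reaching code that may
     use aligned vector instructions; the real tls_get_addr.S does this
     by realigning %rsp by hand and then calling __tls_get_addr_slow.  */
  typedef struct { unsigned long ti_module, ti_offset; } tls_index; /* assumed layout */

  extern void *__tls_get_addr_slow (tls_index *ti);	/* slow path in elf/dl-tls.c */

  __attribute__ ((force_align_arg_pointer))	/* GCC's C-level analogue */
  void *
  __tls_get_addr (tls_index *ti)
  {
    return __tls_get_addr_slow (ti);
  }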
[BZ #21609]
* sysdeps/x86_64/Makefile (sysdep-dl-routines): Add tls_get_addr.
(gen-as-const-headers): Add rtld-offsets.sym.
* sysdeps/x86_64/dl-tls.c: New file.
* sysdeps/x86_64/rtld-offsets.sym: Likewise.
* sysdeps/x86_64/tls_get_addr.S: Likewise.
* sysdeps/x86_64/dl-tls.h: Add multiple inclusion guards.
* sysdeps/x86_64/tlsdesc.sym (TI_MODULE_OFFSET): New.
(TI_OFFSET_OFFSET): Likewise.
(cherry picked from commit 031e519c95)
Since commit d957c4d3fa (i386: Compile
rtld-*.os with -mno-sse -mno-mmx -mfpmath=387), vector intrinsics can
no longer be used in ld.so, even if the compiled code never makes it
into the final ld.so link. This commit adds the missing IS_IN (libc)
guard to the SSE 4.2 strcspn implementation, so that it can be used from
ld.so in the future.
(cherry picked from commit 69052a3a95)
The LD_HWCAP_MASK environment variable may alter the selection of
function variants for some architectures. For AT_SECURE processes this
means that if an outdated routine has a bug that would otherwise not
affect newer platforms by default, LD_HWCAP_MASK allows that bug
to be exploited.
To be on the safe side, ignore and disable LD_HWCAP_MASK for setuid
binaries.
[BZ #21209]
* elf/rtld.c (process_envvars): Ignore LD_HWCAP_MASK for
AT_SECURE processes.
* sysdeps/generic/unsecvars.h: Add LD_HWCAP_MASK.
* elf/tst-env-setuid.c (test_parent): Test LD_HWCAP_MASK.
(test_child): Likewise.
* elf/Makefile (tst-env-setuid-ENV): Add LD_HWCAP_MASK.
(cherry picked from commit 1c1243b6fc)
x86_64 libmvec tests have been failing to build lately with GCC
mainline with -Wuninitialized errors, and Markus Trippelsdorf traced
this to an aliasing issue
<https://sourceware.org/ml/libc-alpha/2017-03/msg00169.html>.
This patch fixes the aliasing issue, so that the vectors-of-pointers
are initialized using a union instead of pointer casts. This also
fixes the testsuite build failures with GCC mainline.
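The shape of the fix can be sketched like this (illustrative names, not the
actual INIT_VEC_PTRS_LOOP macro):

  #include <stdint.h>

  /* Store the pointers into an object of the integer vector type
     through a union instead of casting the vector's address to a
     pointer type, which violated strict-aliasing rules.  */
  typedef int64_t vec_int_t __attribute__ ((vector_size (32)));

  static vec_int_t
  vec_of_pointers (double *base)
  {
    union { vec_int_t v; double *a[4]; } u;
    for (int i = 0; i < 4; i++)
      u.a[i] = &base[i];
    return u.v;
  }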
Tested for x86_64 (full testsuite with GCC 6; testsuite build with GCC
mainline with build-many-glibcs.py).
* sysdeps/x86/fpu/test-math-vector-sincos.h (INIT_VEC_PTRS_LOOP):
Use a union when storing pointers.
(VECTOR_WRAPPER_fFF_2): Do not take address of integer vector and
cast result when passing to INIT_VEC_PTRS_LOOP.
(VECTOR_WRAPPER_fFF_3): Likewise.
(VECTOR_WRAPPER_fFF_4): Likewise.
(cherry picked from commit ffe308e4fc)
This patch fixes the regression added by 23d2770 in the final address
overflow calculation. The subtraction of the considered size (16)
at line 120 is in the wrong place: for sizes less than 16 the subsequent
overflow check does not take an invalid size into consideration (since
the subtraction result is negative). Also, the lea instruction
does not set the carry flag (CF) that is used by the subsequent jbe
to check for overflow.
The fix is to follow the x86_64 logic from 3daef2c, where the overflow
is checked first and a sub instruction is issued. If the resulting
size is negative, CF is set by the sub instruction and a NULL
result is returned. The patch also adds tests covering the cases
reported in the bug report.
Checked on i686-linux-gnu and x86_64-linux-gnu.
* string/test-memchr.c (do_test): Add BZ#21182 checks for address
near end of a page.
* sysdeps/i386/i686/multiarch/memchr-sse2.S (__memchr): Fix
overflow calculation.
Cherry-pick of 3abeeec5f4.
On Skylake server, AVX512 load/store instructions in memcpy/memset may
lead to lower CPU turbo frequency in certain situations. Use of AVX2
in memcpy/memset has been observed to have improved overall performance
in many workloads due to the higher frequency.
Since AVX512ER is unique to Xeon Phi, this patch sets Prefer_No_AVX512
if AVX512ER isn't available so that AVX2 versions of memcpy/memset are
used on Skylake server.
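The resulting logic in init_cpu_features is roughly the following (a sketch
using the existing CPU_FEATURES_CPU_P helper; the surrounding code may
differ):

  /* AVX512ER is unique to Xeon Phi, so prefer the AVX2 memcpy/memset
     variants whenever AVX512ER is not present.  */
  if (!CPU_FEATURES_CPU_P (cpu_features, AVX512ER))
    cpu_features->feature[index_arch_Prefer_No_AVX512]
      |= bit_arch_Prefer_No_AVX512;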
[BZ #21396]
* sysdeps/x86/cpu-features.c (init_cpu_features): Set
Prefer_No_AVX512 if AVX512ER isn't available.
* sysdeps/x86/cpu-features.h (bit_arch_Prefer_No_AVX512): New.
(index_arch_Prefer_No_AVX512): Likewise.
* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Don't use
AVX512 version if Prefer_No_AVX512 is set.
* sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk):
Likewise.
* sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Likewise.
* sysdeps/x86_64/multiarch/memmove_chk.S (__memmove_chk):
Likewise.
* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise.
* sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk):
Likewise.
* sysdeps/x86_64/multiarch/memset.S (memset): Likewise.
* sysdeps/x86_64/multiarch/memset_chk.S (__memset_chk):
Likewise.
(cherry picked from commit 4cb334c4d6)
AVX512ER won't be implemented in any Xeon processors and will be in
all Xeon Phi processors. Don't check CPU model number when setting
Prefer_No_VZEROUPPER for Xeon Phi. Instead, set Prefer_No_VZEROUPPER
if AVX512ER is available. It works with current and future Xeon Phi
and non-Xeon Phi processors.
* sysdeps/x86/cpu-features.c (init_cpu_features): Set
Prefer_No_VZEROUPPER if AVX512ER is available.
* sysdeps/x86/cpu-features.h
(bit_cpu_AVX512PF): New.
(bit_cpu_AVX512ER): Likewise.
(bit_cpu_AVX512CD): Likewise.
(bit_cpu_AVX512BW): Likewise.
(bit_cpu_AVX512VL): Likewise.
(index_cpu_AVX512PF): Likewise.
(index_cpu_AVX512ER): Likewise.
(index_cpu_AVX512CD): Likewise.
(index_cpu_AVX512BW): Likewise.
(index_cpu_AVX512VL): Likewise.
(reg_AVX512PF): Likewise.
(reg_AVX512ER): Likewise.
(reg_AVX512CD): Likewise.
(reg_AVX512BW): Likewise.
(reg_AVX512VL): Likewise.
(cherry picked from commit 1c53cb49de)
On Skylake server, _dl_runtime_resolve_avx512_opt is used to preserve
the first 8 vector registers. The code layout is
  if only %xmm0 - %xmm7 registers are used
     preserve %xmm0 - %xmm7 registers
  if only %ymm0 - %ymm7 registers are used
     preserve %ymm0 - %ymm7 registers
  preserve %zmm0 - %zmm7 registers
Branch prediction always executes the fallthrough code path to preserve
%zmm0 - %zmm7 registers speculatively, even though only %xmm0 - %xmm7
registers are used. This leads to lower CPU frequency on Skylake
server. This patch changes the fallthrough code path to preserve
%xmm0 - %xmm7 registers instead:
  if whole %zmm0 - %zmm7 registers are used
     preserve %zmm0 - %zmm7 registers
  if only %ymm0 - %ymm7 registers are used
     preserve %ymm0 - %ymm7 registers
  preserve %xmm0 - %xmm7 registers
Tested on Skylake server.
[BZ #21258]
* sysdeps/x86_64/dl-trampoline.S (_dl_runtime_resolve_opt):
Define only if _dl_runtime_resolve is defined to
_dl_runtime_resolve_sse_vex.
* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_opt):
Fallthrough to _dl_runtime_resolve_sse_vex.
(cherry picked from commit c15f8eb50c)
When glibc is built with -fstack-check, trying to use posix_spawn can
lead to segfaults due to gcc internally probing stack memory too far.
The new spawn API will allocate a minimum of 1 page, but the stack
checking logic might probe a couple of pages. When it tries to walk
them, everything falls apart.
The gcc internal docs [1] state that the default checking interval is one
page, which means we need two pages (the current one, and the next
one probed). No target currently defines it larger.
Further, it mentions that the default minimum stack size needed to
recover from an overflow is 4/8KiB for sjlj or 8/12KiB for others.
But some Linux targets (like mips and ppc) go up to 16KiB (and some
non-Linux targets go up to 24KiB).
Let's create each child with a minimum of 32KiB slack space to support
them all, and give us future breathing room.
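In the spawn helper this amounts to sizing the child stack along these lines
(a sketch; ALIGN_UP and GLRO(dl_pagesize) are glibc internals, and arg_size is
an illustrative name):

  /* Reserve the argument copy plus 32KiB of slack, rounded up to whole
     pages, so stack-checking probes stay inside the child's mapping.  */
  size_t stack_slack = 32 * 1024;
  size_t stack_size = ALIGN_UP (arg_size + stack_slack, GLRO(dl_pagesize));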
No test is added since existing tests already crash. Even a simple call is
enough to trigger the problem:
char *argv[] = { "/bin/ls", NULL };
posix_spawn(NULL, "/bin/ls", NULL, NULL, argv, NULL);
[1] https://gcc.gnu.org/onlinedocs/gcc-6.3.0/gccint/Stack-Checking.html
(cherry picked from commit 21f042c804)
The ia64-specific clone2 call expects the base of the stack mapping and
the stack size as separate arguments, not an initial stack value as on other
stack-grows-down architectures. Reuse the stack-grows-up macro so we
pass in the right stack base.
Reported-by: Matt Turner <mattst88@gentoo.org>
(cherry picked from commit ddc3fb3334)
When glibc is compiled with gcc 6.2 that has been configured to
default to PIC/PIE, the static version of __mempcpy_chk is not built,
as the test is done on PIC instead of SHARED. Fix the test to check for
SHARED, as is done for similar functions such as __memcpy_chk.
2017-03-12 Mike Frysinger <vapier@gentoo.org>
* sysdeps/x86_64/mempcpy_chk.S (__mempcpy_chk): Check for SHARED
instead of PIC.
(cherry picked from commit fbe355fbd1)
An IFUNC relocation against a definition in an unrelocated shared library
will lead to a segfault when the IFUNC function is called. This
patch allows such IFUNC relocations with a warning. This isn't
a real fix for
https://sourceware.org/bugzilla/show_bug.cgi?id=21041
It simply allows the program to load. The program will segfault
when longjmp is called.
* sysdeps/i386/dl-machine.h (elf_machine_rel): Replace
_dl_fatal_printf with _dl_error_printf for IFUNC relocation
against unrelocated shared library.
* sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise.
A setxid program that uses a glibc with tunables disabled may pass on
GLIBC_TUNABLES as is to its child processes. If the child process
ends up using a different glibc that has tunables enabled, it will end
up getting access to unsafe tunables. To fix this, remove
GLIBC_TUNABLES from the environment for setxid processes.
* sysdeps/generic/unsecvars.h: Add GLIBC_TUNABLES.
* elf/tst-env-setuid-tunables.c
(test_child_tunables)[!HAVE_TUNABLES]: Verify that
GLIBC_TUNABLES is removed in a setgid process.
Since memset-vec-unaligned-erms.S has VDUP_TO_VEC0_AND_SET_RETURN at
function entry, memset optimized for AVX2 and AVX512 will always use
ymm/zmm registers. VZEROUPPER should be placed before ret in
L(stosb):
movq %rdx, %rcx
movzbl %sil, %eax
movq %rdi, %rdx
rep stosb
movq %rdx, %rax
ret
since it can be reached from
L(stosb_more_2x_vec):
cmpq $REP_STOSB_THRESHOLD, %rdx
ja L(stosb)
[BZ #21081]
* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
(L(stosb)): Add VZEROUPPER before ret.
The commit documents the ownership rules around 'struct pthread' and
when a thread can read or write to the descriptor. With those ownership
rules in place it becomes obvious that pd->stopped_start should not be
touched in several of the paths during thread startup, particularly so
for detached threads. In the case of detached threads, between the time
the thread is created by the OS kernel and the creating thread checks
pd->stopped_start, the detached thread might have already exited and the
memory for pd unmapped. As a regression test we add a simple test which
exercises this exact case by quickly creating detached threads with
large enough stacks to ensure the thread stack cache is bypassed and the
stacks are unmapped. Before the fix the testcase segfaults, after the
fix it works correctly and completes without issue.
For a detailed discussion see:
https://www.sourceware.org/ml/libc-alpha/2017-01/msg00505.html
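A stripped-down version of such a test looks roughly like this (sketch only;
the real test uses the glibc test harness and stricter error handling):

  #include <pthread.h>
  #include <stdlib.h>

  static void *
  thread_func (void *arg)
  {
    return NULL;
  }

  int
  main (void)
  {
    pthread_attr_t attr;
    pthread_attr_init (&attr);
    pthread_attr_setdetachstate (&attr, PTHREAD_CREATE_DETACHED);
    /* A large stack bypasses the thread stack cache, so the stack is
       really unmapped as soon as the detached thread exits.  */
    pthread_attr_setstacksize (&attr, 16 * 1024 * 1024);
    for (int i = 0; i < 100; i++)
      {
        pthread_t thr;
        if (pthread_create (&thr, &attr, thread_func, NULL) != 0)
          exit (1);
      }
    return 0;
  }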
The problem is basically that sys/ucontext.h defines R0..R15,
which happens to conflict with some packages, such as Firefox, when
trying to build on SH.
The very same problem existed on arm back then [1] and it was fixed by
renaming R0..R15 to REG_R0..REG_R15. This patch employs a similar
strategy for SH.
Checked on sh4-linux-gnu with run-built-tests=no, and I also got reports
that it fixes the Firefox build on Debian sh4.
* sysdeps/unix/sysv/linux/sh/sh3/ucontext_i.sym: Use new REG_R*
constants instead of the old R* ones.
* sysdeps/unix/sysv/linux/sh/sh4/ucontext_i.sym: Likewise.
* sysdeps/unix/sysv/linux/sh/sys/ucontext.h (NGPREG): Rename...
(NGREG): ... to this, to fit in with other architectures.
(gpregset_t): Use new NGREG macro.
[__USE_GNU]: Remove condition; all architectures other than tile
are unconditional.
(R*): Rename to REG_R*.
I noticed that some libm-test-ulps files still had long-obsolete
entries for *_tonearest functions, which will no longer be used since
functions with FE_TONEAREST explicitly set aren't tested separately
from those functions with it as the default rounding mode any more.
This patch removes those obsolete entries. However, as they are a
sign of libm-test-ulps not having been regenerated from scratch for a
long time, I strongly advise people testing on those platforms to
remove / truncate the libm-test-ulps file, run "make regen-ulps" and
commit the regenerated-from-scratch file. (Ideally any failures of
libm tests still present after regeneration would be investigated /
fixed - there are several open "math" bugs spread across these
platforms - but simply regenerating from scratch improves things.)
* sysdeps/hppa/fpu/libm-test-ulps: Remove *_tonearest entries.
* sysdeps/ia64/fpu/libm-test-ulps: Likewise.
* sysdeps/m68k/m680x0/fpu/libm-test-ulps: Likewise.
* sysdeps/microblaze/libm-test-ulps: Likewise.
* sysdeps/sh/libm-test-ulps: Likewise.
This patch adjusts s390 specific lock elision code after review
of the following patches:
-S390: Use own tbegin macro instead of __builtin_tbegin.
(8bfc4a2ab4)
-S390: Use new __libc_tbegin_retry macro in elision-lock.c.
(53c5c3d5ac)
-S390: Optimize lock-elision by decrementing adapt_count at unlock.
(dd037fb3df)
The futex value is not tested before starting a transaction,
__glibc_likely is used instead of __builtin_expect and comments
are adjusted.
ChangeLog:
* sysdeps/unix/sysv/linux/s390/htm.h: Adjust comments.
* sysdeps/unix/sysv/linux/s390/elision-unlock.c: Likewise.
* sysdeps/unix/sysv/linux/s390/elision-lock.c: Adjust comments.
(__lll_lock_elision): Do not test futex before starting a
transaction. Use __glibc_likely instead of __builtin_expect.
* sysdeps/unix/sysv/linux/s390/elision-trylock.c: Adjust comments.
(__lll_trylock_elision): Do not test futex before starting a
transaction. Use __glibc_likely instead of __builtin_expect.
MicroBlaze had clock_* functions exported from librt in glibc 2.18 and
2.19, as confirmed in
<https://sourceware.org/ml/libc-alpha/2017-01/msg00369.html>, and they
then disappeared in 2.20, presumably as a result of the fix
<https://sourceware.org/ml/libc-alpha/2014-02/msg00598.html> for a
Versions.def bug that had resulted in their unintended inclusion in
2.18 (followed by removal of the Versions.def mechanism that allowed
such bugs).
As they were released in that library, they should be considered part
of the GLIBC_2.18 ABI and so restored for the sake of any binaries
that expect them in that library. This patch restores them by adding
a MicroBlaze version of clock-compat.c that overrides SHLIB_COMPAT.
Tested (compilation only) with build-many-glibcs.py (where this fixes
the librt ABI test failure; elf/check-execstack still fails and still
needs architecture maintainer attention to fix it or XFAIL it with an
appropriate explanatory comment).
[BZ #21061]
* sysdeps/unix/sysv/linux/microblaze/clock-compat.c: New file.
Bug 21047 reports that the clang assembler disallows the ARM
implementations of _FPU_GETCW and _FPU_SETCW.
These are deliberately written the way they are, using generic
coprocessor instructions (from the days when VFP was just one possible
coprocessor for ARM) that have the right encodings, to handle the case
of the instructions being used runtime-conditionally inside glibc,
where use of these macros is not meant to result in either the
assembler requiring VFP to be enabled at assembly time or in it
marking the object as using VFP. However, more recent ARM ARM
versions have restricted the definitions of the coprocessor
instructions and reportedly the clang assembler follows that in
disallowing those names for VFP instructions.
In the non-__SOFTFP__ case - which in fact is the only case where
these macro definitions can be used outside the build of glibc itself
- using VFP instruction names is of course fine, since we know that
VFP is enabled for that compilation. Thus, this patch uses the
current VFP names for these instructions in that case to improve
compatibility for this header file.
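In the non-__SOFTFP__ case the macros then use the VFP mnemonics to access
the FPSCR, roughly as follows (a sketch of the shape of the definitions, not a
verbatim copy of fpu_control.h):

  /* VFP-mnemonic form, usable because VFP is known to be enabled for
     this compilation.  */
  #define _FPU_GETCW(cw) \
    __asm__ __volatile__ ("vmrs %0, fpscr" : "=r" (cw))
  #define _FPU_SETCW(cw) \
    __asm__ __volatile__ ("vmsr fpscr, %0" : : "r" (cw))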
Tested for hard-float and soft-float builds of glibc, including that
installed stripped shared libraries are unchanged by the patch.
[BZ #21047]
* sysdeps/arm/fpu_control.h [!__SOFTFP__] (_FPU_GETCW): Use VFP
name for instruction.
[!__SOFTFP__] (_FPU_SETCW): Likewise.
The soft-float powerpc version of swapcontext does not restore the
signal mask, resulting in stdlib/tst-setcontext2 failing:
after getcontext
after setcontext
after swapcontext
FAIL: SIGUSR2 is blocked after swapcontext.
This patch fixes this by adjusting the arguments passed to
__sigprocmask so that it restores the saved signal mask as well as
saving the existing one. (For hard-float, this code is only used for
a compat symbol, not for the current version of swapcontext.)
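In C terms the corrected call is equivalent to the following (illustrative;
the actual change is in the powerpc32 assembly):

  /* Save the current mask into the outgoing context (oucp) and install
     the mask recorded in the context being resumed (ucp).  */
  __sigprocmask (SIG_SETMASK, &ucp->uc_sigmask, &oucp->uc_sigmask);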
Tested for soft-float powerpc.
[BZ #21045]
* sysdeps/unix/sysv/linux/powerpc/powerpc32/swapcontext-common.S
(__CONTEXT_FUNC_NAME): Pass address of signal mask to be restored
to __sigprocmask.
As was done in b224637928, check for large size causing an overflow
in the loop that walks over the array.
Branching out of line here is the fastest approach for handling this
problem, since tile can bundle the instructions to compute the branch
test in parallel with doing the required memchr loop setup computation.
Unfortunately, the existing saturating ops (e.g. tilegx addxsc) are
all signed saturating ops, so they don't help with unsigned saturation.
In 1e5834c38a ("Refactor Linux ipc_priv header") a different
approach to passing __IPC_64 as zero was created. The tile
architecture also needs to pass __IPC_64 as zero since it does
not set CONFIG_ARCH_WANT_IPC_PARSE_VERSION in the kernel.
So create a minimal ipc_priv.h that specifies __IPC_64 as zero.
Robust mutexes acquired at the time of a call to fork() do not remain
acquired by the forked child process. We have to clear the list of
acquired robust mutexes before registering this list with the kernel;
otherwise, if some of the robust mutexes are process-shared, the parent
process can alter the child's robust mutex list, which can lead to
deadlocks or even modification of memory that may not be occupied by a
mutex anymore.
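The core of the idea can be sketched in standalone form as follows
(illustrative; glibc does this inside __libc_fork using its internal thread
descriptor and syscall macros):

  #include <linux/futex.h>	/* struct robust_list_head */
  #include <sys/syscall.h>
  #include <unistd.h>

  /* In the child after fork: empty the robust list before registering
     it with the kernel, so no parent-owned mutexes are referenced.  */
  static void
  reset_robust_list (struct robust_list_head *head)
  {
    head->list.next = &head->list;	/* empty circular list */
    syscall (SYS_set_robust_list, head, sizeof (*head));
  }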
[BZ #19402]
* sysdeps/nptl/fork.c (__libc_fork): Clear list of acquired robust
mutexes.
lll_robust_unlock on i386 and x86_64 first sets the futex word to
FUTEX_WAITERS|0 before calling __lll_unlock_wake, which will set the
futex word to 0. If the thread is killed between these steps, then the
futex word will be FUTEX_WAITERS|0, and the kernel (at least current
upstream) will not set it to FUTEX_OWNER_DIED|FUTEX_WAITERS because 0 is
not equal to the TID of the crashed thread.
The lll_robust_lock assembly code on i386 and x86_64 is not prepared to
deal with this case because the fastpath tries to only CAS 0 to TID and
not FUTEX_WAITERS|0 to TID; the slowpath simply waits until it can CAS 0
to TID or the futex_word has the FUTEX_OWNER_DIED bit set.
This issue is fixed by removing the custom x86 assembly code and using
the generic C code instead. However, instead of adding more duplicate
code to the custom x86 lowlevellock.h, the code of the lll_robust* functions
is inlined into the single call sites that exist for each of these functions
in the pthread_mutex_* functions. The robust mutex paths in the latter
have been slightly reorganized to make them simpler.
This patch is meant to be easy to backport, so C11-style atomics are not
used.
[BZ #20985]
* nptl/Makefile: Adapt.
* nptl/pthread_mutex_cond_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
(LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
* nptl/pthread_mutex_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
(LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
(__pthread_mutex_lock_full): Inline lll_robust* functions and adapt.
* nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock): Inline
lll_robust* functions and adapt.
* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
* sysdeps/nptl/lowlevellock.h (__lll_robust_lock_wait,
__lll_robust_lock, lll_robust_cond_lock, __lll_robust_timedlock_wait,
__lll_robust_timedlock, __lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/i386/lowlevellock.h (lll_robust_lock,
lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.h (lll_robust_lock,
lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h (__lll_robust_lock_wait,
__lll_robust_lock, lll_robust_cond_lock, __lll_robust_timedlock_wait,
__lll_robust_timedlock, __lll_robust_unlock): Remove.
* nptl/lowlevelrobustlock.c: Remove file.
* nptl/lowlevelrobustlock.sym: Likewise.
* sysdeps/unix/sysv/linux/i386/lowlevelrobustlock.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/lowlevelrobustlock.S: Likewise.
After this update, math/test-ildouble, math/test-ldouble and
math/test-ldouble-finite pass on hard float, POWER < 7 builds.
Tested on powerpc, powerpc64 and powerpc64le.