Alex Butler
6d69c4aad4 aarch64: MTE compatible strncmp
Add support for MTE to strncmp. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.

The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
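
The core idea, as a hedged C model (illustrative only; the real change is
in the aarch64 assembly):

  #include <stddef.h>
  #include <stdint.h>

  /* Align the pointer down to a 16-byte granule and inspect one whole
     granule per iteration, so no access ever touches a granule that
     holds no byte of the string -- the bound MTE enforces.  */
  static size_t
  mte_safe_strlen_model (const char *s)
  {
    const unsigned char *p
      = (const unsigned char *) ((uintptr_t) s & ~(uintptr_t) 15);
    size_t skip = (size_t) ((const unsigned char *) s - p);

    for (;;)
      {
        for (size_t i = skip; i < 16; i++)   /* scan one granule */
          if (p[i] == '\0')
            return (size_t) (p + i - (const unsigned char *) s);
        p += 16;                             /* next 16-byte granule */
        skip = 0;
      }
  }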

Co-authored-by: Branislav Rankov <branislav.rankov@arm.com>
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
(cherry picked from commit 03e1378f94173fc192a81e421457198f7b8a34a0)
2024-11-04 17:15:10 +00:00
Andreas Schwab
8b84316420 Fix iconv buffer handling with IGNORE error handler (bug #18830)
(cherry picked from commit 4802be92c891903caaf8cae47f685da6f26d4b9a)
2020-11-30 22:59:53 +00:00
Florian Weimer
4b8628acab math/test-sinl-pseudo: Use stack protector only if available
This fixes commit 9333498794cde1d5cca518badf79533a24114b6f ("Avoid
ldbl-96 stack corruption from range reduction of pseudo-zero (bug
25487).").

(cherry picked from commit c10acd40262486dac597001aecc20ad9d3bd0e4a)
2020-11-30 22:59:53 +00:00
Joseph Myers
59420258af Avoid ldbl-96 stack corruption from range reduction of pseudo-zero (bug 25487).
Bug 25487 reports stack corruption in ldbl-96 sinl on a pseudo-zero
argument (a representation where all the significand bits, including
the explicit high bit, are zero, but the exponent is not zero, which
is not a valid representation for the long double type).

Although this is not a valid long double representation, existing
practice in this area (see bug 4586, originally marked invalid but
subsequently fixed) is that we still seek to avoid invalid memory
accesses as a result, in case of programs that treat arbitrary binary
data as long double representations, although the invalid
representations of the ldbl-96 format do not need to be consistently
handled the same as any particular valid representation.

This patch makes the range reduction detect pseudo-zero and unnormal
representations that would otherwise go to __kernel_rem_pio2, and
return a NaN for them instead of continuing with the range reduction
process.  (Pseudo-zero and unnormal representations whose unbiased
exponent is less than -1 have already been safely returned from the
function before this point without going through the rest of range
reduction.)  Pseudo-zero representations would previously result in
the value passed to __kernel_rem_pio2 being all-zero, which is
definitely unsafe; unnormal representations would previously result in
a value passed whose high bit is zero, which might well be unsafe
since that is not a form of input expected by __kernel_rem_pio2.
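
For reference, a hedged C sketch of the classification involved
(assuming the little-endian x86 layout of ldbl-96; this is not the code
the patch adds):

  #include <stdint.h>
  #include <string.h>

  /* ldbl-96 keeps an explicit integer bit at bit 63 of the significand.
     A nonzero, non-maximal exponent with that bit clear is a pseudo-zero
     (significand all zero) or an unnormal (nonzero fraction) -- exactly
     the inputs the patch diverts to a NaN before __kernel_rem_pio2.  */
  static int
  is_pseudo_zero_or_unnormal (long double x)
  {
    uint64_t significand;
    uint16_t sign_exp;
    memcpy (&significand, &x, 8);                 /* bytes 0-7 */
    memcpy (&sign_exp, (const char *) &x + 8, 2); /* sign + exponent */
    int biased_exp = sign_exp & 0x7fff;
    return biased_exp != 0 && biased_exp != 0x7fff
           && (significand >> 63) == 0;
  }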

Tested for x86_64.

(cherry picked from commit 9333498794cde1d5cca518badf79533a24114b6f)
2020-11-30 22:59:53 +00:00
Alexander Anisimov
b29853702e arm: CVE-2020-6096: Fix multiarch memcpy for negative length [BZ #25620]
Use unsigned branch instructions for r2 to fix the wrong behavior
when a negative length is passed to memcpy.
This commit fixes the armv7 version.
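
The bug class is easy to demonstrate from C (hedged sketch; the actual
fix swaps signed for unsigned condition codes in the assembly):

  #include <stddef.h>
  #include <stdio.h>

  int
  main (void)
  {
    size_t len = (size_t) -1;   /* a "negative" length as seen in r2 */

    if ((long) len < 0)         /* signed branch (e.g. bge): the bug */
      puts ("signed view: negative, size checks take the wrong path");
    if (len >= 64)              /* unsigned branch (e.g. bhs): the fix */
      puts ("unsigned view: a huge count, checks behave consistently");
    return 0;
  }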

(cherry picked from commit beea361050728138b82c57dda0c4810402d342b9)
2020-11-16 08:00:00 +00:00
Evgeny Eremin
bad8d5ff60 arm: CVE-2020-6096: fix memcpy and memmove for negative length [BZ #25620]
Use unsigned branch instructions for r2 to fix the wrong behavior
when a negative length is passed to memcpy and memmove.
This commit fixes the generic arm implementation of memcpy and memmove.

(cherry picked from commit 79a4fa341b8a89cb03f84564fd72abaa1a2db394)
2020-11-16 08:00:00 +00:00
Andreas Schwab
34ce87638c Fix array overflow in backtrace on PowerPC (bug 25423)
When unwinding through a signal frame the backtrace function on PowerPC
didn't check array bounds when storing the frame address.  Fixes commit
d400dcac5e ("PowerPC: fix backtrace to handle signal trampolines").

(cherry picked from commit d93769405996dfc11d216ddbe415946617b5a494)
2020-11-16 08:00:00 +00:00
Florian Weimer
0df8ecff9e misc/test-errno-linux: Handle EINVAL from quotactl
In commit 3dd4d40b420846dd35869ccc8f8627feef2cff32 ("xfs: Sanity check
flags of Q_XQUOTARM call"), Linux 5.4 added checking for the flags
argument, causing the test to fail due to too restrictive test
expectations.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit 1f7525d924b608a3e43b10fcfb3d46b8a6e9e4f9)
2020-11-16 08:00:00 +00:00
Szabolcs Nagy
dc7f51bda9 aarch64: Fix DT_AARCH64_VARIANT_PCS handling [BZ #26798]
The variant PCS support was ineffective because in the common case
linkmap->l_mach.plt == 0 but then the symbol table flags were ignored
and normal lazy binding was used instead of resolving the relocs early.
(This was a misunderstanding about how GOT[1] is setup by the linker.)

In practice this mainly affects SVE calls when the vector length is
more than 128 bits, then the top bits of the argument registers get
clobbered during lazy binding.

Fixes bug 26798.

(cherry picked from commit 558251bd8785760ad40fcbfeaaee5d27fa5b0fe4)
2020-11-04 12:30:21 +00:00
Szabolcs Nagy
8edc96aa33 aarch64: add HWCAP_ATOMICS to HWCAP_IMPORTANT
This enables searching shared libraries in atomics/ when the hardware
supports the LSE atomics of armv8.1, so one can provide optimized
variants of libraries in a portable way.

LSE atomics do not affect the library ABI; the new instructions can
interoperate with the old ones.

I considered the earlier comments on the patch

https://sourceware.org/ml/libc-alpha/2018-04/msg00400.html
https://sourceware.org/ml/libc-alpha/2018-04/msg00625.html

It turns out that the way the glibc dynamic linker decides on the
search path is not very flexible: it wants to use hwcap bits and
associated strings.  So some targets reuse hwcap bits for glibc
internal purposes to affect the search logic.  But hwcap is an
interface with the kernel; glibc should not allocate bits in it for its
internal logic, as that limits future hwcap extensions and is confusing
to users who expect to see hwcap bits in ifunc resolvers.  Instead of
rewriting the dynamic linker path logic (which affects all targets)
this patch just uses the existing mechanism; however, this means that
the path name has to be the hwcap name "atomics" and cannot be changed
to something more meaningful to users.

It is hard to tell how much performance benefit this can give; in
principle armv8.1 atomics can be better optimized in the hardware, so
it can make a difference for synchronization-heavy code.  On some
systems such a multilib setup may be the only viable way to get
optimized libraries used.
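
As a concrete illustration (a hedged sketch, not part of the patch):
the same hwcap bit that selects the atomics/ search path is what an
ifunc resolver would test; foo_lse and foo_generic are hypothetical:

  #include <sys/auxv.h>

  #ifndef HWCAP_ATOMICS
  # define HWCAP_ATOMICS (1 << 8)   /* AArch64 LSE atomics hwcap bit */
  #endif

  extern int foo_lse (void);        /* built with armv8.1 LSE atomics */
  extern int foo_generic (void);    /* baseline armv8.0 implementation */

  /* On aarch64 the resolver receives AT_HWCAP as its first argument.  */
  static int (*resolve_foo (unsigned long hwcap)) (void)
  {
    return (hwcap & HWCAP_ATOMICS) ? foo_lse : foo_generic;
  }

  int foo (void) __attribute__ ((ifunc ("resolve_foo")));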

	* sysdeps/unix/sysv/linux/aarch64/dl-procinfo.h (HWCAP_IMPORTANT): Add
	HWCAP_ATOMICS.

(cherry picked from commit 397c54c1afa531242602fe3ac7bb47eff0e909f9)
2020-03-25 16:24:32 +00:00
Szabolcs Nagy
599ebfacc0 aarch64: Remove HWCAP_CPUID from HWCAP_IMPORTANT
This partially reverts

commit f82e9672ad89ea1ef40bbe1af71478e255e87c5e
Author:     Siddhesh Poyarekar <siddhesh@sourceware.org>

    aarch64: Allow overriding HWCAP_CPUID feature check using HWCAP_MASK

The idea was to make it possible to disable cpuid-based ifunc
resolution in glibc by changing the hwcap mask, which the user could
already control.

However the hwcap mask has an orthogonal role: it specifies additional
library search paths for the dynamic linker.  So "cpuid" got added to
the search paths when it was set in the default mask (HWCAP_IMPORTANT),
which is not useful behaviour; the hwcap masking should not be reused
in the cpu features code.

Meanwhile there is a tunable to set the cpu explicitly so it is possible
to disable the cpuid based dispatch without using a hwcap mask:

  GLIBC_TUNABLES=glibc.tune.cpu=generic

	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (init_cpu_features):
	Use dl_hwcap without masking.
	* sysdeps/unix/sysv/linux/aarch64/dl-procinfo.h (HWCAP_IMPORTANT):
	Remove HWCAP_CPUID.

(cherry picked from commit d0cd79807157e399ff58e67cb51651f90442122e)
2020-03-25 16:24:01 +00:00
Marcin Kościelnicki
4d5cfeb510 rtld: Check __libc_enable_secure before honoring LD_PREFER_MAP_32BIT_EXEC (CVE-2019-19126) [BZ #25204]
The problem was introduced in glibc 2.23, in commit
b9eb92ab05204df772eb4929eccd018637c9f3e9
("Add Prefer_MAP_32BIT_EXEC to map executable pages with MAP_32BIT").

(cherry picked from commit d5dfad4326fc683c813df1e37bbf5cf920591c8e)
2019-11-22 13:40:07 +01:00
Dragan Mladjenovic
92f04eedb5 mips: Force RWX stack for hard-float builds that can run on pre-4.8 kernels
Linux/Mips kernels prior to 4.8 could potentially crash the user
process when doing FPU emulation while running on non-executable
user stack.

Currently, gcc doesn't emit .note.GNU-stack for mips, but that will
change in the future.  To ensure that glibc can be used with such a
future gcc, without silently resulting in binaries that might crash at
runtime, this patch forces an RWX stack for all built objects if glibc
is configured to run against a minimum kernel version less than 4.8.

	* sysdeps/unix/sysv/linux/mips/Makefile
	(test-xfail-check-execstack):
	Move under mips-has-gnustack != yes.
	(CFLAGS-.o*, ASFLAGS-.o*): New rules.
	Apply -Wa,-execstack if mips-force-execstack == yes.
	* sysdeps/unix/sysv/linux/mips/configure: Regenerated.
	* sysdeps/unix/sysv/linux/mips/configure.ac
	(mips-force-execstack): New var.
	Set to yes for hard-float builds with minimum_kernel < 4.8.0
	or minimum_kernel not set at all.
	(mips-has-gnustack): New var.
	Use value of libc_cv_as_noexecstack
	if mips-force-execstack != yes, otherwise set to no.

(cherry picked from commit 33bc9efd91de1b14354291fc8ebd5bce96379f12)
2019-11-05 14:25:38 -03:00
Wilco Dijkstra
5f0d2e0491 [AArch64] Add ifunc support for Ares
Add Ares to the midr_el0 list and support ifunc dispatch.  Since Ares
supports 2 128-bit loads/stores, use Neon registers for memcpy by
selecting __memcpy_falkor by default (we should rename this to
__memcpy_simd or similar).

	* manual/tunables.texi (glibc.cpu.name): Add ares tunable.
	* sysdeps/aarch64/multiarch/memcpy.c (__libc_memcpy): Use
	__memcpy_falkor for ares.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_ARES):
	Add new define.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (cpu_list):
	Add ares cpu.

(cherry picked from commit 02f440c1ef5d5d79552a524065aa3e2fabe469b9)
2019-09-06 18:53:37 +01:00
Siddhesh Poyarekar
e6b7252040 aarch64,falkor: Use vector registers for memcpy
Vector registers perform better than scalar register pairs for copying
data, so prefer them instead.  This results in a time reduction of over
50% (i.e. a 2x speed improvement) for some smaller sizes for
memcpy-walk.  Larger sizes show improvements of around 1% to 2%.
memcpy-random shows a very small improvement, in the range of 1-2%.

	* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
	Use vector registers.

(cherry picked from commit 0aec4c1d1801e8016ebe89281d16597e0557b8be)
2019-09-06 18:38:56 +01:00
Siddhesh Poyarekar
c74b884f70 aarch64,falkor: Ignore prefetcher tagging for smaller copies
For smaller and medium sized copies, the effects of hardware
prefetching are not as dominant as instruction level parallelism.
Hence it makes more sense to load data into multiple registers than to
try and route them to the same prefetch unit.  This is also the case
for the loop exit, where we are unable to latch on to the same prefetch
unit anyway, so it makes more sense to have data loaded in parallel.

The performance results are a bit mixed with memcpy-random, with
numbers jumping between -1% and +3%, i.e. the numbers don't seem
repeatable.  memcpy-walk sees a 70% improvement (i.e. > 2x) for 128
bytes and that improvement reduces down as the impact of the tail copy
decreases in comparison to the loop.

	* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
	Use multiple registers to copy data in loop tail.

(cherry picked from commit db725a458e1cb0e17204daa543744faf08bb2e06)
2019-09-06 18:36:23 +01:00
Siddhesh Poyarekar
0fc5934ebd aarch64/strncmp: Use lsr instead of mov+lsr
A single lsr can do what the mov and lsr did.

(cherry picked from commit b47c3e7637efb77818cbef55dcd0ed1f0ea0ddf1)
2019-09-06 17:00:32 +01:00
Siddhesh Poyarekar
e0a0bd3acc aarch64/strncmp: Unbreak builds with old binutils
Binutils 2.26.* and older do not support moves with shifted registers,
so use a separate shift instruction instead.

(cherry picked from commit d46f84de745db8f3f06a37048261f4e5ceacf0a3)
2019-09-06 16:59:34 +01:00
Siddhesh Poyarekar
638caf3000 aarch64: Improve strncmp for mutually misaligned inputs
The mutually misaligned inputs on aarch64 are compared with a simple
byte-by-byte loop, which is not very efficient.  Enhance the
comparison, similar to strcmp, by loading a double-word at a time.  The peak
performance improvement (i.e. 4k maxlen comparisons) due to this on
the strncmp microbenchmark is as follows:

falkor: 3.5x (up to 72% time reduction)
cortex-a73: 3.5x (up to 71% time reduction)
cortex-a53: 3.5x (up to 71% time reduction)

All mutually misaligned inputs from 16 bytes maxlen onwards show
upwards of 15% improvement and there is no measurable effect on the
performance of aligned/mutually aligned inputs.
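
The core of the double-word approach is the classic zero-byte test; a
hedged C model (the patch itself is aarch64 assembly):

  #include <stdint.h>

  /* Nonzero iff the 64-bit word contains a NUL byte.  This is what
     makes it safe to compare strings eight bytes at a time: one test
     per double-word instead of eight byte loads.  */
  static inline int
  has_zero_byte (uint64_t v)
  {
    return ((v - 0x0101010101010101ULL)
            & ~v & 0x8080808080808080ULL) != 0;
  }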

	* sysdeps/aarch64/strncmp.S (count): New macro.
	(strncmp): Store misaligned length in SRC1 in COUNT.
	(mutual_align): Adjust.
	(misaligned8): Load dword at a time when it is safe.

(cherry picked from commit 7108f1f944792ac68332967015d5e6418c5ccc88)
2019-09-06 16:58:29 +01:00
Siddhesh Poyarekar
d5f45a29ff aarch64/strcmp: fix misaligned loop jump target
I accidentally set the loop jump back label as misaligned8 instead of
do_misaligned.  The typo is harmless but it's always nice to not have
to unnecessarily execute those two instructions.

	* sysdeps/aarch64/strcmp.S (do_misaligned): Jump back to
	do_misaligned, not misaligned8.

(cherry picked from commit 6ca24c43481e2c93a6eec362b04c3e77a35b28e3)
2019-09-06 16:57:46 +01:00
Siddhesh Poyarekar
40df047b3b aarch64: Fix branch target to loop16
I goofed up when changing the loop8 name to loop16 and missed out one
branch instance.  Fixed and actually build-tested this time.

	* sysdeps/aarch64/memcmp.S (more16): Fix branch target loop16.

(cherry picked from commit 4e54d918630ea53e29dd70d3bdffcb00d29ed3d4)
2019-09-06 16:20:12 +01:00
Siddhesh Poyarekar
062139f233 aarch64: Optimized memcmp for medium to large sizes
This improved memcmp provides a fast path for compares up to 16 bytes
and then compares 16 bytes at a time, thus optimizing loads from both
sources.  The glibc memcmp microbenchmark retains performance (with an
error of ~1ns) for smaller compare sizes and reduces up to 31% of
execution time for compares up to 4K on the APM Mustang.  On Qualcomm
Falkor this improves to almost 48%, i.e. it is almost 2x improvement
for sizes of 2K and above.

	* sysdeps/aarch64/memcmp.S: Widen comparison to 16 bytes at a
	time.

(cherry picked from commit 30a81dae5b752f8aa5f96e7f7c341ec57cba3585)
2019-09-06 16:19:07 +01:00
Siddhesh Poyarekar
f3e2add213 aarch64: Use the L() macro for labels in memcmp
The L() macro makes the assembly a bit more readable.

	* sysdeps/aarch64/memcmp.S: Use L() macro for labels.

(cherry picked from commit 84c94d2fd90d84ae7e67657ee8e22c2d1b796f63)
2019-09-06 16:17:01 +01:00
Adhemerval Zanella
22bd3ab40e posix: Fix large mmap64 offset for mips64n32 (BZ#24699)
The fix for BZ#21270 (commit 158d5fa0e19) added a mask to avoid offsets
larger than 2^44 being used with __NR_mmap2.  However mips64n32 uses
__NR_mmap, as mips64n64 does, but still defines off_t as the old
non-LFS type (other ILP32 ABIs, such as x32, define off_t to be equal
to off64_t).  This leads to the mask meant only for the __NR_mmap2 call
being used for __NR_mmap as well, thus limiting the maximum offset that
can be used with mmap64.

This patch fixes it by setting the high mask only for __NR_mmap2 usage.
The posix/tst-mmap-offset.c test already covers this and also fails for
mips64n32.  The patch also changes the test to check for an
arch-specific header that defines the maximum supported offset.

Checked on x86_64-linux-gnu and i686-linux-gnu; I also tested
tst-mmap-offset on qemu-simulated mips64 with a 3.2.0 kernel for both
mips-linux-gnu and mips64-n32-linux-gnu.
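
A hedged sketch of the resulting wrapper logic (simplified to 4 KiB
pages; the real code derives the masks from MMAP2_PAGE_UNIT):

  #include <stdint.h>

  /* __NR_mmap2 passes the offset in 4 KiB units in a 32-bit argument,
     so byte offsets at or above 2^44 must be rejected; a port that
     goes through plain __NR_mmap (mips64n32, mips64n64) passes the
     byte offset directly and needs no such cap.  */
  static int
  mmap64_offset_ok (uint64_t offset, int uses_mmap2)
  {
    const uint64_t align_mask = (1ULL << 12) - 1;   /* page aligned */
    const uint64_t high_mask = uses_mmap2 ? ~((1ULL << 44) - 1) : 0;
    return (offset & (high_mask | align_mask)) == 0;
  }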

	[BZ #24699]
	* posix/tst-mmap-offset.c: Mention BZ #24699.
	(do_test_bz21270): Rename to do_test_large_offset and use
	mmap64_maximum_offset to check for maximum expected offset value.
	* sysdeps/generic/mmap_info.h: New file.
	* sysdeps/unix/sysv/linux/mips/mmap_info.h: Likewise.
	* sysdeps/unix/sysv/linux/mmap64.c (MMAP_OFF_HIGH_MASK): Define iff
	__NR_mmap2 is used.

(cherry picked from commit a008c76b56e4f958cf5a0d6f67d29fade89421b7)
2019-07-15 09:24:25 -03:00
Szabolcs Nagy
bdd16894aa aarch64: handle STO_AARCH64_VARIANT_PCS
Backport of commit 82bc69c012838a381c4167c156a06f4598f34227
and commit 30ba0375464f34e4bf8129f3d3dc14d0c09add17
without using DT_AARCH64_VARIANT_PCS for optimizing the symbol table check.
This is needed so the internal abi between ld.so and libc.so is unchanged.

Avoid lazy binding of symbols that may follow a variant PCS with different
register usage convention from the base PCS.

Currently the lazy binding entry code does not preserve all the registers
required for AdvSIMD and SVE vector calls.  Saving and restoring all
registers unconditionally may break existing binaries, even if they never
use vector calls, because of the larger stack requirement for lazy
resolution, which can be significant on an SVE system.

The solution is to mark all symbols in the symbol table that may follow
a variant PCS so the dynamic linker can handle them specially.  In this
patch such symbols are always resolved at load time, not lazily.
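
A hedged model of the decision this adds to elf_machine_lazy_rel
(simplified structures; the SVE symbol name is hypothetical):

  #include <stdio.h>

  #define STO_AARCH64_VARIANT_PCS 0x80  /* st_other flag, AArch64 psABI */

  struct sym_model { unsigned char st_other; const char *name; };

  static void
  process_jump_slot (const struct sym_model *s)
  {
    if (s->st_other & STO_AARCH64_VARIANT_PCS)
      printf ("%s: variant PCS, bind at load time\n", s->name);
    else
      printf ("%s: base PCS, lazy binding via PLT\n", s->name);
  }

  int
  main (void)
  {
    process_jump_slot (&(struct sym_model) { 0, "memcpy" });
    process_jump_slot (&(struct sym_model) { STO_AARCH64_VARIANT_PCS,
                                             "_ZGVsMxv_sinf" });
    return 0;
  }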

So currently LD_AUDIT for variant PCS symbols is not supported; for that
the _dl_runtime_profile entry needs to be changed e.g. to unconditionally
save/restore all registers (but pass down arg and retval registers to
pltentry/exit callbacks according to the base PCS).

This patch also removes a __builtin_expect from the modified code because
the branch prediction hint did not seem useful.

	* sysdeps/aarch64/dl-machine.h (elf_machine_lazy_rel): Check
	STO_AARCH64_VARIANT_PCS and bind such symbols at load time.
2019-07-12 10:14:45 +01:00
Florian Weimer
949da7f2fd io: Remove copy_file_range emulation [BZ #24744]
The kernel is evolving this interface (e.g., removal of the
restriction on cross-device copies), and keeping up with that
is difficult.  Applications which need the function should
run kernels which support the system call instead of relying on
the imperfect glibc emulation.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit 5a659ccc0ec217ab02a4c273a1f6d346a359560a)
2019-07-09 10:45:47 +02:00
Stefan Liebler
1ab314d8d3 S390: Mark vx and vxe as important hwcap.
This patch adds vx and vxe as important hwcaps
which allows one to provide shared libraries
tuned for platforms with non-vx/-vxe, vx or vxe.

ChangeLog:

	* sysdeps/s390/dl-procinfo.h (HWCAP_IMPORTANT):
	Add HWCAP_S390_VX and HWCAP_S390_VXE.

(cherry picked from commit 61f5e9470fb397a4c334938ac5a667427d9047df)
2019-03-21 09:34:00 +01:00
H.J. Lu
2ebadb6451 x86-64 memcmp: Use unsigned Jcc instructions on size [BZ #24155]
Since the size argument is unsigned, we should use unsigned Jcc
instructions, instead of signed ones, to check size.
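
Hedged illustration of the distinction at the condition-code level
(a C stand-in for jg vs. ja):

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint64_t size = 0x8000000000000000ULL;  /* valid but huge size_t */

    /* Signed compare (jg/jl): the sign bit reads as negative.  */
    printf ("signed   size > 0: %d\n", (int64_t) size > 0); /* 0: bug */
    /* Unsigned compare (ja/jb): the count keeps its magnitude.  */
    printf ("unsigned size > 0: %d\n", size > 0);           /* 1: fix */
    return 0;
  }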

Tested on x86-64 and x32, with and without --disable-multi-arch.

	[BZ #24155]
	CVE-2019-7309
	* NEWS: Updated for CVE-2019-7309.
	* sysdeps/x86_64/memcmp.S: Use RDX_LP for size.  Clear the
	upper 32 bits of RDX register for x32.  Use unsigned Jcc
	instructions, instead of signed.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcmp-2.
	* sysdeps/x86_64/x32/tst-size_t-memcmp-2.c: New test.

(cherry picked from commit 3f635fb43389b54f682fc9ed2acc0b2aaf4a923d)
2019-02-04 10:23:39 -08:00
H.J. Lu
3a5ae8db68 x86-64 strnlen/wcsnlen: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
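
Hedged sketch of the condition the tst-size_t-* tests exercise; the
64-bit value below stands in for a register whose upper half holds
leftovers:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int
  main (void)
  {
    char buf[16] = "abcdefghijklmno";      /* 15 chars + NUL */
    uint64_t reg = 0xdeadbeef00000010ULL;  /* junk upper bits, low = 16 */

    /* Correct: a 32-bit size_t is only the low half of the register.  */
    size_t len = (uint32_t) reg;
    printf ("strnlen with proper length: %zu\n", strnlen (buf, len));

    /* The bug: assembly that used the full 64-bit register would treat
       the length as 0xdeadbeef00000010 and read far past the buffer.  */
    return 0;
  }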

This patch fixes strnlen/wcsnlen for x32.  Tested on x86-64 and x32.  On
x86-64, libc.so is the same with and without the fix.

	[BZ #24097]
	CVE-2019-6488
	* sysdeps/x86_64/multiarch/strlen-avx2.S: Use RSI_LP for length.
	Clear the upper 32 bits of RSI register.
	* sysdeps/x86_64/strlen.S: Use RSI_LP for length.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strnlen
	and tst-size_t-wcsnlen.
	* sysdeps/x86_64/x32/tst-size_t-strnlen.c: New file.
	* sysdeps/x86_64/x32/tst-size_t-wcsnlen.c: Likewise.

(cherry picked from commit 5165de69c0908e28a380cbd4bb054e55ea4abc95)
2019-02-01 13:46:20 -08:00
H.J. Lu
2c016ffa24 x86-64 strncpy: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.

This patch fixes strncpy for x32.  Tested on x86-64 and x32.  On x86-64,
libc.so is the same with and without the fix.

	[BZ #24097]
	CVE-2019-6488
	* sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S: Use RDX_LP
	for length.
	* sysdeps/x86_64/multiarch/strcpy-ssse3.S: Likewise.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strncpy.
	* sysdeps/x86_64/x32/tst-size_t-strncpy.c: New file.

(cherry picked from commit c7c54f65b080affb87a1513dee449c8ad6143c8b)
2019-02-01 13:46:18 -08:00
H.J. Lu
d8457edece x86-64 strncmp family: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.

This patch fixes the strncmp family for x32.  Tested on x86-64 and x32.
On x86-64, libc.so is the same with and without the fix.

	[BZ #24097]
	CVE-2019-6488
	* sysdeps/x86_64/multiarch/strcmp-sse42.S: Use RDX_LP for length.
	* sysdeps/x86_64/strcmp.S: Likewise.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strncasecmp,
	tst-size_t-strncmp and tst-size_t-wcsncmp.
	* sysdeps/x86_64/x32/tst-size_t-strncasecmp.c: New file.
	* sysdeps/x86_64/x32/tst-size_t-strncmp.c: Likewise.
	* sysdeps/x86_64/x32/tst-size_t-wcsncmp.c: Likewise.

(cherry picked from commit ee915088a0231cd421054dbd8abab7aadf331153)
2019-02-01 13:45:34 -08:00
H.J. Lu
55f8812858 x86-64 memset/wmemset: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.

This patch fixes memset/wmemset for x32.  Tested on x86-64 and x32.  On
x86-64, libc.so is the same with and without the fix.

	[BZ #24097]
	CVE-2019-6488
	* sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: Use
	RDX_LP for length.  Clear the upper 32 bits of RDX register.
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: Likewise.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-wmemset.
	* sysdeps/x86_64/x32/tst-size_t-memset.c: New file.
	* sysdeps/x86_64/x32/tst-size_t-wmemset.c: Likewise.

(cherry picked from commit 82d0b4a4d76db554eb6757acb790fcea30b19965)
2019-02-01 13:32:05 -08:00
H.J. Lu
efc3714845 x86-64 memrchr: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.

This patch fixes memrchr for x32.  Tested on x86-64 and x32.  On x86-64,
libc.so is the same with and without the fix.

	[BZ #24097]
	CVE-2019-6488
	* sysdeps/x86_64/memrchr.S: Use RDX_LP for length.
	* sysdeps/x86_64/multiarch/memrchr-avx2.S: Likewise.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memrchr.
	* sysdeps/x86_64/x32/tst-size_t-memrchr.c: New file.

(cherry picked from commit ecd8b842cf37ea112e59cd9085ff1f1b6e208ae0)
2019-02-01 13:20:12 -08:00
H.J. Lu
a4690969ed x86-64 memcpy: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.

This patch fixes memcpy for x32.  Tested on x86-64 and x32.  On
x86-64, libc.so is the same with and without the fix.

	[BZ #24097]
	CVE-2019-6488
	* sysdeps/x86_64/multiarch/memcpy-ssse3-back.S: Use RDX_LP for
	length.  Clear the upper 32 bits of RDX register.
	* sysdeps/x86_64/multiarch/memcpy-ssse3.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcpy.
	* sysdeps/x86_64/x32/tst-size_t-memcpy.c: New file.

(cherry picked from commit 231c56760c1e2ded21ad96bbb860b1f08c556c7a)
2019-02-01 13:02:18 -08:00
H.J. Lu
6465327195 x86-64 memcmp/wmemcmp: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.

This patch fixes memcmp/wmemcmp for x32.  Tested on x86-64 and x32.  On
x86-64, libc.so is the same with and without the fix.

	[BZ #24097]
	CVE-2019-6488
	* sysdeps/x86_64/multiarch/memcmp-avx2-movbe.S: Use RDX_LP for
	length.  Clear the upper 32 bits of RDX register.
	* sysdeps/x86_64/multiarch/memcmp-sse4.S: Likewise.
	* sysdeps/x86_64/multiarch/memcmp-ssse3.S: Likewise.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcmp and
	tst-size_t-wmemcmp.
	* sysdeps/x86_64/x32/tst-size_t-memcmp.c: New file.
	* sysdeps/x86_64/x32/tst-size_t-wmemcmp.c: Likewise.

(cherry picked from commit b304fc201d2f6baf52ea790df8643e99772243cd)
2019-02-01 12:54:55 -08:00
H.J. Lu
50117e00a1 x86-64 memchr/wmemchr: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.

This patch fixes memchr/wmemchr for x32.  Tested on x86-64 and x32.  On
x86-64, libc.so is the same with and without the fix.

	[BZ #24097]
	CVE-2019-6488
	* sysdeps/x86_64/memchr.S: Use RDX_LP for length.  Clear the
	upper 32 bits of RDX register.
	* sysdeps/x86_64/multiarch/memchr-avx2.S: Likewise.
	* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memchr and
	tst-size_t-wmemchr.
	* sysdeps/x86_64/x32/test-size_t.h: New file.
	* sysdeps/x86_64/x32/tst-size_t-memchr.c: Likewise.
	* sysdeps/x86_64/x32/tst-size_t-wmemchr.c: Likewise.

(cherry picked from commit 97700a34f36721b11a754cf37a1cc40695ece1fd)
2019-02-01 12:54:39 -08:00
Tulio Magno Quites Machado Filho
2794474c65 powerpc: Add missing CFI register information (bug #23614)
Add CFI information about the offset of registers stored in the stack
frame.

	[BZ #23614]
	* sysdeps/powerpc/powerpc64/addmul_1.S (FUNC): Add CFI offset for
	registers saved in the stack frame.
	* sysdeps/powerpc/powerpc64/lshift.S (__mpn_lshift): Likewise.
	* sysdeps/powerpc/powerpc64/mul_1.S (__mpn_mul_1): Likewise.

Signed-off-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Reviewed-by: Gabriel F. T. Gomes <gabriel@inconstante.eti.br>
(cherry picked from commit 1d880d4a9bf7608c2cd33bbe954ce6995f79121a)
2018-12-13 09:32:29 +01:00
Florian Weimer
9f433fc791 CVE-2018-19591: if_nametoindex: Fix descriptor for overlong name [BZ #23927]
(cherry picked from commit d527c860f5a3f0ed687bd03f0cb464612dc23408)
2018-11-27 21:26:02 +01:00
Adhemerval Zanella
d8eee5ef55 x86: Fix Haswell CPU string flags (BZ#23709)
The commit 'Disable TSX on some Haswell processors.' (2702856bf4) changed
the default flags for Haswell models.  Previously, new models were handled
by the default switch path, which assumed a Core i3/i5/i7 if AVX is
available.  After the patch, Haswell models (0x3f, 0x3c, 0x45, 0x46) do
not set the flags Fast_Rep_String, Fast_Unaligned_Load,
Fast_Unaligned_Copy, and Prefer_PMINUB_for_stringop (only the TSX one).

This patch fixes it by disentangling the TSX flag handling from the
memory optimization ones.  The strstr case cited in the patch now selects
__strstr_sse2_unaligned as expected for the Haswell cpu.

Checked on x86_64-linux-gnu.
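
A hedged model of the restructuring (illustrative flag names, not
glibc's actual code):

  #include <stdio.h>

  struct flags { int disable_tsx; int fast_unaligned_load; };

  /* TSX handling is its own test instead of a switch arm that swallows
     the defaults, so Haswell models keep the string/memory flags.  */
  static struct flags
  classify_model (unsigned model)
  {
    struct flags f = { 0, 0 };

    if (model == 0x3c || model == 0x3f || model == 0x45 || model == 0x46)
      f.disable_tsx = 1;           /* erratum workaround, and only that */

    f.fast_unaligned_load = 1;     /* default for AVX-capable models */
    return f;
  }

  int
  main (void)
  {
    struct flags f = classify_model (0x3f);
    printf ("TSX disabled: %d, Fast_Unaligned_Load: %d\n",
            f.disable_tsx, f.fast_unaligned_load);
    return 0;
  }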

	[BZ #23709]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set TSX bits
	independently of other flags.

(cherry picked from commit c3d8dc45c9df199b8334599a6cbd98c9950dba62)
2018-11-02 11:14:57 +01:00
Szabolcs Nagy
5cd5309d91 ia64: fix missing exp2f, log2f and powf symbols in libm.a [BZ #23822]
When new symbol versions were introduced without SVID-compatible
error handling, the exp2f, log2f and powf symbols were accidentally
removed from the ia64 libm.a.  The regression was introduced by
the commits

f5f0f5265162fe6f4f238abcd3086985f7c38d6d
New expf and exp2f version without SVID compat wrapper

72d3d281080be9f674982067d72874fd6cdb4b64
New symbol version for logf, log2f and powf without SVID compat

With WEAK_LIBM_ENTRY(foo), there is a hidden __foo and weak foo
symbol definition in both SHARED and !SHARED build.

	[BZ #23822]
	* sysdeps/ia64/fpu/e_exp2f.S (exp2f): Use WEAK_LIBM_ENTRY.
	* sysdeps/ia64/fpu/e_log2f.S (log2f): Likewise.
	* sysdeps/ia64/fpu/e_powf.S (powf): Likewise.

(cherry picked from commit ba5b14c7613980dfefcad6b6e88f913e5f596c59)
2018-10-26 15:50:54 +01:00
Florian Weimer
1759ea197b conform: XFAIL siginfo_t si_band test on sparc64
We can use long int on sparcv9, but on sparc64, we must match the int
type used by the kernel (and not long int, as in POSIX).

(cherry picked from commit 7c5e34d7f1b8f8f5acd94c2b885ae13b85414dcd)
2018-10-26 09:25:48 +02:00
Ilya Yu. Malakhov
77b4b8231e signal: Use correct type for si_band in siginfo_t [BZ #23562]
(cherry picked from commit f997b4be18f7e57d757d39e42f7715db26528aa0)
2018-10-22 13:16:08 +02:00
Stefan Liebler
5bdb6897fc Fix race in pthread_mutex_lock while promoting to PTHREAD_MUTEX_ELISION_NP [BZ #23275]
The race leads either to pthread_mutex_destroy returning EBUSY
or triggering an assertion (See description in bugzilla).

This patch fixes the race by ensuring that the elision path is
used in all cases if elision is enabled by the GLIBC_TUNABLES framework.

The __kind variable in struct __pthread_mutex_s is accessed concurrently.
Therefore we are now using the atomic macros.
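
The pattern in C11 terms (a hedged sketch; glibc uses its internal
atomic_load_relaxed/atomic_store_relaxed macros rather than
<stdatomic.h>):

  #include <stdatomic.h>

  struct mutex_model { _Atomic int kind; };

  /* __kind can be rewritten concurrently while elision is enabled, so
     a plain read is a data race; a relaxed atomic load is sufficient
     and keeps the fast path cheap.  */
  static int
  mutex_type (struct mutex_model *m)
  {
    return atomic_load_explicit (&m->kind, memory_order_relaxed);
  }

  static void
  enable_elision (struct mutex_model *m, int elision_kind)
  {
    atomic_store_explicit (&m->kind, elision_kind, memory_order_relaxed);
  }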

The new testcase tst-mutex10 triggers the race on s390x and Intel.
Presumably also on power, but I don't have access to a power machine
with lock-elision. At least the code for power is the same as on the other
two architectures.

ChangeLog:

	[BZ #23275]
	* nptl/tst-mutex10.c: New File.
	* nptl/Makefile (tests): Add tst-mutex10.
	(tst-mutex10-ENV): New variable.
	* sysdeps/unix/sysv/linux/s390/force-elision.h: (FORCE_ELISION):
	Ensure that elision path is used if elision is available.
	* sysdeps/unix/sysv/linux/powerpc/force-elision.h (FORCE_ELISION):
	Likewise.
	* sysdeps/unix/sysv/linux/x86/force-elision.h: (FORCE_ELISION):
	Likewise.
	* nptl/pthreadP.h (PTHREAD_MUTEX_TYPE, PTHREAD_MUTEX_TYPE_ELISION)
	(PTHREAD_MUTEX_PSHARED): Use atomic_load_relaxed.
	* nptl/pthread_mutex_consistent.c (pthread_mutex_consistent): Likewise.
	* nptl/pthread_mutex_getprioceiling.c (pthread_mutex_getprioceiling):
	Likewise.
	* nptl/pthread_mutex_lock.c (__pthread_mutex_lock_full)
	(__pthread_mutex_cond_lock_adjust): Likewise.
	* nptl/pthread_mutex_setprioceiling.c (pthread_mutex_setprioceiling):
	Likewise.
	* nptl/pthread_mutex_timedlock.c (__pthread_mutex_timedlock): Likewise.
	* nptl/pthread_mutex_trylock.c (__pthread_mutex_trylock): Likewise.
	* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
	* sysdeps/nptl/bits/thread-shared-types.h (struct __pthread_mutex_s):
	Add comments.
	* nptl/pthread_mutex_destroy.c (__pthread_mutex_destroy):
	Use atomic_load_relaxed and atomic_store_relaxed.
	* nptl/pthread_mutex_init.c (__pthread_mutex_init):
	Use atomic_store_relaxed.

(cherry picked from commit 403b4feb22dcbc85ace72a361d2a951380372471)
2018-10-18 12:35:00 +02:00
Adhemerval Zanella
a127df9f3e Fix misreported errno on preadv2/pwritev2 (BZ#23579)
The fallback code of the Linux wrapper for preadv2/pwritev2 executes
regardless of the errno code from preadv2, instead of only in the case
where the syscall is not supported.

This fixes it by calling the fallback code only if errno is ENOSYS.  The
patch also adds tests for both an invalid file descriptor and invalid
iov_len and vector count.
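
Hedged sketch of the corrected control flow (emulate_preadv2 is a
hypothetical stand-in for the emulation path; assumes SYS_preadv2 in
<sys/syscall.h> and a 64-bit off_t for the offset split):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <sys/syscall.h>
  #include <sys/uio.h>
  #include <unistd.h>

  ssize_t emulate_preadv2 (int fd, const struct iovec *iov, int cnt,
                           off_t off, int flags);  /* fallback path */

  /* Only ENOSYS means "kernel lacks the syscall"; any other errno
     (EBADF, EINVAL, ...) must reach the caller unchanged.  */
  static ssize_t
  preadv2_with_fallback (int fd, const struct iovec *iov, int cnt,
                         off_t off, int flags)
  {
    ssize_t ret = syscall (SYS_preadv2, fd, iov, cnt,
                           (long) off, (long) (off >> 32), flags);
    if (ret >= 0 || errno != ENOSYS)
      return ret;                      /* success or a real error */
    return emulate_preadv2 (fd, iov, cnt, off, flags);
  }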

The only discrepancy between preadv2 and the fallback code regarding
error reporting is when invalid flags are used.  The fallback code
bails out earlier with ENOTSUP, instead of the EINVAL/EBADF returned
when the syscall is used.

Checked on x86_64-linux-gnu on a 4.4.0 and 4.15.0 kernel.

	[BZ #23579]
	* misc/tst-preadvwritev2-common.c (do_test_with_invalid_fd): New
	test.
	* misc/tst-preadvwritev2.c, misc/tst-preadvwritev64v2.c (do_test):
	Call do_test_with_invalid_fd.
	* sysdeps/unix/sysv/linux/preadv2.c (preadv2): Use fallback code iff
	errno is ENOSYS.
	* sysdeps/unix/sysv/linux/preadv64v2.c (preadv64v2): Likewise.
	* sysdeps/unix/sysv/linux/pwritev2.c (pwritev2): Likewise.
	* sysdeps/unix/sysv/linux/pwritev64v2.c (pwritev64v2): Likewise.

(cherry picked from commit 7a16bdbb9ff4122af0a28dc20996c95352011fdd)
2018-09-28 15:30:30 -03:00
Florian Weimer
3b3775697a preadv2/pwritev2: Handle offset == -1 [BZ #22753]
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

(cherry picked from commit d4b4a00a462348750bb18544eb30853ee6ac5d10)
2018-09-28 15:15:06 -03:00
Stefan Liebler
9533f19aa5 Fix segfault in maybe_script_execute.
If glibc is built with gcc 8 and -march=z900,
the testcase posix/tst-spawn4-compat crashes with a segfault.

In function maybe_script_execute, the new_argv array is dynamically
initialized on stack with (argc + 1) elements.
The function wants to add _PATH_BSHELL as the first argument
and writes out of bounds of new_argv.
There is an off-by-one because maybe_script_execute fails to count
the terminating NULL when sizing new_argv.
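
Hedged model of the corrected sizing (the real code uses a
variable-length stack array in spawni.c):

  #include <stdlib.h>

  #define PATH_BSHELL "/bin/sh"   /* stand-in for _PATH_BSHELL */

  /* Slots needed: shell, script path, argv[1..argc-1], terminating
     NULL == argc + 2.  The bug sized the array argc + 1, so copying
     the terminating NULL wrote one element past the end.  */
  static char **
  build_shell_argv (char *const argv[], int argc, const char *script)
  {
    char **new_argv = malloc ((argc + 2) * sizeof (char *));
    if (new_argv == NULL)
      return NULL;
    new_argv[0] = (char *) PATH_BSHELL;
    new_argv[1] = (char *) script;
    for (int i = 1; i <= argc; i++)   /* argv[argc] is the NULL */
      new_argv[i + 1] = argv[i];
    return new_argv;
  }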

ChangeLog:

	* sysdeps/unix/sysv/linux/spawni.c (maybe_script_execute):
	Increment size of new_argv by one.

(cherry picked from commit 28669f86f6780a18daca264f32d66b1428c9c6f1)
2018-09-10 14:25:47 +02:00
H.J. Lu
2dab17550d x86: Populate COMMON_CPUID_INDEX_80000001 for Intel CPUs [BZ #23459]
Reviewed-by: Carlos O'Donell <carlos@redhat.com>

	[BZ #23459]
	* sysdeps/x86/cpu-features.c (get_extended_indices): New
	function.
	(init_cpu_features): Call get_extended_indices for both Intel
	and AMD CPUs.
	* sysdeps/x86/cpu-features.h (COMMON_CPUID_INDEX_80000001):
	Remove "for AMD" comment.

(cherry picked from commit be525a69a6630abc83144c0a96474f2e26da7443)
2018-07-29 06:30:59 -07:00
H.J. Lu
a452341529 x86: Correct index_cpu_LZCNT [BZ #23456]
cpu-features.h has

 #define bit_cpu_LZCNT		(1 << 5)
 #define index_cpu_LZCNT	COMMON_CPUID_INDEX_1
 #define reg_LZCNT

But the LZCNT feature bit is in COMMON_CPUID_INDEX_80000001:

Initial EAX Value: 80000001H
ECX Extended Processor Signature and Feature Bits:
Bit 05: LZCNT available

index_cpu_LZCNT should be COMMON_CPUID_INDEX_80000001, not
COMMON_CPUID_INDEX_1.  The VMX feature bit is in COMMON_CPUID_INDEX_1:

Initial EAX Value: 01H
Feature Information Returned in the ECX Register:
5 VMX
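
Hedged sketch of the corresponding runtime check (using GCC's cpuid.h
helper, not glibc's internal code):

  #include <cpuid.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int eax, ebx, ecx, edx;

    /* LZCNT: leaf 0x80000001, ECX bit 5 -- the index the fix selects.
       Testing leaf 1, ECX bit 5 would report VMX instead.  */
    if (__get_cpuid (0x80000001, &eax, &ebx, &ecx, &edx))
      printf ("LZCNT: %s\n", (ecx & (1u << 5)) ? "yes" : "no");
    return 0;
  }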

Reviewed-by: Carlos O'Donell <carlos@redhat.com>

	[BZ #23456]
	* sysdeps/x86/cpu-features.h (index_cpu_LZCNT): Set to
	COMMON_CPUID_INDEX_80000001.

(cherry picked from commit 65d87ade1ee6f3ac099105e3511bd09bdc24cf3f)
2018-07-29 06:30:59 -07:00
Florian Weimer
5fab7fe1dc math: Set 387 and SSE2 rounding mode for tgamma on i386 [BZ #23253]
Previously, only the SSE2 rounding mode was set, so the assembler
implementations using 387 were not following the expected rounding
mode.

(cherry picked from commit f496b28e61d0342f579bf794c71b80e9c7d0b1b5)
2018-07-04 12:01:31 +02:00
Daniel Alvarez
4476d16b03 getifaddrs: Don't return ifa entries with NULL names [BZ #21812]
A lookup operation in map_newlink could turn into an insert because of
holes in the interface part of the map.  This leads to incorrectly
setting the name of the interface to NULL when the interface is not
present for the address being processed (most likely because the
interface was added between the RTM_GETLINK and RTM_GETADDR calls to
the kernel).  When such changes are detected by the kernel, it'll mark
the dump as "inconsistent" by setting the NLM_F_DUMP_INTR flag on the
next netlink message.

This patch checks this condition and retries the whole operation.  The
hope is that next time the interface corresponding to the address entry
is present in the list and the correct name is returned.
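
Hedged sketch of the consistency check in the netlink receive loop
(simplified; the retry plumbing in getifaddrs is more involved):

  #include <linux/netlink.h>
  #include <stdbool.h>

  /* The kernel sets NLM_F_DUMP_INTR on the next message of a dump that
     raced with a change; the fix discards the partial result and
     reruns both the RTM_GETLINK and RTM_GETADDR requests.  */
  static bool
  dump_interrupted (const struct nlmsghdr *nlh)
  {
    return (nlh->nlmsg_flags & NLM_F_DUMP_INTR) != 0;
  }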

(cherry picked from commit c1f86a33ca32e26a9d6e29fc961e5ecb5e2e5eb4)
2018-06-29 17:23:52 +02:00