Redirect internal assertion failures to __libc_assert_fail, based
on __libc_message, which writes directly to STDERR_FILENO
and calls abort. Also disable message translation and reword the
error message slightly (adjusting stdlib/tst-bz20544 accordingly).
As a result of these changes, malloc no longer needs its own
redefinition of __assert_fail.
__libc_assert_fail needs to be stubbed out during rtld dependency
analysis because the rtld rebuilds turn __libc_assert_fail into
__assert_fail, which is unconditionally provided by elf/dl-minimal.c.
This change is not possible for the public assert macro and its
__assert_fail function because POSIX requires that the diagnostic
is written to stderr.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
__pthread_sigmask cannot actually fail with valid pointer arguments
(it would need a really broken seccomp filter), and we do not check
for errors elsewhere.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Since commit ec2c1fcefb ("malloc:
Abort on heap corruption, without a backtrace [BZ #21754]"),
__libc_message always terminates the process. Since commit
a289ea09ea ("Do not print backtraces
on fatal glibc errors"), the backtrace facility has been removed.
Therefore, remove enum __libc_message_action and the action
argument of __libc_message, and mark __libc_message as _Noreturn.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Linux 5.19 has no new syscalls, but enables memfd_secret in the uapi
headers for RISC-V. Update the version number in syscall-names.list
to reflect that it is still current for 5.19 and regenerate the
arch-syscall.h headers with build-many-glibcs.py update-syscalls.
Tested with build-many-glibcs.py.
The inline and library functions that the CMSG_NXTHDR macro may expand
to increment the header pointer before checking the stride of the
increment against the available space. Since C only allows incrementing
pointers to one past the end of an array, the increment must be done
after the length check. This commit fixes that and adds a regression
test for CMSG_FIRSTHDR and CMSG_NXTHDR.
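As an illustration only (not the actual glibc macro expansion), a
simplified next-header helper that performs the length check before
moving the pointer might look like the following, assuming the standard
msghdr/cmsghdr layout and the CMSG_ALIGN macro:

  #include <stddef.h>
  #include <sys/socket.h>

  /* Simplified sketch: return the control message following CMSG, or
     NULL if no complete header is left.  The remaining space is
     checked *before* the pointer is advanced, so the pointer never
     moves more than one element past the buffer.  */
  static struct cmsghdr *
  next_cmsg (struct msghdr *mhdr, struct cmsghdr *cmsg)
  {
    if ((size_t) cmsg->cmsg_len < sizeof (struct cmsghdr))
      return NULL;

    size_t size = CMSG_ALIGN (cmsg->cmsg_len);
    unsigned char *end = (unsigned char *) mhdr->msg_control
                         + mhdr->msg_controllen;
    size_t remaining = end - (unsigned char *) cmsg;

    /* Not enough room for the current message plus another header.  */
    if (size > remaining || remaining - size < sizeof (struct cmsghdr))
      return NULL;

    return (struct cmsghdr *) ((unsigned char *) cmsg + size);
  }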
The Linux, Hurd, and generic headers are all changed.
Tested on Linux on armv7hl, i686, x86_64, aarch64, ppc64le, and s390x.
[BZ #28846]
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
When applications redirect some functions, those functions might get
called before libpthread is fully initialized. They may still expect
pthread_self and cancellable functions to work, so cope with such
calls in that situation.
It uses the bitmask with rejection technique [1], which computes a
mask from the lowest power of two bounding the requested upper bound,
then repeatedly queries new random values and rejects values
outside the requested range.
Performance-wise, there is not much gain in trying to conserve
bits, since arc4random is a wrapper over the getrandom syscall; it
should be cheaper to just query a uint32_t value. The algorithm also
avoids modulo and division operations, which might be costly
depending on the architecture.
[1] https://www.pcg-random.org/posts/bounded-rands.html
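As an illustration, a minimal sketch of bitmask with rejection (not the
glibc code), assuming a hypothetical next_u32 callback that returns
uniformly random 32-bit values:

  #include <stdint.h>

  /* Sketch of bitmask with rejection: return a uniform value in
     [0, n), where next_u32 is a hypothetical source of uniformly
     random 32-bit values (e.g. arc4random).  */
  uint32_t
  uniform_below (uint32_t n, uint32_t (*next_u32) (void))
  {
    if (n < 2)
      return 0;

    /* Lowest power of two bounding n, minus one: all bits of n - 1.  */
    uint32_t mask = UINT32_MAX >> __builtin_clz (n - 1);

    for (;;)
      {
        uint32_t value = next_u32 () & mask;
        if (value < n)          /* Reject values outside the range.  */
          return value;
      }
  }

Since the mask bounds the range by the next power of two, at least half
of the masked draws are accepted, so fewer than two queries are needed
on average.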
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>
Cancellation currently cannot happen at this point because dlopen
as used by the unwind link always performs additional allocations
for libgcc_s.so.1, even if it has been loaded already as a dependency
of the main executable. But it seems prudent not to rely on this
quirk.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Make test-c8rtomb.out and test-mbrtoc8.out depend on $(gen-locales) for
  xsetlocale (LC_ALL, "de_DE.UTF-8");
  xsetlocale (LC_ALL, "zh_HK.BIG5-HKSCS");
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
gcc 13 issues the following diagnostic for the uchar.h header when the
-Wc++20-compat option is enabled in C++ modes that do not enable char8_t
as a builtin type (C++17 and earlier by default; subject to _GNU_SOURCE
and the gcc -f[no-]char8_t option).
warning: identifier ‘char8_t’ is a keyword in C++20 [-Wc++20-compat]
This change modifies the uchar.h header to suppress the diagnostic through
the use of '#pragma GCC diagnostic' directives for gcc 10 and later (the
-Wc++20-compat option was added in gcc version 10). Unfortunately, a bug
in gcc currently prevents those directives from having the intended effect
as reported at https://gcc.gnu.org/PR106423. A patch for that issue has
been submitted and is available in the email thread archive linked below.
https://gcc.gnu.org/pipermail/gcc-patches/2022-July/598736.html
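Roughly, the suppression pattern looks like the following sketch
(illustrative only; the actual uchar.h guards differ in detail):

  /* The pragmas are only meaningful for g++ >= 10, where the
     -Wc++20-compat option exists.  */
  #if !defined __cpp_char8_t          /* char8_t is not a keyword.  */
  # if defined __GNUC__ && __GNUC__ >= 10 && defined __cplusplus
  #  pragma GCC diagnostic push
  #  pragma GCC diagnostic ignored "-Wc++20-compat"
  # endif
  typedef unsigned char char8_t;
  # if defined __GNUC__ && __GNUC__ >= 10 && defined __cplusplus
  #  pragma GCC diagnostic pop
  # endif
  #endif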
pidfd_getfd can fail for a valid pidfd with errno EPERM for various
reasons in a restricted environment. Use FAIL_UNSUPPORTED in that case.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
With the new arc4random implementation, the internal parameters might
require a lot of runtime and/or trigger some contention on older
kernels (which might cause spurious timeout failures).
Also, since we are now testing getrandom entropy instead of a
userspace RNG, there is not much need for extensive testing.
With this change, tst-arc4random-thread goes from about 1m to
5s on a Ryzen 9 with 5.15.0-41-generic.
Checked on x86_64-linux-gnu.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Rather than buffering 16 MiB of entropy in userspace (by way of
chacha20), simply call getrandom() every time.
This approach is doubtlessly slower, for now, but trying to prematurely
optimize arc4random appears to be leading toward all sorts of nasty
properties and gotchas. Instead, this patch takes a much more
conservative approach. The interface is added as a basic loop wrapper
around getrandom(), and then later, the kernel and libc can work
together on optimizing that.
This prevents numerous issues in which userspace is unaware of when it
really must throw away its buffer, since we avoid buffering
altogether. Future improvements may include userspace learning more from
the kernel about when to do that, which might make these sorts of
chacha20-based optimizations more possible. The current heuristic of 16
MiB is meaningless garbage that doesn't correspond to anything the
kernel might know about. So for now, let's just do something
conservative that we know is correct and won't lead to cryptographic
issues for users of this function.
This patch might be considered along the lines of, "optimization is the
root of all evil," in that the much more complex implementation it
replaces moves too fast without considering security implications,
whereas the incremental approach done here is a much safer way of going
about things. Once this lands, we can take our time in optimizing this
properly using new interplay between the kernel and userspace.
getrandom(0) is used, since that's the one that ensures the bytes
returned are cryptographically secure. But on systems without it, we
fall back to using /dev/urandom. This is unfortunate because it means
opening a file descriptor, but there's not much of a choice. Secondly,
as part of the fallback, in order to get more or less the same
properties as getrandom(0), we poll on /dev/random, and if the poll
succeeds at least once, then we assume the RNG is initialized. This is
a rough approximation, as the ancient "non-blocking pool" was
initialized after the "blocking pool", not before, and it may not port
back to all ancient kernels, though it does for all kernels supported
by glibc (≥3.2), so generally it's the best approximation we can do.
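An illustrative sketch of this strategy (not the glibc code) might look
like the following, using only getrandom, poll, and the /dev/random and
/dev/urandom devices described above:

  #include <errno.h>
  #include <fcntl.h>
  #include <poll.h>
  #include <stddef.h>
  #include <stdlib.h>
  #include <sys/random.h>
  #include <unistd.h>

  /* Fill BUF with LEN cryptographically secure bytes: loop around
     getrandom (buf, len, 0); on kernels without the syscall, poll
     /dev/random once to approximate "RNG is initialized", then read
     the bytes from /dev/urandom.  */
  static void
  random_bytes (void *buf, size_t len)
  {
    unsigned char *p = buf;

    while (len > 0)
      {
        ssize_t ret = getrandom (p, len, 0);
        if (ret > 0)
          {
            p += ret;
            len -= ret;
          }
        else if (ret < 0 && errno == EINTR)
          continue;
        else
          break;                /* ENOSYS or other failure: fall back.  */
      }

    if (len == 0)
      return;

    int rfd = open ("/dev/random", O_RDONLY | O_CLOEXEC);
    if (rfd >= 0)
      {
        struct pollfd pfd = { .fd = rfd, .events = POLLIN };
        while (poll (&pfd, 1, -1) < 0 && errno == EINTR)
          ;
        close (rfd);
      }

    int fd = open ("/dev/urandom", O_RDONLY | O_CLOEXEC);
    if (fd < 0)
      abort ();                 /* No usable source of randomness.  */
    while (len > 0)
      {
        ssize_t ret = read (fd, p, len);
        if (ret > 0)
          {
            p += ret;
            len -= ret;
          }
        else if (ret < 0 && errno == EINTR)
          continue;
        else
          abort ();
      }
    close (fd);
  }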
The motivation for including arc4random, in the first place, is to have
source-level compatibility with existing code. That means this patch
doesn't attempt to litigate the interface itself. It does, however,
choose a conservative approach for implementing it.
Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Cristian Rodríguez <crrodriguez@opensuse.org>
Cc: Paul Eggert <eggert@cs.ucla.edu>
Cc: Mark Harris <mark.hsj@gmail.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Commit a06b40cdf5 updated stat.h to use
__USE_XOPEN2K8 instead of __USE_MISC to add the st_atim, st_mtim and
st_ctim members to struct stat. However, for microblaze, there are two
definitions of struct stat, depending on the __USE_FILE_OFFSET64 macro.
The second one was not updated.
Change __USE_MISC to __USE_XOPEN2K8 in the __USE_FILE_OFFSET64 version
of struct stat for microblaze.
The hppa port starts libc at GLIBC_2.2, but has earlier symbol
versions in other shared objects. This means that the compat
symbol for readdir64 is not actually present in libc even though
have-GLIBC_2.1.3 is defined as yes at the make level.
Fixes commit 15e50e6c96 ("Linux:
dirent/tst-readdir64-compat can be a regular test") by mostly
reverting it.
It adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-amd64-avx2.S. It is used only if AVX2 is supported
and enabled by the architecture.
As in the generic implementation, the final step that XORs the
keystream with the input is omitted. The final state register
clearing is also omitted.
On a Ryzen 9 5900X it shows the following improvements (using
formatted bench-arc4random data):
SSE MB/s
-----------------------------------------------
arc4random [single-thread] 704.25
arc4random_buf(16) [single-thread] 1018.17
arc4random_buf(32) [single-thread] 1315.27
arc4random_buf(48) [single-thread] 1449.36
arc4random_buf(64) [single-thread] 1511.16
arc4random_buf(80) [single-thread] 1539.48
arc4random_buf(96) [single-thread] 1571.06
arc4random_buf(112) [single-thread] 1596.16
arc4random_buf(128) [single-thread] 1613.48
-----------------------------------------------
AVX2 MB/s
-----------------------------------------------
arc4random [single-thread] 922.61
arc4random_buf(16) [single-thread] 1478.70
arc4random_buf(32) [single-thread] 2241.80
arc4random_buf(48) [single-thread] 2681.28
arc4random_buf(64) [single-thread] 2913.43
arc4random_buf(80) [single-thread] 3009.73
arc4random_buf(96) [single-thread] 3141.16
arc4random_buf(112) [single-thread] 3254.46
arc4random_buf(128) [single-thread] 3305.02
-----------------------------------------------
Checked on x86_64-linux-gnu.
It adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-amd64-ssse3.S. It replaces ROTATE_SHUF_2 (which
uses pshufb) with ROTATE2, thus turning the original implementation
into plain SSE2.
As in the generic implementation, the final step that XORs the
keystream with the input is omitted. The final state register
clearing is also omitted.
On a Ryzen 9 5900X it shows the following improvements (using
formatted bench-arc4random data):
GENERIC MB/s
-----------------------------------------------
arc4random [single-thread] 443.11
arc4random_buf(16) [single-thread] 552.27
arc4random_buf(32) [single-thread] 626.86
arc4random_buf(48) [single-thread] 649.81
arc4random_buf(64) [single-thread] 663.95
arc4random_buf(80) [single-thread] 674.78
arc4random_buf(96) [single-thread] 675.17
arc4random_buf(112) [single-thread] 680.69
arc4random_buf(128) [single-thread] 683.20
-----------------------------------------------
SSE MB/s
-----------------------------------------------
arc4random [single-thread] 704.25
arc4random_buf(16) [single-thread] 1018.17
arc4random_buf(32) [single-thread] 1315.27
arc4random_buf(48) [single-thread] 1449.36
arc4random_buf(64) [single-thread] 1511.16
arc4random_buf(80) [single-thread] 1539.48
arc4random_buf(96) [single-thread] 1571.06
arc4random_buf(112) [single-thread] 1596.16
arc4random_buf(128) [single-thread] 1613.48
-----------------------------------------------
Checked on x86_64-linux-gnu.
It adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-aarch64.S. It is used by default, and only
little-endian is supported (big-endian uses the generic code).
As in the generic implementation, the final step that XORs the
keystream with the input is omitted. The final state register
clearing is also omitted.
On a virtualized Linux on Apple M1 it shows the following
improvements (using formatted bench-arc4random data):
GENERIC MB/s
-----------------------------------------------
arc4random [single-thread] 380.89
arc4random_buf(16) [single-thread] 500.73
arc4random_buf(32) [single-thread] 552.61
arc4random_buf(48) [single-thread] 566.82
arc4random_buf(64) [single-thread] 574.01
arc4random_buf(80) [single-thread] 581.02
arc4random_buf(96) [single-thread] 591.19
arc4random_buf(112) [single-thread] 592.29
arc4random_buf(128) [single-thread] 596.43
-----------------------------------------------
OPTIMIZED MB/s
-----------------------------------------------
arc4random [single-thread] 569.60
arc4random_buf(16) [single-thread] 825.78
arc4random_buf(32) [single-thread] 987.03
arc4random_buf(48) [single-thread] 1042.39
arc4random_buf(64) [single-thread] 1075.50
arc4random_buf(80) [single-thread] 1094.68
arc4random_buf(96) [single-thread] 1130.16
arc4random_buf(112) [single-thread] 1129.58
arc4random_buf(128) [single-thread] 1137.91
-----------------------------------------------
Checked on aarch64-linux-gnu.
It shows both throughput (total bytes obtained in the test duration)
and latency for both arc4random and arc4random_buf with different
sizes.
Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.
The basic tst-arc4random-chacha20.c checks if the output of the
ChaCha20 implementation matches the reference test vectors from
RFC8439.
The tst-arc4random-fork.c checks whether subprocesses generate distinct
streams of randomness (i.e. whether fork handling is done correctly).
The tst-arc4random-stats.c is a statistical test of the randomness of
arc4random, arc4random_buf, and arc4random_uniform.
The tst-arc4random-thread.c checks whether threads generate distinct
streams of randomness (i.e. whether the functions are thread-safe).
Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.
Co-authored-by: Florian Weimer <fweimer@redhat.com>
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
The implementation is based on scalar ChaCha20 with a per-thread cache.
It uses getrandom, or /dev/urandom as a fallback, to get the initial
entropy, and reseeds the internal state after every 16 MB of consumed
buffer.
To improve performance and lower memory consumption the per-thread
cache is allocated lazily on the first call to any arc4random function,
and if the memory allocation fails getentropy or /dev/urandom is used
as a fallback.
The cache is also cleared on thread exit only if it was initialized (so
if arc4random is not called it is not touched).
Although it is lock-free, arc4random is still not async-signal-safe
(the per-thread state is not updated atomically).
The ChaCha20 implementation is based on RFC8439 [1], omitting the final
XOR of the keystream with the plaintext because the plaintext is a
stream of zeros. This strategy is similar to what OpenBSD arc4random
does.
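For reference, a compact sketch of the ChaCha20 block function from
RFC8439, producing the keystream directly since the XOR with an
all-zero plaintext can be skipped (illustrative only, not the glibc
code):

  #include <stdint.h>

  #define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))
  #define QUARTERROUND(a, b, c, d)                \
    do {                                          \
      a += b; d ^= a; d = ROTL32 (d, 16);         \
      c += d; b ^= c; b = ROTL32 (b, 12);         \
      a += b; d ^= a; d = ROTL32 (d, 8);          \
      c += d; b ^= c; b = ROTL32 (b, 7);          \
    } while (0)

  /* One ChaCha20 block (RFC8439): 20 rounds over a copy of the
     16-word STATE, then add STATE back in.  OUT is used directly as
     keystream; the XOR with the all-zero plaintext is skipped.  The
     caller increments the block counter in STATE[12] between blocks.  */
  static void
  chacha20_block (uint32_t out[16], const uint32_t state[16])
  {
    uint32_t x[16];
    for (int i = 0; i < 16; i++)
      x[i] = state[i];

    for (int i = 0; i < 10; i++)
      {
        /* Column round.  */
        QUARTERROUND (x[0], x[4], x[8],  x[12]);
        QUARTERROUND (x[1], x[5], x[9],  x[13]);
        QUARTERROUND (x[2], x[6], x[10], x[14]);
        QUARTERROUND (x[3], x[7], x[11], x[15]);
        /* Diagonal round.  */
        QUARTERROUND (x[0], x[5], x[10], x[15]);
        QUARTERROUND (x[1], x[6], x[11], x[12]);
        QUARTERROUND (x[2], x[7], x[8],  x[13]);
        QUARTERROUND (x[3], x[4], x[9],  x[14]);
      }

    for (int i = 0; i < 16; i++)
      out[i] = x[i] + state[i];
  }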
The arc4random_uniform is based on previous work by Florian Weimer,
where the algorithm is based on Jérémie Lumbroso's paper Optimal
Discrete Uniform Generation from Coin Flips, and Applications (2013)
[2], which credits Donald E. Knuth and Andrew C. Yao, The complexity
of nonuniform random number generation (1976), for solving the general
case.
The main advantage of this method is that the unit of randomness is not
the uniform random variable (uint32_t), but a random bit. It optimizes
the internal buffer sampling by initially consuming a 32-bit random
variable and then sampling byte per byte. Depending on the upper bound
requested, it might lead to better CPU utilization.
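A minimal sketch of that algorithm (Lumbroso's "Fast Dice Roller"),
assuming a hypothetical random_bit helper; the actual implementation
draws bits from the per-thread buffer as described above:

  #include <stdint.h>

  /* Fast Dice Roller (Lumbroso 2013): return a uniform value in
     [0, n), consuming one random bit per iteration.  random_bit is a
     hypothetical helper returning 0 or 1.  */
  uint32_t
  uniform_from_bits (uint32_t n, int (*random_bit) (void))
  {
    if (n < 2)
      return 0;

    uint64_t v = 1;   /* Size of the uniform range built so far.  */
    uint64_t c = 0;   /* Uniform value in [0, v).  */
    for (;;)
      {
        v <<= 1;
        c = (c << 1) | (uint64_t) (random_bit () & 1);
        if (v >= n)
          {
            if (c < n)
              return (uint32_t) c;
            v -= n;             /* Reject, but keep leftover entropy.  */
            c -= n;
          }
      }
  }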
Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.
Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>
[1] https://datatracker.ietf.org/doc/html/rfc8439
[2] https://arxiv.org/pdf/1304.1916.pdf