Commit Graph

38810 Commits

Author SHA1 Message Date
Adhemerval Zanella
900fa25736 stdio: Remove the usage of $(fno-unit-at-a-time) for errlist.c
The errlist.c file is built with -fno-toplevel-reorder to prevent the
compiler from reordering the compat assembly directives, due to an
assembler issue [1] (fixed in 2.39).

This patch removes the compiler flag by splitting the compat symbol
generation into two phases.  First, _sys_errlist_internal, without any
compat symbol directives, is compiled to generate an assembly source
file.  This generated assembly is then used as input to a
platform-agnostic errlist-data.S, which creates the compat definitions.
This prevents the compiler from moving any compat directive before the
_sys_errlist_internal definition itself.

Checked on a make check run-built-tests=no on all affected ABIs.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=29012
2022-05-13 10:54:41 -03:00
H.J. Lu
111254f3e1 Add declare_object_symbol_alias for assembly codes (BZ #28128)
There are 2 problems in:

 #define declare_symbol_alias(symbol, original, type, size) \
  declare_symbol_alias_1 (symbol, original, type, size)
 #ifdef __ASSEMBLER__
 # define declare_symbol_alias_1(symbol, original, type, size) \
   strong_alias (original, symbol); \
   .type C_SYMBOL_NAME (symbol), %##type; \
   .size C_SYMBOL_NAME (symbol), size

1. The type and size tokens in the .type and .size directives are
   substituted by the macro arguments.
2. %##type is expanded to "% type" due to the GCC bug:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101613

But the assembler doesn't support "% type".

Work around BZ #28128 by:

1. Not defining declare_symbol_alias for assembly code.
2. Defining declare_object_symbol_alias for assembly code (see the
   sketch below).
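
A minimal sketch of how the second point can avoid both problems, reusing
the strong_alias and C_SYMBOL_NAME helpers quoted above; the type is
hard-coded to %object so no %##type paste is needed, and the renamed size
parameter no longer collides with the .size directive.  The exact
definition added by this commit may differ:

 #ifdef __ASSEMBLER__
 # define declare_object_symbol_alias(symbol, original, s_size) \
    strong_alias (original, symbol); \
    .type C_SYMBOL_NAME (symbol), %object; \
    .size C_SYMBOL_NAME (symbol), s_size
 #endif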

Reviewed-by: Fangrui Song <maskray@google.com>
2022-05-13 10:54:41 -03:00
Siddhesh Poyarekar
9bcd12d223 wcrtomb: Make behavior POSIX compliant
The GNU implementation of wcrtomb assumes that there are at least
MB_CUR_MAX bytes available in the destination buffer passed to wcrtomb
as the first argument.  This is not compatible with the POSIX
definition, which only requires enough space for the input wide
character.

This does not break much in practice because when users supply buffers
smaller than MB_CUR_MAX (e.g. in ncurses), they compute and dynamically
allocate the buffer, which results in enough spare space (thanks to
usable_size in malloc and padding in alloca) that no actual buffer
overflow occurs.  However when the code is built with _FORTIFY_SOURCE,
it runs into the hard check against MB_CUR_MAX in __wcrtomb_chk and
hence fails.  It wasn't evident until now because dynamic allocations
would result in wcrtomb not being fortified, but with _FORTIFY_SOURCE=3
that limitation is gone, resulting in such code failing.

To fix this problem, introduce an internal buffer that is MB_LEN_MAX
long and use that to perform the conversion and then copy the resultant
bytes into the destination buffer.  Also move the fortification check
into the main implementation, which checks the result after conversion
and aborts if the resultant byte count is greater than the destination
buffer size.
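
A rough sketch of the convert-into-a-temporary-then-copy idea, with a
hypothetical helper name; the real implementation folds the check into
wcrtomb's internals and aborts through the fortify machinery rather than
returning an error:

    #include <limits.h>
    #include <string.h>
    #include <wchar.h>

    /* Convert WC into an internal MB_LEN_MAX buffer, then copy only the
       bytes actually produced into S, checking against the known
       destination size DSTLEN after the conversion.  */
    static size_t
    convert_via_internal_buf (char *s, wchar_t wc, mbstate_t *ps,
                              size_t dstlen)
    {
      char buf[MB_LEN_MAX];
      size_t result = wcrtomb (buf, wc, ps);
      if (result == (size_t) -1)
        return result;                 /* Encoding error (EILSEQ).  */
      if (s != NULL)
        {
          if (result > dstlen)         /* Fortified path would abort here.  */
            return (size_t) -1;
          memcpy (s, buf, result);     /* Copy only the bytes produced.  */
        }
      return result;
    }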

One complication is that applications that assume the MB_CUR_MAX
limitation to be gone may not be able to run safely on older glibcs if
they use static destination buffers smaller than MB_CUR_MAX; dynamic
allocations will always have enough spare space that no actual overruns
will occur.  One alternative to fixing this is to bump symbol version to
prevent them from running on older glibcs but that seems too strict a
constraint.  Instead, since these users will only have made this
decision on reading the manual, I have put a note in the manual warning
them about the pitfalls of having static buffers smaller than
MB_CUR_MAX and running them on older glibc.

Benchmarking:

The wcrtomb microbenchmark shows significant increases in maximum
execution time for all locales, ranging from 10x for ar_SA.UTF-8 to
1.5x-2x for nearly everything else.  The mean execution time however saw
practically no impact, with some results even being quicker, indicating
that cache locality has a much bigger role in the overhead.

Given that the additional copy uses a temporary buffer inside wcrtomb,
it's likely that a hot path will end up putting that buffer (which is
responsible for the additional overhead) in a similar place on stack,
giving the necessary cache locality to negate the overhead.  However in
situations where wcrtomb ends up getting called at wildly different
spots on the call stack (or is on different call stacks, e.g. with
threads or different execution contexts) and is still a hotspot, the
performance lag will be visible.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-05-13 19:15:46 +05:30
Wangyang Guo
8162147872 nptl: Add backoff mechanism to spinlock loop
When multiple threads are waiting for a lock at the same time, once the
lock owner releases it, the waiters all see the lock as available and try
to acquire it, which may cause an expensive CAS storm.

Binary exponential backoff with random jitter is introduced.  As the
number of try-lock attempts increases, it is more likely that a larger
number of threads are competing for the adaptive mutex, so the wait time
is increased exponentially.  Random jitter is also added to avoid
synchronized try-lock attempts from other threads.
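
An illustrative sketch of one backoff step (not the actual
sysdeps/nptl/pthread_mutex_backoff.h code; the jitter helper below is a
stand-in for the separately added fast-jitter interface):

    #include <stdint.h>

    /* Stand-in for an inlined fast jitter source.  */
    static inline uint32_t
    fast_jitter (void)
    {
      static uint32_t state = 2463534242u;   /* xorshift32 is enough here */
      state ^= state << 13;
      state ^= state >> 17;
      state ^= state << 5;
      return state;
    }

    /* Spin for an exponentially growing, jittered count, capped so large
       critical sections do not add too much extra latency.  */
    static inline void
    backoff_spin (unsigned int *exponent, unsigned int max_exponent)
    {
      unsigned int spins = 1u << *exponent;
      spins += fast_jitter () % spins;        /* de-synchronize waiters */
      for (volatile unsigned int i = 0; i < spins; i++)
        ;                                     /* a pause hint fits here */
      if (*exponent < max_exponent)
        (*exponent)++;                        /* binary exponential growth */
    }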

v2: Remove read-check before try-lock for performance.

v3:
1. Restore the read-check since it works well on some platforms.
2. Make backoff arch-dependent, and enable it for x86_64.
3. Limit the maximum backoff to reduce latency with large critical sections.

v4: Fix strict-prototypes error in sysdeps/nptl/pthread_mutex_backoff.h

v5: Commit log updated for regression in large critical section.

Result of pthread-mutex-locks bench

Test Platform: Xeon 8280L (2 socket, 112 CPUs in total)
First row: thread count
First column: critical-section length
Values: backoff vs upstream (time ratio; lower is better)

non-critical-length: 1
	1	2	4	8	16	32	64	112	140
0	0.99	0.58	0.52	0.49	0.43	0.44	0.46	0.52	0.54
1	0.98	0.43	0.56	0.50	0.44	0.45	0.50	0.56	0.57
2	0.99	0.41	0.57	0.51	0.45	0.47	0.48	0.60	0.61
4	0.99	0.45	0.59	0.53	0.48	0.49	0.52	0.64	0.65
8	1.00	0.66	0.71	0.63	0.56	0.59	0.66	0.72	0.71
16	0.97	0.78	0.91	0.73	0.67	0.70	0.79	0.80	0.80
32	0.95	1.17	0.98	0.87	0.82	0.86	0.89	0.90	0.90
64	0.96	0.95	1.01	1.01	0.98	1.00	1.03	0.99	0.99
128	0.99	1.01	1.01	1.17	1.08	1.12	1.02	0.97	1.02

non-critical-length: 32
	1	2	4	8	16	32	64	112	140
0	1.03	0.97	0.75	0.65	0.58	0.58	0.56	0.70	0.70
1	0.94	0.95	0.76	0.65	0.58	0.58	0.61	0.71	0.72
2	0.97	0.96	0.77	0.66	0.58	0.59	0.62	0.74	0.74
4	0.99	0.96	0.78	0.66	0.60	0.61	0.66	0.76	0.77
8	0.99	0.99	0.84	0.70	0.64	0.66	0.71	0.80	0.80
16	0.98	0.97	0.95	0.76	0.70	0.73	0.81	0.85	0.84
32	1.04	1.12	1.04	0.89	0.82	0.86	0.93	0.91	0.91
64	0.99	1.15	1.07	1.00	0.99	1.01	1.05	0.99	0.99
128	1.00	1.21	1.20	1.22	1.25	1.31	1.12	1.10	0.99

non-critical-length: 128
	1	2	4	8	16	32	64	112	140
0	1.02	1.00	0.99	0.67	0.61	0.61	0.61	0.74	0.73
1	0.95	0.99	1.00	0.68	0.61	0.60	0.60	0.74	0.74
2	1.00	1.04	1.00	0.68	0.59	0.61	0.65	0.76	0.76
4	1.00	0.96	0.98	0.70	0.63	0.63	0.67	0.78	0.77
8	1.01	1.02	0.89	0.73	0.65	0.67	0.71	0.81	0.80
16	0.99	0.96	0.96	0.79	0.71	0.73	0.80	0.84	0.84
32	0.99	0.95	1.05	0.89	0.84	0.85	0.94	0.92	0.91
64	1.00	0.99	1.16	1.04	1.00	1.02	1.06	0.99	0.99
128	1.00	1.06	0.98	1.14	1.39	1.26	1.08	1.02	0.98

There is a regression with large critical sections, but the adaptive
mutex is aimed at "quick" locks.  Small critical sections are more common
when users choose the adaptive pthread_mutex.

Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-05-09 14:38:40 -07:00
Florian Weimer
a2a6bce7d7 Linux: Implement a useful version of _startup_fatal
On i386 and ia64, the TCB is not available at this point.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-05-09 18:15:16 +02:00
Florian Weimer
18bd9c3d3b ia64: Always define IA64_USE_NEW_STUB as a flag macro
And keep the previous definition if it exists.  This allows
disabling IA64_USE_NEW_STUB while keeping USE_DL_SYSINFO defined.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-05-09 18:15:16 +02:00
Adhemerval Zanella
71e2a681f1 linux: Fix posix_spawn return code if clone fails (BZ#29109)
__clone_internal returns the error in errno.

Checked on x86_64-linux-gnu.
2022-05-06 10:48:30 -03:00
Siddhesh Poyarekar
050cc5f7c1 benchtests: Add wcrtomb microbenchmark
Add a simple benchmark that measures wcrtomb performance with various
locales with 1-4 byte characters.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-05-06 18:16:43 +05:30
Xiaoming Ni
cf73acb596 clock_settime/clock_gettime: Use __nonnull to avoid null pointer
clock_settime()
clock_settime64()
clock_gettime()
clock_gettime64()
Add __nonnull((2)) to avoid null pointer access.
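
For illustration, a hedged sketch of what such an annotated declaration
looks like, using a hypothetical name rather than the real header entry:

    #include <time.h>

    /* __nonnull expands to the GCC nonnull attribute, so the compiler can
       warn at compile time when a literal NULL is passed as argument 2.  */
    extern int my_clock_gettime (clockid_t clock_id, struct timespec *tp)
         __nonnull ((2));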

Link: https://sourceware.org/bugzilla/show_bug.cgi?id=27662
Link: https://sourceware.org/bugzilla/show_bug.cgi?id=29084
Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-05-05 17:48:04 +05:30
Xiaoming Ni
ed2ddeffa5 clock_adjtime: Use __nonnull to avoid null pointer
clock_adjtime()/clock_adjtime64()
Add __nonnull((2)) to avoid null pointer access.

Link: https://sourceware.org/bugzilla/show_bug.cgi?id=27662
Link: https://sourceware.org/bugzilla/show_bug.cgi?id=29084
Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-05-05 17:48:04 +05:30
Xiaoming Ni
6a9786b8ec ntp_xxxtimex: Use __nonnull to avoid null pointer
ntp_gettime()
ntp_gettime64()
ntp_gettimex()
ntp_gettimex64()
ntp_adjtime()
Add __nonnull((1)) to avoid null pointer access.

Link: https://sourceware.org/bugzilla/show_bug.cgi?id=27662
Link: https://sourceware.org/bugzilla/show_bug.cgi?id=29084
Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-05-05 17:48:04 +05:30
Xiaoming Ni
d62a70fda8 adjtimex/adjtimex64: Use __nonnull to avoid null pointer
Add __nonnull((1)) to the adjtimex()/adjtimex64() function declaration
to avoid null pointer access.

Link: https://sourceware.org/bugzilla/show_bug.cgi?id=27662
Link: https://sourceware.org/bugzilla/show_bug.cgi?id=29084
Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-05-05 17:48:04 +05:30
Samuel Thibault
eff158b75d hurd spawni: Fix reauthenticating closed fds
When an fd is closed, the port cell remains, but the port becomes
MACH_PORT_NULL, so we have to guard against it.
2022-05-05 02:14:43 +02:00
Florian Weimer
c1b68685d4 Linux: Define MMAP_CALL_INTERNAL
Unlike MMAP_CALL, this avoids a TCB dependency for an errno update
on failure.

<mmap_internal.h> cannot be included as is on several architectures
due to the definition of page_unit, so introduce a separate header
file for the definition of MMAP_CALL and MMAP_CALL_INTERNAL,
<mmap_call.h>.

Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
2022-05-04 15:37:21 +02:00
Florian Weimer
60f0f2130d i386: Honor I386_USE_SYSENTER for 6-argument Linux system calls
Introduce an int-80h-based version of __libc_do_syscall and use
it if I386_USE_SYSENTER is defined as 0.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-05-04 15:37:21 +02:00
Florian Weimer
6e5c7a1e26 i386: Remove OPTIMIZE_FOR_GCC_5 from Linux libc-do-syscall.S
After commit a78e6a10d0
("i386: Remove broken CAN_USE_REGISTER_ASM_EBP (bug 28771)"),
it is never defined.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-05-04 15:37:21 +02:00
Siddhesh Poyarekar
db1efe02c9 manual: Clarify that abbreviations of long options are allowed
The man page and code comments clearly state that abbreviations of long
option names are recognized correctly as long as they are unique.
Document this fact in the glibc manual as well.
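
A small self-contained example of the documented behavior (not taken from
the manual text): "--verb" is accepted as an unambiguous abbreviation of
"--verbose" by getopt_long.

    #include <getopt.h>
    #include <stdio.h>

    int
    main (int argc, char **argv)
    {
      static const struct option longopts[] =
        {
          { "verbose", no_argument, NULL, 'v' },
          { NULL, 0, NULL, 0 }
        };
      int opt;
      /* Invoking this program with --verb prints "verbose enabled".  */
      while ((opt = getopt_long (argc, argv, "v", longopts, NULL)) != -1)
        if (opt == 'v')
          puts ("verbose enabled");
      return 0;
    }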

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Andreas Schwab <schwab@linux-m68k.org>
2022-05-04 15:56:47 +05:30
Fangrui Song
8e28aa3a51 elf: Remove fallback to the start of DT_STRTAB for dladdr
When neither DT_HASH nor DT_GNU_HASH is present, the code scans
[DT_SYMTAB, DT_STRTAB). However, there is no guarantee that .dynstr
immediately follows .dynsym (e.g. lld typically places .gnu.version
after .dynsym).

In the absence of a hash table, symbol lookup will always fail
(map->l_nbuckets == 0 in dl-lookup.c) as if the object has no symbol, so
it seems fair for dladdr to do the same.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-05-02 09:06:39 -07:00
Fangrui Song
4e7e4f3b4b powerpc32: Remove unused HAVE_PPC_SECURE_PLT
82a79e7d18 removed the only user of
HAVE_PPC_SECURE_PLT.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-05-02 08:55:36 -07:00
Florian Weimer
d056c21213 dlfcn: Implement the RTLD_DI_PHDR request type for dlinfo
The information is theoretically available via dl_iterate_phdr as
well, but that approach is very slow if there are many shared
objects.
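
A hedged usage sketch, assuming the request stores a pointer to the
program header array and returns the number of entries, as documented for
this request type:

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <link.h>
    #include <stdio.h>

    int
    main (void)
    {
      const ElfW(Phdr) *phdr;
      void *handle = dlopen ("libc.so.6", RTLD_NOW);
      if (handle == NULL)
        return 1;
      /* Negative return value indicates an error.  */
      int phnum = dlinfo (handle, RTLD_DI_PHDR, &phdr);
      if (phnum < 0)
        return 1;
      for (int i = 0; i < phnum; i++)
        printf ("phdr %d: type %lu\n", i, (unsigned long) phdr[i].p_type);
      dlclose (handle);
      return 0;
    }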

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@rehdat.com>
2022-04-29 17:00:53 +02:00
Florian Weimer
93804a1ee0 manual: Document the dlinfo function
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@rehdat.com>
2022-04-29 17:00:48 +02:00
Florian Weimer
e47de5cb2d Do not use --hash-style=both for building glibc shared objects
The comment indicates that --hash-style=both was used to maintain
compatibility with static dlopen, but there have been many internal ABI
changes since then, so this compatibility does not add value anymore.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-04-29 16:37:51 +02:00
Siddhesh Poyarekar
5b5b1012d5 benchtests: Better libmvec integration
Improve libmvec benchmark integration so that in future other
architectures may be able to run their libmvec benchmarks as well.  This
now allows libmvec benchmarks to be run with `make BENCHSET=bench-math`.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-04-29 11:48:18 +05:30
Siddhesh Poyarekar
944afe6d95 benchtests: Add UNSUPPORTED benchmark status
The libmvec benchmarks print a message indicating that a certain CPU
feature is unsupported and exit prematurely, which breaks the JSON in
bench.out.

Handle this more elegantly in the bench makefile target by adding
support for an UNSUPPORTED exit status (77) so that bench.out continues
to have output for valid tests.
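
A sketch of the convention from a benchmark's point of view; the macro
name and the feature probe below are hypothetical placeholders:

    #include <stdlib.h>

    #define BENCH_EXIT_UNSUPPORTED 77

    static int
    cpu_supports_required_feature (void)   /* hypothetical probe */
    {
      return 0;
    }

    int
    main (void)
    {
      /* Exiting with 77 lets the harness record the test as UNSUPPORTED
         instead of emitting broken JSON into bench.out.  */
      if (!cpu_supports_required_feature ())
        exit (BENCH_EXIT_UNSUPPORTED);
      /* ... run and print the benchmark results ... */
      return 0;
    }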

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-04-29 11:48:16 +05:30
Adhemerval Zanella
118a2aee07 linux: Fix fchmodat with AT_SYMLINK_NOFOLLOW for 64 bit time_t (BZ#29097)
The AT_SYMLINK_NOFOLLOW emulation uses the default 32-bit internal stat
calls, which fail with EOVERFLOW if the file contains timestamps beyond
2038.

Checked on i686-linux-gnu.
2022-04-28 09:58:44 -03:00
Alan Modra
6f043e0ee7 Use __ehdr_start rather than _begin in _dl_start_final
__ehdr_start is already used in rtld.c:dl_main, and can serve the same
purpose as _begin.  Besides tidying the code, using linker-defined
section-relative symbols rather than "-defsym _begin=0" better reflects
the intent of _dl_start_final's use of _begin, which is to refer to the
load address of ld.so rather than absolute address zero.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-04-28 08:50:11 +09:30
Noah Goldstein
911c63a51c sysdeps: Add 'get_fast_jitter' interface in fast-jitter.h
'get_fast_jitter' is meant to be used purely for performance purposes.
In all cases where it is used, it should be acceptable to get no
randomness (see the default case).  An example use case is setting
jitter for retries between threads at a lock. There is a
performance benefit to having jitter, but only if the jitter can
be generated very quickly and ultimately there is no serious issue
if no jitter is generated.

The implementation generally uses 'HP_TIMING_NOW' iff it is inlined
(to avoid any potential syscall paths).
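
A rough sketch of the idea, not the actual fast-jitter.h contents: return
a cheap inlined timestamp when one is available, otherwise return 0, i.e.
no randomness, which callers must tolerate.

    #include <stdint.h>

    static inline uint32_t
    get_fast_jitter_sketch (void)
    {
    #if defined __x86_64__ || defined __i386__
      return (uint32_t) __builtin_ia32_rdtsc ();   /* inlined, no syscall */
    #else
      return 0;                                    /* default: no jitter */
    #endif
    }
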
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-04-27 17:17:43 -05:00
DJ Delorie
7c477b57a3 posix/glob.c: update from gnulib
Copied from gnulib/lib/glob.c in order to fix rhbz 1982608
Also fixes swbz 25659

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2022-04-27 17:19:31 -04:00
Wangyang Guo
9e5daa1f6a benchtests: Add pthread-mutex-locks bench
Benchmark for testing pthread mutex locks performance with different
threads and critical sections.

The test configuration consists of 3 parts:
1. thread number
2. critical-section length
3. non-critical-section length

The thread number starts at 1 and is doubled until it reaches the number
of CPU cores (nprocs).  An additional over-saturation case (1.25 * nprocs)
is also included.
The critical section is represented by a loop around a shared do_filler();
its length is determined by the loop iteration count.
The non-critical section is similar to the critical section, except it is
based on a non-shared do_filler().

Currently, adaptive pthread_mutex lock is tested.
2022-04-27 13:41:57 -07:00
Adhemerval Zanella
834ddd0432 linux: Fix missing internal 64 bit time_t stat usage
These are two spots missed by the initial change done in 52a5fe70a2.

Checked on i686-linux-gnu.
2022-04-27 14:21:07 -03:00
Adhemerval Zanella
3a0588ae48 elf: Fix DFS sorting algorithm for LD_TRACE_LOADED_OBJECTS with missing libraries (BZ #28868)
In _dl_map_object the underlying file is not opened in trace mode (in
other cases where the underlying file can't be opened, _dl_map_object
quits with an error).  If there are any missing libraries being
processed, they will not be counted in the final nlist size passed to
_dl_sort_maps later in the function, which is then used by
_dl_sort_maps_dfs for the stack-allocated working maps:

222   /* Array to hold RPO sorting results, before we copy back to  maps[].  */
223   struct link_map *rpo[nmaps];
224
225   /* The 'head' position during each DFS iteration. Note that we start at
226      one past the last element due to first-decrement-then-store (see the
227      bottom of above dfs_traversal() routine).  */
228   struct link_map **rpo_head = &rpo[nmaps];

However, while traversing 'l_initfini' in dfs_traversal, it will still
consider the l_faked maps and thus update rpo more times than the
allocated working 'rpo' allows, overflowing the stack object.

As suggested in bugzilla, one option would be to avoid sorting the maps
in trace mode.  However, I think ignoring l_faked objects does make sense
(there is one less constraint on calling the sorting function), it allows
slightly less stack usage for trace mode, and it is a slightly simpler
solution.

The test does trigger the stack overflow; however, I tried to make it
more generic to check different scenarios of missing objects.

Checked on x86_64-linux-gnu.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-04-27 08:36:09 -03:00
Adhemerval Zanella
4f7b7d00e0 posix: Remove unused definition on _Fork
Checked on x86_64-linux-gnu.
2022-04-26 14:21:08 -03:00
H.J. Lu
4c5b1cf5a6 NEWS: Mention DT_RELR support 2022-04-26 10:16:11 -07:00
H.J. Lu
4ada564f35 elf: Add more DT_RELR tests
Verify that:

1. A DT_RELR shared library without DT_NEEDED works.
2. A DT_RELR shared library without DT_VERNEED works.
3. A DT_RELR shared library without libc.so on DT_NEEDED works.
2022-04-26 10:16:11 -07:00
H.J. Lu
60196d2ef2 elf: Properly handle zero DT_RELA/DT_REL values
With DT_RELR, there may be no relocations in DT_RELA/DT_REL and their
entry values are zero.  Don't relocate DT_RELA/DT_REL and update the
combined relocation start address if their entry values are zero.
2022-04-26 10:16:11 -07:00
Fangrui Song
e895cff59a elf: Support DT_RELR relative relocation format [BZ #27924]
PIE and shared objects usually have many relative relocations. In
2017/2018, SHT_RELR/DT_RELR was proposed on
https://groups.google.com/g/generic-abi/c/bX460iggiKg/m/GxjM0L-PBAAJ
("Proposal for a new section type SHT_RELR") and is a pre-standard. RELR
usually takes 3% or smaller space than R_*_RELATIVE relocations. The
virtual memory size of a mostly statically linked PIE is typically 5~10%
smaller.
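
For reference, a hedged sketch of how a RELR table is decoded, following
the encoding described in the generic-abi proposal; glibc's actual
ELF_DYNAMIC_DO_RELR code differs in types and integration:

    #include <limits.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Apply relative relocations from COUNT RELR entries, for an object
       loaded at LOAD_BASE.  */
    static void
    apply_relr (uintptr_t load_base, const uintptr_t *relr, size_t count)
    {
      uintptr_t *where = NULL;
      for (size_t i = 0; i < count; i++)
        {
          uintptr_t entry = relr[i];
          if ((entry & 1) == 0)
            {
              /* Even entry: address of the next word needing a fixup.  */
              where = (uintptr_t *) (load_base + entry);
              *where++ += load_base;
            }
          else
            {
              /* Odd entry: bitmap covering the next wordsize-1 words.  */
              size_t bit = 0;
              for (uintptr_t bits = entry >> 1; bits != 0; bits >>= 1, bit++)
                if (bits & 1)
                  where[bit] += load_base;
              where += CHAR_BIT * sizeof (uintptr_t) - 1;
            }
        }
    }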

---

Notes I will not include in the submitted commit:

Available on https://sourceware.org/git/?p=glibc.git;a=shortlog;h=refs/heads/maskray/relr

"pre-standard": even Solaris folks are happy with the refined generic-abi
proposal. Cary Coutant will apply the change
https://sourceware.org/pipermail/libc-alpha/2021-October/131781.html

This patch is simpler than Chrome OS's glibc patch and makes ELF_DYNAMIC_DO_RELR
available to all ports. I don't think the current glibc implementation
supports ia64 in an ELFCLASS32 container. That said, the style I used
works with an ELFCLASS32 container for a 64-bit machine if ElfW(Addr) is
64-bit.

* Chrome OS folks have carried a local patch since 2018 (latest version:
  https://chromium.googlesource.com/chromiumos/overlays/chromiumos-overlay/+/refs/heads/main/sys-libs/glibc/files/local/glibc-2.32).
  I.e. this feature has been battle tested.
* Android bionic has supported it since 2018 and switched to DT_RELR==36 in 2020.
* The Linux kernel has supported CONFIG_RELR since 2019-08
  (https://git.kernel.org/linus/5cf896fb6be3effd9aea455b22213e27be8bdb1d).
* A musl patch (by me) exists but is not applied:
  https://www.openwall.com/lists/musl/2019/03/06/3
* rtld-elf from FreeBSD 14 will support DT_RELR.

I believe upstream glibc should support DT_RELR to benefit all Linux
distributions. I filed some feature requests to get their attention:

* Gentoo: https://bugs.gentoo.org/818376
* Arch Linux: https://bugs.archlinux.org/task/72433
* Debian https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=996598
* Fedora https://bugzilla.redhat.com/show_bug.cgi?id=2014699

As of linker support (to the best of my knowledge):

* LLD supports DT_RELR.
* https://chromium.googlesource.com/chromiumos/overlays/chromiumos-overlay/+/refs/heads/main/sys-devel/binutils/files/
  has a gold patch.
* GNU ld feature request https://sourceware.org/bugzilla/show_bug.cgi?id=27923

Changes from the original patch:

1. Check the linker option, -z pack-relative-relocs, which adds a
GLIBC_ABI_DT_RELR symbol version dependency on the shared C library if
it provides a GLIBC_2.XX symbol version.
2. Change the make variable to have-dt-relr.
3. Rename tst-relr-no-pie to tst-relr-pie for --disable-default-pie.
4. Use TEST_VERIFY in tst-relr.c.
5. Add the check-tst-relr-pie.out test to check for linker generated
libc.so version dependency on GLIBC_ABI_DT_RELR.
6. Move ELF_DYNAMIC_DO_RELR before ELF_DYNAMIC_DO_REL.
2022-04-26 10:16:11 -07:00
H.J. Lu
57292f5741 Add GLIBC_ABI_DT_RELR for DT_RELR support
The EI_ABIVERSION field of the ELF header in executables and shared
libraries can be bumped to indicate the minimum ABI requirement on the
dynamic linker.  However, EI_ABIVERSION in executables isn't checked by
the Linux kernel ELF loader nor the existing dynamic linker.  Executables
will crash mysteriously if the dynamic linker doesn't support the ABI
features required by the EI_ABIVERSION field.  The dynamic linker should
be changed to check EI_ABIVERSION in executables.

Add a glibc version, GLIBC_ABI_DT_RELR, to indicate DT_RELR support so
that the existing dynamic linkers will issue an error on executables with
GLIBC_ABI_DT_RELR dependency.  When there is a DT_VERNEED entry with
libc.so on DT_NEEDED, issue an error if there is a DT_RELR entry without
GLIBC_ABI_DT_RELR dependency.

Support __placeholder_only_for_empty_version_map as the placeholder symbol
used only for empty version map to generate GLIBC_ABI_DT_RELR without any
symbols.
2022-04-26 10:16:11 -07:00
H.J. Lu
4610b24f5e elf: Define DT_RELR related macros and types 2022-04-26 10:16:11 -07:00
Fangrui Song
098a657fe4 elf: Replace PI_STATIC_AND_HIDDEN with opposite HIDDEN_VAR_NEEDS_DYNAMIC_RELOC
PI_STATIC_AND_HIDDEN indicates whether accesses to internal linkage
variables and hidden visibility variables in a shared object (ld.so)
need dynamic relocations (usually R_*_RELATIVE). PI (position
independent) in the macro name is a misnomer: a code sequence using GOT
is typically position-independent as well, but using dynamic relocations
does not meet the requirement.

Not defining PI_STATIC_AND_HIDDEN is legacy and we expect that all new
ports will define PI_STATIC_AND_HIDDEN.  More current ports define
PI_STATIC_AND_HIDDEN than do not, so change the configure default.

No functional change.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-04-26 09:26:22 -07:00
Carlos O'Donell
e465d97653 i386: Regenerate ulps
These failures were caught while building glibc master for Fedora
Rawhide which is built with '-mtune=generic -msse2 -mfpmath=sse'
using gcc 11.3 (gcc-11.3.1-2.fc35) on a Cascadelake Intel Xeon
processor.
2022-04-26 10:52:41 -04:00
Florian Weimer
8dcb6d0af0 dlfcn: Do not use rtld_active () to determine ld.so state (bug 29078)
When audit modules are loaded, ld.so initialization is not yet
complete, and rtld_active () returns false even though ld.so is
mostly working.  Instead, the static dlopen hook is used, but that
does not work at all because this is not a static dlopen situation.

Commit 466c1ea15f ("dlfcn: Rework
static dlopen hooks") moved the hook pointer into _rtld_global_ro,
which means that separate protection is not needed anymore and the
hook pointer can be checked directly.

The guard for disabling libio vtable hardening in _IO_vtable_check
should stay for now.

Fixes commit 8e1472d2c1 ("ld.so:
Examine GLRO to detect inactive loader [BZ #20204]").

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-04-26 14:24:36 +02:00
Florian Weimer
c935789bdf INSTALL: Rephrase --with-default-link documentation
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-04-26 14:22:23 +02:00
Fangrui Song
1305edd42c elf: Move post-relocation code of _dl_start into _dl_start_final
On non-PI_STATIC_AND_HIDDEN architectures, getting the address of
_rtld_local_ro (for GLRO (dl_final_object)) goes through a GOT entry.
The GOT load may be reordered before self relocation, leading to an
unrelocated/incorrect _rtld_local_ro address.

84e02af1eb tickled GCC powerpc32 to
reorder the GOT load before relative relocations, leading to an ld.so
crash. This is similar to the m68k jump table reordering issue fixed by
a8e9b5b807.

Move code after self relocation into _dl_start_final to avoid the
reordering. This fixes powerpc32 and may help other architectures when
ELF_DYNAMIC_RELOCATE is simplified in the future.
2022-04-25 10:30:27 -07:00
Joan Bruguera
33e03f9cd2 misc: Fix rare fortify crash on wchar funcs. [BZ 29030]
If `__glibc_objsize (__o) == (size_t) -1` (i.e. `__o` is unknown size), fortify
checks should pass, and `__whatever_alias` should be called.

Previously, `__glibc_objsize (__o) == (size_t) -1` was explicitly checked, but
on commit a643f60c53, this was moved into `__glibc_safe_or_unknown_len`.

A comment says the -1 case should work as: "The -1 check is redundant because
since it implies that __glibc_safe_len_cond is true.". But this fails when:
* `__s > 1`
* `__osz == -1` (i.e. unknown size at compile time)
* `__l` is big enough
* `__l * __s <= __osz` can be folded to a constant
(I only found this to be true for `mbsrtowcs` and other functions in wchar2.h)

In this case `__l * __s <= __osz` is false, and `__whatever_chk_warn` will be
called by `__glibc_fortify` or `__glibc_fortify_n` and crash the program.

This commit adds the explicit `__osz == -1` check again.
moc crashes on startup due to this, see: https://bugs.archlinux.org/task/74041

Minimal test case (test.c):
    #include <wchar.h>

    int main (void)
    {
        const char *hw = "HelloWorld";
        mbsrtowcs (NULL, &hw, (size_t)-1, NULL);
        return 0;
    }

Build with:
    gcc -O2 -Wp,-D_FORTIFY_SOURCE=2 test.c -o test && ./test

Output:
    *** buffer overflow detected ***: terminated

Fixes: BZ #29030
Signed-off-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-04-25 17:32:30 +05:30
Fangrui Song
693517b922 elf: Remove unused enum allowmask
Unused since 52a01100ad
("elf: Remove ad-hoc restrictions on dlopen callers [BZ #22787]").

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-04-25 01:01:02 -07:00
Florian Weimer
b571f3adff scripts/glibcelf.py: Mark as UNSUPPORTED on Python 3.5 and earlier
enum.IntFlag and enum.EnumMeta._missing_ support are not part of
earlier Python versions.
2022-04-25 09:14:49 +02:00
Noah Goldstein
c966099cdc x86: Optimize {str|wcs}rchr-evex
The new code unrolls the main loop slightly without adding too much
overhead and minimizes the comparisons for the search CHAR.

Geometric Mean of all benchmarks New / Old: 0.755
See email for all results.

Full xcheck passes on x86_64 with and without multiarch enabled.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-04-22 23:08:43 -05:00
Noah Goldstein
df7e295d18 x86: Optimize {str|wcs}rchr-avx2
The new code unrolls the main loop slightly without adding too much
overhead and minimizes the comparisons for the search CHAR.

Geometric Mean of all benchmarks New / Old: 0.832
See email for all results.

Full xcheck passes on x86_64 with and without multiarch enabled.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-04-22 23:08:40 -05:00
Noah Goldstein
5307aa9c18 x86: Optimize {str|wcs}rchr-sse2
The new code unrolls the main loop slightly without adding too much
overhead and minimizes the comparisons for the search CHAR.

Geometric Mean of all benchmarks New / Old: 0.741
See email for all results.

Full xcheck passes on x86_64 with and without multiarch enabled.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-04-22 23:08:36 -05:00
Noah Goldstein
c2ff9555a1 benchtests: Improve bench-strrchr
1. Use json-lib for printing results.
2. Expose all parameters (previously pos, seek_char, and max_char were
   not printed).
3. Add benchmarks that test multiple occurrences of seek_char in the
   string.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-04-22 23:07:54 -05:00