On a bpf-*-* target the testsuite fails:
./ld/ld-new: warning: test has a LOAD segment with RWX permissions
After adjusting the simulator flags to pass `--memory-size=10Mb', the
bpf testsuite passes.
Tested on bpf-*-*.
Bug: https://sourceware.org/PR29954
sim/testsuite:
* bpf/allinsn.exp (SIMFLAGS_FOR_TARGET): Adjust sim flags.
I386_PCREL_TYPE_P and X86_64_PCREL_TYPE_P are defined twice in
elfxx-x86.h. Remove the duplicates.
* elfxx-x86.h (I386_PCREL_TYPE_P): Remove duplication.
(X86_64_PCREL_TYPE_P): Likewise.
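Identical macro redefinitions are valid C and draw no diagnostic, which
is presumably how duplicates like these go unnoticed. A minimal
stand-alone illustration (PCREL_TYPE_P is a made-up name, not the
header's actual contents):
...
#include <stdio.h>

/* An identical redefinition of a macro is legal and silent, so a
   copy-pasted duplicate never shows up as a warning.  */
#define PCREL_TYPE_P(TYPE) ((TYPE) == 2)
#define PCREL_TYPE_P(TYPE) ((TYPE) == 2)  /* duplicate, compiles cleanly */

int
main (void)
{
  printf ("%d\n", PCREL_TYPE_P (2));
  return 0;
}
...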
A refactoring in 4b9728bec1 (gdb: use gdb_test_multiple in
gdb_breakpoint) left the $test_name variable undefined.
This patch fixes that.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
When running the testsuite in a non-optimized build on a slow machine, I
sometimes get:
UNTESTED: gdb.gdb/selftest.exp: Cannot set breakpoint at captured_main, skipping testcase.
do_self_tests, in lib/selftest-support.exp, uses `with_timeout_factor
10`, to account for the fact that reading the debug info of the gdb
binary (especially in a non-optimized GDB) can take time. But then it
ends up calling gdb_breakpoint, which uses gdb_expect with a hard-coded
timeout of 30 seconds.
Fix this by making gdb_breakpoint use gdb_test_multiple, which is a
desired change anyway for this kind of simple command / expected
output case.
Change-Id: I9b06ce991cc584810d8cc231b2b4893980b8be75
Reviewed-By: Lancelot Six <lancelot.six@amd.com>
This occurs when attempting to read back a section from the output
file in _bfd_XX_bfd_copy_private_bfd_data_common. The copy of the
section failed size sanity checking, and thus was not written.
* objcopy.c (copy_object): Return false if copy_section or
copy_relocations_in_section fails.
objcopy of an archive with an element containing an object whose fuzzed
section size far exceeds the element size: copy_section detects this,
but the temp file is still laid out for the large section. It can take
a long time to write terabytes of sparse file, a waste of time when the
file will be deleted anyway.
* objcopy.c (copy_archive): Don't write element contents after
bad status result from copy_object.
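A minimal sketch of the resulting control flow, with hypothetical names
(copy_one_element stands in for copy_object; this is not the actual
objcopy code):
...
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for copy_object: returns false when a section fails its
   size sanity check (here faked by the element's name).  */
static bool
copy_one_element (const char *name)
{
  return strcmp (name, "bad.o") != 0;
}

int
main (void)
{
  const char *elements[] = { "good.o", "bad.o" };
  for (int i = 0; i < 2; i++)
    {
      if (!copy_one_element (elements[i]))
        {
          /* Bad status: don't lay out and write the (possibly huge,
             sparse) contents for this element.  */
          fprintf (stderr, "skipping %s\n", elements[i]);
          continue;
        }
      /* ... write the element's contents to the temp file ... */
      printf ("wrote %s\n", elements[i]);
    }
  return 0;
}
...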
Another case of fuzzers finding that the section size sanity checks
are avoided with SHT_NOBITS sections.
* dwarf2.c (read_section): Check that the DWARF section being
read has contents.
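A hedged sketch of the kind of check this implies, written against the
public BFD API rather than the internal read_section helper (the
section name and error handling are simplified; the in-tree fix may
look different):
...
#include <stdio.h>
#include <stdlib.h>
#include <bfd.h>

int
main (int argc, char **argv)
{
  if (argc < 2)
    return 1;
  bfd_init ();
  bfd *abfd = bfd_openr (argv[1], NULL);
  if (abfd == NULL || !bfd_check_format (abfd, bfd_object))
    return 1;

  asection *sec = bfd_get_section_by_name (abfd, ".debug_info");
  if (sec == NULL)
    return 1;

  /* A fuzzed SHT_NOBITS .debug_info has a size but no contents on
     disk; bail out instead of trusting the size.  */
  if ((sec->flags & SEC_HAS_CONTENTS) == 0)
    {
      fprintf (stderr, "%s: .debug_info has no contents\n", argv[1]);
      return 1;
    }

  bfd_size_type size = bfd_section_size (sec);
  unsigned char *buf = malloc (size);
  if (buf == NULL || !bfd_get_section_contents (abfd, sec, buf, 0, size))
    return 1;
  /* ... parse DWARF from buf ... */
  return 0;
}
...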
In passing I spotted some incorrect #ifdef logic in bt-utils.h. The
logic in question has existed since the file was originally added in
commit:
commit abbbd4a3e0
Date: Wed Aug 11 13:24:33 2021 +0100
gdb: use libbacktrace to create a better backtrace for fatal signals
The code is trying to select between using libbacktrace or using the
execinfo supplied backtrace API.
First we check to see if we can use libbacktrace. If we can then we
include some header files, and then set some defines to indicate that
libbacktrace is being used.
Then we check if execinfo is available; if it is, we include
<execinfo.h> and set some alternative defines.
In theory the second block of logic should not trigger if the first
block (that uses libbacktrace) has also triggered, but we incorrectly
check the define 'PRINT_BACKTRACE_ON_FATAL_SIGNAL' instead of checking
for 'GDB_PRINT_INTERNAL_BACKTRACE_USING_LIBBACKTRACE', so the second
block triggers more than it should. The
'PRINT_BACKTRACE_ON_FATAL_SIGNAL' define is not defined anywhere; this
was a mistake in the original commit.
In reality this is harmless: we include <execinfo.h> when we don't
need to, but in bt-utils.c the libbacktrace define is always checked
for before the execinfo define, so we never actually end up using the
execinfo path (when libbacktrace is available). But I figure it's
still worth cleaning this up.
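In other words, the corrected structure is roughly the following; the
HAVE_LIBBACKTRACE and HAVE_EXECINFO_BACKTRACE config macros and the
execinfo-side define are stand-ins of mine, only the two defines
quoted above come from the real header:
...
/* Sketch only, not the actual bt-utils.h.  */

#ifdef HAVE_LIBBACKTRACE
/* #include "backtrace.h" and friends ...  */
# define GDB_PRINT_INTERNAL_BACKTRACE_USING_LIBBACKTRACE 1
#endif

/* The bug: this condition used to test PRINT_BACKTRACE_ON_FATAL_SIGNAL,
   which is never defined, so this block also triggered when
   libbacktrace was selected above.  */
#if defined HAVE_EXECINFO_BACKTRACE \
  && !defined GDB_PRINT_INTERNAL_BACKTRACE_USING_LIBBACKTRACE
# include <execinfo.h>
# define GDB_PRINT_INTERNAL_BACKTRACE_USING_EXECINFO 1
#endif
...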
I've tested GDB in a "default" build where libbacktrace is used, and
when configuring with --disable-libbacktrace, which causes the execinfo
backtrace API to be used instead; both still appear to work fine.
There should be no user visible changes after this commit.
While chasing some reverse debugging bugs, I found myself wondering what
was recorded by GDB to undo and redo a certain instruction. This commit
implements a simple way of printing that information.
If there isn't enough history to print the desired instruction (such as
when the user hasn't started recording yet or when they request 2
instructions back but only 1 was recorded), GDB warns the user like so:
(gdb) maint print record-instruction
Not enough recorded history
If there is enough, GDB prints the instruction like so:
(gdb) maint print record-instruction
4 bytes of memory at address 0x00007fffffffd5dc changed from: 01 00 00 00
Register eflags changed: [ IF ]
Register rip changed: (void (*)()) 0x401115 <main+15>
Approved-by: Eli Zaretskii <eliz@gnu.org>
Reviewed-by: Alexandra Hajkova <ahajkova@redhat.com>
Reviewed-by: Lancelot Six <lsix@lancelotsix.com>
Approved-by: Tom Tromey <tom@tromey.com>
This is something I discovered when working on aarch64, though it's
relevant to x86_64 too.
The PE32+ imports are located in the .idata section, which starts off
with a 20-byte structure for each DLL, containing offsets into the rest
of the section. This is the Import Directory Table in
https://learn.microsoft.com/en-us/windows/win32/debug/pe-format, which
is a concatenation of the .idata$2 sections. This is then followed by
20 zero bytes generated by the linker script, which calls this .idata$3.
After this comes the .idata$4 entries for each function, which the
loader overwrites with the function pointers. Because there's no padding
between .idata$3 and .idata$4, this means that if there's an even number
of DLLs, the function pointers won't be aligned on an 8-byte boundary.
Misaligned reads are slower on x86_64, but this is more important on
aarch64, as e.g. the `ldr x0, [x0, :lo12:__imp__func]` that the compiler
might generate requires __imp__func (the .idata$4 entry) to be aligned
to 8 bytes. Without this you get IMAGE_REL_ARM64_PAGEOFFSET_12L overflow
errors.
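The arithmetic can be checked with a small stand-alone program; the
struct below follows the Import Directory Table layout from the
PE-format document (five 32-bit fields, 20 bytes) and is illustration,
not linker code:
...
#include <stdint.h>
#include <stdio.h>

/* One Import Directory Table entry (.idata$2).  */
struct import_dir_entry
{
  uint32_t import_lookup_table_rva;
  uint32_t time_date_stamp;
  uint32_t forwarder_chain;
  uint32_t name_rva;
  uint32_t import_address_table_rva;
};

int
main (void)
{
  for (unsigned ndlls = 1; ndlls <= 4; ndlls++)
    {
      /* ndlls directory entries plus the 20-byte zero terminator
         (.idata$3); .idata$4 starts immediately after.  */
      unsigned offset
        = (unsigned) ((ndlls + 1) * sizeof (struct import_dir_entry));
      printf ("%u DLLs: .idata$4 at offset %u, 8-byte aligned: %s\n",
              ndlls, offset, offset % 8 == 0 ? "yes" : "no");
    }
  return 0;
}
...
With an even number of DLLs the offset is an odd multiple of 20, which
is never a multiple of 8, matching the misalignment described above.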
These files have changed as a result of regenerating them in
maintainer mode. The first line of sim/ppc/pk.h changed as an effect
of commit 319e41e83a ("sim: ppc: inline the sim-packages option").
This adds a test case for "finish" with variably-sized types, and for
inferior calls as well. This also extends the "runto" proc to handle
temporary breakpoints.
get_call_return_value can handle RETURN_VALUE_STRUCT_CONVENTION,
because the call is completely managed by gdb. However, it does not
handle variably-sized types correctly. The simplest way to fix this
is to use value_at_non_lval, which does type resolution.
This converts a few selected architectures to use
gdbarch_return_value_as_value rather than gdbarch_return_value. The
architectures are just the ones that I am able to test. This patch
should not introduce any behavior changes.
On PPC, we saw that calling an inferior function could sometimes
change the current language, because gdb would select the call dummy
frame -- associated with _start.
This patch changes gdb so that the current language is never affected
by DWARF property evaluation.
In some cases, while a value might be read from memory, gdb should not
record the value as being equivalent to that memory.
In Ada, the inferior call code will call ada_convert_actual -- and
here, if the argument is already in memory, that address will simply
be reused. However, for a call like "f(g())", the result of "g" might
be on the stack and thus overwritten by the call to "f".
This patch introduces a new function that is like value_at but that
ensures that the result is non-lvalue.
The previous patch introduced a new overload of gdbarch_return_value.
The intent here is that this new overload always be called by the core
of gdb -- the previous implementation is effectively deprecated,
because a call to the old-style method will not work with any
converted architectures (whereas calling the new-style method will
delegate when needed).
This patch changes gdbarch.py so that the old gdbarch_return_value
wrapper function can be omitted. This will prevent any errors from
creeping in.
The gdbarch "return_value" can't correctly handle variably-sized
types. The problem here is that the TYPE_LENGTH of such a type is 0,
until the type is resolved, which requires reading memory. However,
gdbarch_return_value only accepts a buffer as an out parameter.
Fixing this requires letting the implementation of the gdbarch method
resolve the type and return a value -- that is, both the contents and
the new type.
After an attempt at this, I realized I wouldn't be able to correctly
update all implementations (there are ~80) of this method. So,
instead, this patch adds a new method that falls back to the current
method, and it updates gdb to only call the new method. This way it's
possible to incrementally convert the architectures that I am able to
test.
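The arrangement, boiled down to a stand-alone C sketch with invented
names (gdb's actual hooks are gdbarch_return_value and
gdbarch_return_value_as_value, and its value type is far richer than
this):
...
#include <stdlib.h>

struct value
{
  size_t len;
  unsigned char *contents;
};

struct arch_ops
{
  /* Old style: fills a caller-sized buffer, so it cannot resolve a
     variably-sized type.  */
  void (*return_value) (unsigned char *buf, size_t len);
  /* New style: the hook resolves the type and returns the contents
     itself; NULL when the architecture hasn't been converted yet.  */
  struct value *(*return_value_as_value) (void);
};

/* The only entry point the core calls: prefer the new hook, and fall
   back to the old buffer-based one otherwise.  */
struct value *
get_return_value (const struct arch_ops *ops, size_t assumed_len)
{
  if (ops->return_value_as_value != NULL)
    return ops->return_value_as_value ();

  struct value *v = malloc (sizeof *v);
  v->len = assumed_len;
  v->contents = calloc (1, assumed_len);
  ops->return_value (v->contents, v->len);
  return v;
}
...
Converted architectures provide only the new hook; the rest keep
working through the fallback, which is what lets the conversion
proceed incrementally.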
amd64-tdep.c could crash when 'finish'ing from a function whose return
type had variable length. In this situation, the value will be passed
by reference, and this patch avoids the crash.
(Note that this does not fully fix the bug reported, but it does fix
the crash, so it seems worthwhile to land independently.)
On an x86_64-linux machine with a pkru register, I run into:
...
(gdb) PASS: gdb.arch/i386-pkru.exp: set pkru value
info register pkru^M
pkru 0x12345678 305419896^M
(gdb) FAIL: gdb.arch/i386-pkru.exp: read value after setting value
...
This is a regression due to kernel commit e84ba47e313d ("x86/fpu: Hook up PKRU
onto ptrace()"). This is fixed by recent kernel commit 4a804c4f8356
("x86/fpu: Allow PKRU to be (once again) written by ptrace.").
The regression occurs for kernel versions v5.14-rc1 (the first tag containing
the regression) up to but excluding v6.2-rc1 (the first tag containing the fix).
Fix this by adding an xfail for the appropriate kernel versions.
Tested on x86_64-linux.
PR testsuite/29790
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29790
With a simple test-case:
...
$ cat test.c
char *p = "a";
int main (void) {
  return strlen (p);
}
$ gcc -g test.c
...
we run into this segfault:
...
$ gdb -q -batch a.out -ex start -ex "p strlen (p)"
Temporary breakpoint 1 at 0x1151: file test.c, line 4.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Temporary breakpoint 1, main () at test.c:4
4 return strlen (p);
Fatal signal: Segmentation fault
...
The strlen is an ifunc, and consequently during the call to
call_function_by_hand_dummy for "p strlen (p)" another call
to call_function_by_hand_dummy is used to resolve the ifunc.
This invalidates the get_current_frame () result in the outer call.
Fix this by using prepare_reinflate and reinflate.
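For reference, an ifunc is a symbol whose implementation is chosen by
a resolver function, which is why evaluating "p strlen (p)" needs that
extra nested call. A minimal example of declaring one with the GCC
attribute syntax (my_strlen and its resolver are made up, unrelated to
glibc's actual strlen resolver):
...
#include <stddef.h>

typedef size_t strlen_fn (const char *);

/* A trivial implementation for the resolver to select.  */
static size_t
my_strlen_impl (const char *s)
{
  size_t n = 0;
  while (s[n] != '\0')
    n++;
  return n;
}

/* The resolver picks the implementation; per the description above,
   gdb runs it via a nested inferior call when evaluating the ifunc.  */
static strlen_fn *
resolve_my_strlen (void)
{
  return my_strlen_impl;
}

/* my_strlen is an ifunc: calls go through whatever the resolver
   returned.  */
size_t my_strlen (const char *s)
  __attribute__ ((ifunc ("resolve_my_strlen")));
...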
Note that this series (
https://inbox.sourceware.org/gdb-patches/20221214033441.499512-1-simon.marchi@polymtl.ca/ )
should address this problem, but this patch is a simpler fix which is easy to
backport.
Tested on x86_64-linux.
Co-Authored-By: Tom de Vries <tdevries@suse.de>
PR gdb/29941
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29941
This should have been part of the previous commit 80636a54bc
("sim: build: move generated headers to built sources"), but they were
missed because they're .c files effectively treated as .h files.
Rather than rely on SIM_SUBDIRS being set, add a dedicated variable
to track whether to enable the sim. While the current code works
fine, it won't work as we remove the recursive make logic (i.e. the
SIM_SUBDIRS variable).
Automake's automatic header deptracking has a bootstrap problem where
it can't detect generated headers when compiling. We've been handling
that by adding a custom SIM_ALL_RECURSIVE_DEPS variable, but that only
works when building objects recursively in subdirs. As we move those
out to the top-level, we don't have any recursive steps anymore. The
Automake approach is to declare those headers in BUILT_SOURCES.
This isn't completely foolproof as the Automake manual documents: it
only activates for `make all`, not `make foo.o`, but that shouldn't be
a huge limitation as it only affects the initial compile. After that,
rebuilds should work fine.
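A hedged Makefile.am sketch of the approach (the file names are
placeholders, not the actual sim sources):
...
# Generated headers (and generated .c files used like headers) go in
# BUILT_SOURCES so that `make all' creates them before compiling
# anything that includes them; Automake's dependency tracking cannot
# discover them on the very first build.
BUILT_SOURCES = modules.c version.h

# The rules producing modules.c and version.h stay as they are; only
# listing them here fixes the bootstrap ordering for `make all'.
...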