At this point, all implementations of breakpoints use the vtable. So,
we can now remove most function pointers from breakpoint_ops and
switch to using methods directly in the callers. Only the two "static
virtual" methods remain in breakpoint_ops.
Right now, probe tracepoints are handled by a separate ops object.
However, they differ only in a small way from ordinary tracepoints,
and furthermore can be distinguished by their event location.
This patch merges the two cases, just as was done for breakpoints.
Right now, probe breakpoints are handled by a separate ops object.
However, they differ only in a small way from ordinary breakpoints,
and furthermore can be distinguished by their "probe" object.
This patch merges the two cases. This avoids having to introduce a
new bp_ constant (which can be quite subtle to do correctly) and a new
subclass.
Because the actual construction of a breakpoint is buried deep in
create_breakpoint, at present it's necessary to have a new bp_
enumerator constant any time a new subclass is needed. Static marker
tracepoints are one such case, so this patch introduces
bp_static_marker_tracepoint and updates various spots to recognize it.
This converts ranged breakpoints to use vtable_breakpoint_ops. This
requires introducing a new ranged_breakpoint type, but this is
relatively simple because ranged breakpoints can only be created by
break_range_command.
This converts "ordinary" breakpoint to use vtable_breakpoint_ops.
Recall that an ordinary breakpoint is both the kind normally created
by users, and also a base class used by other classes.
The dprintf breakpoint ops is mostly a copy of bkpt_breakpoint_ops,
except it's written out explicitly -- and, importantly, there's
nothing that bkpt_breakpoint_ops overrides that dprintf does not.
This changes dprintf to simply inherit directly, and updates struct
dprintf_breakpoint to reflect the change as well.
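A sketch of the resulting declaration (class names as used elsewhere in
this series; the real struct has more members than shown):

    struct dprintf_breakpoint : public ordinary_breakpoint
    {
      /* Inherits the "ordinary" breakpoint behavior directly; no
         separate dprintf ops object is needed any more.  */
    };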
This adds a few new subclasses of breakpoint. The inheritance
hierarchy is chosen to reflect what's already present in
initialize_breakpoint_ops -- it mirrors the way that the _ops
structures are filled in.
This patch also changes new_breakpoint_from_type to create the correct
subclass based on bptype. This is important due to the somewhat
inverted way in which create_breakpoint works; and in particular later
patches will change some of these entries.
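A hedged sketch of the shape of new_breakpoint_from_type after this
change (the exact type list, return type and constructors are
assumptions, not the literal code):

    static std::unique_ptr<breakpoint>
    new_breakpoint_from_type (bptype type)
    {
      switch (type)
        {
        case bp_dprintf:
          return std::unique_ptr<breakpoint> (new dprintf_breakpoint ());
        case bp_watchpoint:
          return std::unique_ptr<breakpoint> (new watchpoint ());
        default:
          return std::unique_ptr<breakpoint> (new ordinary_breakpoint ());
        }
    }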
This converts watchpoints and masked watchpoints to use
vtable_breakpoint_ops. For masked watchpoints, a new subclass must be
introduced, and watch_command_1 is changed to create one.
This adds methods to struct breakpoint. Each method has a similar
signature to a corresponding function in breakpoint_ops, with the
exceptions of create_sals_from_location and create_breakpoints_sal,
which can't be virtual methods on breakpoint -- they are only used
during the construction of breakpoints.
Then, this adds a new vtable_breakpoint_ops structure and populates it
with functions that simply forward a call from breakpoint_ops to the
corresponding virtual method. These are all done with lambdas,
because they are just a stepping stone -- by the end of the series,
this structure will be deleted.
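As a toy model of the forwarding pattern (not GDB's actual types or
slot order):

    struct toy_breakpoint
    {
      virtual ~toy_breakpoint () = default;
      virtual void re_set () {}
    };

    struct toy_ops
    {
      void (*re_set) (toy_breakpoint *);
    };

    /* Each slot is a captureless lambda that just forwards to the
       corresponding virtual method.  */
    static const toy_ops toy_vtable_ops =
    {
      [] (toy_breakpoint *b) { b->re_set (); },
    };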
This changes breakpoint_ops::print_one to return bool, and updates all
the implementations and the caller. The caller is changed so that a
NULL check is no longer needed -- something that will be impossible
with a real method.
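Roughly, the caller goes from a null check to using the return value
(simplified fragment; the default-printing path is elided):

    /* Before:  */
    if (b->ops->print_one != NULL)
      b->ops->print_one (b, last_loc);
    else
      { /* ... default printing ...  */ }

    /* After -- every implementation exists; false means "use the
       default printing".  */
    if (!b->ops->print_one (b, last_loc))
      { /* ... default printing ...  */ }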
This adds an assertion to clone_momentary_breakpoint. This will
eventually be removed, but in the meantime it is useful for helping
convince oneself that momentary breakpoints will always use
momentary_breakpoint_ops. This understanding will help when cleaning
up the code later.
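The assertion is essentially this (exact spelling and placement may
differ slightly):

    gdb_assert (orig->ops == &momentary_breakpoint_ops);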
The "catch load" code is reasonably self-contained, and so this patch
moves it out of breakpoint.c and into a new file, break-catch-load.c.
One function from breakpoint.c, print_solib_event, now has to be
exposed, but this seems pretty reasonable.
This de-duplicates variables and types in .gdb_index, making the new
index closer to what gdb generated before the new DWARF scanner
series. Spot-checking the resulting index for gdb itself, it seems
that the new scanner picks up some extra symbols not detected by the
old one. I tested both the new and old versions of gdb on both new
and old versions of the index, and startup time in all cases is
roughly the same (it's worth noting that, for gdb itself, the index no
longer provides any benefit over the DWARF scanner). So, I think this
fixes the size issue with the new index writer.
Regression tested on x86-64 Fedora 34.
At AdaCore, we run the internal gdb test suite in several modes,
including one using the .debug_names index. This caught a regression
caused by the new DWARF indexer.
First, the psymtabs-based .debug_names generator was completely wrong.
However, to avoid making the rewrite series even bigger (fixing the
writer will also require rewriting the .debug_names reader), the new
writer attempted to preserve this weirdness.
However, this was not done properly. For example the old writer did
this:
- case STRUCT_DOMAIN:
- return DW_TAG_structure_type;
The new code, instead, simply preserves the actual DWARF tag -- but
this makes future lookups fail, because the .debug_names reader only
looks for DW_TAG_structure_type.
This patch attempts to revert to the old behavior in the writer.
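A minimal sketch of that reversion (the helper name is hypothetical and
only the STRUCT_DOMAIN-style tags are shown):

    /* Map TAG to what the old psymtab-based writer would have emitted,
       so that the existing .debug_names reader can still find the
       entry.  */
    static enum dwarf_tag
    old_style_tag (enum dwarf_tag tag)
    {
      switch (tag)
        {
        case DW_TAG_class_type:
        case DW_TAG_union_type:
        case DW_TAG_enumeration_type:
          return DW_TAG_structure_type;
        default:
          return tag;
        }
    }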
Commit 152a174956 ("gdb: prune inferiors at end of
fetch_inferior_event, fix intermittent failure of
gdb.threads/fork-plus-threads.exp") introduced some follow-fork-related
test failures, such as:
info inferiors^M
Num Description Connection Executable ^M
* 1 process 634972 1 (native) /home/simark/build/binutils-gdb-one-target/gdb/testsuite/outputs/gdb.base/foll-fork/foll-fork ^M
2 process 634975 1 (native) /home/simark/build/binutils-gdb-one-target/gdb/testsuite/outputs/gdb.base/foll-fork/foll-fork ^M
(gdb) PASS: gdb.base/foll-fork.exp: follow-fork-mode=parent: detach-on-fork=off: cmd=next 2: test_follow_fork: info inferiors
inferior 2^M
[Switching to inferior 2 [process 634975] (/home/simark/build/binutils-gdb-one-target/gdb/testsuite/outputs/gdb.base/foll-fork/foll-fork)]^M
[Switching to thread 2.1 (Thread 0x7ffff7c9a740 (LWP 634975))]^M
#0 0x00007ffff7d7abf7 in _Fork () from /usr/lib/libc.so.6^M
(gdb) PASS: gdb.base/foll-fork.exp: follow-fork-mode=parent: detach-on-fork=off: cmd=next 2: test_follow_fork: inferior 2
continue^M
Continuing.^M
[Inferior 2 (process 634975) exited normally]^M
[Switching to Thread 0x7ffff7c9a740 (LWP 634972)]^M
(gdb) PASS: gdb.base/foll-fork.exp: follow-fork-mode=parent: detach-on-fork=off: cmd=next 2: test_follow_fork: continue until exit at continue unfollowed inferior to end
break callee^M
Breakpoint 2 at 0x555555555160: file /home/simark/src/binutils-gdb/gdb/testsuite/gdb.base/foll-fork.c, line 9.^M
(gdb) FAIL: gdb.base/foll-fork.exp: follow-fork-mode=parent: detach-on-fork=off: cmd=next 2: test_follow_fork: break callee
What happens here is:
- inferior 2 is selected
- we continue, leading to inferior 2's exit
- we set a breakpoint, expecting 2 locations, but only one location is
  resolved
Reading between the lines, we understand that inferior 2 got pruned,
when it shouldn't have been.
The issue can be reproduced by hand with:
$ ./gdb -q --data-directory=data-directory testsuite/outputs/gdb.base/foll-fork/foll-fork -ex "set detach-on-fork off" -ex start -ex "next 2" -ex "inferior 2" -ex "set debug infrun"
...
Temporary breakpoint 1, main () at /home/simark/src/binutils-gdb/gdb/testsuite/gdb.base/foll-fork.c:14
14 int v = 5;
[New inferior 2 (process 637627)]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/../lib/libthread_db.so.1".
17 if (pid == 0) /* set breakpoint here */
[Switching to inferior 2 [process 637627] (/home/simark/build/binutils-gdb-one-target/gdb/testsuite/outputs/gdb.base/foll-fork/foll-fork)]
[Switching to thread 2.1 (Thread 0x7ffff7c9a740 (LWP 637627))]
#0 0x00007ffff7d7abf7 in _Fork () from /usr/lib/libc.so.6
(gdb) continue
Continuing.
[infrun] clear_proceed_status_thread: 637627.637627.0
[infrun] proceed: enter
[infrun] proceed: addr=0xffffffffffffffff, signal=GDB_SIGNAL_DEFAULT
[infrun] scoped_disable_commit_resumed: reason=proceeding
[infrun] start_step_over: enter
[infrun] start_step_over: stealing global queue of threads to step, length = 0
[infrun] operator(): step-over queue now empty
[infrun] start_step_over: exit
[infrun] proceed: start: resuming threads, all-stop-on-top-of-non-stop
[infrun] proceed: resuming 637627.637627.0
[infrun] resume_1: step=0, signal=GDB_SIGNAL_0, trap_expected=0, current thread [637627.637627.0] at 0x7ffff7d7abf7
[infrun] do_target_resume: resume_ptid=637627.637627.0, step=0, sig=GDB_SIGNAL_0
[infrun] infrun_async: enable=1
[infrun] prepare_to_wait: prepare_to_wait
[infrun] proceed: end: resuming threads, all-stop-on-top-of-non-stop
[infrun] reset: reason=proceeding
[infrun] maybe_set_commit_resumed_all_targets: enabling commit-resumed for target native
[infrun] maybe_call_commit_resumed_all_targets: calling commit_resumed for target native
[infrun] maybe_call_commit_resumed_all_targets: calling commit_resumed for target native
[infrun] proceed: exit
[infrun] fetch_inferior_event: enter
[infrun] scoped_disable_commit_resumed: reason=handling event
[infrun] do_target_wait: Found 2 inferiors, starting at #1
[infrun] random_pending_event_thread: None found.
[infrun] print_target_wait_results: target_wait (-1.0.0 [process -1], status) =
[infrun] print_target_wait_results: 637627.637627.0 [process 637627],
[infrun] print_target_wait_results: status->kind = EXITED, exit_status = 0
[infrun] handle_inferior_event: status->kind = EXITED, exit_status = 0
[Inferior 2 (process 637627) exited normally]
[infrun] stop_waiting: stop_waiting
[infrun] stop_all_threads: start: reason=presenting stop to user in all-stop, inf=-1
[infrun] stop_all_threads: pass=0, iterations=0
[infrun] stop_all_threads: 637624.637624.0 not executing
[infrun] stop_all_threads: pass=1, iterations=1
[infrun] stop_all_threads: 637624.637624.0 not executing
[infrun] stop_all_threads: done
[infrun] stop_all_threads: end: reason=presenting stop to user in all-stop, inf=-1
[Switching to Thread 0x7ffff7c9a740 (LWP 637624)]
[infrun] infrun_async: enable=0
[infrun] reset: reason=handling event
[infrun] maybe_set_commit_resumed_all_targets: not requesting commit-resumed for target native, no resumed threads
(gdb) [infrun] fetch_inferior_event: exit
(gdb) info inferiors
Num Description Connection Executable
* 1 process 637624 1 (native) /home/simark/build/binutils-gdb-one-target/gdb/testsuite/outputs/gdb.base/foll-fork/foll-fork
(gdb) i th
Id Target Id Frame
* 1 Thread 0x7ffff7c9a740 (LWP 637624) "foll-fork" main () at /home/simark/src/binutils-gdb/gdb/testsuite/gdb.base/foll-fork.c:17
After handling the EXITED event for inferior 2, inferior 2 should have
stayed the current inferior, which should have prevented it from getting
pruned. When debugging, we find that when getting at the
prune_inferiors call, the current inferior is inferior 1. Further
debugging shows that prior to the call to
clean_up_just_stopped_threads_fsms, the current inferior is inferior 2,
and after, it's inferior 1. Then, back in fetch_inferior_event, the
restore_thread object is disabled, due to:
/* If we got a TARGET_WAITKIND_NO_RESUMED event, then the
previously selected thread is gone. We have two
choices - switch to no thread selected, or restore the
previously selected thread (now exited). We chose the
later, just because that's what GDB used to do. After
this, "info threads" says "The current thread <Thread
ID 2> has terminated." instead of "No thread
selected.". */
if (!non_stop
&& cmd_done
&& ecs->ws.kind () != TARGET_WAITKIND_NO_RESUMED)
restore_thread.dont_restore ();
So in the end, inferior 1 stays current, and inferior 2 gets wrongfully
pruned.
I'd say clean_up_just_stopped_threads_fsms is the culprit here. It
actually attempts to restore the event_thread to be current at the end,
after the loop (I presume the current thread on entry is always supposed
to be the event thread). But in this case, the event is of kind EXITED,
and ecs->event_thread is not set, so the current inferior isn't
restored.
Fix that by using scoped_restore_current_thread. If there is no current
thread, scoped_restore_current_thread will still restore the current
inferior, and that's what we want.
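A simplified sketch of the fix (the loop over the just-stopped threads
is elided):

    static void
    clean_up_just_stopped_threads_fsms (struct execution_control_state *ecs)
    {
      /* If there is a current thread, restore it on exit; if there
         isn't one (the event thread has exited), this still restores
         the current inferior, which is what we want here.  */
      scoped_restore_current_thread restore_thread;

      /* ... clean up each just-stopped thread's fsm, as before ...  */
    }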
Random note: the thread_info object for inferior 2's thread is never
freed. It is held (by refcount) by the restore_thread object in
fetch_inferior_event, while the inferior's thread list gets cleared, in
the exit event processing. When the refcount reaches 0 (when the
restore_thread object is destroyed), there's nothing that actually
deletes the thread_info object. And I think that nothing in GDB points
to it anymore, so it leaks. I don't want to fix that in this patch, but
thought it would be good to mention it, in case somebody has an idea for
how to fix that.
Change-Id: Ibc7df543e2c46aad5f3b9250b28c3fb5912be4e8
The current target_resume interface is a bit odd & non-intuitive.
I've found myself explaining it a couple of times in the recent past,
while reviewing patches that assumed STEP/SIGNAL always applied to the
passed-in PTID. It goes like this today:
- if the passed in PTID is a thread, then the step/signal request is
for that thread.
- otherwise, if PTID is a wildcard (all threads or all threads of
process), the step/signal request is for inferior_ptid, and PTID
indicates which set of threads run free.
Because GDB always switches the current thread to the "leader" thread
being resumed/stepped/signalled, we can simplify this a bit to:
- step/signal are always for inferior_ptid.
- PTID indicates the set of threads that run free.
Still not ideal, but it's a minimal change and at least there are no
special cases this way.
That's what this patch does. It renames the PTID parameter to
SCOPE_PTID, adds some assertions to target_resume, and tweaks
target_resume's description. In addition, it also renames PTID to
SCOPE_PTID in the remote and linux-nat targets, and simplifies their
implementation a little bit. Other targets could do the same, but
they don't have to.
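As a rough illustration of the tightened contract (not the literal
patch):

    void
    target_resume (ptid_t scope_ptid, int step, enum gdb_signal signal)
    {
      /* STEP/SIGNAL are always for inferior_ptid; SCOPE_PTID only says
         which set of threads may run free.  */
      gdb_assert (inferior_ptid != null_ptid);
      gdb_assert (inferior_ptid.matches (scope_ptid));

      /* ... forward to the target beneath ...  */
    }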
Change-Id: I02a2ec2ab3a3e9b191de1e9a84f55c17cab7daaf
The recent gnulib import caused a build failure of libinproctrace.so
on PPC:
alloc.c:(.text+0x20): undefined reference to `rpl_malloc'
alloc.c:(.text+0x70): undefined reference to `rpl_realloc'
This patch fixes the problem using the same workaround that was
previously used for free.
Update
commit ebb191adac
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Wed Feb 9 15:51:22 2022 -0800
x86: Disallow invalid relocation against protected symbol
to allow function pointer references and to make sure that a PLT entry
isn't used for a function reference due to a function pointer reference.
bfd/
PR ld/29087
* elf32-i386.c (elf_i386_scan_relocs): Don't set
pointer_equality_needed nor check non-canonical reference for
function pointer reference.
* elf64-x86-64.c (elf_x86_64_scan_relocs): Likewise.
ld/
PR ld/29087
* testsuite/ld-x86-64/x86-64.exp: Run PR ld/29087 tests.
* testsuite/ld-x86-64/protected-func-3.c: New file.
The DWARF index code currently uses 'stat' to see if an objfile
represents a real file. However, I think it's more correct to check
OBJF_NOT_FILENAME instead.
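Sketched, the check becomes something like this (the surrounding
function and its return convention are assumptions):

    /* Skip objfiles that aren't backed by an actual file on disk.  */
    if ((objfile->flags & OBJF_NOT_FILENAME) != 0)
      return false;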
Regression tested on x86-64 Fedora 34.
I noticed a few spots in GDB that use "typedef enum". However, in C++
this isn't as useful, as the tag is automatically entered as a
typedef. This patch removes most uses of "typedef enum" -- the
exceptions being in some nat-* code I can't compile, and
glibc_thread_db.h, which I think is more or less a copy of some C code
from elsewhere.
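For instance (hypothetical names), a declaration like

    typedef enum tool_state { TOOL_ON, TOOL_OFF } tool_state;

can simply become

    enum tool_state { TOOL_ON, TOOL_OFF };

since in C++ "tool_state" is already usable as a type name.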
Tested by rebuilding.
This commit:
commit f5cb8afdd2
Date: Sun Feb 6 22:27:53 2022 -0500
gdb: remove BLOCK_RANGES macro
introduces a potential nullptr dereference in block::ranges; this is
breaking most tests, e.g. gdb.base/break.exp is failing for me.
In the above patch BLOCK_CONTIGUOUS_P is changed from this:
#define BLOCK_CONTIGUOUS_P(bl) (BLOCK_RANGES (bl) == nullptr \
|| BLOCK_NRANGES (bl) <= 1)
to this:
#define BLOCK_CONTIGUOUS_P(bl) ((bl)->ranges ().size () == 0 \
|| (bl)->ranges ().size () == 1)
So, before the commit we checked for the block ranges being nullptr,
but afterwards we just call block::ranges() in all cases.
The problem is that block::ranges() looks like this:
/* Return a view on this block's ranges. */
gdb::array_view<blockrange> ranges ()
{ return gdb::make_array_view (m_ranges->range, m_ranges->nranges); }
where m_ranges is:
struct blockranges *m_ranges;
And so, we see that the nullptr check has been lost, and we might end
up dereferencing a nullptr.
My proposed fix is to move the nullptr check into block::ranges, and
return an explicit empty array_view if m_ranges is nullptr.
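Concretely, something along these lines (using the members quoted
above):

    /* Return a view on this block's ranges.  */
    gdb::array_view<blockrange> ranges ()
    {
      if (m_ranges == nullptr)
        return {};
      return gdb::make_array_view (m_ranges->range, m_ranges->nranges);
    }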
After this, everything seems fine again.
In the static-pie case, there are IRELATIVE relocs in
.rela.iplt (htab->irelplt), which will later be grouped
to .rela.plt. On s390, the IRELATIVE relocations are
always located in .rela.iplt - even for non-static case.
Ensure that DT_JMPREL, DT_PLTRELA and DT_PLTRELASZ are added
to the dynamic section even if htab->srelplt->size == 0.
See _bfd_elf_add_dynamic_tags in bfd/elflink.c.
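Sketched (the exact placement inside elf_s390_size_dynamic_sections is
approximate):

    /* Even if .rela.plt proper is empty, the IRELATIVE relocs in
       .rela.iplt still need DT_JMPREL and friends.  */
    if (htab->elf.irelplt != NULL && htab->elf.irelplt->size > 0)
      htab->elf.dt_jmprel_required = true;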
bfd/
* elf64-s390.c (elf_s390_size_dynamic_sections):
Enforce DT_JMPREL via htab->elf.dt_jmprel_required.
No dynamic relocs are needed for TLS defined in an executable; the
TP-relative offset is known at link time.
Fixes
FAIL: Build pr22263-1
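Illustratively (not the literal diff), the affected checks change along
these lines:

    /* A TLS symbol defined in an executable (PIE or not) has a fixed
       TP-relative offset, so only a real DSO needs the dynamic
       handling.  */
    if (bfd_link_dll (info))      /* previously: bfd_link_pic (info) */
      {
        /* ... keep the dynamic TLS relocation ...  */
      }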
bfd/
PR ld/22263
* elf64-s390.c (elf_s390_tls_transition): Use bfd_link_dll
instead of bfd_link_pic for TLS.
(elf_s390_check_relocs): Likewise.
(allocate_dynrelocs): Likewise.
(elf_s390_relocate_section): Likewise.
When two types conflict and they are not types which can have forwards
(say, two arrays of different sizes with the same name in two different
TUs) the CTF deduplicator uses a popularity contest to decide what to
do: the type cited by the most other types ends up put into the shared
dict, while the others are relegated to per-CU child dicts.
This works well as long as one type *is* most popular -- but what if
there is a tie? If several types have the same popularity count,
we end up picking the first we run across and promoting it, and
unfortunately since we are working over a dynhash in essentially
arbitrary order, this means we promote a random one. So multiple
runs of ld with the same inputs can produce different outputs!
All the outputs are valid, but this is still undesirable.
Adjust things to use the same strategy used to sort types on the output:
when there is a tie, always put the type that appears in a CU that
appeared earlier on the link line (and if there is somehow still a tie,
which should be impossible, pick the type with the lowest type ID).
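A minimal, self-contained sketch of that ordering (names invented for
illustration; the real code uses cd_output_first_gid, as the ChangeLog
below notes):

    struct candidate
    {
      long citations;  /* how many other types cite this one */
      long first_gid;  /* first appearance: smaller == earlier CU on the link line */
      long type_id;
    };

    /* Returns < 0 if A should win the popularity contest over B.  */
    static int
    compare_candidates (const struct candidate *a, const struct candidate *b)
    {
      if (a->citations != b->citations)
        return (a->citations > b->citations) ? -1 : 1;  /* most-cited wins */
      if (a->first_gid != b->first_gid)
        return (a->first_gid < b->first_gid) ? -1 : 1;  /* earlier CU wins */
      return (a->type_id < b->type_id) ? -1
             : (a->type_id > b->type_id) ? 1 : 0;       /* lowest ID wins */
    }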
Add a testcase -- and since this emerged when trying out extern arrays,
check that those work as well (this requires a newer GCC, but since all
GCCs that can emit CTF at all are unreleased this is probably OK as
well).
Fix up one testcase that has slight type ordering changes as a result
of this change.
libctf/ChangeLog:
* ctf-dedup.c (ctf_dedup_detect_name_ambiguity): Use
cd_output_first_gid to break ties.
ld/ChangeLog:
* testsuite/ld-ctf/array-conflicted-ordering.d: New test, using...
* testsuite/ld-ctf/array-char-conflicting-1.c: ... this...
* testsuite/ld-ctf/array-char-conflicting-2.c: ... and this.
* testsuite/ld-ctf/array-extern.d: New test, using...
* testsuite/ld-ctf/array-extern.c: ... this.
* testsuite/ld-ctf/conflicting-typedefs.d: Adjust for ordering
changes.
Specifically, tell users what to pass to those functions that accept raw
section content, since it's fairly involved and easy to get wrong.
(.dynsym / .dynstr when CTF_F_DYNSTR is set, otherwise .symtab / .strtab).
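For example (hypothetical buffers; a ctfsect describing the CTF data
itself is assumed to exist already), a caller might fill in the symbol
and string section descriptors like this:

    ctf_sect_t symsect, strsect;
    int err;

    /* Use .dynsym / .dynstr instead if CTF_F_DYNSTR is set.  */
    symsect.cts_name = ".symtab";
    symsect.cts_data = symtab_bytes;       /* raw section content */
    symsect.cts_size = symtab_size;
    symsect.cts_entsize = symtab_entsize;  /* e.g. sizeof (Elf64_Sym) */

    strsect.cts_name = ".strtab";
    strsect.cts_data = strtab_bytes;
    strsect.cts_size = strtab_size;
    strsect.cts_entsize = 0;

    ctf_dict_t *fp = ctf_bufopen (&ctfsect, &symsect, &strsect, &err);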
include/ChangeLog:
* ctf-api.h (ctf_*open): Improve comment.