This patch upgrades gdb/ax_cxx_compile_stdcxx.m4 to follow changes
available in [1] and regenerates the configure script.
[1] https://www.gnu.org/software/autoconf-archive/ax_cxx_compile_stdcxx.html
Change-Id: I5b16adc65c9e48a13ad65202d58ab7a9d487214e
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Pedro Alves <pedro@palves.net>
We usually generate these kinds of relocations with data directives.
Consider the following example,
.word (A + 3) - (B + 2)
GAS will generate a pair of ADD/SUB relocations for this,
R_RISCV_ADD, A + 1
R_RISCV_SUB, 0
The addend of R_RISCV_SUB will always be zero, and the sum of the
constants will be stored in the addend of R_RISCV_ADD/SET. Therefore,
we can always add the addend of these data relocations when resolving them.
Unfortunately, I have heard that using .reloc directives to generate
the data relocations makes the relocations fail. Consider the following,
.reloc offset, R_RISCV_ADD32, A + 3
.reloc offset, R_RISCV_SUB32, B + 2
.word 0
Then we can get the relocations as follows,
R_RISCV_ADD, A + 3
R_RISCV_SUB, B + 2
The current LD then resolves the relocation as B - A + 3 + 2, which is
obviously wrong...
So first of all, this patch fixes the wrong relocation behavior of
R_RISCV_SUB* relocations.
Afterwards, consider the uleb128 directive; for now we get a pair of
SET_ULEB128/SUB_ULEB128 relocations for it,
.uleb128 (A + 3) - (B + 2)
R_RISCV_SET_ULEB128, A + 1
R_RISCV_SUB_ULEB128, B + 1
This also looks obviously wrong: the sum of the constants should only be
stored in the addend of SET_ULEB128, and the addend of SUB_ULEB128 should
be zero like the other SUB relocations. But the current LD will still get the
right relocation values, since we only add the addend of SUB_ULEB128 by accident...
Anyway, this patch also fixes the behaviors above, to make sure that we
always get the right values whether the .uleb128 or .reloc directives are used.
bfd/
* elfnn-riscv.c (perform_relocation): Clarify that SUB relocations
should subtract the addend, rather than add it.
(riscv_elf_relocate_section): Since SET_ULEB128 won't go into
perform_relocation, we should add its addend here in advance.
gas/
* config/tc-riscv.c (riscv_insert_uleb128_fixes): Set the addend of
SUB_ULEB128 to zero since it should already be added into the addend
of SET_ULEB128.
Add a gdb.Value.bytes attribute. This attribute contains the bytes of
the value (assuming the complete bytes of the value are available).
If the bytes of the gdb.Value are not available then accessing this
attribute raises an exception.
The bytes object returned from gdb.Value.bytes is cached within GDB so
that the same bytes object is returned each time. The bytes object is
created on-demand though to reduce unnecessary work.
For some values we can of course obtain the same information by
reading inferior memory based on gdb.Value.address and
gdb.Value.type.sizeof; however, not every value is in memory, so we
don't always have an address.
The gdb.Value.bytes attribute will convert any value to a bytes
object, so long as the contents are available. The value can be one
created purely in Python code, the value could be in a register,
or (of course) the value could be in memory.
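For example, here is a minimal sketch run inside GDB's embedded Python
("my_var" is just a placeholder for any expression whose contents are
available):

  val = gdb.parse_and_eval("my_var")
  raw = val.bytes                      # a Python bytes object, cached by GDB
  assert len(raw) == val.type.sizeof
  # Later accesses return the same cached object, i.e. val.bytes is raw.
  # For values that live in memory, the same bytes could also be read with:
  #   gdb.selected_inferior().read_memory(val.address, val.type.sizeof)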
The Value.bytes attribute can also be assigned to. Assigning to this
attribute is similar to calling Value.assign: the contents of the
underlying value are updated within the inferior. The value assigned
to Value.bytes must be a buffer which contains exactly the correct
number of bytes (i.e. unlike value creation, we don't allow oversized
buffers).
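A sketch of the assignment side (again with placeholder names; the
buffer length must match val.type.sizeof exactly):

  val = gdb.parse_and_eval("my_var")
  val.bytes = bytes(val.type.sizeof)     # zero out the underlying value
  # Something like val.bytes = bytes(val.type.sizeof + 1) would be
  # rejected, since the buffer is not exactly the right size.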
To support this assignment-like behaviour I've factored out the core
of valpy_assign. I've also updated convert_buffer_and_type_to_value
so that it can (for my use case) check the exact buffer length.
The restrictions on when Value.bytes can or cannot be written to
are exactly the same as for Value.assign.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=13267
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
Overview
========
Consider the following situation: GDB is in non-stop mode, the main
thread is running while a second thread is stopped. The user has the
second thread selected as the current thread and asks GDB to detach.
At the exact moment of detach the main thread exits.
This situation currently causes crashes, assertion failures, and
unexpected errors to be reported from GDB for both native and remote
targets.
This commit addresses this situation for native and remote targets.
There are a number of different fixes, but all are required in order
to get this functionality working correctly for native and remote
targets.
Native Linux Target
===================
For the native Linux target, detaching is handled in the function
linux_nat_target::detach. In here we call stop_wait_callback for each
thread, and it is this callback that will spot that the main thread
has exited.
GDB then detaches from everything except the main thread by calling
detach_callback.
After this the first problem is this assert:
/* Only the initial process should be left right now. */
gdb_assert (num_lwps (pid) == 1);
The num_lwps call will return 0 as the main thread has exited and all
of the other threads have now been detached. I fix this by changing
the assert to allow for 0 or 1 lwps at this point. As the 0 case can
only happen in non-stop mode, the assert becomes:
gdb_assert (num_lwps (pid) == 1
            || (target_is_non_stop_p () && num_lwps (pid) == 0));
The next problem is that we do:
main_lwp = find_lwp_pid (ptid_t (pid));
and then proceed assuming that main_lwp is not nullptr. In the case
that the main thread has exited though, main_lwp will be nullptr.
However, we only need main_lwp so that GDB can detach from the
thread. If the main thread has exited, and GDB has already detached
from every other thread, then GDB has finished detaching, GDB can skip
the calls that try to detach from the main thread, and then tell the
user that the detach was a success.
For Remote Targets
==================
On remote targets there are two problems.
First is that when the exit occurs during the early phase of the
detach, we see the stop notification arrive while GDB is removing the
breakpoints ahead of the detach. The 'set debug remote on' trace
looks like this:
[remote] Sending packet: $z0,7f1648fe0241,1#35
[remote] Notification received: Stop:W0;process:2a0ac8
# At this point an unpatched gdbserver segfaults, and the connection
# is broken. A patched gdbserver continues as below...
[remote] Packet received: E01
[remote] Sending packet: $z0,7f1648ff00a8,1#68
[remote] Packet received: E01
[remote] Sending packet: $z0,7f1648ff132f,1#6b
[remote] Packet received: E01
[remote] Sending packet: $D;2a0ac8#3e
[remote] Packet received: E01
I was originally running into Segmentation Faults, from within
gdbserver/mem-break.cc, in the function find_gdb_breakpoint. This
function calls current_process() and then dereferences the result to
find the breakpoint list.
However, in our case, the current process has already exited, and so
the current_process() call returns nullptr. At the point of failure,
the gdbserver backtrace looks like this:
#0 0x00000000004190e4 in find_gdb_breakpoint (z_type=48 '0', addr=4198762, kind=1) at ../../src/gdbserver/mem-break.cc:982
#1 0x000000000041930d in delete_gdb_breakpoint (z_type=48 '0', addr=4198762, kind=1) at ../../src/gdbserver/mem-break.cc:1093
#2 0x000000000042d8db in process_serial_event () at ../../src/gdbserver/server.cc:4372
#3 0x000000000042dcab in handle_serial_event (err=0, client_data=0x0) at ../../src/gdbserver/server.cc:4498
...
The problem is that, as a result of non-stop being on, the process
exiting is only reported back to GDB after the request to remove a
breakpoint has been sent. Clearly gdbserver can't actually remove
this breakpoint -- the process has already exited -- so I think the
best solution is for gdbserver just to report an error, which is what
I've done.
The second problem I ran into was on the GDB side: as the process has
already exited, but GDB has not yet acknowledged the exit event, the
detach -- the 'D' packet in the above trace -- fails. This was being
reported to the user with a 'Can't detach process' error. As the test
actually calls detach from Python code, this error was then becoming a
Python exception.
Though clearly the detach has returned an error, and so, maybe, having
GDB throw an error would be fine, I think in this case, there's a good
argument that the remote error can be ignored -- if GDB tries to
detach and gets back an error, and if there's a pending exit event for
the pid we tried to detach, then just ignore the error and pretend the
detach worked fine.
We could possibly check for a pending exit event before sending the
detach packet, however, I believe that it might be possible (in
non-stop mode) for the stop notification to arrive after the detach is
sent, but before gdbserver has started processing the detach. In this
case we would still need to check for pending stop events after seeing
the detach fail, so I figure there's no point having two checks -- we
just send the detach request, and if it fails, check to see if the
process has already exited.
Testing
=======
In order to test this issue I needed to ensure that the exit event
arrives at the same time as the detach call. The window of
opportunity for getting the exit to arrive is so small I've never
managed to trigger this in real use -- I originally spotted this issue
while working on another patch, which did manage to trigger it.
However, if we trigger both the exit and the detach from a single
Python function then we never return to GDB's event loop; as such, GDB
never processes the exit event, and so the first time GDB gets a
chance to see the exit is during the detach call. That is the
approach I've taken for testing this patch.
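A rough sketch of that idea (the command sequence and the flag name
used here are assumptions, not the actual test source):

  def detach_while_main_exits():
      # Let the running main thread reach its exit ...
      gdb.execute("set variable time_to_exit = 1")
      # ... and detach before control returns to GDB's event loop, so
      # the exit event is first seen from within the detach call.
      gdb.execute("detach")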
Tested-By: Kevin Buettner <kevinb@redhat.com>
Approved-By: Kevin Buettner <kevinb@redhat.com>
If we make writing an index-cache entry very slow by doing this in
index_cache::store:
...
   try
     {
+      sleep (15);
       index_cache_debug ("writing index cache for objfile %s",
                          bfd_get_filename (per_bfd->obfd));
...
we run into:
...
FAIL: gdb.dwarf2/per-bfd-sharing.exp: \
couldn't remove files in temporary cache dir
...
The FAIL happens because there is no index-cache entry in the cache dir.
The problem is that gdb is killed (by gdb_exit) before the index-cache entry
is written.
Fix this by using "maint wait-for-index-cache".
Tested on x86_64-linux.
PR testsuite/30528
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30528
Clang doesn't use CFA information for variable locations. This makes it
so software breakpoints get a false hit when rbp gets popped, causing
a FAIL in gdb.python/py-watchpoint.exp. Since there is nothing wrong with
GDB itself, add an xfail to reduce noise.
Approved-By: Tom Tromey <tom@tromey.com>
The test gdb.python/py-explore-cc.exp was showing one unexpected
failure. This was due to how clang mapped instructions to lines,
resulting in the inferior seemingly stopping at a different location.
This patch adds a nop line in the relevant location so we don't need to
add XFAILs for existing clang releases, which could become stale if this
gets solved in future versions.
Approved-By: Tom Tromey <tom@tromey.com>
I noticed that in handle_v_run (gdbserver/server.cc) we leak
new_program_name (a string) each time GDB starts an inferior, in the
case where GDB passes a program name to gdbserver.
This bug was introduced with this commit:
commit 7ab2607f97
Date: Wed Apr 13 17:31:02 2022 -0400
gdbsupport: make gdb_abspath return an std::string
When gdbserver receives a program name from GDB, this is first placed
into a malloc'd buffer within handle_v_run, and this buffer is then
used in this call:
program_path.set (new_program_name);
Prior to the above commit this call took ownership of the buffer
passed to it, but now this call uses the buffer to initialise a
std::string, which copies the buffer contents, leaving ownership with
the caller. So now, after this call (in handle_v_run)
new_program_name still owns a buffer.
At no point in handle_v_run do we free new_program_name, as a result
we are leaking the program name each time GDB starts a remote
inferior.
I could solve this by adding a 'free' call into handle_v_run, but I'd
rather automate the memory management.
So, to this end, I have added a new function in gdbserver/server.cc,
decode_v_run_arg. This function takes care of allocating the memory
buffer and decoding the vRun packet into the buffer, but returns a
gdb::unique_xmalloc_ptr<char> (or nullptr on error).
Back in handle_v_run I have converted new_program_name to also be a
gdb::unique_xmalloc_ptr<char>.
Now, after we call program_path.set(), the allocated buffer will be
automatically released when it is no longer needed.
It is worth highlighting that within the new decode_v_run_arg
function, I have wrapped the call to hex2bin in a try/catch block.
The hex2bin function can throw an exception if it encounters an
invalid (non-hex) character. Back in handle_v_run, we have a local
variable, new_argv, which is of type std::vector<char *>. Each
'char *' in this vector is a malloc'd buffer. If we allow
hex2bin to throw an exception and don't catch it in either
decode_v_run_arg or handle_v_run then we are going to leak memory from
new_argv.
I chose to catch the exception in decode_v_run_arg; this seemed
cleanest, but I'm not sure it really matters, so long as the exception
is caught before we leave handle_v_run. I am working on a patch that
changes new_argv to automatically manage its memory, but that isn't
ready for posting yet. I think what I have here would be fine if my
follow on patch never arrives.
Additionally, within the handle_v_run loop I have changed an
assignment of nullptr to new_program_name into an assert. Previously,
the assignment could only trigger on the first iteration of the loop,
if we had no new program name to assign. However, new_program_name
always starts as nullptr, so, on the first loop iteration, if we have
nothing to assign to new_program_name, its value must already be
nullptr.
There should be no user visible changes after this commit.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
get_symbol_address is only used in symbol::value_address. Make it a
private helper method.
Change-Id: I318ddcfcf1269d95045b8efe9137812df9c5113c
Approved-By: Tom Tromey <tom@tromey.com>
get_msymbol_address is only used in minimal_symbol::value_address. Make
it a private helper method.
Change-Id: I3f30e1b9d89ace6682fb08a7ebb91746db0ccf0f
Approved-By: Tom Tromey <tom@tromey.com>
Extends commit 6136093c0d to handle verdefs as well as verrefs.
PR 30886
* elf.c (_bfd_elf_slurp_version_tables): Set free_contents for
verdefs too. Use free_contents rather than elf_tdata fields.
Sections without SEC_HAS_CONTENTS avoid the file size checks, and of
course it doesn't make sense to read such sections, as their contents
are all zero.
* som.c (som_set_reloc_info): Don't read sections without contents.
This fixes some holes found by fuzzers, and removes aborts that can be
triggered by user input to objdump. Abort should only be used within
bfd to show programming errors in bfd.
* coff-alpha.c (alpha_ecoff_get_relocated_section_contents): Handle
NULL howto. Don't abort on stack errors or on unexpected relocs.
Show more bfd reloc status messages.
When compiling hello world and adding a v9 .gdb-index section:
...
$ gcc -g hello.c
$ gdb-add-index a.out
...
readelf shows it as:
...
Shortcut table:
Language of main: unknown: 0
Name of main: ^A
...
The documentation of gdb says about the "Name of main" that:
...
This value must be ignored if the value for the language of main is zero.
...
Implement this approach in display_gdb_index, such that we have instead:
...
Shortcut table:
Language of main: unknown: 0
Name of main: <unknown>
...
Tested on x86_64-linux.
Approved-By: Jan Beulich <jbeulich@suse.com>
Both as and ld use _bfd_error_handler to output error messages when
checking relocation alignment and relocation overflow. However, the
abfd value passed by as to the function is NULL, resulting in an
internal error. ld, by contrast, passes a non-null value to the function,
so it can output an error message normally.
First of all add f32_5[], allowing the elimination of the extra
slot-is-NULL code from i386_output_nops(). Then introduce f32_8[] and f16_5[]
following the same concept of adding a %cs segment override prefix.
Also re-use patterns when possible and correct comments as applicable.
Similarly re-use testcase expectations as much as possible, where they
need touching anyway.
The two are distinct in opcodes/, distinguished precisely by CpuNOP
that's relevant in i386_generate_nops(), yet the function has the PPro
case label in the other group. Simply removing it revealed that
cpu_arch[] had a wrong entry for i686.
While there also add PROCESSOR_IAMCU to the respective comment.
Making GENERIC64 a special case was never correct; prior to the
generalization of ".arch .no*" to cover all ISA extensions, other
processor families supporting long NOPs should have been covered as
well. When ".arch .nonops" (among others) was introduced, it wasn't
apparent that a hidden implication existed here: that .cpunop could not
be turned off separately. Seeing that the two large case label
blocks in the 2nd switch() already had identical behavior, simply
collapse all of the (useful) case labels into a single "default" one.
Since we don't key the NOP selection to user-controlled properties, we
may not use i386 features; otherwise we would violate a possible .arch
directive restricting ISA to pre-386.
Except for the shared 1- and 2-byte cases, the LEA ones corrupt %rsi
(by zero-extending %esi to %rsi). Introduce separate 64-bit patterns
which keep %rsi intact.
What matters is what was in effect at the time the original directive
was issued. Later changes to global state (bitness or ISA) must not
affect what code is generated.
It is the recorded value, not the global variable, that needs to be
used in TC_FRAG_INIT(). The so far file-scope variable therefore needs
to become external, to be accessible there.
The help says that <reserve> and <commit> should be separated by a ","
but the implementation is checking for ".". Having two numbers
separated by a "." could be confusing, so adjust the implementation to
match the help syntax.
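For example, the documented form now works for both options
(hypothetical values):

  $ objcopy --heap 0x200000,0x1000 --stack 0x100000,0x1000 in.exe out.exe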
binutils/ChangeLog:
* objcopy.c (copy_main): Set separator to "," between <reserve>
and <commit> for --heap and --stack.
* doc/binutils.texi: Add <commit> for --heap and --stack.
I noticed the regenerated BFD_RELOC_MICROBLAZE_32_NONE comment didn't
match that committed to bfd-in2.h, and was just going to regen
bfd-in2.h but then decided to do something about the silly formatting
of these comments in bfd-in2.h, e.g. the BFD_RELOC_MICROBLAZE_32_NONE
comment:
-/* This is a 32 bit reloc that stores the 32 bit pc relative
-value in two words (with an imm instruction).No relocation is
-done here - only used for relaxing */
+ /* This is a 32 bit reloc that stores the 32 bit pc relative value in
+ two words (with an imm instruction). No relocation is done here -
+ only used for relaxing. */
BFD_RELOC_MICROBLAZE_32_NONE,
You'll notice how the second and third line of the original comment
aren't indented properly relative to the first line, and the whole
comment needs to be indented to match the code.
I've also edited reloc.c ENUMDOC paragraphs. Some of these had excess
indentation, presumably in an attempt to properly indent bfd-in2.h
comments but that fails due to chew.c removing leading whitespace
early by skip_white_and_stars. COMMENT was used in reloc.c to add
extra blank lines in bfd-in2.h. I've removed them too as I don't
think they add anything to readability of that file. (Perhaps more
usefully, they also add blank lines to libbfd.h separating relocs for
one target from others, but this isn't done consistently.)
* doc/chew.c (drop, idrop): Move earlier.
(strip_trailing_newlines): Check index before accessing array,
not after.
(wrap_comment): New function.
(main): Add "wrap_comment" intrinsic.
* doc/proto.str (ENUMDOC): Use wrap_comment.
(make_enum_header, ENDSENUM): Put start and end braces on
separate lines.
* reloc.c: Remove uses of COMMENT and edit ENUMDOC paragraphs.
* libbfd.h: Regenerate.
* bfd-in2.h: Regenerate.
When printing a value, I think the history reference -- the "$1" in
the output -- should be styled using the "variable" style. This patch
implements this.
Commit 8971d2788e ("gdb: link so_list using intrusive_list") introduced
a bug in clear_solib. Instead of passing an `so_list *` to
remove_target_sections, it passed an `so_list **`. This was not caught
by the compiler, because remove_target_sections takes a `void *` as the
"owner", so you can pass it any pointer and it won't complain.
This happened because I previously had a patch to change the type of the
disposer parameter to be a reference rather than a pointer, so had to
change `so` to `&so`. When dropping that patch, I forgot to revert this
bit and / or it got re-introduced when handling subsequent merge
conflicts. And I didn't properly retest.
Fix that, but try to make things less error prone. Add a union to
represent the possible owner kinds for a target_section. Trying to pass
a pointer to another type than those will not compile.
Change-Id: I600cab5ea0408ccc5638467b760768161ca3036c
Currently when gdb asks the source-highlight library to highlight a file, it
tells it what language file to use.
For instance, if gdb learns from the debug info that the file is language_c,
the language file "c.lang" is used. This mapping is hardcoded in
get_language_name.
However, if gdb doesn't know what language file to use, it falls back to using
python pygments, and in absence of that, unhighlighted source text.
In the case of python pygments, it autodetects which language to use based on
the file name.
Add the same capability when using the source-highlight library.
Tested on x86_64-linux.
Verified that it works by:
- making get_language_name return nullptr for language_c, and
- checking that source-highlight still manages to highlight a hello world.
Reviewed-By: Guinevere Larsen <blarsen@redhat.com>
Approved-By: Tom Tromey <tom@tromey.com>
PR cli/30966
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30966
dwarf2/read.h includes cooked-index.h, but it doesn't need to. This
patch removes the inclusion from this header, and adds one to
index-write.c to make up for the absence.
This patch makes a cosmetic change to reloc_weaksym.s: the bneid
instruction is now all lower case, like all of the other instructions
in the example.
Signed-off-by: Neal Frager <neal.frager@amd.com>
Signed-off-by: Michael J. Eager <eager@eagercon.com>
The fixes applied a few years ago to resolve confusions between parent and
child dicts at lookup time also apply in various forms to creation. In
general, if you have a type in a parent dict ctf_imported into a child and
you do something to it, and the parent dict is writable (created via
ctf_create, not opened via ctf_open*) it should work just the same to make
changes to that type via a child dict as it does to make the change
to the parent dict directly -- and nothing you're prohibited from doing
to the parent dict when done directly should be allowed just because
you're doing it via a child.
Specifically, the following don't work when doing things from the child, but
should:
- adding a member of a type in the parent to a struct or union in the
parent via ctf_add_member or ctf_add_member_offset: this yields
ECTF_BADID
- adding a member of a type in the parent to a struct or union in the
parent via ctf_add_member_encoded: this dumps core (!).
- adding an enumerand to an enumerator in the parent: this yields
ECTF_BADID
- setting the properties of an array in the parent via ctf_set_array;
this yields ECTF_BADID
Relatedly, some things work when doing things via a child that should fail,
yielding a CTF dictionary with invalid content (readable, but meaningless):
in particular, you can add a child type to a struct in the parent via
any of the ctf_add_member* family and nothing complains at all, even though
you should never be able to add references to children to parents (since any
given parent can be associated with many different children).
A family of tests is added to check each of these cases independently, since
some can result in coredumps and it would be nice to test the other cases
even if some dump core. They use a common library to do all the actual
work. The set of affected API calls was determined by code inspection
(auditing all calls to ctf_dtd_lookup): it's possible that I missed a few,
but I doubt it, since other cases use ctf_lookup* functions, which already
climb to the parent where appropriate.
libctf/ChangeLog:
PR libctf/30985
* ctf-create.c (ctf_dtd_lookup): Traverse to parents if necessary.
(ctf_set_array): Likewise. Report errors on the child; require
both parent and child to be writable.
(ctf_add_enumerator): Likewise.
(ctf_add_member_offset): Likewise. Prohibit addition of child types
to structs in the parent.
(ctf_add_member_encoded): Do not dereference a NULL dtd: report
ECTF_BADID instead.
* ctf-string.c (ctf_str_add_ref_internal): Report ENOMEM on the
dict if addition of a string ref fails.
* testsuite/libctf-writable/parent-child-dtd-crash-lib.c: New library.
* testsuite/libctf-writable/parent-child-dtd-enum.*: New test.
* testsuite/libctf-writable/parent-child-dtd-enumerator.*: New test.
* testsuite/libctf-writable/parent-child-dtd-member-encoded.*: New test.
* testsuite/libctf-writable/parent-child-dtd-member-offset.*: New test.
* testsuite/libctf-writable/parent-child-dtd-set-array.*: New test.
* testsuite/libctf-writable/parent-child-dtd-struct.*: New test.
* testsuite/libctf-writable/parent-child-dtd-union.*: New test.
This patch adds the R_MICROBLAZE_32_NONE relocation type.
This is a 32-bit reloc that stores the 32-bit pc relative
value in two words (with an imm instruction).
Add a test case to the gas test suite.
Signed-off-by: Neal Frager <neal.frager@amd.com>
Signed-off-by: Michael J. Eager <eager@eagercon.com>