A missing paren caused a cast in one argument of ctf_err_warn, intended
to avoid dependence on the size of size_t, to apply to the wrong type by
mistake.
libctf/ChangeLog:
* ctf-serialize.c (ctf_write_mem): Fix cast.
Right now, if you compile the same .c input repeatedly with CTF enabled
and different compilation flags, then arrange to link all of these
together, then things misbehave in various ways. libctf may conflate
either inputs (if the .o files have the same name, say if they are
stored in different .a archives), or per-CU outputs when conflicting
types are found: the latter can lead to entirely spurious errors when
it tries to produce multiple per-CU outputs with the same name
(discarding all but the last, but then looking for types in the earlier
ones which have just been thrown away).
Fixing this is multi-pronged. Both inputs and outputs need to be
differentiated in the hashtables libctf keeps them in: inputs with the
same cuname and filename need to be considered distinct as long as they
have different associated CTF dicts, and per-CU outputs need to be
considered distinct as long as they have different associated input
dicts. Right now there is nothing tying the two together other than the
CU name: fix this by introducing a new field in the ctf_dict_t named
ctf_link_in_out, which (for input dicts) points to the associated per-CU
output dict (if any), and for output dicts points to the associated
input dict. At creation time the name used is completely arbitrary:
it's only important that it be distinct if CTF dicts are distinct. So,
when a clash is found, adjust the CU name by sticking the number of
elements in the input on the end. At output time, the CU name will
appear in the linked object, so it matters a little more that it look
slightly less ugly: in conflicting cases, append an incrementing
integer, starting at 0.
This naming scheme is not very helpful, but it's hard to see what else
we can do. The input .o name may be the same. The input .a name is not
even visible to ctf_link, and even *that* might be the same, because
.a's can contain many members with the same name, all of which
participate in the link. All we really know is that the two have
distinct dictionaries with distinct types in them, and at least this way
they are all represented, and any symbols, variables, etc. referring to
those types are accurately stored.
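For illustration only, here is a minimal Python sketch of the clash-handling
idea for inputs (the separator and exact naming format are hypothetical; the
real code works on libctf's internal hash tables in C):
  def add_input(inputs, cu_name, ctf_dict):
      """Record an input dict, making its CU name unique on a clash."""
      name = cu_name
      # If an entry with this name already exists but refers to a different
      # dict, append the current number of inputs to keep both distinct.
      if name in inputs and inputs[name] is not ctf_dict:
          name = "%s#%d" % (cu_name, len(inputs))
      inputs[name] = ctf_dict
      return name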
(As a side-effect this also fixes a use-after-free and double-free when
errors are found during variable or symbol emission.)
Use the opportunity to prevent a couple of sources of problems: forbid
changing the active CU mappings when a link has already been done
(no effect on ld, which doesn't use CU mappings at all), and make
multiple consecutive ctf_link calls have the same net effect as just
doing the last one (no effect on ld, which only ever does one
ctf_link) rather than having the links be a sort of half-incremental,
not-really-intended mess.
libctf/ChangeLog:
PR libctf/29242
* ctf-impl.h (struct ctf_dict) [ctf_link_in_out]: New.
* ctf-dedup.c (ctf_dedup_emit_type): Set it.
* ctf-link.c (ctf_link_add_ctf_internal): Set the input
CU name uniquely when clashes are found.
(ctf_link_add): Document what repeated additions do.
(ctf_new_per_cu_name): New, come up with a consistent
name for a new per-CU dict.
(ctf_link_deduplicating): Use it.
(ctf_create_per_cu): Use it, and ctf_link_in_out, and set
ctf_link_in_out properly. Don't overwrite per-CU dicts with
per-CU dicts relating to different inputs.
(ctf_link_add_cu_mapping): Prevent per-CU mappings being set up
if we already have per-CU outputs.
(ctf_link_one_variable): Adjust ctf_link_per_cu call.
(ctf_link_deduplicating_one_symtypetab): Likewise.
(ctf_link_empty_outputs): New, delete all the ctf_link_outputs
and blank out ctf_link_in_out on the corresponding inputs.
(ctf_link): Clarify the effect of multiple ctf_link calls.
Empty ctf_link_outputs if it already exists rather than
having the old output leak into the new link. Fix a variable
name.
* testsuite/config/default.exp (AR): Add.
(OBJDUMP): Likewise.
* testsuite/libctf-regression/libctf-repeat-cu.exp: New test.
* testsuite/libctf-regression/libctf-repeat-cu*: Main program,
library, and expected results for the test.
GDB's documentation of the 'file' command says:
If you do not specify a directory and the file is not found in the
GDB working directory, GDB uses the environment variable PATH as a
list of directories to search, just as the shell does when looking
for a program to run.
The same is true for files specified via the command-line options -s, -e,
and -se.
This commit adds a cross reference to the file command for these options.
* dwarf.h (struct debug_info): Add rnglists_base field.
* dwarf.c (read_and_display_attr_value): Read attribute DW_AT_rnglists_base.
(display_debug_rnglists_list): While handling DW_RLE_base_addressx,
DW_RLE_startx_endx, DW_RLE_startx_length items, pass the proper parameter
value to fetch_indexed_addr(), i.e. fetch the proper entry in .debug_addr section.
(display_debug_ranges): Add rnglists_base to the .debug_rnglists base address.
(load_separate_debug_files): Load the .debug_addr section, if it exists.
PR 29263
* configure.ac (ac_default_ld_warn_execstack): Default to 'no' for
HPPA targets.
(ac_default_ld_warn_rwx_segments): Likewise.
* configure: Regenerate.
* testsuite/ld-elf/elf.exp: Add the --warn-execstack command line
option to the command line when running execstack tests for the
HPPA target.
'finish_print' does not really belong in value_print_options -- this
is consulted only when deciding whether or not to print a value, and
never during the course of printing. This patch removes it from the
structure and makes it a static global in infcmd.c instead.
Tested on x86-64 Fedora 34.
PR exp/20630 points out a simple way to cause an assertion failure in
copy_type -- but this was found in the wild a few times as well.
copy_type only works for objfile-owned types, but there isn't a deep
reason for this. This patch fixes the bug by updating copy_type to
work for any sort of type.
Better would perhaps be to finally implement type GC, but I still
haven't attempted this.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=20630
The optimized insertion sort algorithm in `elf_link_adjust_relocs`
incorrectly assembled "runs" from unsorted entries and inserted them into an
already-sorted prefix, breaking the loop invariants of insertion sort.
This commit updates the run assembly loop to break upon encountering a
non-monotonic change in the sort key.
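To illustrate the invariant (this is a standalone Python sketch of the
principle, not the bfd C code): a run gathered from the unsorted tail can
only be spliced into the sorted prefix as a unit while its own keys remain
non-decreasing, so the gathering loop must stop at the first non-monotonic
step.
  def run_insertion_sort(a, key):
      """Insertion sort that moves runs of already-ordered elements at once."""
      i = 1
      while i < len(a):
          if key(a[i]) >= key(a[i - 1]):
              i += 1                      # already in order
              continue
          # Find the insertion point for a[i] in the sorted prefix a[0:i].
          k = i - 1
          while k > 0 and key(a[i]) < key(a[k - 1]):
              k -= 1
          # Gather a run starting at a[i]: it may grow only while the next
          # element still sorts before a[k] *and* the run itself stays
          # monotonically non-decreasing (the check that was missing).
          j = i + 1
          while (j < len(a)
                 and key(a[j]) >= key(a[j - 1])
                 and key(a[j]) < key(a[k])):
              j += 1
          # Splice the run into the sorted prefix at position k.
          a[k:j] = a[i:j] + a[k:i]
          i = j
      return a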
PR 29259
bfd/
* elflink.c (elf_link_adjust_relocs): Ensure run being inserted
is sorted.
ld/
* testsuite/ld-elf/pr29259.d,
* testsuite/ld-elf/pr29259.s,
* testsuite/ld-elf/pr29259.t: New test.
This patch makes it possible for Value.format_string() to return output
grouped into nibbles.
When the nibbles parameter is set to True, binary values are displayed
in groups of four bits.
Here's an example:
(gdb) py print (gdb.Value (1230).format_string (format='t', nibbles=True))
0100 1100 1110
(gdb)
Note that the parameter nibbles is only useful if format='t' is also used.
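For reference, the grouping behind nibbles=True can be reproduced with plain
Python (this is only a sketch of the resulting layout, not GDB's
implementation):
  def group_nibbles(value):
      """Render VALUE in binary, grouped four bits at a time from the LSB."""
      bits = format(value, 'b')
      bits = bits.zfill(-(-len(bits) // 4) * 4)   # pad to a multiple of four
      return ' '.join(bits[i:i + 4] for i in range(0, len(bits), 4))

  print(group_nibbles(1230))   # -> 0100 1100 1110, matching the example above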
This patch also includes update to the relevant testcase and
documentation.
Tested on x86_64 openSUSE Tumbleweed.
Introduce a new print setting that can be set by 'set
print nibbles [on|off]'. The default value is OFF, and it can be changed
by the user manually. Of course, 'show print nibbles' is also included in
the patch.
The new feature displays binary values by group, with four bits per
group. The motivation for this work is to enhance the readability of
binary values.
Here's a GDB session before this patch is applied.
(gdb) print var_a
$1 = 1230
(gdb) print/t var_a
$2 = 10011001110
With this patch applied, we can use the new print setting to display the
new form of the binary values.
(gdb) print var_a
$1 = 1230
(gdb) print/t var_a
$2 = 10011001110
(gdb) set print nibbles on
(gdb) print/t var_a
$3 = 0100 1100 1110
Tested on x86_64 openSUSE Tumbleweed.
commit e5ab6af52d ("gdbserver: Add LoongArch/Linux support")
was merged into master after GDB 12, so we should put the
news entry in the "Changes since GDB 12" section.
Thanks to Tom Tromey for the correction [1], sorry for that.
[1] https://sourceware.org/pipermail/gdb-patches/2022-June/190122.html
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
When handling section names in quotes, obj_elf_section_name calls
demand_copy_C_string, which puts the name on the gas notes obstack.
Such strings aren't usually freed, since obstack_free frees all more
recently allocated objects as well as its arg. When handling
non-quoted names, obj_elf_section_name mallocs the name. Due to the
mix of allocation strategies it isn't possible for callers to free
names, if that was desirable. Partially fix this by always creating
names on the obstack, which is more efficient anyway. (You still
can't obstack_free on error paths due to the xtensa
tc_canonicalize_section_name.) Also remove a couple of cases where
the name is dup'd for no good reason as far as I know.
PR 29256
* config/obj-elf.c (obj_elf_section_name): Create name on notes
obstack.
(obj_elf_attach_to_group): Don't strdup group name.
(obj_elf_section): Likewise.
(obj_elf_vendor_attribute): Use xmemdup0 rather than xstrndup.
With gcc 4.8/4.9, we run into this build failure (and other similar
ones):
/home/palves/gdb/binutils-gdb/src/gdb/location.h:224:59: error: could not convert ‘{0, LINE_OFFSET_UNKNOWN}’ from ‘<brace-enclosed initializer list>’ to ‘line_offset’
struct line_offset line_offset = {0, LINE_OFFSET_UNKNOWN};
^
The issue is that, in the GCC 4.8/4.9 era, a default member
initializer prevented the struct from being an aggregate, so you
could not use aggregate initialization on it. That rule changed after
GCC 4.9, and GCC 5 & later use the new rules.
Fix this by not using aggregate initialization for struct line_offset.
The default member initialization already leaves line_offset as {0,
LINE_OFFSET_UNKNOWN}, so initialization to those values can just go
away. The remaining cases are of the form {0, LINE_OFFSET_NONE}, and
those cases can be better rewritten to delay setting the sign field
until we know we have a valid offset.
Change-Id: I0506ea4a83c5fa2f15e159569db68b3b0a7509b4
This converts set_location_spec_string to a method of location_spec,
and makes the location_spec::as_string field protected, renaming it to
m_as_string along the way.
Change-Id: Iccfb1654e9fa7808d0512df89e775f9eacaeb9e0
This converts location_spec_to_string to a method of location_spec,
simplifying the code using it, as it no longer has to use
std::unique_ptr::get().
Change-Id: I621bdad8ea084470a2724163f614578caf8f2dd5
This converts location_spec_empty_p to a method of location_spec,
simplifying users, as they no longer have to use
std::unique_ptr::get().
Change-Id: I83381a729896f12e1c5a1b4d6d4c2eb1eb6582ff
copy_location_spec is just a wrapper around location_spec::clone(), so
remove it and call clone() directly. This simplifies users, as they
no longer have to use std::unique_ptr::get().
Change-Id: I8ce8658589460b98888283b306b315a5b8f73976
Currently, there's the location_spec hierarchy, and then some
location_spec subclasses have their own struct type holding all their
data fields.
I.e., there is this:
location_spec
explicit_location_spec
linespec_location_spec
address_location_spec
probe_location_spec
and then these separate types:
explicit_location
linespec_location
where:
explicit_location_spec
has-a explicit_location
linespec_location_spec
has-a linespec_location
This patch eliminates explicit_location and linespec_location,
inlining their members in the corresponding location_spec type.
The location_spec subclasses were the ones currently defined in
location.c, so they are moved to the header. Since the definitions of
the classes are now visible, we no longer need location_spec_deleter.
Some constructors that are used for cloning location_specs, like:
explicit explicit_location_spec (const struct explicit_location *loc)
... were converted to proper copy ctors.
In the process, initialize_explicit_location is eliminated, and some
functions that returned the "data type behind a locspec", like
get_linespec_location are converted to downcast functions, like
as_linespec_location_spec.
Change-Id: Ia31ccef9382b25a52b00fa878c8df9b8cf2a6c5a
Currently, GDB internally uses the term "location" for both the
location specification the user input (linespec, explicit location, or
an address location), and for actual resolved locations, like the
breakpoint locations, or the result of decoding a location spec to
SaLs. This is especially confusing in the breakpoints module, as
struct breakpoint has these two fields:
breakpoint::location;
breakpoint::loc;
"location" is the location spec, and "loc" is the resolved locations.
And then, we have a method called "locations()", which returns the
resolved locations as a range...
The location spec type is presently called event_location:
/* Location we used to set the breakpoint. */
event_location_up location;
and it is described like this:
/* The base class for all an event locations used to set a stop event
in the inferior. */
struct event_location
{
and even that is incorrect... Location specs are used for finding
actual locations in the program in scenarios that have nothing to do
with stop events. E.g., "list" works with location specs.
To clean all this confusion up, this patch renames "event_location" to
"location_spec" throughout, and then all the variables that hold a
location spec, they are renamed to include "spec" in their name, like
e.g., "location" -> "locspec". Similarly, functions that work with
location specs, and currently have just "location" in their name are
renamed to include "spec" in their name too.
Change-Id: I5814124798aa2b2003e79496e78f95c74e5eddca
When testing on openSUSE Leap 15.4 I ran into this FAIL:
...
FAIL: gdb.arch/i386-mpx-map.exp: NULL address of the pointer
...
and likewise for all the other mpx tests.
The problem is that have_mpx is supposed to return 0, but it doesn't because
it tries to match this output:
...
builtin_spawn -ignore SIGHUP temp/20294/have_mpx-2-20294.x^M
No MPX support^M
No MPX support^M
...
using:
...
&& ![string equal $output "No MPX support\r\n"]]
...
Fix this by matching using a regexp instead.
Tested on x86_64-linux.
Triggered by a file containing just "#N" or "#A". fgets, when hitting
EOF before reading anything, returns NULL and does not write to buf.
strchr (buf, '\n') then reads from uninitialised memory.
* input-file.c (input_file_open): Don't assume buf contains
zero string terminator when fgets returns NULL.
PR 29250
binutils/
* dwarf.c (display_debug_frames): Set col_type[reg] on sizing
pass over FDE to cie->col_type[reg] if CIE specifies reg.
Handle DW_CFA_restore and DW_CFA_restore_extended on second
pass using the same logic. Remove unnecessary casts. Don't
call frame_need_space on second pass over FDE.
gas/
* testsuite/gas/i386/ehinterp.d,
* testsuite/gas/i386/ehinterp.s: New test.
* testsuite/gas/i386/i386.exp: Run it.
Noticed a format mismatch when attempting to build gdb on i686-linux-gnu
in --enable-64-bit-bfd mode:
sim/../../sim/cris/sim-if.c:576:28:
error: format '%lx' expects argument of type 'long unsigned int',
but argument 4 has type 'bfd_size_type' {aka 'long long unsigned int'} [-Werror=format=]
576 | sim_do_commandf (sd, "memory region 0x%" BFD_VMA_FMT "x,0x%lx",
| ^~~~~~~~~~~~~~~~~~~
577 | interp_load_addr, interpsiz);
| ~~~~~~~~~
| |
| bfd_size_type {aka long long unsigned int}
While at it, fixed the format string for time-related types.
I noticed that emit_exiting_event does not check whether there are any
listeners before creating the event object. All other event emitters
do this, so this patch updates this one as well.
PR python/28533 points out that the Python 'dont_repeat' documentation
is a bit ambiguous about when the method ought to be called. This
patch spells it out.
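For reference, a minimal sketch of the intended usage pattern (the command
name here is made up): dont_repeat is called from within invoke, so that
hitting <RET> afterwards does not re-run the command.
  import gdb

  class PrintOnce(gdb.Command):
      """Hypothetical command that should not repeat on a bare <RET>."""

      def __init__(self):
          super().__init__("print-once", gdb.COMMAND_USER)

      def invoke(self, arg, from_tty):
          # Call dont_repeat from within invoke, before doing the work,
          # so an empty follow-up line does not re-run this command.
          self.dont_repeat()
          gdb.write("this output will not be repeated by <RET>\n")

  PrintOnce()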
For Cortex-M targets, the SP register is never detached from msp or
psp; it always has the same value as one of them. Let GDB treat
ARM_SP_REGNUM as an alias, similar to what is done in hardware.
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
For Arm Cortex-M33 with security extensions, there are 4 different
stack pointers (msp_s, msp_ns, psp_s, psp_ns). To be compatible
with earlier Cortex-M derivatives, the msp and psp registers are
aliases for one of the 4 real stack pointer registers.
These are the combinations that exist:
sp -> msp -> msp_s
sp -> msp -> msp_ns
sp -> psp -> psp_s
sp -> psp -> psp_ns
This means that when the GDB client is to show the value of "msp",
the value should always be equal to either "msp_s" or "msp_ns".
Same goes for "psp".
To add a bit more context: GDB does not really use the register msp
(or psp) internally, but they are part of the set of registers which
are provided by the target.xml file. As a result, they will be part
of the set of registers printed by the "info r" command.
Without this particular patch, GDB will hit the assert in the bottom
of arm_cache_get_sp_register function.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29121
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
For Arm Cortex-M33 with security extensions, there are 4 different
stack pointers (msp_s, msp_ns, psp_s, psp_ns). In order to
identify the active one, compare the values of the different
stacks. The value of the initial sp register needs to be fetched to
perform this comparison.
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
After the recent restructuring of the disassembler code, GDB has ended
up with two class static functions, both called dis_asm_read_memory,
with identical implementations.
My first thought was to move these out of their respective classes,
and just make them global functions; then I'd only need a single
copy.
And maybe that's the right way to go. But I disliked that by doing
that I lose the encapsulation of the method with the corresponding
disassembler class.
So, instead, I placed the static method into its own class, and had
both the gdb_non_printing_memory_disassembler and gdb_disassembler
classes inherit from this new class as an additional base-class.
In terms of code generated, I don't think there's any significant
difference with this approach, but I think this better reflects how
the function is closely tied to the disassembler.
There should be no user visible changes after this commit.
This commit started from an observation I made while working on some
other disassembler patches, that is, that the function
gdb_buffered_insn_length is broken ... sort of.
I noticed that the gdb_buffered_insn_length function doesn't set up
the application data field of the disassemble_info structure.
Further, I noticed that some architectures, for example, ARM, require
that the application_data field be set; see gdb_print_insn_arm in
arm-tdep.c.
And so, if we ever use gdb_buffered_insn_length for ARM, then GDB will
likely crash. Which is why I said only "sort of" broken. Right now
we don't use gdb_buffered_insn_length with ARM, so maybe it isn't
broken yet?
Anyway to prove to myself that there was a problem here I extended the
disassembler self tests in disasm-selftests.c to include a test of
gdb_buffered_insn_length. As I run the test for all architectures, I
do indeed see GDB crash for ARM.
To fix this we need gdb_buffered_insn_length to create a disassembler
that inherits from gdb_disassemble_info, but we also need this new
disassembler to not print anything.
And so, I introduce a new gdb_non_printing_disassembler class; this is
a disassembler that doesn't print anything to the output stream.
I then observed that both ARC and S12Z also create non-printing
disassemblers, but these are slightly different. While the
disassembler in gdb_non_printing_disassembler reads the instruction
from a buffer, the ARC and S12Z disassemblers read from target memory
using target_read_code.
And so, I further split gdb_non_printing_disassembler into two
sub-classes, gdb_non_printing_memory_disassembler and
gdb_non_printing_buffer_disassembler.
The new selftests now pass, but otherwise, there should be no user
visible changes after this commit.
This commit extends the Python API to include disassembler support.
The motivation for this commit was to provide an API by which the user
could write Python scripts that would augment the output of the
disassembler.
To achieve this I have followed the model of the existing libopcodes
disassembler, that is, instructions are disassembled one by one. This
does restrict the type of things that it is possible to do from a
Python script, i.e. all additional output has to fit on a single line,
but this was all I needed, and creating something more complex would,
I think, require greater changes to how GDB's internal disassembler
operates.
The disassembler API is contained in the new gdb.disassembler module,
which defines the following classes:
DisassembleInfo
Similar to the libopcodes disassemble_info structure, has read-only
properties: address, architecture, and progspace. And has methods:
__init__, read_memory, and is_valid.
Each time GDB wants an instruction disassembled, an instance of
this class is passed to a user written disassembler function; by
reading the properties and calling the methods (and other support
methods in the gdb.disassembler module) the user can perform and
return the disassembly.
Disassembler
This is a base-class which user written disassemblers should
inherit from. This base class provides base implementations of
__init__ and __call__ which the user written disassembler should
override.
DisassemblerResult
This class can be used to hold the result of a call to the
disassembler, it's really just a wrapper around a string (the text
of the disassembled instruction) and a length (in bytes). The user
can return an instance of this class from Disassembler.__call__ to
represent the newly disassembled instruction.
The gdb.disassembler module also provides the following functions:
register_disassembler
This function registers an instance of a Disassembler sub-class
as a disassembler, either for one specific architecture, or, as a
global disassembler for all architectures.
builtin_disassemble
This provides access to GDB's builtin disassembler. A common
use case that I see is augmenting the existing disassembler output.
The user code can call this function to have GDB disassemble the
instruction in the normal way. The user gets back a
DisassemblerResult object, which they can then read in order to
augment the disassembler output in any way they wish.
This function also provides a mechanism to intercept the
disassembler's reads of memory, thus the user can adjust what GDB
sees when it is disassembling.
The included documentation provides a more detailed description of the
API.
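To make the shape of the API concrete, here is a minimal sketch of a wrapping
disassembler (the class and comment text are illustrative; the exact
signatures are those described in the documentation added by this commit):
  import gdb
  import gdb.disassembler
  from gdb.disassembler import Disassembler, DisassemblerResult

  class CommentedDisassembler(Disassembler):
      """Append the instruction address as a comment to the builtin output."""

      def __init__(self):
          # The name is what 'maint info python-disassemblers' will list.
          super().__init__("CommentedDisassembler")

      def __call__(self, info):
          # Let GDB's builtin disassembler do the real work ...
          result = gdb.disassembler.builtin_disassemble(info)
          # ... then return a new result with augmented text.
          text = "%s\t# insn at 0x%x" % (result.string, info.address)
          return DisassemblerResult(result.length, text)

  # Passing None registers the disassembler globally, for all architectures;
  # an architecture name string would restrict it to that architecture.
  gdb.disassembler.register_disassembler(CommentedDisassembler(), None)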
There is also a new CLI command added:
maint info python-disassemblers
This command is defined in the Python gdb.disassemblers module, and
can be used to list the currently registered Python disassemblers.
This commit is setup for the next commit.
In the next commit I will add a Python API to intercept the print_insn
calls within GDB; each print_insn call is responsible for
disassembling and printing one instruction. After the next commit it
will be possible for a user to write Python code that either wraps
around the existing disassembler, or even, in extreme situations,
entirely replaces the existing disassembler.
This commit does not add any new Python API.
What this commit does is put the extension language framework in place
for a print_insn hook. There's a new callback added to 'struct
extension_language_ops', which is then filled in with nullptr for Python
and Guile.
Finally, in the disassembler, the code is restructured so that the new
extension language function ext_lang_print_insn is called before we
delegate to gdbarch_print_insn.
After this, the next commit can focus entirely on providing a Python
implementation of the new print_insn callback.
There should be no user visible change after this commit.
The motivation for this change is an upcoming Python disassembler API
that I would like to add. As part of that change I need to create a
new disassembler-like class that contains a disassemble_info and a
gdbarch. The management of these two objects is identical to how we
manage these objects within gdb_disassembler, so it might be tempting
for my new class to inherit from gdb_disassembler.
The problem however, is that gdb_disassembler has a tight connection
between its constructor, and its print_insn method. In the
constructor the ui_file* that is passed in is replaced with a member
variable string_file*, and then in print_insn, the contents of the
member variable string_file are printed to the original ui_file*.
What this means is that the gdb_disassembler class has a tight
coupling between its constructor and print_insn; the class just isn't
intended to be used in a situation where print_insn is not going to be
called, which is how my (upcoming) sub-class would need to operate.
My solution then, is to separate out the management of the
disassemble_info and gdbarch into a new gdb_disassemble_info class,
and make this class a parent of gdb_disassembler.
In arm-tdep.c and mips-tdep.c, where we used to cast the
disassemble_info->application_data to a gdb_disassembler, we can now
cast to a gdb_disassemble_info as we only need to access the gdbarch
information.
Now, my new Python disassembler sub-class will still want to print
things to an output stream, and so we will want access to the
dis_asm_fprintf functionality for printing.
However, rather than move this printing code into the
gdb_disassemble_info base class, I have added yet another level of
hierarchy, a gdb_printing_disassembler; thus the class structure is
now:
struct gdb_disassemble_info {};
struct gdb_printing_disassembler : public gdb_disassemble_info {};
struct gdb_disassembler : public gdb_printing_disassembler {};
In a later commit my new Python disassembler will inherit from
gdb_printing_disassembler.
The reason for adding the additional layer to the class hierarchy is
that in yet another commit I intend to rewrite the function
gdb_buffered_insn_length, and to do this I will be creating yet more
disassembler-like classes; however, these will not print anything,
thus I will add a gdb_non_printing_disassembler class that also
inherits from gdb_disassemble_info. Knowing that that change is
coming, I've gone with the above class hierarchy now.
There should be no user visible changes after this commit.
Convert the gdbpy_err_fetch class to make use of gdbpy_ref; this
removes the need for manual reference count management, and allows the
destructor to be removed.
There should be no functional change after this commit.
I think this cleanup is worth doing on its own. However, in a later
commit I will want to copy instances of gdbpy_err_fetch, and switching
to using gdbpy_ref means that I can rely on the default copy
constructor without having to add one that handles the reference
counts, so this is good preparation for that upcoming change.