PR ld/31906
* libdep_plugin.c (str2vec): Fix a bug where the null byte was not copied by memmove during quote handling and escaping, causing the last character of the last argument to be repeated.
Fix a buffer overflow in **res when arguments were separated by `\t` instead of ` `.
Remove handling of the escape character `\`, as it made it impossible to specify paths containing `\`
-- the implementation merely dropped `\` and was affected by the memmove bug, so this should not be a breaking change; single and double quotes are sufficient to deal with white space and quote characters, so there is no need for escaping.
Report a syntax error on unterminated quotes.
Make the parser run in linear time instead of quadratic time.
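As an illustration of the resulting parsing rules (single and double quotes group whitespace, no escape character, unterminated quotes are an error, one pass over the input), here is a minimal sketch in C++; it is not the libdep_plugin.c code, which builds a malloc'd argv-style vector instead.

#include <stdexcept>
#include <string>
#include <vector>

/* Split S into arguments on spaces and tabs; text between matching single
   or double quotes is copied verbatim (no escape character).  Each input
   character is visited once, so this runs in linear time.  */
static std::vector<std::string>
split_args (const std::string &s)
{
  std::vector<std::string> out;
  std::string cur;
  bool in_arg = false;

  for (size_t i = 0; i < s.size (); )
    {
      char c = s[i];
      if (c == ' ' || c == '\t')
        {
          /* Whitespace terminates the current argument, if any.  */
          if (in_arg)
            {
              out.push_back (cur);
              cur.clear ();
              in_arg = false;
            }
          i++;
        }
      else if (c == '\'' || c == '"')
        {
          /* Copy up to the matching quote; an unterminated quote is a
             syntax error.  */
          size_t end = s.find (c, i + 1);
          if (end == std::string::npos)
            throw std::runtime_error ("unterminated quote");
          cur.append (s, i + 1, end - (i + 1));
          in_arg = true;
          i = end + 1;
        }
      else
        {
          cur += c;
          in_arg = true;
          i++;
        }
    }

  if (in_arg)
    out.push_back (cur);
  return out;
}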
Introduces instructions for the SME2 lutv2 extension for AArch64. They
are documented in the following document:
* ARM DDI0602
For both luti4 instructions, we introduced an operand called
SME_Znx2_BIT_INDEX. We use the existing function parse_vector_reg_list
for parsing, but modified that function so that it accepts operands
without qualifiers and rejects operands that carry qualifiers when
the instruction is not supposed to have qualified operands.
For disassembly, we modified print_register_list so that it could
accept register lists without qualifiers.
For one luti4 instruction, we introduced an operand called
SME_Zdnx4_STRIDED. It is similar to SME_Ztx4_STRIDED, so we could reuse
existing code for parsing, encoding, and disassembly.
For the movt instruction, we introduced an operand called SME_ZT0_INDEX2_12.
This is a ZT0 register with a bit index encoded in [13:12]. It is
similar to SME_ZT0_INDEX.
We also introduced an iclass named sme_size_12_b so that we can encode
size bits [13:12] correctly when only 'b' is allowed as a qualifier.
Compiling on FreeBSD 13.2 with the default clang version 14.0.5 and top level
configure options --with-python=/usr/local/bin/python3.9 gives this error:
CXX ada-exp.o
./../binutils-gdb/gdb/ada-exp.y:100:8: error: no template named 'unordered_map' in namespace 'std'
std::unordered_map<std::string, std::vector<ada_index_var_operation *>>
~~~~~^
1 error generated.
This change fixes it.
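Presumably the fix just adds the missing standard header to ada-exp.y instead of relying on it being pulled in transitively (which the libc++ headers used here evidently do not do); a minimal sketch:

/* Sketch only: include the header that declares std::unordered_map
   directly, next to the other includes in gdb/ada-exp.y.  */
#include <unordered_map>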
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31918
Approved-By: Tom Tromey <tom@tromey.com>
When building with 'make -j20 -C gdb/doc all-doc' I often see problems
caused by trying to build some dvi files in parallel with some pdf
files. The problem files are: gdb.dvi and gdb.pdf; stabs.dvi and
stabs.pdf; and annotate.dvi and annotate.pdf.
The problem is that building these files creates temporary files in the
local directory. There's already a race here: two make threads
might try to create these files at the same time.
But it gets worse. To avoid issues where a failed build could leave
these temporary files in a corrupted state, and so prevent the next
build from succeeding, the recipe for each of these files deletes all
the temporary files first. This obviously causes problems if some
other thread has already started the build and is relying on those
temporary files.
To work around this problem I propose we start using the --build and
--build-dir options for texi2dvi (which is the same tool used to
create the pdf files). These options were added in texinfo 4.9 which
was released in June 2007. We already require using a version of
texinfo after 4.9 (I tried to build with 4.13 and the doc build failed
as some of the texinfo constructs were not understood), so this patch
has not changed the minimum required version at all.
The --build flag allows the temporary files to be placed into a
sub-directory, and the --build-dir option allows us to control the
name of that sub-directory.
What we do is create a unique sub-directory for each target that
invokes texi2dvi. All of these unique sub-directories are created within
a single directory, texi2dvi_tmpdir/, so after a complete doc build
we are left with a build tree like this:
build/gdb/doc/
'-- texi2dvi_tmpdir/
|-- annotate_dvi/
|-- annotate_pdf/
|-- gdb_dvi/
|-- gdb_pdf/
|-- stabs_dvi/
'-- stabs_pdf/
I've left out all the individual files that live within these
directories for simplicity.
To avoid corrupted temporary files preventing a future build from
completing, each recipe deletes its associated sub-directory from within
texi2dvi_tmpdir/ before it attempts a build; this ensures a fresh
start each time.
And the mostlyclean target deletes texi2dvi_tmpdir/ and all its
sub-directories, ensuring that everything is cleaned up.
For me, with this fix in place, I can now run 'make -j20 -C gdb/doc
all-doc' without seeing any build problems.
Approved-By: Pedro Alves <pedro@palves.net>
There are two problems we encounter when trying to build the refcard
related target in parallel, i.e.:
$ make -j20 -C gdb/doc/ refcard.dvi refcard.ps refcard.pdf
These problems are:
(1) The refcard.dvi and refcard.pdf targets both try to generate the
tmp.sed and sedref.tex files. If two make threads end up trying
to create these files at the same time then these files become
corrupted.
I've fixed this by creating a new rule that creates sedref.tex,
both refcard.dvi and refcard.pdf now depend on this, and make will
build sedref.tex just once. The tmp.sed file is now generated as
refcard.sed; it is created and deleted as a temporary file
within the sedref.tex recipe.
(2) Having created sedref.tex the recipes for refcard.dvi and
refcard.pdf both run various LaTeX based tools with sedref.tex as
the input file. The problem with this is that these tools all
rely on creating temporary files called sedref.*.
If the refcard.dvi and refcard.pdf rules run at the same time then
these temporary files clash and overwrite each other causing the
build to fail.
We already copy the result file in order to rename it: our input
file is sedref.tex, which results in an output file named
sedref.dvi or sedref.pdf, but we actually want refcard.dvi or
refcard.pdf. So within the recipe for refcard.dvi I copy the
input file from sedref.tex to sedref_dvi.tex. Now all the temp
files are named sedref_dvi.* and the output is sedref_dvi.dvi, which I
then rename to refcard.dvi.
I've done the same thing for refcard.pdf, but I copy the input
to sedref_pdf.tex.
In this way the temp files no longer clash, and both recipes can
safely run in parallel.
After this commit I was able to reliably build all of the refcard
targets in parallel. There should be no change in the final file.
Approved-By: Tom Tromey <tom@tromey.com>
In gdb/doc/Makefile.in the TEXI2POD variable is used to invoke
texi2pod.pl, which processes the .texinfo files. This also handles the
'include' directives within the .texinfo files.
Like the texi2dvi and texi2pdf tools, texi2pod.pl handles the -I flag
to add search directories for resolving 'include' directives within
.texinfo files.
When GDB runs TEXI2POD we include gdb-cfg.texi, which then includes
GDBvn.texi.
When building from a git checkout the gdb-cfg.texi files and
GDBvn.texi files will be created in the build directory, which is
where texi2pod.pl is invoked, so the files will be found just fine.
However, for a GDB release we ship gdb-cfg.texi and GDBvn.texi in the
source tree, along with the generated manual (.1 and .5) files.
So when building a release, what normally happens is that we spot that
the .1 and .5 man files are up to date, and don't run the recipe to
regenerate these files.
However, if we deliberately touch the *.texinfo files in a release
source tree, and then try to rebuild the man files, we'll get an error
like this:
make: Entering directory '/tmp/release-build/build/gdb/doc'
TEXI2POD gdb.1
cannot find GDBvn.texi at ../../../gdb-16.0.50.20240529/gdb/doc/../../etc/texi2pod.pl line 251, <GEN0> line 16.
make: *** [Makefile:664: gdb.1] Error 2
make: Leaving directory '/tmp/release-build/build/gdb/doc'
The problem is that texi2pod.pl doesn't know to look in the source
tree for the GDBvn.texi file.
If we compare this to the recipe for creating (for example) gdb.dvi,
which uses texi2dvi, this recipe adds '-I $(srcdir)' to the texi2dvi
command line, which allows texi2dvi to find GDBvn.texi in the source
tree.
In this commit I add a similar -I option to the texi2pod.pl command
line. After this, given a GDB release, it is possible to edit (or
just touch) the gdb.texinfo file and rebuild the man pages;
GDBvn.texi will be picked up from the source tree.
If however a dependency for GDBvn.texi is changed in a release tree
then GDBvn.texi will be regenerated into the build directory and this
will be picked up in preference to the GDBvn.texi in the source tree,
just as you would want.
Approved-By: Tom Tromey <tom@tromey.com>
In a git checkout of the source code we don't have a version.subst
file in the gdb/doc directory. When building the GDB docs the
version.subst file is generated on demand (we have a recipe for that).
However, in a release tar file we do include a copy of the
version.subst file in the source tree, as a result the version.subst
recipe will not be run.
If, in a release build, we force the running of any recipe that
depends on version.subst then we run into a problem. For example,
slightly confusingly, if we 'touch gdb/doc/version.subst' within the
unpacked source tree of a release, then run 'make -C gdb/doc GDBvn.texi'
in the build tree, we'll see:
make: Entering directory '/tmp/build/build/gdb/doc'
GEN GDBvn.texi
sed: can't read version.subst: No such file or directory
make: Leaving directory '/tmp/build/build/gdb/doc'
The problem is that every reference to version.subst in GDB's Makefile
assumes that the version.subst file will always be in the build
directory.
Handily version.subst is always the first dependency in every recipe
that uses that file. As such we can replace references to
version.subst with $<; make will expand this to the location where the
dependency was found.
In the case of the man page generation, the reference to version.subst
is hidden inside POD2MAN. It seemed a little confusing adding a use
of $< within POD2MAN, so I've moved the use into the recipe, which I
think is clearer.
I've also added comments for the two rules that I've modified to
explain our use of $<.
After this change it is possible to rebuild the man pages even when
version.subst is located in the source tree.
Approved-By: Tom Tromey <tom@tromey.com>
We have two rules, one each for building the .1 and .5 man pages. The
only actual difference is that one rule passes --section=1 and the
other passes --section=5 (see the definitions of POD2MAN1 and POD2MAN5
respectively).
I figure by using the suffix from the target of the rule we can
combine these two rules into one.
I use:
$(subst .,,$(suffix $@))
This gets the suffix from the target, either '.1' or '.5', and the
'subst' removes the '.' leaving '1' or '5'.
Now that I'm not using a static pattern rule for building the man
pages, the advice in the 'make' documentation is to not use $*, so
I've moved away from that to instead use $(basename $@), e.g. for
'gdbinit.5' this gives 'gdbinit', which is what we want.
There should be no difference in what is created after this change.
Approved-By: Tom Tromey <tom@tromey.com>
The build recipe for gdb.dvi and gdb.pdf contains instructions for
copying the GDBvn.texi file from the source tree into the build
directory if the GDBvn.texi file doesn't already exist in the build
directory.
The gdb.dvi and gdb.pdf targets also have a dependency on GDBvn.texi,
and we have a recipe for building GDBvn.texi.
What's happening here is this:
- In a git checkout of the source tree there is no GDBvn.texi in the
source tree, the GDBvn.texi dependency will trigger a rebuild of
GDBvn.texi, which is then used to build gdb.dvi and/or gdb.pdf.
- In a release tar file we do include a copy of GDBvn.texi. This
file will appear to be up to date, and so no copy of GDBvn.texi is
created within the build directory. Now when building gdb.dvi
and/or gdb.pdf we copy (or symlink) the version of GDBvn.texi from
the source tree into the build directory.
However, copying GDBvn.texi from the source directory is completely
unnecessary. The gdb.dvi/gdb.pdf recipes both invoke texi2dvi and
pass '-I $(srcdir)' as an argument; this means that texi2dvi will look
in $(srcdir) to find included files, including GDBvn.texi.
As such I believe we can remove the code that copies GDBvn.texi from
the source tree into the build tree.
I've tested with a release build; creating a release with:
./src-release gdb
Then in an empty directory, unpacking the resulting .tar file,
creating a parallel build directory and doing the usual configure,
make, and 'make install'.
Having done this I can then 'touch gdb/doc/*.texinfo' in the unpacked
source tree, and do 'make -C gdb/doc pdf dvi' to rebuild all the pdf
and dvi files; this works fine without having to either build or copy
GDBvn.texi into the build directory.
Approved-By: Tom Tromey <tom@tromey.com>
After the x86 target description changes that I committed recently,
the first commit in the series being:
commit 8a29222b85
Date: Sat Jan 27 10:40:35 2024 +0000
gdb/gdbserver: share I386_LINUX_XSAVE_XCR0_OFFSET definition
and the last commit in the series being:
commit 646d754d14
Author: Andrew Burgess <aburgess@redhat.com>
Date: Tue Jan 30 15:37:23 2024 +0000
gdb/gdbserver: share x86/linux tdesc caching
The sourceware buildbot highlighted a regression on i386. On the GDB
side we'd see this:
Remote debugging using :54321
warning: Architecture rejected target-supplied description
Remote connection closed
(gdb)
while on the gdbserver side we'd see this:
$ ./gdbserver/gdbserver --once :54321 ~/empty
Process /srv/aburgess/empty created; pid = 31406
Listening on port 54321
Remote debugging from host ::1, port 39488
../../src/gdbserver/regcache.cc:272: A problem internal to GDBserver has been detected.
Unknown register st0 requested
Aborted (core dumped)
When I tried to reproduce this regression on my local i386 VM the
issue would not reproduce.
I eventually tracked the problem down to x86_linux_tdesc_for_tid in
gdb/nat/x86-linux-tdesc.c. In this function we have this line:
/* Check if PTRACE_GETREGSET works. */
if (ptrace (PTRACE_GETREGSET, tid,
(unsigned int) NT_X86_XSTATE, &iov) < 0)
{
... handle failure ...
}
else
{
... handle success ...
}
The problem is that on my VM the PTRACE_GETREGSET feature is
supported, while on sourceware's buildbot machine this feature is not
supported.
I did a quick search and it seems like the 'xsave' feature in
/proc/cpuinfo might be the indicator for whether PTRACE_GETREGSET is
supported or not, and indeed my machine has the 'xsave' feature while
the sourceware machine does not.
The point of divergence then is this ptrace call: on my machine the
call succeeds and we extract the xcr0 value from the iov vector, while
on the sourceware machine the ptrace call fails and we use a default
xcr0 value of 0.
This xcr0 value is then passed to i386_linux_read_description at the
end of x86_linux_tdesc_for_tid.
In gdb/arch/i386-linux-tdesc.c we find i386_linux_read_description
which does some caching but calls i386_create_target_description to
actually create the target descriptions when needed. The xcr0 value
is masked to only the bits that are interesting, but given a value of
0 we'll just pass 0 through to i386_create_target_description.
In gdb/arch/i386.c we find i386_create_target_description which checks
the xcr0 bits and builds the target description. What we can see is
that if no bits are set in the xcr0 value then no features will be
added to the created target description. This featureless target
description is then transmitted back to GDB, where it is rejected
due to the lack of essential core registers.
So, how did things work prior to the above commit series? There are
three places of interest: on the GDB side there are
x86_linux_nat_target::read_description and
i386_linux_core_read_description, and on the gdbserver side there is
x86_linux_read_description.
All of these locations have a call to i386_linux_read_description
followed by a check if the return value was nullptr. If we do get
back nullptr then we perform another call to
i386_linux_read_description with a default xcr0 value.
Looking in i386_linux_read_description we see a specific check for
xcr0 being 0 in which case we return nullptr.
And so, prior to the above series, if xcr0 was 0 due to
PTRACE_GETREGSET being unavailable we'd use a default xcr0 value.
After the above series this is no longer the case: the 'xcr0 == 0'
check has been removed from i386_linux_read_description and the
calling code is streamlined to remove the use of default xcr0 values.
The fix I propose here is to set up the default xcr0 value at the point
where we find that PTRACE_GETREGSET is unavailable. The default value
used is X86_XSTATE_SSE_MASK. This is the default used in
x86_linux_nat_target::read_description (for GDB) and in
x86_linux_read_description (for gdbserver). The above commit series
already fixed i386_linux_core_read_description to ensure that the
correct default xcr0 value was used; this case is a little special in
that it uses different defaults depending on which sections are
present in the core file, so that case always needed to be handled
differently.
The choice of X86_XSTATE_SSE_MASK corresponds to the default used for
i386 before the above series was committed. This mask includes the
X87 and SSE bits only; neither of these bits is checked for on amd64
or x32, so this default doesn't change the behaviour on these targets.
By setting the default xcr0 value at this early stage we ensure that
the cached xcr0 value on the gdbserver side is correct. This is
critical as this cached xcr0 value is passed through to the in process
agent (IPA). If we leave the cached xcr0 value as 0 and apply the
defaults later in the series, we also have to encode the knowledge of
the default into the IPA; this just means we have the default encoded
in multiple locations, which seems like a bad idea. The approach used
in this patch means the default is present in just one location.
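A sketch of the shape of the fix (illustrative only; the real code in nat/x86-linux-tdesc.c differs in detail, and the XSAVE-area offset and buffer size below are assumptions, while X86_XSTATE_SSE_MASK and NT_X86_XSTATE are the names used above):

#include <stdint.h>
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/uio.h>
#include <elf.h>                /* NT_X86_XSTATE */

#define X86_XSTATE_SSE_MASK 0x3 /* x87 | SSE; stand-in for the gdb constant */
#define XSAVE_XCR0_OFFSET 464   /* assumed offset of xcr0 in the XSAVE area */

/* Return the xcr0 mask to use when building TID's target description,
   falling back to the x87+SSE default when PTRACE_GETREGSET is not
   available, instead of leaving it as 0 (which would lead to a
   featureless target description that GDB then rejects).  */
static uint64_t
read_xcr0_with_default (pid_t tid)
{
  uint64_t xstate[512];
  struct iovec iov = { xstate, sizeof (xstate) };

  if (ptrace (PTRACE_GETREGSET, tid, (unsigned int) NT_X86_XSTATE, &iov) < 0)
    return X86_XSTATE_SSE_MASK;

  return *(uint64_t *) ((char *) xstate + XSAVE_XCR0_OFFSET);
}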
This commit should fix the i386 regressions seen on the sourceware
buildbot.
In addition to the fix in nat/x86-linux-tdesc.c I've also fixed the
layout of the declaration of x86_linux_tdesc_for_tid in the header
file.
Approved-By: Felix Willgerodt <felix.willgerodt@intel.com>
- Add type annotations
- Use a raw string in one spot (where we call re.sub), to avoid an
"invalid escape sequence" warning.
- Remove unused "os" import.
Change-Id: I0149cbb73ad2b05431f032fa9d9530282cb01e90
Reviewed-By: Guinevere Larsen <blarsen@redhat.com>
On fedora rawhide, I ran into:
...
(gdb) continue^M
Continuing.^M
^M
Catchpoint 2 (call to syscall clone3), 0x000000000042097d in __clone3 ()^M
(gdb) FAIL: gdb.threads/stepi-over-clone.exp: continue
...
Fix this by updating a regexp to also recognize __clone3.
Tested on x86_64-linux.
Tested-By: Guinevere Larsen <blarsen@redhat.com>
When running test-case gdb.base/watchpoint-running on ppc64le-linux (and
similar on arm-linux), we get:
...
(gdb) watch global_var^M
warning: Error when detecting the debug register interface. \
Debug registers will be unavailable.^M
Watchpoint 2: global_var^M
(gdb) FAIL: $exp: all-stop: hardware: watch global_var
FAIL: $exp: all-stop: hardware: watchpoint hit (timeout)
...
The problem is that ppc_linux_dreg_interface::detect fails to detect the
hardware watchpoint interface, because the calls to ptrace return with errno
set to ESRCH.
This is a feature of ptrace: if a call is done while the tracee is not
ptrace-stopped, it returns ESRCH.
Indeed, in the test-case "watch global_var" is executed while the inferior is
running, and that triggers the first call to ppc_linux_dreg_interface::detect.
And because the detection failure is cached, subsequent attempts at setting
hardware watchpoints will also fail, even if the tracee is ptrace-stopped.
The way to fix this is to make sure that ppc_linux_dreg_interface::detect is
called when we know that the thread is ptrace-stopped, which in the current
setup is best addressed by using target-specific post_attach and
post_startup_inferior overrides. However, as we can see in
aarch64_linux_nat_target, that causes code duplication.
Fix this by:
- defining a new target hook low_init_process, called from
linux_init_ptrace_procfs, which is called from both
linux_nat_target::post_attach and linux_nat_target::post_startup_inferior,
- adding implementations for ppc_linux_nat_target and arm_linux_nat_target
that detect the hardware watchpoint interface,
- replacing the aarch64_linux_nat_target implementations of post_attach and
post_startup_inferior with a low_init_process implementation.
Tested on ppc64le-linux, arm-linux, aarch64-linux and x86_64-linux.
Co-Authored-By: Tom de Vries <tdevries@suse.de>
Approved-By: Luis Machado <luis.machado@arm.com>
PR tdep/31834
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31834
PR tdep/31705
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31705
These can be replaced by adds when acting on a register operand.
While for the scalar forms there's no gain in encoding size, ADD
generally has higher throughput than SHL. EFLAGS set by ADD are a
superset of those set by SHL (AF in particular is undefined there).
For the SIMD cases the transformation also reduces code size, by
eliminating the 1-byte immediate from the resulting encoding. Note
that this transformation is not applied by gcc13 (according to my
observations), so it would - as of now - even improve compiler-generated
code.
Like for REX/REX2, EVEX-prefixed insns access the low bytes of all
registers; %ah...%bh are inaccessible. Reflect this correctly in output,
by leveraging REX machinery we already have to this effect.
While these can't be used as register operands, they can be used for
memory operand addressing. Such uses do not prevent conversion: The
RegRex64 checks in check_Rex_required() for base and index registers
were simply wrong. They specifically also aren't needed for byte
registers, as those won't pass i386_index_check() anyway.
Blindly ignoring any mnemonic suffix can't be quite right: Bad suffix /
operand combinations still want flagging. Simply avoid optimizing in
such situations.
PR gas/31903
Although I had elsewhere realized that "one" doesn't point to a nul-
terminated string, it somehow didn't occur to me that the pre-existing
strstr() could have been wrong, and hence I blindly added a new use of
the function. Add the (already prior to 1e3c814459 ["gas: extend \+
support to .rept"]) missing call to sb_terminate(), leveraging that to
simplify the other two places where the lack of nul termination was
previously worked around.
According to the Crypto spec, Zvkned, Zvknhb, Zvkb and Zvkt are
included in Zvkn. So Zvknha should be removed from Zvkn.
bfd/ChangeLog:
* elfxx-riscv.c: Remove zvknha from zvkn.
When I changed the Rust parser to handle 128-bit ints, this
inadvertently broke some other gdb commands. For example, "info
symbol 0xffffffffffffffff" now fails, because the resulting value is
128 bits, but this is rejected by extract_integer.
This patch fixes the problem by changing extract_integer to allow
over-long integers as long as the high bytes are either 0, or (for
signed types) 0xff.
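The relaxed check amounts to something like the following (a sketch of the idea only, shown for big-endian byte order; it is not the actual extract_integer code):

#include <cstddef>
#include <cstdint>

/* Return true if the LEN-byte big-endian buffer BYTES can be narrowed
   to TYPE_LEN bytes without losing information: every excess high-order
   byte must be 0, or 0xff for signed types (plain sign/zero extension).
   The real code also has to handle little-endian buffers, where the
   excess bytes sit at the end instead.  */
static bool
excess_is_extension (const uint8_t *bytes, size_t len, size_t type_len,
                     bool is_signed)
{
  if (len <= type_len)
    return true;

  for (size_t i = 0; i < len - type_len; i++)
    if (bytes[i] != 0 && !(is_signed && bytes[i] == 0xff))
      return false;

  return true;
}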
Regression tested on x86-64 Fedora 38.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31565
Approved-By: Andrew Burgess <aburgess@redhat.com>
When running test-case gdb.python/py-format-address.exp on arm-linux, I get:
...
(gdb) python print("Got: " + gdb.format_address(0x103dd))^M
Got: 0x103dd <main at py-format-address.c:30>^M
(gdb) FAIL: $exp: symbol_filename=on: gdb.format_address, \
result should have an offset
...
What is expected here is:
...
Got: 0x103dd <main+1 at py-format-address.c:30>^M
...
Main starts at main_addr:
...
(gdb) print /x &main^M
$1 = 0x103dc^M
...
and we obtained next_addr 0x103dd by adding 1 to it:
...
set next_addr [format 0x%x [expr $main_addr + 1]]
...
Adding 1 to $main_addr results in an address for a thumb function starting at
address 0x103dc, which is incorrect because main is an arm function (because
I'm running with target board unix/-marm).
At some point during the call to format_addr, arm_addr_bits_remove removes
the thumb bit, which causes the +1 offset to be dropped, causing the FAIL.
Fix this by using the address of the breakpoint on main, provided it's not at
the very start of main.
Tested on arm-linux.
PR testsuite/31452
Bug: https://www.sourceware.org/bugzilla/show_bug.cgi?id=31452
With test-case gdb.opt/inline-cmds.exp on ppc64le-linux, I ran into:
...
PASS: gdb.opt/inline-cmds.exp: finish from marker
...
PASS: gdb.opt/inline-cmds.exp: finish from marker
DUPLICATE: gdb.opt/inline-cmds.exp: finish from marker
...
Fix this by issuing fewer passes.
Tested on ppc64le-linux.
As the comment in the code says, TLS_IE needs only one dynamic reloc.
But commit b67a17aa7c ("LoongArch: Fix the issue of excessive
relocation generated by GD and IE") has incorrectly allocated the space
for two dynamic relocs, causing libc.so to contain 8 R_LARCH_NONE.
Adjust tlsdesc-dso.d for the offset changes and add two tests to ensure
there are no R_LARCH_NONE with TLS.
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Remove JANSSON_LIBS from ld_new_DEPENDENCIES since ld_new_DEPENDENCIES
should only contain binutils dependencies.
PR ld/31909
* Makefile.am (ld_new_DEPENDENCIES): Remove JANSSON_LIBS.
* Makefile.in: Regenerated.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
In commit 764af87825 ("[gdb/python] Add typesafe wrapper around
PyObject_CallMethod") I added poisoning of PyObject_CallMethod:
...
/* Poison PyObject_CallMethod. The typesafe wrapper gdbpy_call_method should be
used instead. */
template<typename... Args>
PyObject *
PyObject_CallMethod (Args...);
...
The idea was that subsequent code would be forced to use gdbpy_call_method
instead of PyObject_CallMethod.
However, that caused build issues with gcc 14 and python 3.13:
...
/usr/bin/ld: python/py-disasm.o: in function `gdb::ref_ptr<_object, gdbpy_ref_policy<_object> > gdbpy_call_method<unsigned int, long long>(_object*, char const*, unsigned int, long long)':
/data/vries/gdb/src/gdb/python/python-internal.h:207:(.text+0x384f): undefined reference to `_object* PyObject_CallMethod<_object*, char*, char*, unsigned int, long long>(_object*, char*, char*, unsigned int, long long)'
/usr/bin/ld: python/py-tui.o: in function `gdb::ref_ptr<_object, gdbpy_ref_policy<_object> > gdbpy_call_method<int>(_object*, char const*, int)':
/data/vries/gdb/src/gdb/python/python-internal.h:207:(.text+0x1235): undefined reference to `_object* PyObject_CallMethod<_object*, char*, char*, int>(_object*, char*, char*, int)'
/usr/bin/ld: python/py-tui.o: in function `gdb::ref_ptr<_object, gdbpy_ref_policy<_object> > gdbpy_call_method<int, int, int>(_object*, char const*, int, int, int)':
/data/vries/gdb/src/gdb/python/python-internal.h:207:(.text+0x12b0): undefined reference to `_object* PyObject_CallMethod<_object*, char*, char*, int, int, int>(_object*, char*, char*, int, int, int)'
collect2: error: ld returned 1 exit status
...
Fix this by poisoning without using templates.
Tested on x86_64-linux.
When running test-case gdb.base/complex-parts.exp on arm-linux, I get:
...
(gdb) p $_cimag (z3)^M
$6 = 6.5^M
(gdb) PASS: gdb.base/complex-parts.exp: long double imaginary: p $_cimag (z3)
ptype $^M
type = double^M
(gdb) FAIL: gdb.base/complex-parts.exp: long double imaginary: ptype $
...
Given that z3 is a complex long double, the test-case expects the type of the
imaginary part of z3 to be long double, but it's double instead.
This is due to the fact that the dwarf info doesn't specify an explicit target
type:
...
<5b> DW_AT_name : z3
<60> DW_AT_type : <0xa4>
...
<1><a4>: Abbrev Number: 2 (DW_TAG_base_type)
<a5> DW_AT_byte_size : 16
<a6> DW_AT_encoding : 3 (complex float)
<a7> DW_AT_name : complex long double
...
and consequently we're guessing in dwarf2_init_complex_target_type based on
the size:
...
case 64:
tt = builtin_type (gdbarch)->builtin_double;
break;
case 96: /* The x86-32 ABI specifies 96-bit long double. */
case 128:
tt = builtin_type (gdbarch)->builtin_long_double;
break;
...
For arm-linux, complex long double is 16 bytes, so the target type is assumed
to be 8 bytes, which is handled by the "case 64", which gets us double
instead of long double.
Fix this by searching for "long" in the name_hint parameter, and using long
double instead.
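The fix then amounts to something along these lines in the size switch shown above (a sketch only; the actual change in dwarf2_init_complex_target_type may differ in detail):

    case 64:
      /* An 8-byte component is normally double, but on targets such as
         arm-linux where long double is also 8 bytes the size alone is
         ambiguous, so use the DWARF type name as a tie-breaker.  */
      if (name_hint != nullptr && strstr (name_hint, "long") != nullptr)
        tt = builtin_type (gdbarch)->builtin_long_double;
      else
        tt = builtin_type (gdbarch)->builtin_double;
      break;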
Note that base types in dwarf are not allowed to contain references to other
types, and the complex types are base types, so the missing explicit target
type is standard-conformant.
A gcc PR was filed to add this as a dwarf extension
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115272).
Tested on arm-linux.
Most of these are harmless, but some of the type confusions and especially
a missing ctf_strerror() on an error path were actual bugs that could
have resulted in test failures crashing rather than printing an error
message.
libctf/
* testsuite/libctf-lookup/enumerator-iteration.c: Fix type
confusion, signedness confusion and a missing ctf_errmsg().
* testsuite/libctf-regression/libctf-repeat-cu-main.c: Return 0 from
the test function.
* testsuite/libctf-regression/open-error-free.c: Fix signedness
confusion.
* testsuite/libctf-regression/zrewrite.c: Remove unused label.
The following recent change introduced a regression when building using
clang++:
commit 764af87825
Date: Wed Jun 12 18:58:49 2024 +0200
[gdb/python] Add typesafe wrapper around PyObject_CallMethod
The error message is:
../../gdb/python/python-internal.h:151:16: error: default initialization of an object of const type 'const char'
constexpr char gdbpy_method_format;
^
= '\0'
CXX python/py-block.o
1 error generated.
make[2]: *** [Makefile:1959: python/py-arch.o] Error 1
make[2]: *** Waiting for unfinished jobs....
In file included from ../../gdb/python/py-auto-load.c:25:
../../gdb/python/python-internal.h:151:16: error: default initialization of an object of const type 'const char'
constexpr char gdbpy_method_format;
^
= '\0'
1 error generated.
make[2]: *** [Makefile:1959: python/py-auto-load.o] Error 1
In file included from ../../gdb/python/py-block.c:23:
../../gdb/python/python-internal.h:151:16: error: default initialization of an object of const type 'const char'
constexpr char gdbpy_method_format;
^
= '\0'
1 error generated.
This patch fixes this by changing gdbpy_method_format to be a templated
struct, and having only its specializations define the static constexpr
member "format". This way, we avoid having an uninitialized constexpr
variable, regardless of whether it is instantiated or not.
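The shape of the fix is roughly as follows (a sketch; the format characters follow Py_BuildValue conventions, but the exact set of specializations is an assumption, not copied from the gdb source):

/* The primary template is declared but never defined, so no
   uninitialized constexpr variable ever exists; each supported argument
   type supplies its Py_BuildValue format character through an explicit
   specialization.  */
template<typename T>
struct gdbpy_method_format;

template<>
struct gdbpy_method_format<int>
{
  static constexpr char format = 'i';
};

template<>
struct gdbpy_method_format<long long>
{
  static constexpr char format = 'L';
};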
Reviewed-By: Tom de Vries <tdevries@suse.de>
Change-Id: I5bec241144f13500ef78daea30f00d01e373692d
There are two encodings for each opcode F6/F7 in ctest, but the second one
is never used, so remove it to reduce the size of opcode_tbl.h.
opcodes/ChangeLog:
* i386-opc.tbl: Removed the secondary insn template for ctest.
* i386-tbl.h: Regenerated.
On s390x-linux, I run into:
...
(gdb) p (short []) s1^M
$3 = {0, 1, 0, <optimized out>}^M
(gdb) FAIL: gdb.dwarf2/shortpiece.exp: p (short []) s1
...
while this is expected:
...
(gdb) p (short []) s1^M
$3 = {1, 0, 0, <optimized out>}^M
(gdb) PASS: gdb.dwarf2/shortpiece.exp: p (short []) s1
...
The type of s1 is:
...
(gdb) ptype s1
type = struct S {
myint a;
myushort b;
}
...
so the difference is due to the fact that viewing an int as two shorts gives
different results depending on the endianness.
Fix this by allowing both results.
Tested on x86_64-linux and s390x-linux.
Approved-By: Tom Tromey <tom@tromey.com>
I noticed that we started using "string cat", which has been available since
tcl version 8.6.2.
Add a local implementation for use with older tcl versions.
Tested on x86_64-linux.
Approved-By: Andrew Burgess <aburgess@redhat.com>
In commit 1a7d840a21 ("[gdb/tdep] Fix ARM_LINUX_JB_PC_EABI"), in the absence of
osabi settings for newlib and uclibc for arm, I chose a best-effort approach
using ifdefs.
Post-commit review [1] pointed out that this may be causing more problems than
it's worth.
Fix this by removing the ifdefs and simply defining ARM_LINUX_JB_PC_EABI to 1.
Rebuilt on x86_64-linux with --enable-targets=all.
Fixes: 1a7d840a21 ("[gdb/tdep] Fix ARM_LINUX_JB_PC_EABI")
[1] https://sourceware.org/pipermail/gdb-patches/2024-June/209779.html
Commit 97033da507 ("[gdb/build] Cleanup gdb/features/feature_to_c.sh")
factored the new file gdb/features/feature_to_c.awk out of
gdb/features/feature_to_c.sh, but failed to add the GPL header comment, so add
it now.
Tested on x86_64-linux.
Three new functions for looking up the enum type containing a given
enumeration constant, and optionally that constant's value.
The simplest, ctf_lookup_enumerator, looks up a root-visible enumerator by
name in one dict: if the dict contains multiple such constants (which is
possible for dicts created by older versions of the libctf deduplicator),
ECTF_DUPLICATE is returned.
The next simplest, ctf_lookup_enumerator_next, is an iterator which returns
all enumerators with a given name in a given dict, whether root-visible or
not.
The most elaborate, ctf_arc_lookup_enumerator_next, finds all
enumerators with a given name across all dicts in an entire CTF archive,
whether root-visible or not, starting by looking in the shared parent dict;
opened dicts are cached (as with all other ctf_arc_*lookup functions) so
that repeated use does not incur repeated opening costs.
All three of these return enumerator values as int64_t: unfortunately, API
compatibility concerns prevent us from doing the same with the other older
enum-related functions, which all return enumerator constant values as ints.
We may be forced to add symbol-versioning compatibility aliases that fix the
other functions in due course, bumping the soname for platforms that do not
support such things.
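A usage sketch for the simplest of the three (the exact prototype lives in ctf-api.h; the signature used below is inferred from the description above and should be treated as an assumption):

#include <ctf-api.h>
#include <inttypes.h>
#include <stdio.h>

/* Look up the enum type containing the root-visible enumerator NAME in
   FP and print its value.  */
static void
show_enumerator (ctf_dict_t *fp, const char *name)
{
  int64_t value;
  ctf_id_t type = ctf_lookup_enumerator (fp, name, &value);

  if (type == CTF_ERR)
    /* ECTF_DUPLICATE here means more than one such constant exists;
       ctf_lookup_enumerator_next can be used to see all of them.  */
    fprintf (stderr, "%s: %s\n", name, ctf_errmsg (ctf_errno (fp)));
  else
    printf ("%s = %" PRIi64 " (container type %lx)\n", name, value,
            (unsigned long) type);
}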
ctf_arc_lookup_enumerator_next is implemented as a nested ctf_archive_next
iterator, and inside that, a nested ctf_lookup_enumerator_next iterator
within each dict. To aid in this, add support to ctf_next_t iterators for
iterators that are implemented in terms of two simultaneous nested iterators
at once. (It has always been possible for callers to use as many nested or
semi-overlapping ctf_next_t iterators as they need, which is one of the
advantages of this style over the _iter style that calls a function for each
thing iterated over: the iterator change here permits *ctf_next_t iterators
themselves* to be implemented by iterating using multiple other iterators as
part of their internal operation, transparently to the caller.)
Also add a testcase that tests all these functions (which is fairly easy
because ctf_arc_lookup_enumerator_next is implemented in terms of
ctf_lookup_enumerator_next) in addition to enumeration addition in
ctf_open()ed dicts, ctf_add_enumerator duplicate enumerator addition, and
conflicting enumerator constant deduplication.
include/
* ctf-api.h (ctf_lookup_enumerator): New.
(ctf_lookup_enumerator_next): Likewise.
(ctf_arc_lookup_enumerator_next): Likewise.
libctf/
* libctf.ver: Add them.
* ctf-impl.h (ctf_next_t) <ctn_next_inner>: New.
* ctf-util.c (ctf_next_copy): Copy it.
(ctf_next_destroy): Destroy it.
* ctf-lookup.c (ctf_lookup_enumerator): New.
(ctf_lookup_enumerator_next): New.
* ctf-archive.c (ctf_arc_lookup_enumerator_next): New.
* testsuite/libctf-lookup/enumerator-iteration.*: New test.
* testsuite/libctf-lookup/enum-ctf-2.c: New test CTF, used by the
above.
Describe a bit more clearly what effects a type being non-root-
visible has. More consistently use the term non-root-visible
rather than hidden. Document ctf_enum_iter.
include/
* ctf-api.h (ctf_enum_iter): Document.
(ctf_type_iter): Hidden, not non-root. Mention that
parent dictionaries are not traversed.
This was always an error, because the ctn_fp routinely has errors set on it,
which is not something you can (or should) do to a const object.
libctf/
* ctf-impl.h (ctf_next_) <cu.ctn_fp>: Make non-const.