Currently, in i386_linux_core_read_description, if GDB fails to
extract an xcr0 value from the core file, then the xcr0 variable is
left with its default value of zero.  We still call the
i386_linux_read_description function, which checks for this zero value
and returns nullptr.
Back in i386_linux_core_read_description we spot the nullptr return
value from i386_linux_read_description and call
i386_linux_read_description again, but this time passing a default
value for xcr0.
In the next commit I plan to rework i386_linux_read_description, and
in so doing I will remove the check for xcr0 == 0; this is in line with
how the amd64 code is written.
However, this means that the 'xcr0 == 0' check needs to move up the
stack to i386_linux_core_read_description.  Again, this brings the i386
code into line with the amd64 code.
This is just a refactor in preparation for the next commit; there
should be no user visible changes after this commit.
Approved-By: Felix Willgerodt <felix.willgerodt@intel.com>
Approved-By: John Baldwin <jhb@FreeBSD.org>
This patch is part of a series that aims to share the x86 Linux
target description creation code between GDB and gdbserver.
Within GDB part of this process involves reading the cs and ds state
from the 'struct user_regs_struct' using a ptrace call.
This isn't done by gdbserver, which is part of the motivation for this
whole series; the approach gdbserver takes is inferior to the approach
GDB takes (gdbserver relies on reading the file being debugged, and
extracting similar information from the file headers).
This commit moves the reading of cs and ds, which is used to figure
out whether a thread is 32-bit or 64-bit (or in x32 mode), into the
gdb/nat directory so that the code can be shared with gdbserver.  At
this point I'm not actually using the code in gdbserver; that will
come later.
As such, there should be no user visible changes after this commit:
GDB continues to do things as it did before (reading cs/ds), while
gdbserver continues to use its own approach (which doesn't require
reading cs/ds).
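For reference, the probe being moved amounts to something like the
following sketch.  This is a hedged illustration rather than GDB's
actual implementation: GDB uses its own named segment-selector
constants, reports ptrace errors properly, and also reads ds in order
to detect x32 mode; the 0x33 value (the 64-bit user code segment
selector on x86-64 Linux) appears here only for illustration.

```cpp
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/types.h>
#include <cstddef>
#include <cerrno>

/* Read the cs segment selector of thread TID from 'struct
   user_regs_struct' with PTRACE_PEEKUSER and use it to decide whether
   the thread is running 64-bit code (x86-64 host assumed).  */
static bool
thread_is_64bit (pid_t tid)
{
  errno = 0;
  long cs = ptrace (PTRACE_PEEKUSER, tid,
                    offsetof (struct user_regs_struct, cs), 0);
  if (errno != 0)
    return false;   /* The real code reports the error instead.  */

  return cs == 0x33;
}
```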
Approved-By: John Baldwin <jhb@FreeBSD.org>
Reviewed-By: Felix Willgerodt <felix.willgerodt@intel.com>
In a later commit I want to access have_ptrace_getregset from a .c
file in the nat/ directory. To achieve this I need access to the
declaration of have_ptrace_getregset.
Currently have_ptrace_getregset is declared (and defined) twice, once
in GDB and once in gdbserver.
This commit moves the declaration into nat/linux-nat.h, but leaves the
two definitions where they are. Now, in my later commit, I can pull
in the declaration from nat/linux-nat.h.
There should be no user visible changes after this commit.
Approved-By: Felix Willgerodt <felix.willgerodt@intel.com>
The have_ptrace_getfpxregs global tracks whether GDB or gdbserver is
running on a kernel that supports the GETFPXREGS ptrace request.
Currently this global is declared twice (once in GDB and once in
gdbserver).  I think it makes sense to move this global into the nat/
directory, and have a single declaration and definition.
While moving this variable I have converted it to a tribool, as that
is what it really was; it even used the same numbering as the tribool
enum (-1, 0, 1).  Where have_ptrace_getfpxregs was used I have updated
the code in the obvious way.
However, while making this change I noticed what I think is a bug in
x86_linux_nat_target::read_description and x86_linux_read_description:
both of these functions can be called multiple times, but in both
cases we only end up calling i386_linux_read_description the first
time through in the event that PTRACE_GETFPXREGS is not supported.
This is because initially have_ptrace_getfpxregs will be
TRIBOOL_UNKNOWN, but after the ptrace call fails we set
have_ptrace_getfpxregs to TRIBOOL_FALSE. The next time we attempt to
read the target description we'll skip the ptrace call, and so skip
the call to i386_linux_read_description.
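To make the suspected bug concrete, here is a minimal, standalone
sketch of the pattern being described.  This is illustrative code, not
the actual GDB implementation; the strings stand in for the target
descriptions that would really be returned.

```cpp
#include <cstdio>

/* The enum matches the tribool numbering mentioned above.  */
enum tribool { TRIBOOL_UNKNOWN = -1, TRIBOOL_FALSE = 0, TRIBOOL_TRUE = 1 };

static tribool have_ptrace_getfpxregs = TRIBOOL_UNKNOWN;

/* Stand-in for the PTRACE_GETFPXREGS attempt; pretend the kernel does
   not support the request.  */
static bool try_getfpxregs () { return false; }

static const char *
read_description ()
{
  if (have_ptrace_getfpxregs == TRIBOOL_UNKNOWN)
    {
      if (!try_getfpxregs ())
        {
          have_ptrace_getfpxregs = TRIBOOL_FALSE;
          /* Only reached the very first time through; later calls skip
             this whole block, and with it this fallback description.  */
          return "fallback (FPU-only) description";
        }
      have_ptrace_getfpxregs = TRIBOOL_TRUE;
    }
  return "description chosen by the rest of the function";
}

int
main ()
{
  std::printf ("first call:  %s\n", read_description ());
  std::printf ("second call: %s\n", read_description ());
  return 0;
}
```

The second call skips the ptrace attempt because
have_ptrace_getfpxregs is already TRIBOOL_FALSE, and so never reaches
the fallback, which is the behaviour described above.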
I've not tried to address this preexisting bug in this commit; this is
purely a refactor, and there should be no user visible changes after
this commit.  In later commits I'll merge the gdbserver and GDB code
together into the nat/ directory, and after that I'll try to address
this bug.
Reviewed-By: Felix Willgerodt <felix.willgerodt@intel.com>
This commit is part of a series that aims to share more of the x86
target description reading/generation code between GDB and gdbserver.
There are a huge number of similarities between the code in
gdbserver's x86_linux_read_description function and GDB's
x86_linux_nat_target::read_description function, and it is this
similar code that I plan, in a later commit, to share between GDB and
gdbserver.
However, one thing that is different in x86_linux_read_description is
the code inside the '!use_xml' block. This is the code that handles
the case where gdbserver is not allowed to send an XML target
description back to GDB. In this case gdbserver uses some predefined,
fixed, target descriptions.
First, it's worth noting that I suspect this code is not tested any
more. I couldn't find anything in the testsuite that tries to disable
XML target description support. And the idea of having a single
"fixed" target description really doesn't work well when we think
about all the various x86 extensions that exist. Part of me would
like to rip out the no-xml support in gdbserver (at least for x86),
and if a GDB connects that doesn't support XML target descriptions,
gdbserver can just give an error and drop the connection. GDB has
supported XML target descriptions for 16 years now, I think it would
be reasonable for our shipped gdbserver to drop support for the old
way of doing things.
Anyway.... this commit doesn't do that.
What I did notice was that, over time, the '!use_xml' block appears to
have "drifted" within the x86_linux_read_description function; it's
now not the first check we do. Instead we make some ptrace calls and
return a target description generated based on the result of these
ptrace calls. Surely it only makes sense to generate variable target
descriptions if we can send these back to GDB?
So in this commit I propose to move the '!use_xml' block earlier in
the x86_linux_read_description function.
The benefit of this is that it leaves the latter half of
x86_linux_read_description much more similar to the GDB function
x86_linux_nat_target::read_description, and sets us up for potentially
sharing code between GDB and gdbserver in a later commit.
Approved-By: John Baldwin <jhb@FreeBSD.org>
Approved-By: Felix Willgerodt <felix.willgerodt@intel.com>
Share the definition of I386_LINUX_XSAVE_XCR0_OFFSET between GDB and
gdbserver.
This commit moves the definition into gdbsupport/x86-xstate.h, which
allows the #define to be shared.
There should be no user visible changes after this commit.
Approved-By: Felix Willgerodt <felix.willgerodt@intel.com>
This changes gdbpy_call_method to return a gdbpy_ref<>. This is
slightly safer because it makes it simpler to correctly handle
reference counts.
Reviewed-By: Tom de Vries <tdevries@suse.de>
Starting with gcc commit 80048aa13a6 ("debug/111409 - don't generate COMDAT
macro sections for split DWARF"), available from release gcc 14.1 onwards, gcc
produces a usable dwarf-5 32-bit .debug_macro.dwo section.
Add a test-case exercising this.
Tested on x86_64-linux.
Tested test-case using a current gcc trunk build, and gcc 14.
Test-case gdb.base/watchpoint-running.exp reports the following kfail:
...
KFAIL: $exp: all-stop: software: watchpoint hit (timeout) (PRMS: gdb/111111)
...
but the referenced gdb PR doesn't exist.
Fix this by using an actual PR.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31833
For historic reasons we have ended up with a random set of discontiguous
bit assignments for INSN_* flags within `membership' and `exclusions'
members of `mips_opcode'. Some of the bits were previously used for ASE
assignments and have been reused in a disorganised fashion since `ase'
has been split off as a member on its own.  This makes them hard to
track and maintain, and makes it hard to see how many we still have
available for future assignments.
Therefore reorder the flags using consecutive bits and matching the
order used with the switch statement in `cpu_is_member'.
With test-case gdb.server/server-kill-python.exp, I sometimes run into:
...
builtin_spawn gdb -nw -nx -q -iex set height 0 -iex set width 0 \
-data-directory data-directory^M
kill^M
(gdb) kill^M
file server-kill-python^M
The program is not being run.^M
(gdb) ERROR: Couldn't load server-kill-python into GDB.
...
The problem is that the spawn produces a prompt, but it's not explicitly
consumed.
This is a regression since commit 0f077fcae0 ("[gdb/testsuite] Simplify
gdb.server/server-kill-python.exp").
Fix this by consuming the initial prompt.
PR testsuite/31819
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31819
Fixes: 0f077fcae0 ("[gdb/testsuite] Simplify gdb.server/server-kill-python.exp")
In gdb/python/py-tui.c we have code like this:
...
gdbpy_ref<> result (PyObject_CallMethod (m_window.get(), "hscroll",
                                          "i", num_to_scroll, nullptr));
...
The nullptr is superfluous; the format string already indicates that there's
only one method argument.
OTOH, passing no method args does use a nullptr:
...
gdbpy_ref<> result (PyObject_CallMethod (m_window.get (), "render",
                                          nullptr));
...
Furthermore, choosing the right format string chars can be tricky.
Add a typesafe wrapper around PyObject_CallMethod that hides these
details, such that we can use the more intuitive:
...
gdbpy_ref<> result (gdbpy_call_method (m_window.get(), "hscroll",
                                        num_to_scroll));
...
and:
...
gdbpy_ref<> result (gdbpy_call_method (m_window.get (), "render"));
...
Tested on x86_64-linux.
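For illustration, a minimal standalone sketch of such a typesafe
wrapper could look like the following.  This is not the
gdbpy_call_method implementation that lands in GDB (in particular it
returns a raw PyObject * rather than a gdbpy_ref<>, and only supports
a couple of argument types); it only shows how the argument types,
rather than a format string, can drive the conversions.

```cpp
#include <Python.h>

/* Overloads converting C++ values into new Python references; only the
   cases needed for this sketch are provided.  */
static PyObject *to_pyobject (int v)         { return PyLong_FromLong (v); }
static PyObject *to_pyobject (const char *s) { return PyUnicode_FromString (s); }

/* Call OBJ.METHOD (ARGS...).  Returns a new reference, or nullptr with
   a Python error set.  */
template<typename... Args>
static PyObject *
call_method_sketch (PyObject *obj, const char *method, Args... args)
{
  PyObject *callable = PyObject_GetAttrString (obj, method);
  if (callable == nullptr)
    return nullptr;

  PyObject *tuple = PyTuple_New (sizeof... (Args));
  if (tuple == nullptr)
    {
      Py_DECREF (callable);
      return nullptr;
    }

  /* PyTuple_SET_ITEM steals each reference created by to_pyobject, so
     the arguments are released along with TUPLE.  */
  PyObject *converted[] = { to_pyobject (args)..., nullptr };
  bool ok = true;
  for (size_t i = 0; i < sizeof... (Args); i++)
    {
      if (converted[i] == nullptr)
        ok = false;
      else
        PyTuple_SET_ITEM (tuple, i, converted[i]);
    }

  PyObject *result = ok ? PyObject_CallObject (callable, tuple) : nullptr;
  Py_DECREF (tuple);
  Py_DECREF (callable);
  return result;
}
```

A call site then reads call_method_sketch (window, "hscroll",
num_to_scroll), with no format string and no trailing nullptr.  The
gdbpy_call_method added by this commit is used the same way, and (as
described in the earlier commit above) is later changed to return a
gdbpy_ref<> directly so that reference counts are handled
automatically.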
Co-Authored-By: Tom de Vries <tdevries@suse.de>
Approved-By: Tom Tromey <tom@tromey.com>
Currently it's not possible to call user-defined function call
operators, at least not without specifying operator() directly:
```
(gdb) l 1
1 struct S {
2 int operator() (int x) { return x + 5; }
3 };
4
5 int main () {
6 S s;
7
8 return s(23);
9 }
(gdb) p s(10)
Invalid data type for function to be called.
(gdb) p s.operator()(10)
$1 = 15
```
This now checks whether a user-defined call operator is available when
trying to 'call' a struct value, and calls it instead, making this
possible:
```
(gdb) p s(10)
$1 = 15
```
The change in operation::evaluate_funcall is to make sure the type's
fields are only used as the argument types for function types, since
only function types use their fields that way.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=12213
Approved-By: Tom Tromey <tom@tromey.com>
Add a new 'error_message' feature to the qSupported packet.  When GDB
supports this feature, gdbserver is able to send errors in the
E.errtext format for the qRcmd and m packets.
Update the qRcmd and m packet documentation, as these packets now
accept errors in the E.errtext format.  Previously these two packets
didn't support E.errtext style errors.
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Andrew Burgess <aburgess@redhat.com>
Convert 'set/show debug separate-debug-file' to the new debug scheme.
Though I'm not sure if we can really call it "new" any more!
Approved-By: Tom Tromey <tom@tromey.com>
While working on a later patch in this series, I noticed that GDB
would print the message:
Reading /path/to/file from remote target...
even when /path/to/file doesn't exist on the remote target.
GDB does indeed try to open /path/to/file, but I'm not sure we really
need to tell the user unless we actually manage to open the file, and
plan to read content from it.
If we consider how GDB probes for separate debug files, we can attempt
to open multiple possible files, most of which will not exist.  When we
are native debugging we don't bother telling the user about each file
we're checking for; we just announce any file we finally use.
I think it makes sense to do a similar thing for remote files.
So, in remote_target::remote_hostio_open(), I'd like to move the block
of code that prints the above message to after the open call has been
made, and we should only print the message if the open succeeds.
Now GDB only tells the user about files that we actually open and read
from the remote.
Approved-By: Tom Tromey <tom@tromey.com>
In build_id_to_bfd_suffix we look in each debug-file-directory for a
file matching the required build-id.
If we don't find one then we add the sysroot and perform the search
again.
However, the default sysroot is 'target:', and for a native target
this just means to search on the local machine. So by default, if the
debug information is not present, then we end up searching each
location twice.
I think we only need to perform the "with sysroot" check if either of
the following holds (as sketched below):
1. The sysroot is something other than 'target:'.  If the user has
set it to '/some/directory' then we should check this sysroot.  Or if
the user has set it to 'target:/some/other_directory', this is also
worth checking.
2. The sysroot is 'target:', but the target's filesystem is not
local (i.e. the user is connected to a remote target).  In this case
we should check using the sysroot, as this will be looking on the
remote machine.
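Expressed as a predicate, the two cases above amount to something like
this sketch (the helper name and parameters are illustrative, not
GDB's actual API):

```cpp
#include <string>

/* Should build_id_to_bfd_suffix repeat the search with the sysroot
   prepended?  */
static bool
should_search_with_sysroot (const std::string &sysroot,
                            bool target_filesystem_is_local)
{
  /* Case 1: anything other than the bare "target:" prefix names a
     distinct location, so it is worth searching.  */
  if (sysroot != "target:")
    return true;

  /* Case 2: "target:" only adds a new location when the target's
     filesystem is not the local one, i.e. we are talking to a remote
     target.  */
  return !target_filesystem_is_local;
}
```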
There are no tests for this, as the whole point here is that I'm
removing duplicate work.  No test regressions were seen though.
There should be no user visible changes after this commit.
Approved-By: Tom Tromey <tom@tromey.com>
In remote.c lives remote_target::remote_hostio_send_command(), which
is used to handle sending a fileio packet to the remote, and for
parsing the reply packet.
Some commands, e.g. open, pwrite, close, send some arguments to the
remote, and then get back a single integer return value.
Other commands though, e.g. pread, readlink, fstat, send some
arguments and get back an integer return value and some additional
data. This additional data is called the attachment.
Except, we only get the attachment if the command completes
successfully.  For example, calling readlink with a non-existent path
will result in a return packet: 'F-1,2' with no attachment. This is
as expected.
Within remote_hostio_send_command we call remote_hostio_parse_result,
which parses the status code (-1 in our example above) and
then the errno value (2 in our example above).
Back in remote_hostio_send_command we then hit this block:
  /* Make sure we saw an attachment if and only if we expected one.  */
  if ((attachment_tmp == NULL && attachment != NULL)
      || (attachment_tmp != NULL && attachment == NULL))
    {
      *remote_errno = FILEIO_EINVAL;
      return -1;
    }
This block ensures that commands that expect an attachment actually
got an attachment.
The problem is, we'll only get an attachment if the command
succeeded. If it didn't, then there is no attachment, and that is as
expected.
As remote_hostio_parse_result always sets the returned error number to
FILEIO_SUCCESS unless the packet contained an actual error
number (e.g. 2 in our example above), I suggest we should return early
if remote_hostio_parse_result indicates an error packet.
I ran into this issue while working on another patch. In that patch I
was checking the error code returned by a remote readlink call and
spotted that when I passed an invalid path I got EINVAL instead of
ENOENT. This patch fixes this issue.
Unfortunately the patch I was working on evolved, and my need to check
the error code went away, and so, right now, I have no way to reveal
this bug. But I think this is an obviously correct fix, and worth
merging even without a test.
Approved-By: Tom Tromey <tom@tromey.com>
The bitshift tests for opencl have these failures:
print /x (signed char) 0x0f << 8
No type named signed char.
(gdb) FAIL: gdb.base/bitshift.exp: lang=opencl: 8-bit, promoted: print /x (signed char) 0x0f << 8
print (signed char) 0x0f << 8
No type named signed char.
(gdb) FAIL: gdb.base/bitshift.exp: lang=opencl: 8-bit, promoted: print (signed char) 0x0f << 8
Apparently OpenCL doesn't have the 'signed' modifier for types, only
the 'unsigned' modifier.  Plain 'char' is guaranteed to be signed if
no modifier is used, so this changes the casts to match this logic.
Approved-By: Tom Tromey <tom@tromey.com>
On systems where long has 32-bit size you get these failures:
print 1 << (unsigned long long) 0xffffffffffffffff
Cannot export value 18446744073709551615 as 32-bits unsigned integer (must be between 0 and 4294967295)
(gdb) FAIL: gdb.base/bitshift.exp: lang=c: max-uint64: print 1 << (unsigned long long) 0xffffffffffffffff
print 1 >> (unsigned long long) 0xffffffffffffffff
Cannot export value 18446744073709551615 as 32-bits unsigned integer (must be between 0 and 4294967295)
(gdb) FAIL: gdb.base/bitshift.exp: lang=c: max-uint64: print 1 >> (unsigned long long) 0xffffffffffffffff
print -1 << (unsigned long long) 0xffffffffffffffff
Cannot export value 18446744073709551615 as 32-bits unsigned integer (must be between 0 and 4294967295)
(gdb) FAIL: gdb.base/bitshift.exp: lang=c: max-uint64: print -1 << (unsigned long long) 0xffffffffffffffff
print -1 >> (unsigned long long) 0xffffffffffffffff
Cannot export value 18446744073709551615 as 32-bits unsigned integer (must be between 0 and 4294967295)
(gdb) FAIL: gdb.base/bitshift.exp: lang=c: max-uint64: print -1 >> (unsigned long long) 0xffffffffffffffff
Fixed by changing the number-of-bits variable to ULONGEST.
Approved-By: Tom Tromey <tom@tromey.com>
As seen in these test failures:
print -1 >> -1
warning: right shift count is negative
$N = 0
(gdb) FAIL: gdb.base/bitshift.exp: lang=c: neg lhs/rhs: print -1 >> -1
print -4 >> -2
warning: right shift count is negative
$N = 0
(gdb) FAIL: gdb.base/bitshift.exp: lang=c: neg lhs/rhs: print -4 >> -2
Fixed by restoring the logic from before the switch to gmp.
Approved-By: Tom Tromey <tom@tromey.com>
PR31590 shows that right shift of negative numbers doesn't work
correctly since GDB 14:
(gdb) p (-3) >> 1
$1 = -1
GDB 13 and earlier returned the correct value -2.
And there actually is one test that shows the failure:
print -1 >> 1
$84 = 0
(gdb) FAIL: gdb.base/bitshift.exp: lang=asm: rsh neg lhs: print -1 >> 1
The problem was introduced with the change to gmp functions in
commit 303a881f87.
It's wrong because gdb_mpz::operator>> uses mpz_tdiv_q_2exp, which
always rounds toward zero, and the GMP documentation says this:
For positive n both mpz_fdiv_q_2exp and mpz_tdiv_q_2exp are simple
bitwise right shifts.
For negative n, mpz_fdiv_q_2exp is effectively an arithmetic right shift
treating n as two's complement the same as the bitwise logical functions
do, whereas mpz_tdiv_q_2exp effectively treats n as sign and magnitude.
So this changes mpz_tdiv_q_2exp to mpz_fdiv_q_2exp, since it behaves
like an arithmetic right shift for both positive and negative numbers.
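The difference is easy to demonstrate outside of GDB with a small
standalone program using the GMP API directly (illustrative only, not
GDB code):

```cpp
#include <gmp.h>
#include <cstdio>

int
main ()
{
  /* -3 >> 1 as an arithmetic right shift should give -2 (what GDB 13
     printed); rounding toward zero instead gives -1 (the GDB 14 bug).  */
  mpz_t n, f, t;
  mpz_init_set_si (n, -3);
  mpz_init (f);
  mpz_init (t);

  mpz_fdiv_q_2exp (f, n, 1);   /* floor division: -2 */
  mpz_tdiv_q_2exp (t, n, 1);   /* truncation toward zero: -1 */

  gmp_printf ("fdiv: %Zd, tdiv: %Zd\n", f, t);

  mpz_clear (n);
  mpz_clear (f);
  mpz_clear (t);
  return 0;
}
```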
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31590
Approved-By: Tom Tromey <tom@tromey.com>
Commit cdd4206647 unintentionally disabled all tests of bitshift.exp,
so it actually just does this:
Running /c/src/repos/binutils-gdb.git/gdb/testsuite/gdb.base/bitshift.exp ...
PASS: gdb.base/bitshift.exp: complete set language
=== gdb Summary ===
# of expected passes 1
It changed the 'continue' of unsupported languages to 'return', and
since ada is the first language and is unsupported, no tests were run.
This changes it back to 'continue', and the following patches fix
the regressions that were introduced, unnoticed, since then.
Approved-By: Tom Tromey <tom@tromey.com>
Under certain circumstances, a floating point exception in
target_read_string() can happen when the type has been obtained
by a call to stpy_lazy_string_elt_type(). In the latter function,
a call to check_typedef() has been forgotten, which leaves
type->length equal to 0 in this case.
This call to wnoutrefresh is not useful. It's based on the
misunderstanding that wnoutrefresh somehow prevents display, whereas
actually what it does is copy from one curses buffer to another.
I'm working on a larger patch to clean up this area, but this
particular call can be removed immediately without consequence.
Approved-By: Andrew Burgess <aburgess@redhat.com>
On macOS Sonoma, printing a string would only print the first
character. For instance, if there was a 'const char *s = "foobar"',
then the 'print s' command would print '$1 = "f"' rather than the
expected '$1 = "foobar"'.
It seems that this is due to Apple silently replacing the version
of libiconv they ship with the OS with one which fails to
handle the 'outbytesleft' parameter correctly when using 'wchar_t'
as a target encoding.
This specifically causes issues when iterating through a
string as wchar_iterator does.
This bug is visible even if you build for an old version of macOS,
but then run on Sonoma.  Therefore this fix in the code applies
generally to macOS, and is not specific to building on Sonoma.
Building for an older version and expecting forwards compatibility is
a common situation on macOS.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31853
This one was triggered by trying to dump an AMDGPU object.
elf64-amdgcn.c lacks support for objdump relocation handling.
PR 31872
* elfcode.h (elf_slurp_reloc_table_from_section): Don't segfault
on NULL elf_info_to_howto_rel.
Regtested on s390x-redhat-linux.
Rewriting l(g)rl @GOTENT to larl is unnecessarily guarded by
bfd_link_pic().  There were no use cases for this in the past, but
recently the Linux kernel on s390x has started being compiled with
-fPIE and linked with --no-pie.  Remove the unnecessary bfd_link_pic()
check.
bfd/ChangeLog:
* elf32-s390.c (elf_s390_relocate_section): Don't check for
bfd_link_pic() when rewriting lrl@GOTENT to larl.
(elf_s390_finish_dynamic_symbol): Emit a relative reloc for
the above case.
* elf64-s390.c (elf_s390_relocate_section): Don't check for
bfd_link_pic() when rewriting lgrl@GOTENT to larl.
(elf_s390_finish_dynamic_symbol): Emit a relative reloc for
the above case.
ld/ChangeLog:
* testsuite/ld-s390/s390.exp: Hook up the new tests.
* testsuite/ld-s390/gotreloc_31-no-pie-1.dd: New test.
* testsuite/ld-s390/gotreloc_64-no-pie-1.dd: New test.
This patch renames global_symbol_searcher::filenames and makes it
private, adding a new method to append a filename to the vector. This
also cleans up memory management here, removing an alloca from rbreak,
and removing a somewhat ugly SCOPE_EXIT from the Python code, in favor
of having global_symbol_searcher manage the memory itself.
Regression tested on x86-64 Fedora 38.