This results in configure output like:
```
checking for X... no
/var/tmp/portage/sys-devel/gdb-12.1/work/gdb-12.1/gdb/configure: 18837: test: yes: unexpected operator
checking whether to use babeltrace... auto
```
... when /bin/sh is provided by a POSIX-compliant shell, like dash,
instead of bash.
Registers included in the CSKY architecture:
1. 32 gprs
2. 16 ars (alternative gprs used for quick interrupt)
3. hi, lo, pc
4. fr0~fr31, fcsr, fid, fesr
5. vr0~vr15
6. (32 banks * 32) cr regs (max 32 banks, 32 control regs per bank)
For register names:
Except for control registers, the other registers, like gprs, hi, lo ...,
have fixed names. Among the 32*32 control registers, the registers in
use have fixed names, while the others get a default name "cpxcry",
where 'x' refers to the bank and 'y' to the index within the bank (a
control register in bank 4 with index 14 has the default name cp4cr14).
For register numbers in GDB:
We assign a fixed number to each register in GDB, like:
r0~r31 with 0~31
hi, lo with 36, 37
fpu/vpu with 40~71
...
as described in function csky_get_supported_register_by_index().
Function csky_get_supported_tdesc_registers_count():
Calculates the total number of registers that GDB can analyze,
including those with fixed names and those with default register names.
Function csky_get_supported_register_by_index():
Finds a supported struct csky_supported_tdesc_register via its index,
returning a struct that includes the name together with the regnum.
Array csky_supported_tdesc_feature_names[]:
Contains all supported feature names in the tdesc XMLs.
We use the information described above to load the register description
file of the target from the stub. When loading, we do a small check of
whether the register description file contains SP, LR and PC.
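A minimal sketch of the lookup interface described above (the real
declarations live in csky-tdep.c and may differ in detail):
```
/* Sketch only: pairs a register name with its fixed GDB regnum.
   Field names and sizes are illustrative.  */
struct csky_supported_tdesc_register
{
  char name[16];	/* Fixed name, or a default such as "cp4cr14".  */
  int num;		/* Fixed GDB register number, e.g. 0~31 for r0~r31.  */
};

/* Return the register at INDEX, or NULL when INDEX is out of range,
   i.e. >= csky_get_supported_tdesc_registers_count ().  */
extern const struct csky_supported_tdesc_register *
  csky_get_supported_register_by_index (int index);
```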
On openSUSE Leap 42.3 with python 3.4, I run into:
...
(gdb) python import pygments^M
Traceback (most recent call last):^M
File "<string>", line 1, in <module>^M
ImportError: No module named 'pygments'^M
Error while executing Python code.^M
(gdb) FAIL: gdb.base/style.exp: python import pygments
ERROR: unexpected output from python import
...
because gdb_py_module_available doesn't handle the single quotes around the
module name in the ImportError.
Fix this by allowing the single quotes.
Tested on x86_64-linux.
With its movement to the stack, and with the subsequent desire to
initialize the entire instr_info instances, this has become doubly
inefficient. Individual users have better knowledge of how big a buffer
they need, and in a number of cases going through an intermediate buffer
can be avoided altogether.
Having got confirmation that it wasn't intentional to print memory
operand displacements with inconsistent style, print_displacement() is
now using dis_style_address_offset consistently (eliminating the need
for callers to pass in a style).
While touching print_operand_value(), also convert its "hex" parameter
to bool. And while altering (and moving) oappend_immediate(), fold
oappend_maybe_intel_with_style() into its only remaining caller.
Finally, where making adjustments, use snprintf() in favor of
sprintf().
By changing the values used for "artificial" prefix values,
all_prefixes[] can be shrunk to an array of unsigned char. All that
additionally needs adjusting is the printing of possible apparently
standalone prefixes when recovering from longjmp(): Simply check
whether any prefixes were successfully decoded, to avoid converting
opcode bytes matching the "artificial" values to prefix mnemonics.
Similarly by re-arranging the bits assigned to PREFIX_* mask values
we can fit all segment register masks in a byte and hence shrink
active_seg_prefix to unsigned char.
Somewhat similarly, with last_*_prefix representing offsets into the
opcode being disassembled, signed char is sufficient to hold all
possible values.
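Taken together, the narrowed storage looks roughly like this
(illustrative declarations only, not the exact i386-dis.c layout):
```
/* "Artificial" prefix values were re-chosen so they fit in a byte;
   previously this was an array of int.  */
unsigned char all_prefixes[MAX_CODE_LENGTH - 1];

/* After re-arranging the PREFIX_* bits, all segment register masks
   fit in a byte as well.  */
unsigned char active_seg_prefix;

/* Offsets into the opcode being disassembled, or -1 when absent.  */
signed char last_lock_prefix;
signed char last_repz_prefix;
signed char last_rex_prefix;
```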
Commit 39fb369834 ("opcodes: Make i386-dis.c thread-safe") introduced
a lot of uninitialized data. Alan has in particular observed ubsan
taking issue with the loop inverting the order of operands, where
op_riprel[] - an array of bool - can hold values other than 0 or 1.
Move instantiation of struct instr_info into print_insn() (thus having
just a single central point), and make use of C99 designated initializers
to fill fields right in the initializer where possible. This way all
fields not explicitly initialized will be zero-filled, which in turn
allows dropping of some other explicit initialization later in the
function or in ckprefix(). Additionally this removes a lot of
indirection, as all "ins->info" uses can simply become "info".
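For illustration, the new central instantiation has roughly this shape
(field names here are hypothetical, not the actual instr_info layout):
```
struct instr_info ins = {
  .info = info,			/* The disassemble_info consumer.  */
  .intel_syntax = intel_syntax,
  .start_pc = pc,
  /* Every field not explicitly named is zero-filled, which is what
     allows dropping the later explicit initialization.  */
};
```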
Make one further arrangement though, to limit the amount of data needing
(zero-)initializing on every invocation: Convert the op_out structure
member to just an array of pointers, with the actual arrays living
inside print_insn() (and, as before, having just their 1st char filled
with nul).
While there, instead of adjusting print_insn()'s forward declaration,
arrange for no such declaration to be needed in the first place.
Mark pointed out that my recent addrmap C++-ification changes caused a
regression in the self-tests. This patch fixes the problem by
updating this test not to allocate the mutable addrmap on an obstack.
While working on addrmaps, I noticed that psymtab_addrmap is no longer
needed. It was introduced in ancient times as an optimization for
DWARF, but no other symbol reader was ever updated to use it. Now
that DWARF does not use psymtabs, it can be deleted.
Mutable addrmaps currently require an obstack. This was probably done
to avoid having to call splay_tree_delete, but examination of the code
shows that all mutable addrmaps have a limited lifetime -- now it's
simple to treat them as ordinary C++ objects, in some cases
stack-allocating them, and have a destructor to make the needed call.
This patch implements this change.
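The shape of the result, assuming libiberty's splay-tree API (a sketch,
not the actual GDB class):
```
#include "splay-tree.h"

/* A mutable addrmap as an ordinary C++ object.  It can live on the
   stack; the destructor makes the splay_tree_delete call that the
   obstack-based scheme was designed to avoid.  */
class addrmap_mutable_sketch
{
public:
  addrmap_mutable_sketch ()
    : m_tree (splay_tree_new (splay_tree_compare_pointers,
			      nullptr, nullptr))
  {}

  ~addrmap_mutable_sketch ()
  { splay_tree_delete (m_tree); }

  /* Non-copyable: the tree is owned exclusively.  */
  addrmap_mutable_sketch (const addrmap_mutable_sketch &) = delete;
  addrmap_mutable_sketch &operator= (const addrmap_mutable_sketch &) = delete;

private:
  splay_tree m_tree;
};
```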
This is a simple C++-ification of the basics of addrmap: it uses
virtual methods rather than a table of function pointers, and it
changes the concrete implementations to be subclasses.
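In outline (illustrative names, with a plain integer standing in for
GDB's CORE_ADDR):
```
/* Virtual methods replace the old table of function pointers.  */
struct addrmap_sketch
{
  virtual ~addrmap_sketch () = default;

  /* Return the object associated with ADDR.  */
  virtual void *find (unsigned long addr) const = 0;
};

/* The concrete implementations become subclasses overriding the
   former function-pointer slots.  */
struct addrmap_fixed_sketch : public addrmap_sketch
{
  void *find (unsigned long addr) const override;
};
```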
Prior to c6ca3dab dropping support for Cygwin 1.5, __USEWIDE was not
defined for Cygwin 1.5. After that, it's always defined if __CYGWIN__
is, so remove __USEWIDE conditionals inside __CYGWIN__ conditionals.
With the registry rewrite series, on Fedora 34, I started seeing this
error in xcoffread.c:
../../binutils-gdb/gdb/xcoffread.c: In function ‘void read_xcoff_symtab(objfile*, legacy_psymtab*)’:
../../binutils-gdb/gdb/xcoffread.c:948:25: error: ‘main_aux’ is used uninitialized [-Werror=uninitialized]
948 | union internal_auxent fcn_aux_saved = main_aux;
| ^~~~~~~~~~~~~
../../binutils-gdb/gdb/xcoffread.c:933:25: note: ‘main_aux’ declared here
933 | union internal_auxent main_aux;
| ^~~~~~~~
I don't know why this error started suddenly... that seems weird,
because it's not obviously related to the changes I made.
Looking into it, it seems this line was intended to avoid a similar
warning -- but since 'main_aux' is uninitialized at the point where it
is used, this fix was incomplete.
This patch avoids the warning by initializing using "{}". I'm
checking this in.
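In essence the fix is (shape only, not the verbatim patch):
```
/* Value-initialize so every field starts out zeroed...  */
union internal_auxent main_aux {};

/* ...making this copy well-defined even before main_aux has been
   filled in.  */
union internal_auxent fcn_aux_saved = main_aux;
```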
The if statement in case gdb_sys_ioctl in function
record_linux_system_call in file gdb/linux-record.c is as follows:
if (tmpulongest == tdep->ioctl_FIOCLEX
|| tmpulongest == tdep->ioctl_FIONCLEX
....
|| tmpulongest == tdep->ioctl_TCSETSW
...
}
The PowerPC ioctl value for ioctl_TCSETSW is 0x802c7415. The variable
ioctl_TCSETSW is defined in gdb/linux-record.h as an int. The TCSETSW
value has the MSB set to one, so it is a negative integer. The
comparison of the unsigned long value tmpulongest to a negative integer
value for ioctl_TCSETSW therefore fails.
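A self-contained demonstration of the failing comparison (a
hypothetical standalone program, not GDB code):
```
#include <stdio.h>

int
main (void)
{
  /* The value read from the target, as GDB stores it.  */
  unsigned long tmpulongest = 0x802c7415UL;

  /* Declared int, as in the old struct linux_record_tdep; with the
     MSB set this holds a negative value.  */
  int ioctl_TCSETSW = 0x802c7415;

  /* The int is sign-extended to 0xffffffff802c7415 when converted
     to unsigned long on an LP64 host, so the comparison is false
     even though the low 32 bits match.  */
  printf (tmpulongest == ioctl_TCSETSW ? "match\n" : "no match\n");
  return 0;
}
```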
This patch changes the declarations for the ioctl_* values in struct
linux_record_tdep to unsigned long to fix the comparisons between
tmpulongest and the tdep->ioctl_* values.
An additional test, gdb.reverse/test_ioctl_TCSETSW.exp, is added to
verify that the if statement for the ioctl TCSETSW in the gdb
record_linux_system_call() function succeeds.
This patch has been tested on Power 10 and Intel with no test failures.
Some of the ioctl numbers are based on the size of the kernel termios
structure.
Currently the PowerPC GDB definitions are "hard coded" into the ioctl
number.
The current PowerPC values for TCGETS, TCSETS, TCSETSW and TCSETSF are
defined in gdb/ppc-linux-tdep.c as:
record_tdep->ioctl_TCGETS = 0x403c7413;
record_tdep->ioctl_TCSETS = 0x803c7414;
record_tdep->ioctl_TCSETSW = 0x803c7415;
record_tdep->ioctl_TCSETSF = 0x803c7416;
The termios structure size appears in hex digits [5:4] as 0x3c.
The definition for the PowerPC termios structure is given in:
arch/powerpc/include/uapi/asm/termbits.h
The size of the termios data structure in this file is 0x2c not 0x3c.
This patch changes the hex digits for the size of the PowerPC termios size
in the ioctl values for TCGETS, TCSETS, TCSETSW and TCSETSF to 0x2c.
This patch also changes the hard coding so that the number is generated
based on the size of the termios structure, making it easier to update
the ioctl numbers.
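To check the arithmetic: PowerPC encodes an ioctl number as
dir[31:29] | size[28:16] | type[15:8] | nr[7:0], so for TCSETSW (an
illustrative reconstruction, not code from the patch):
```
/* TCSETSW = _IOW ('t', 21, struct termios) on PowerPC.  */
static_assert ((0x80000000UL	   /* dir: _IOC_WRITE (4) << 29 */
		| (0x2cUL << 16)   /* size of the PowerPC termios */
		| (0x74UL << 8)	   /* type: 't' */
		| 0x15UL)	   /* nr: 21 */
	       == 0x802c7415UL, "corrected TCSETSW value");
```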
Since pretty much forever the get_compiler_info function has included
these lines:
# Most compilers will evaluate comparisons and other boolean
# operations to 0 or 1.
uplevel \#0 { set true 1 }
uplevel \#0 { set false 0 }
These define global variables true (to 1) and false (to 0).
It seems odd to me that these globals are defined in
get_compiler_info, I guess maybe the original thinking was that if a
compiler had different true/false values then we would detect it there
and define true/false differently.
I don't think we should be bundling this logic into get_compiler_info,
it seems weird to me that in order to use $true/$false a user needs to
first call get_compiler_info.
It would be better I think if each test script that wants these
variables just defined them itself, if in the future we did need
different true/false values based on compiler version then we'd just
do:
if { [test_compiler_info "some_pattern"] } {
    # Define true/false one way...
} else {
    # Define true/false another way...
}
But given the current true/false definitions have been in place since
at least 1999, I suspect this will not be needed any time soon.
Given that the definitions of true/false are so simple, right now my
suggestion is just to define them in each test script that wants
them (there's not that many). If we ever did need more complex logic
then we can always add a function in gdb.exp that sets up these
globals, but that seems overkill for now.
There should be no change in what is tested after this commit.
This variable is useful when exercising AArch64 multi-arch support (debugging
32-bit AArch32 executables).
Unfortunately it isn't well documented. This patch adds information about it
and explains how to use it.
With gcc-12, I get for test-case gdb.base/vla-struct-fields.exp:
...
(gdb) print inner_vla_struct_object_size == sizeof(inner_vla_struct_object)^M
$7 = 1^M
(gdb) XPASS: gdb.base/vla-struct-fields.exp: size of inner_vla_struct_object
...
Fix this by limiting the xfailing to gcc-11 and earlier. Also, limit the
xfailing to the equality test.
Tested on x86_64-linux.
On openSUSE Tumbleweed with gcc-12, I run into a timeout:
...
(gdb) print value^M
Multiple matches for value^M
[0] cancel^M
[1] ada.strings.maps.value (<ref> ada.strings.maps.character_mapping; \
character) return character at a-strmap.adb:599^M
[2] pck.value at src/gdb/testsuite/gdb.ada/ghost/pck.ads:17^M
[3] system.object_reader.value (<ref> system.object_reader.object_symbol) \
return system.object_reader.uint64 at s-objrea.adb:2279^M
[4] system.traceback.symbolic.value (system.address) return string at \
s-trasym.adb:200^M
> FAIL: gdb.ada/ghost.exp: print value (timeout)
print ghost_value^M
Argument must be choice number^M
(gdb) FAIL: gdb.ada/ghost.exp: print ghost_value
...
Fix this by prefixing value (as well as the other printed values) with the
package name:
...
(gdb) print pck.value^M
...
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29055
I noticed that the Python event documentation referred to the event's
"breakpoint" field as a function, whereas it is actually an attribute.
This patch fixes the error.
GDB's ability to run 32-bit ARM processes on an AArch64 native target
is currently broken. The test gdb.multi/multi-arch.exp currently
fails with a timeout.
The cause of these problems is the following three functions:
aarch64_linux_nat_target::thread_architecture
aarch64_linux_nat_target::fetch_registers
aarch64_linux_nat_target::store_registers
What has happened, over time, is that these functions have been
modified, forgetting that any particular thread (running on the native
target) might be an ARM thread, or might be an AArch64 thread.
The problems always start with a line similar to this:
aarch64_gdbarch_tdep *tdep
= (aarch64_gdbarch_tdep *) gdbarch_tdep (inf->gdbarch);
The problem with this line is that if 'inf->gdbarch' is an ARM
architecture, then gdbarch_tdep will return a pointer to an
arm_gdbarch_tdep object, not an aarch64_gdbarch_tdep object. The
result of the above cast will, as a consequence, be undefined.
In aarch64_linux_nat_target::thread_architecture, after the undefined
cast we then proceed to make use of TDEP, like this:
if (vq == tdep->vq)
return inf->gdbarch;
Obviously at this point the result is undefined, but, if this check
returns false we then proceed with this code:
struct gdbarch_info info;
info.bfd_arch_info = bfd_lookup_arch (bfd_arch_aarch64, bfd_mach_aarch64);
info.id = (int *) (vq == 0 ? -1 : vq);
return gdbarch_find_by_info (info);
As a consequence we will return an AArch64 gdbarch object for our ARM
thread! Things go downhill from there on.
There are similar problems, with similar undefined behaviour, in the
fetch_registers and store_registers functions.
The solution is to make use of a check like this:
if (gdbarch_bfd_arch_info (inf->gdbarch)->bits_per_word == 32)
If the word size is 32 then we know we have an ARM architecture. We
just need to make sure that we perform this check before trying to
read the tdep field.
In aarch64_linux_nat_target::thread_architecture a little reordering,
and the addition of the above check allows us to easily avoid the
undefined behaviour.
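Putting the pieces together, the reordered function looks roughly like
this (a sketch assembled from the fragments above; the real function
differs in its details):
```
struct gdbarch *
aarch64_linux_nat_target::thread_architecture (ptid_t ptid)
{
  inferior *inf = find_inferior_ptid (this, ptid);

  /* An AArch32 (32-bit ARM) thread: inf->gdbarch's tdep is an
     arm_gdbarch_tdep, so bail out before the cast below.  */
  if (gdbarch_bfd_arch_info (inf->gdbarch)->bits_per_word == 32)
    return inf->gdbarch;

  /* Only now is this cast well-defined.  */
  aarch64_gdbarch_tdep *tdep
    = (aarch64_gdbarch_tdep *) gdbarch_tdep (inf->gdbarch);

  uint64_t vq = aarch64_sve_get_vq (ptid.lwp ());
  if (vq == tdep->vq)
    return inf->gdbarch;

  struct gdbarch_info info;
  info.bfd_arch_info = bfd_lookup_arch (bfd_arch_aarch64, bfd_mach_aarch64);
  info.id = (int *) (vq == 0 ? -1 : vq);
  return gdbarch_find_by_info (info);
}
```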
For fetch_registers and store_registers I made the decision to split
each of the functions into two new helper functions, and so
aarch64_linux_nat_target::fetch_registers now calls to either
aarch64_fetch_registers or aarch32_fetch_registers, and there's a
similar change for store_registers.
One thing I had to decide was whether to place the new aarch32_*
functions into the aarch32-linux-nat.c file. In the end I decided to
NOT place the functions there, but instead leave them in
aarch64-linux-nat.c; my reasoning was this:
The existing functions in that file are shared between arm-linux-nat.c
and aarch64-linux-nat.c; this is generic code to support 32-bit ARM
debugging from either native target.
In contrast, the two new aarch32 functions I have added _only_ make
sense when debugging on an AArch64 native target. These functions
shouldn't be called from arm-linux-nat.c at all, and so, if we placed
the functions into aarch32-linux-nat.c, they would be built into a
32-bit ARM GDB, but never used.
With that said, there's no technical reason why they couldn't go in
aarch32-linux-nat.c, so if that is preferred I'm happy to move them.
After this commit the gdb.multi/multi-arch.exp test passes.
Add a description of exception entry context stacking and fix the next
frame offset (at 0xA8 relative to the R0 location) as well as the FPU
register offsets (starting at 0x68 relative to R0).
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@st.com>
Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
Small performance improvement: fetch the previous SP value only once
before the loop and reuse it, to avoid a fetch at every iteration.
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@st.com>
Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
The previous patch that introduced the arm_cc_for_target procedure
moved the ARM_CC_FOR_TARGET global check to that procedure, but forgot
to tell Tcl that ARM_CC_FOR_TARGET is a global. As a result,
specifying ARM_CC_FOR_TARGET on the command line actually does
nothing. This fixes it.
Change-Id: I4e33b7633fa665e2f7b8f8c9592a949d74a19153
ARMv7-M Architecture Reference "A2.3.1 Arm core registers" states
that LR is set to 0xffffffff on reset.
ARMv8-M Architecture Reference "B3.3 Registers" states that LR is set
to 0xffffffff on warm reset if Main Extension is implemented,
otherwise the value is unknown.
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@st.com>
Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
After this commit:
commit 44d469c5f8
Date: Tue May 31 16:43:44 2022 +0200
gdb/testsuite: add Fortran compiler identification to GDB
Some regressions were noticed:
https://sourceware.org/pipermail/gdb-patches/2022-May/189673.html
The problem is associated with how the compiler_info variable is cached
between calls to get_compiler_info.
Even before the above commit, get_compiler_info supported two
languages, C and C++. Calling get_compiler_info would set the global
compiler_info based on the language passed as an argument to
get_compiler_info, and, in theory, compiler_info would not be updated
for the rest of the dejagnu run.
This obviously is slightly broken behaviour. If the first call to
get_compiler_info was for the C++ language then compiler_info would be
set based on the C++ compiler in use, while if the first call to
get_compiler_info was for the C language then compiler_info would be
set based on the C compiler.
This probably wasn't very noticeable, assuming a GCC based test
environment then in most cases the C and C++ compiler would be the
same version.
However, if the user started playing with CC_FOR_TARGET or
CXX_FOR_TARGET, then they might not get the behaviour they expect.
Except, to make matters worse, most of the time, the user probably
would get the behaviour they expected .... except when they didn't!
I'll explain:
In gdb.exp we try to avoid global variables leaking between test
scripts, this is done with the help of the two procs
gdb_setup_known_globals and gdb_cleanup_globals. All known globals
are recorded before a test script starts, and then, when the test
script ends, any new globals are deleted.
Normally, compiler_info is only set as a result of a test script
calling get_compiler_info or test_compiler_info. This means that the
compiler_info global will not exist when the test script starts, but
will exist when the test script end, and so, the compiler_info
variable is deleted at the end of each test.
This means that, in reality, the compiler_info is recalculated once
for each test script, hence, if a test script just checks on the C
compiler, or just checks on the C++ compiler, then compiler_info will
be correct and the user will get the behaviour they expect.
However, if a single test script tries to check both the C and C++
compiler versions then this will not work (even before the above
commit).
The situation is made worse by the behaviour of the load_lib proc.
This proc (provided by dejagnu) will only load each library once.
This means that if a library defines a global, then this global would
normally be deleted at the end of the first test script that includes
the library.
As future attempts to load the library will not actually reload it,
then the global will not be redefined and would be missing for later
test scripts that also tried to load that library.
To work around this issue we override load_lib in gdb.exp, this new
version adds all globals from the newly loaded library to the list of
globals that should be preserved (not deleted).
And this is where things get interesting for us. The library
trace-support.exp includes calls, at the file scope, to things like
is_amd64_regs_target, which cause get_compiler_info to be called.
This means that after loading the library the compiler_info global is
defined.
Our override of load_lib then decides that this new global has to be
preserved, and adds it to the gdb_persistent_globals array.
From that point on compiler_info will never be recomputed!
This commit addresses all the caching problems by doing the following:
Change the compiler_info global into a compiler_info_cache global. This
new global is an array, the keys of this array will be each of the
supported languages, and the values will be the compiler version for
that language.
Now, when we call get_compiler_info, if the compiler information for
the specific language has not been computed, then we do that, and add
it to the cache.
Next, compiler_info_cache is defined by calling
gdb_persistent_global. This automatically adds the global to the list
of persistent globals. Now the cache will not be deleted at the end
of each test script.
This means that, for a single test run, we will compute the compiler
version just once for each language, this result will then be cached
between test scripts.
Finally, the legacy 'gcc_compiled' flag is now only set when we call
get_compiler_info with the language 'c'. Without making this change
the value of 'gcc_compiled' would change each time a new language is
passed to get_compiler_info. If the last language was e.g. Fortran,
then gcc_compiled might be left false.
Now that get_compiler_info might actually fail (if the language is not
handled), then we should try to handle this failure better in
test_compiler_info.
After this commit, if get_compiler_info fails then we will return a
suitable result depending on how the user called test_compiler_info.
If the user does something like:
set version [test_compiler_info "" "unknown-language"]
Then test_compiler_info will return an empty string. My assumption is
that the user will be trying to match 'version' against something, and
the empty string hopefully will not match.
If the user does something like:
if { [test_compiler_info "some_pattern" "unknown-language"] } {
....
}
Then test_compiler_info will return false which seems the obvious
choice.
There should be no change in the test results after this commit.
This commit is a minor cleanup for the two functions (in gdb.exp)
get_compiler_info and test_compiler_info.
Instead of using the empty string as the default language, and just
"knowing" that this means the C language, make this explicit. The
language argument now defaults to "c" if not specified, and the if
chain in get_compiler_info that checks the language now explicitly
handles "c" and gives an error for unknown languages.
This is a good thing: now that the API appears to take a language, if
somebody does:
test_compiler_info "xxxx" "rust"
to check the version of the rust compiler then we will now give an
error rather than just using the C compiler and leaving the user
having to figure out why they are not getting the results they
expect.
After a little grepping, I think the only place we were explicitly
passing the empty string to either get_compiler_info or
test_compiler_info was in gdb_compile_shlib_1, this is now changed to
pass "c" as the default language.
There should be no changes to the test results after this commit.
We don't need to call get_compiler_info before calling
test_compiler_info; test_compiler_info includes a call to
get_compiler_info.
This commit cleans up lib/gdb.exp and lib/dwarf.exp a little by
removing some unneeded calls to get_compiler_info. We could do the
same cleanup throughout the testsuite, but I'm leaving that for
another day.
There should be no change in the test results after this commit.
The procedure gcc_major_version was earlier using the global variable
compiler_info to retrieve gcc's major version. This is discouraged and
(as can be read in a comment in compiler.c) compiler_info should be
local to get_compiler_info and test_compiler_info.
The preferred way of getting the compiler string is to call
test_compiler_info without arguments; gcc_major_version was changed to
do that.
While running the gdb.threads/tls.exp test with a GDB configured
without Python, I noticed some duplicate test names.
This is caused by a call to skip_python_tests that is within a proc
that is called multiple times by the test script. Each call to
skip_python_tests results in a call to 'unsupported', and this causes
the duplicate test names.
After this commit we now call skip_python_tests just once and place
the result into a variable. Now, instead of calling skip_python_tests
multiple times, we just check the variable.
There should be no change in what is tested after this commit.
While testing on AArch64 I spotted a duplicate test name in the
gdb.base/gnu_vector.exp test.
This commit adds a 'with_test_prefix' to resolve the duplicate.
While I was in the area I updated a 'gdb_test_multiple' call to make
use of $gdb_test_name.
There should be no change in what is tested after this commit.
The throw_perror_with_name function is not used outside of utils.c
right now. And as perror_with_name is just a wrapper around
throw_perror_with_name, any future calls would be to perror_with_name.
Let's make throw_perror_with_name static.
There should be no user visible changes after this commit.
I ran into this error while working on AArch64 GDB:
Unable to fetch VFP registers.: Invalid argument.
Notice the '.:' in the middle of this error message.
This is because of this call in aarch64-linux-nat.c:
perror_with_name (_("Unable to fetch VFP registers."));
The perror_with_name function takes a string and adds ': <message>' to
the end of the string, so I don't think the string that we pass to
perror_with_name should end in '.'.
This commit removes all of the trailing '.' characters from
perror_with_name calls, which gives more readable error messages.
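For example, the call quoted above becomes (before/after shown as
comments):
```
/* Before: "Unable to fetch VFP registers.: Invalid argument"  */
perror_with_name (_("Unable to fetch VFP registers."));

/* After: "Unable to fetch VFP registers: Invalid argument"  */
perror_with_name (_("Unable to fetch VFP registers"));
```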
I don't believe that any of these errors are tested in the
testsuite (after a little grepping).
The CU queue is a member of dwarf2_per_bfd, but it is only used when
expanding CUs. Also, the dwarf2_per_objfile destructor checks the
queue -- however, if the per-BFD object is destroyed first, this will
not work. This was pointed out by Lancelot as fallout from the patch to
rewrite the registry system.
This patch avoids this problem by moving the queue to the per-objfile
object.