Call unwrap_hash_lookup to restore the wrapper symbol check for standard
functions, since a reference to a standard function may not show up in the
LTO symbol table:
[hjl@gnu-tgl-3 pr31956-3]$ nm foo.o
00000000 T main
U __real_malloc
00000000 T __wrap_malloc
[hjl@gnu-tgl-3 pr31956-3]$ lto-dump -list foo.o
Type       Visibility  Size  Name
function   default     0     malloc
function   default     0     __real_malloc
function   default     3     main
function   default     5     __wrap_malloc
[hjl@gnu-tgl-3 pr31956-3]$ make
gcc -O2 -flto -Wall -c -o foo.o foo.c
gcc -Wl,--wrap=malloc -O2 -flto -Wall -o x foo.o
/usr/local/bin/ld: /tmp/ccsPW0a9.ltrans0.ltrans.o: in function `main':
<artificial>:(.text.startup+0xa): undefined reference to `__wrap_malloc'
collect2: error: ld returned 1 exit status
make: *** [Makefile:22: x] Error 1
[hjl@gnu-tgl-3 pr31956-3]$
Also add a test to verify that the unused wrapper is removed.
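For reference, the source in a setup like the transcript above is presumably
along these lines (a minimal sketch, not the actual pr31956 test source):

  #include <stdlib.h>

  extern void *__real_malloc (size_t);

  /* ld's --wrap=malloc redirects calls to malloc here...  */
  void *
  __wrap_malloc (size_t size)
  {
    /* ...and __real_malloc resolves to the real malloc.  */
    return __real_malloc (size);
  }

  int
  main (void)
  {
    return malloc (1) == NULL;
  }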
PR ld/31956
* plugin.c (get_symbols): Restore the wrapper symbol check for
standard function.
* testsuite/ld-plugin/lto.exp: Run the malloc test and the
unused test.
* testsuite/ld-plugin/pr31956c.c: New file.
* testsuite/ld-plugin/pr31956d.c: New file.
* testsuite/ld-plugin/pr31956d.d: New file.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Now that all known uses of VLAs within GDB are removed, remove the
`-Wno-vla-cxx-extension` (which was used to silence clang warnings) and
add `-Wvla`, such that any use of a VLA will trigger a warning.
Change-Id: I69a8d7f93f973743165b0ba46f9c2ea8adb89025
Reviewed-By: Keith Seitz <keiths@redhat.com>
Remove uses of VLAs, replace with gdb::byte_vector. There might be more
in files that I can't compile, but it's difficult to tell without
actually compiling on all platforms.
Many thanks to the Linaro pre-commit CI for helping find some problems
with an earlier iteration of this patch.
Change-Id: I3e5e34fcac51f3e6b732bb801c77944e010b162e
Reviewed-by: Keith Seitz <keiths@redhat.com>
Thiago Jung Bauermann noticed that gdb.base/list-dot-nodebug was not
actually compiling the test with some debuginfo in the relevant part,
and while fixing that I noticed that the base assumption of the "some"
case was wrong: GDB would select some symtab as a default location and
the test would always fail. This fix makes printing the default location
only be tested when there is no debuginfo.
When testing with no debuginfo, if a system had static libc debuginfo,
the test would also fail. To add an extra layer of robustness to the
test, this rewrite also strips any stray debuginfo from the executable.
The test would now only fail if it runs on a system that can't handle
stripped debuginfo and has static debuginfo pre-installed.
Reported-By: Tom de Vries <tdevries@suse.de>
Reported-By: Thiago Jung Bauermann <thiago.bauermann@linaro.org>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31721
Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
The final test of gdb.python/py-framefilter-invalidarg.exp expected that
the backtrace only printed the source file name. However, when using
clang, gdb will always print the full path to the file, which would
cause the test to fail. This commit introduces a regexp that optionally
matches paths, prepended to the file name, which fixes the clang
failure without introducing gcc failures.
Approved-By: Andrew Burgess <aburgess@redhat.com>
The test gdb.fortran/entry-point.exp already has an XFAIL when trying to
set a breakpoint in mod::mod_foo because gcc puts that subprogram in the
wrong scope in the debug information. Clang's debug information looks
the same as gcc's, so the test to setup the xfail has been extended to
also include clang.
Approved-By: Andrew Burgess <aburgess@redhat.com>
Clang doesn't add build-id information by default, unlike gcc. This
means that tests that rely on build-id being available and don't
explicitly add it to the compilation options will fail with clang.
This commit fixes the failures in gdb.python/py-missing-debug.exp,
gdb.server/remote-read-msgs.exp, gdb.base/coredump-filter-build-id.exp
and gdb.server/solib-list.exp.
Approved-By: Andrew Burgess <aburgess@redhat.com>
Internal naming of functions / data as well as commentary mixes lines
and statements. It is presumably this confusion which has led to the
wrong use of ignore_rest_of_line() when dealing with line comments in
read_a_source_file(). We shall not (silently) produce different output
depending on whether -f is passed (for suitable input).
Introduce two new helper macros, intended to be used in favor of open-
coded accesses to is_end_of_line[]. To emphasize the difference, convert
ignore_rest_of_line() right away, including adjustments to its comments.
Since most targets have # in line_comment_chars[], add a target-
independent test for that, plus an x86-only one also checking for non-#
to work as intended.
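As a rough idea only (the names here are hypothetical, not necessarily the
macros actually introduced), wrappers over is_end_of_line[] might look like:

  /* Hypothetical sketch only: the real names and definitions may differ.
     is_end_of_line[] is gas's existing per-character table, which marks
     statement-ending characters, not just newlines.  */
  extern char is_end_of_line[256];

  #define is_end_of_stmt(c)     (is_end_of_line[(unsigned char) (c)] != 0)
  #define is_end_of_line_ch(c)  ((c) == '\n')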
Keep the two looking symmetrical. It makes sense to perform the sanity
checks similarly, too.
gas/
* ginsn.c (ginsn_src_print): Buffer up result of snprintf and
add sanity checks on the value.
(ginsn_dst_print): Use switch case instead.
For ginsns with fewer than two source operands or no destination
operands, the current textual dump contains a superfluous comma, as the
relevant testcases show.
Adjust the code a bit to not emit the lone trailing comma. Also, adjust
the aarch64 and x86_64 testcases.
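The shape of the fix, reduced to a standalone illustration (not the actual
ginsn.c code), is to prefix the separator only when there is something to
separate:

  /* Illustrative sketch: print a dst string and an optional src string
     without emitting a lone trailing comma.  */
  #include <stdio.h>
  #include <string.h>

  static void
  print_ginsn_ops (const char *dst, const char *src)
  {
    /* Prefix a comma before the src string only when it is non-empty.  */
    printf ("%s%s%s\n", dst, strlen (src) ? ", " : "", src);
  }

  int
  main (void)
  {
    print_ginsn_ops ("%rsp", "%rbp");  /* -> "%rsp, %rbp"  */
    print_ginsn_ops ("%rsp", "");      /* -> "%rsp", no trailing comma  */
    return 0;
  }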
gas/
* ginsn.c (ginsn_src_print): Do not use a trailing comma when
printing the src of ginsn.
(ginsn_print): Check the strlen and prefix a comma before the
src string.
gas/testsuite/
* gas/scfi/aarch64/ginsn-cofi-1.l: Adjust the expected textual
dump of the ginsn.
* gas/scfi/x86_64/ginsn-cofi-1.l: Likewise.
Some flavors of indirect call and jmp instructions were not being
handled earlier, leading to a GAS error (#1):
(#1) "Error: SCFI: unhandled op 0xff may cause incorrect CFI"
Not handling jmp/call (direct or indirect) ops is an error (as shown
above) because SCFI needs an accurate CFG to synthesize CFI correctly.
Recall that the presence of indirect jmp/call, however, does make the
CFG ineligible for SCFI. In other words, generating the ginsns for them
now, will eventually cause SCFI to bail out later with an error (#2)
anyway:
(#2) "Error: untraceable control flow for func 'XXX'"
The first error (#1) gives the impression of missing functionality in
GAS. So, it seems cleaner to synthesize a GINSN_TYPE_JUMP /
GINSN_TYPE_CALL now in the backend, and let SCFI machinery complain with
the error as expected.
The handling for these indirect jmp/call instructions is similar, so
reuse the code by carving it out into a separate function.
Adjust the testcase to include the now handled jmp/call instructions as
well.
gas/
* config/tc-i386-ginsn.c (x86_ginsn_indirect_branch): New
function.
(x86_ginsn_new): Refactor out functionality to above.
gas/testsuite/
* gas/scfi/x86_64/ginsn-cofi-1.l: Adjust the output.
* gas/scfi/x86_64/ginsn-cofi-1.s: Add further varieties of
jmp/call opcodes.
get_type_abbrev_from_form is lax in not limiting data for a uleb to
the current CU, because DW_FORM_ref_addr allows access to another CU's
data. This can lead to an assertion failure when skipping or reading
attributes in get_type_signedness.
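For illustration, bounded uleb reading looks like this (a standalone sketch
of the idea, not binutils' actual dwarf.c code; the fix is about passing the
right END: the section map end for DW_FORM_ref_addr, the CU end otherwise):

  #include <stdint.h>
  #include <stdio.h>

  /* Illustrative bounded ULEB128 reader: never read past END, which the
     caller picks as either the whole-section end (for DW_FORM_ref_addr)
     or the current CU's end (for everything else).  */
  static uint64_t
  read_uleb128_bounded (const unsigned char *p, const unsigned char *end,
                        const unsigned char **next)
  {
    uint64_t result = 0;
    unsigned int shift = 0;

    while (p < end)
      {
        unsigned char byte = *p++;
        result |= (uint64_t) (byte & 0x7f) << shift;
        shift += 7;
        if ((byte & 0x80) == 0)
          break;
      }
    *next = p;
    return result;
  }

  int
  main (void)
  {
    const unsigned char buf[] = { 0xe5, 0x8e, 0x26 };  /* 624485 */
    const unsigned char *next;
    printf ("%llu\n",
            (unsigned long long) read_uleb128_bounded (buf, buf + sizeof buf,
                                                       &next));
    return 0;
  }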
* dwarf.c (get_type_abbrev_from_form): Limit uleb data to map end
for ref_addr, cu_end otherwise.
This commit moves aarch64_linux_memtag_matches_p,
aarch64_linux_set_memtags, aarch64_linux_get_memtag, and
aarch64_linux_memtag_to_string hooks (plus the aarch64_mte_get_atag
function used by them), along with the setting of the memtag granule
size, from aarch64-linux-tdep.c to aarch64-tdep.c, making MTE available
on baremetal targets. Since the aarch64-linux-tdep.c layer inherits
these hooks from aarch64-tdep.c, there is no effective change for
aarch64-linux targets.
Helpers used both by aarch64-tdep.c and by aarch64-linux-tdep.c were
moved from arch/aarch64-mte-linux.{c,h} to new arch/aarch64-mte.{c,h}
files.
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
On fedora rawhide, with python 3.13, I run into:
...
(gdb) python print (gdb.parse_and_eval ('a_point_t').format_string (invalid=True))^M
Python Exception <class 'TypeError'>: \
this function got an unexpected keyword argument 'invalid'^M
Error occurred in Python: \
this function got an unexpected keyword argument 'invalid'^M
(gdb) FAIL: $exp: format_string: lang_c: test_all_common: test_invalid_args: \
a_point_t with option invalid=True
...
A passing version with an older python version looks like:
...
(gdb) python print (gdb.parse_and_eval ('a_point_t').format_string (invalid=True))^M
Python Exception <class 'TypeError'>: \
'invalid' is an invalid keyword argument for this function^M
Error occurred in Python: \
'invalid' is an invalid keyword argument for this function^M
(gdb) PASS: $exp: format_string: lang_c: test_all_common: test_invalid_args: \
a_point_t with option invalid=True
...
Fix this by accepting the updated error message.
Tested on aarch64-linux.
PR testsuite/31912
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31912
To avoid differences in C library paths on different systems, use gcc
instead of ld to perform the test. Problems caused by options added by
different distributions will not be fixed.
Empty structs in C++ lead to empty LF_FIELDLIST types in the .debug$T
section, but we were mistakenly rejecting these as invalid. Allow
CodeView types of two bytes, and add a test for this.
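If I read the format right, the two-byte case is just the leaf kind with no
payload, along these lines (an illustrative sketch, values per the CodeView
leaf numbering, not taken from the binutils sources):

  #include <stdint.h>
  #include <stdio.h>

  /* A minimal (empty) CodeView LF_FIELDLIST record in .debug$T is just the
     leaf kind with no payload.  The length prefix counts everything after
     itself, so it reads 2 here.  */
  #define LF_FIELDLIST 0x1203

  int
  main (void)
  {
    uint16_t record[2] = { 2 /* length of what follows */, LF_FIELDLIST };
    fwrite (record, sizeof (uint16_t), 2, stdout);
    return 0;
  }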
This failed to properly byteswap its return value.
The ctf_archive format predates the idea of "just write natively and
flip on open", and byteswaps all over the place. It's too easy to
forget one. The next revision of the archive format (not versioned,
so we just tweak the magic number instead) should be native-endian,
like the dicts inside it are.
libctf/
* ctf-archive.c (ctf_archive_count): Byteswap return value.
If you asprintf something and then use it only as input to another asprintf,
it helps to free it afterwards.
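I.e. the usual pattern (a generic illustration, not the actual ctf-dump.c
code):

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    char *flagstr = NULL;
    char *line = NULL;

    if (asprintf (&flagstr, "flags 0x%x", 2) < 0)
      return 1;

    /* flagstr is only used as input to another asprintf...  */
    if (asprintf (&line, "Header: %s", flagstr) < 0)
      {
        free (flagstr);
        return 1;
      }

    /* ...so free it once that call has consumed it.  */
    free (flagstr);

    puts (line);
    free (line);
    return 0;
  }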
libctf/
* ctf-dump.c (ctf_dump_header): Free the flagstr after use.
(ctf_dump): Make a NULL return slightly clearer.
A bug in ctf_dtd_delete led to refs in the string table to the
names of non-root-visible types not being removed when the DTD
was. This seems harmless, but actually it would lead to a write
through a pointer into freed memory if such a type was ctf_rollback()ed
over and then the dict was serialized (updating all the refs as the
strtab was serialized in turn).
Bug introduced in commit fe4c2d5563
("libctf: create: non-root-visible types should not appear in name tables")
which is included in binutils 2.35.
libctf/
* ctf-create.c (ctf_dtd_delete): Remove refs for all types
with names, not just root-visible ones.
The dict and archive opening code in libctf is somewhat unusual, because
unlike everything else, it cannot report errors by setting an error on the
dict, because in case of error there isn't one. They get passed an error
integer pointer that is set on error instead.
Inside ctf_bufopen this is implemented by calling ctf_set_open_errno and
passing it a positive error value. In turn this means that most things it
calls (including init_static_types) return zero on success and a *positive*
ECTF_* or errno value on error.
This trickles down to ctf_dynhash_insert_type, which is used by
init_static_types to add newly-detected types to the name tables. This was
returning the error value it received from a variety of functions without
alteration. ctf_dynhash_insert conformed to this contract by returning a
positive value on error (usually OOM), which is unfortunate for multiple
reasons:
- ctf_dynset_insert returns a *negative* value
- ctf_dynhash_insert and ctf_dynset_insert don't take an fp, so the value
they return is turned into the errno, so it had better be right; callers
don't just check for != 0 here
- more or less every single caller of ctf_dyn*_insert in libctf other than
ctf_dynhash_insert_type (and there are a *lot*, mostly in the
deduplicator) assumes that ctf_dynhash_insert returns a negative value
on error, even though it doesn't. In practice the only possible error is
OOM, but if OOM does happen we end up with a nonsense error value.
The simplest fix for this seems to be to make ctf_dynhash_insert and
ctf_dynset_insert conform to the usual interface contract: negative
values are errors. This in turn means that ctf_dynhash_insert_type
needs to change: let's make it consistent too, returning a negative
value on error, putting the error on the fp in non-negated form.
init_static_types_internal adapts to this by negating the error return from
ctf_dynhash_insert_type, so the value handed back to ctf_bufopen is still
positive: the new call site in ctf_track_enumerator does not need to change.
(The existing tests for this reliably detect when I get it wrong.
I know, because they did.)
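A toy standalone illustration of the resulting convention (none of these are
the real libctf functions):

  #include <errno.h>
  #include <stdio.h>

  static int dict_errno;                 /* stand-in for the dict's error */

  static int
  toy_dynhash_insert (int fail)
  {
    return fail ? -ENOMEM : 0;           /* negative value on error */
  }

  static int
  toy_dynhash_insert_type (int fail)
  {
    int err = toy_dynhash_insert (fail);
    if (err < 0)
      dict_errno = -err;                 /* de-negated error "on the dict" */
    return err;                          /* negative error to the caller */
  }

  int
  main (void)
  {
    int err = toy_dynhash_insert_type (1);
    if (err < 0)
      /* The open path negates once more, so ctf_bufopen-style callers
         still see a positive error value.  */
      printf ("open-path error %d, dict error %d\n", -err, dict_errno);
    return 0;
  }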
libctf/
* ctf-hash.c (ctf_dynhash_insert): Negate return value.
(ctf_dynhash_insert_type): Set de-negated error on the dict:
return negated error.
* ctf-open.c (init_static_types_internal): Adapt to this change.
The recent change to detect duplicate enum values and return ECTF_DUPLICATE
when found turns out to perturb a great many callers. In particular, the
pahole-created kernel BTF has the same problem we historically did, and
gleefully emits duplicated enum constants in profusion. Handling the
resulting duplicate errors from BTF -> CTF converters reasonably is
unreasonably difficult (it amounts to forcing them to skip some types or
reimplement the deduplicator).
So let's step back a bit. What we care about mostly is that the
deduplicator treat enums with conflicting enumeration constants as
conflicting types: programs that want to look up enumeration constant ->
value mappings using the new APIs to do so might well want the same checks
to apply to any ctf_add_* operations they carry out (and since they're
*using* the new APIs, added at the same time as this restriction was
imposed, there is likely to be no negative consequence of this).
So we want some way to allow processes that know about duplicate detection
to opt into it, while allowing everyone else to stay clear of it: but we
want ctf_link to get this behaviour even if its caller has opted out.
So add a new concept to the API: dict-wide CTF flags, set via
ctf_dict_set_flag, obtained via ctf_dict_get_flag. They are not bitflags
but simple arbitrary integers and an on/off value, stored in an unspecified
manner (the one current flag is translated into an LCTF_* flag value in the
internal ctf_dict ctf_flags word). If you pass in an invalid flag or value
you get a new ECTF_BADFLAG error, so the caller can easily tell whether
flags added in future are valid with a particular libctf or not.
We check this flag in ctf_add_enumerator, and set it around the link
(including on child per-CU dicts). The newish enumerator-iteration test is
souped up to check the semantics of the flag as well.
The fact that the flag can be set and unset at any time has curious
consequences. You can unset the flag, insert a pile of duplicates, then set
it and expect the new duplicates to be detected, not only by
ctf_add_enumerator but also by ctf_lookup_enumerator. This means we now
have to maintain the ctf_names and conflicting_enums enum-duplication
tracking as new enums are added, not purely as the dict is opened.
Move that code out of init_static_types_internal and into a new
ctf_track_enumerator function that addition can also call.
(None of this affects the file format or serialization machinery, which has
to be able to handle duplicate enumeration constants no matter what.)
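Expected usage from a converter that wants the strict behaviour would be
roughly the following (the exact ctf_dict_set_flag signature is assumed here,
a flag plus an on/off value; ctf-api.h is authoritative):

  #include <ctf-api.h>
  #include <stdio.h>

  int
  main (void)
  {
    int err;
    ctf_dict_t *fp = ctf_create (&err);
    ctf_id_t e1, e2;

    if (fp == NULL)
      return 1;

    /* Opt in: duplicate enumerator names across the dict now fail.  */
    ctf_dict_set_flag (fp, CTF_STRICT_NO_DUP_ENUMERATORS, 1);

    e1 = ctf_add_enum (fp, CTF_ADD_ROOT, "e1");
    ctf_add_enumerator (fp, e1, "RED", 0);

    e2 = ctf_add_enum (fp, CTF_ADD_ROOT, "e2");
    if (ctf_add_enumerator (fp, e2, "RED", 1) < 0
        && ctf_errno (fp) == ECTF_DUPLICATE)
      printf ("duplicate enumerator rejected, as requested\n");

    ctf_dict_close (fp);
    return 0;
  }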
include/
* ctf-api.h (CTF_ERRORS) [ECTF_BADFLAG]: New.
(ECTF_NERR): Update.
(CTF_STRICT_NO_DUP_ENUMERATORS): New flag.
(ctf_dict_set_flag): New function.
(ctf_dict_get_flag): Likewise.
libctf/
* ctf-impl.h (LCTF_STRICT_NO_DUP_ENUMERATORS): New flag.
(ctf_track_enumerator): Declare.
* ctf-dedup.c (ctf_dedup_emit_type): Set it.
* ctf-link.c (ctf_create_per_cu): Likewise.
(ctf_link_deduplicating_per_cu): Likewise.
(ctf_link): Likewise.
(ctf_link_write): Likewise.
* ctf-subr.c (ctf_dict_set_flag): New function.
(ctf_dict_get_flag): New function.
* ctf-open.c (init_static_types_internal): Move enum tracking to...
* ctf-create.c (ctf_track_enumerator): ... this new function.
(ctf_add_enumerator): Call it.
* libctf.ver: Add the new functions.
* testsuite/libctf-lookup/enumerator-iteration.c: Test them.
We set this flag at the top of ctf_link_write (to tell ctf_serialize, way
down under the archive file writing functions, to do the various link-time
serialization things like symbol filtering and the like), but we never
remember to clear it except on error. This is probably bad if you want to
serialize the dict yourself directly in the future after linking it (which
is... definitely a *possible* use of the API, if rather strange).
libctf/
* ctf-link.c (ctf_link_write): Clear LCTF_LINKING before exit.
libctf's dynsets are a straight wrapper around libiberty hashtab, storing
the key directly in the hashtab slot. However, we'd often like to be able
to store 0 and 1 (HTAB_EMPTY_ENTRY and HTAB_DELETED_ENTRY) in there, so we
move them out of the way and replace them with huge unlikely values
instead. Unfortunately we failed to do this replacement in one place, so
insertion of 0 or 1 ended up misinforming the hashtab machinery that an
entry was empty or deleted when it wasn't.
libctf/
* ctf-hash.c (ctf_dynset_insert): Call key_to_internal properly.
Drop an unnecessary variable, and fix a buggy comment.
No effect on generated code.
libctf/
* ctf-dedup.c (ctf_dedup_detect_name_ambiguity): Drop unnecessary
variable.
(ctf_dedup_rwalk_output_mapping): Fix comment.
This error doesn't just indicate that there is no parent dictionary
(that's routine, and true of all dicts that are parents themselves)
but that a parent is *needed* but wasn't found.
include/
* ctf-api.h (_CTF_ERRORS) [ECTF_NOPARENT]: Improve error message.
ld/
* testsuite/ld-ctf/diag-parname.d: Adjust.
Commit 483546ce4f ("libctf: make ctf_serialize() actually serialize")
accidentally broke dict compression. There were two bugs:
- ctf_arc_write_one_ctf was still making its own decision about
whether to compress the dict via direct ctf_size comparison, which is
unfortunate because now that it no longer calls ctf_serialize itself,
ctf_size is always zero when it does this: it should let the writing
functions decide on the threshold; they contain code to do this, but it
was simply not used, for lack of one trivial wrapper that writes to an fd
and also provides a compression threshold
- ctf_write_mem, the function underlying all writing as of the commit
above, was calling zlib's compressBound and avoiding compression if this
returned a value larger than the input. Unfortunately compressBound does
not do a trial compression and determine whether the result is
compressible: it just adds zlib header sizes to the value passed in, so
our test would *always* have concluded that the value was incompressible!
Avoid this by simply always compressing if the raw size is larger than the
threshold: zlib is quite clever enough to avoid actually compressing
if the data is incompressible.
Add a testcase for this.
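A quick standalone zlib check (not libctf code) shows why the old test could
never fire: compressBound only adds framing overhead to the length you pass
in, so its result is always bigger than the input, no matter how compressible
the data is.

  #include <stdio.h>
  #include <string.h>
  #include <zlib.h>

  int
  main (void)
  {
    /* Highly compressible input...  */
    unsigned char in[4096];
    unsigned char out[8192];
    uLongf outlen = sizeof (out);

    memset (in, 'A', sizeof (in));

    /* ...yet compressBound() is always bigger than the input: it just adds
       worst-case zlib framing overhead, it never trial-compresses.  */
    printf ("compressBound(%zu) = %lu\n", sizeof (in),
            compressBound (sizeof (in)));

    /* An actual compression shows how small the data really gets.  */
    if (compress (out, &outlen, in, sizeof (in)) == Z_OK)
      printf ("compressed size = %lu\n", (unsigned long) outlen);

    return 0;
  }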
libctf/
* ctf-impl.h (ctf_write_thresholded): New...
* ctf-serialize.c (ctf_write_thresholded): ... defined here,
a wrapper around...
(ctf_write_mem): ... this. Don't check compressibility.
(ctf_compress_write): Reimplement as a ctf_write_thresholded
wrapper.
(ctf_write): Likewise.
* ctf-archive.c (arc_write_one_ctf): Just call
ctf_write_thresholded rather than trying to work out whether
to compress.
* testsuite/libctf-writable/ctf-compressed.*: New test.
If you deduplicate non-root-visible types, the resulting type should still
be non-root-visible! We were promoting all such types to root-visible, and
re-demoting them only if their names collided (which might happen on
cu-mapped links if multiple compilation units with conflicting types are
fused into one child dict).
This "worked" before now, in that linking at least didn't fail (if you don't
mind having your non-root flag value destroyed if you're adding
non-root-visible types), but now that conflicting enumerators cause their
containing enums to become conflicted (enums which might have *different
names*), this caused the linker to crash when it hit two enumerators with
conflicting values.
Not testable in ld because cu-mapped links are not exposed to ld, but can be
tested via direct creation of libraries and calls to ctf_link directly.
(This also tests the ctf_dump non-root type printout, which before now
was untested.)
libctf/
* ctf-dedup.c (ctf_dedup_emit_type): Non-root-visible input types
should be emitted as non-root-visible output types.
* testsuite/libctf-writable/ctf-nonroot-linking.c: New test.
* testsuite/libctf-writable/ctf-nonroot-linking.lk: New test.
The flag test when dumping non-root-visible types was doubly wrong: the
flags word is a *bitfield* containing CTF_ADD_ROOT as one possible
value, so needs | and & testing, not just ==, and CTF_ADD_NONROOT is 0,
so cannot be tested for this way: one must check for the non-presence of
CTF_ADD_ROOT.
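Reduced to a standalone illustration (CTF_ADD_NONROOT is 0 and CTF_ADD_ROOT
is nonzero in ctf-api.h; the defines below just restate that for the
example):

  #include <stdio.h>

  #define CTF_ADD_NONROOT 0
  #define CTF_ADD_ROOT    1
  #define OTHER_BIT       2   /* hypothetical extra bit in the flags word */

  int
  main (void)
  {
    int flag = OTHER_BIT;     /* a non-root type, with another bit set */

    /* Wrong: == CTF_ADD_NONROOT only matches when no bits at all are set,
       and == CTF_ADD_ROOT misfires as soon as any other bit is set.  */
    printf ("== says non-root? %s\n",
            flag == CTF_ADD_NONROOT ? "yes" : "no (misses it!)");

    /* Right: test for the absence of the CTF_ADD_ROOT bit.  */
    printf ("&  says non-root? %s\n",
            (flag & CTF_ADD_ROOT) ? "no" : "yes");
    return 0;
  }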
libctf/
* ctf-dump.c (ctf_dump_format_type): Fix non-root flag test.
In commit 149ce5c263 we introduced the concept of "movable" refs,
which are refs that can be moved in batches, to let us maintain valid ref
lists even when adding refs to blocks of memory that can be realloced (which
is any type containing a vlen which can expand, like names contained within
enum or struct members). Movable refs need a backpointer to the movable
refs dynhash for this dict; since non-movable refs are very common, we tried
to save memory by having a slightly bigger struct for movable refs with a
backpointer in it, and casting appropriately, indicating which sort of ref
we were dealing with via a flag on the atom.
Unfortunately this doesn't work reliably, because you can perfectly well
have a string ("foo", say) which has both non-movable refs (say, an external
symbol and a variable name) and movable refs (say, a structure member name)
to the same atom. Indicate which struct we're dealing with via an atom
flag and suddenly you're casting a ctf_str_atom_ref to a
ctf_str_atom_ref_movable (which is bigger), dereferencing random memory
off the end of it, and interpreting it as a backpointer to the movable refs
dynhash. This is unlikely to work well.
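Reduced to its essentials, the C-level problem is the usual one with casting
to a larger struct (the struct names below are purely illustrative, not the
real libctf ones):

  #include <stdio.h>

  /* Purely illustrative: a plain ref and a bigger "movable" ref with a
     trailing backpointer, mirroring the shape described above.  */
  struct ref         { struct ref *next; const char *where; };
  struct ref_movable { struct ref *next; const char *where; void *backptr; };

  int
  main (void)
  {
    /* The movable variant is strictly bigger...  */
    printf ("sizeof ref = %zu, sizeof ref_movable = %zu\n",
            sizeof (struct ref), sizeof (struct ref_movable));

    /* ...so if a flag on the shared *atom* claims all of its refs are
       movable, casting a plain struct ref to struct ref_movable and
       reading ->backptr dereferences memory beyond the smaller object.
       Hence the split into two separate ref lists rather than relying on
       a per-atom flag.  */
    return 0;
  }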
So bite the bullet and split refs into two separate lists, one for movable
refs, one for immovable refs. It means some annoying code duplication, but
there's not very much of it, and it means we can keep the movable refs
hashtab (which in turn means we don't have to do linear searches to find all
relevant refs when moving refs, which in turn means that
structure/union/enum member additions remain amortized O(n) time, not
O(n^2)).
Callers can now purge movable and non-movable refs independently of each
other. We don't use this yet, but a use is coming.
libctf/
* ctf-impl.h (CTF_STR_ATOM_MOVABLE): Delete.
(struct ctf_str_atom) [csa_movable_refs]: New.
(struct ctf_dict): Adjust comment.
(ctf_str_purge_refs): Add MOVABLE arg.
* ctf-string.c (ctf_str_purge_movable_atom_refs): Split out of...
(ctf_str_purge_atom_refs): ... this.
(ctf_str_free_atom): Call it.
(ctf_str_purge_one_atom_refs): Likewise.
(aref_create): Adjust accordingly.
(ctf_str_move_refs): Likewise.
(ctf_str_remove_ref): Remove movable refs too, including
deleting the ref from ctf_str_movable_refs.
(ctf_str_purge_refs): Add MOVABLE arg.
(ctf_str_update_refs): Update movable refs.
(ctf_str_write_strtab): Check, and purge, movable refs.
The PARENTS arg is carefully passed down through all the layers of hash
functions and then never used for anything. (In the distant past it was
used for cycle detection, but the algorithm eventually committed doesn't
need to do cycle detection...)
The PARENTS arg is still used by ctf_dedup_emit(), but even there we can
loosen the requirements and state that you can just leave entries
corresponding to dicts with no parents at zero (which will be useful
in an upcoming commit).
libctf/
* ctf-dedup.c (ctf_dedup_hash_type): Drop PARENTS arg.
(ctf_dedup_rhash_type): Likewise.
(ctf_dedup): Likewise.
(ctf_dedup_emit_struct_members): Mention what you can do to
PARENTS entries for parent dicts.
* ctf-impl.h (ctf_dedup): Adjust accordingly.
* ctf-link.c (ctf_link_deduplicating_per_cu): Likewise.
(ctf_link_deduplicating): Likewise.
The worry that caused this not to be supported was that we don't
bother endian-flipping version-related fields before checking them.
But they're all unsigned chars anyway, and don't need any flipping at
all.
This should be supported and should already work. Enable it.
libctf/
* ctf-open.c (ctf_bufopen): Don't prohibit foreign-endian
upgrades.
When using a duplicate test name:
...
fail foo
fail foo
...
we get:
...
FAIL: $exp: foo
FAIL: $exp: foo
DUPLICATE: $exp: foo
...
But when we do:
...
fail foo
fail "foo (timeout)"
...
we get only:
...
FAIL: $exp: foo
FAIL: $exp: foo (timeout)
...
Trailing text between parentheses prefixed with a space is interpreted as
extra information, and not as part of the test name [1].
Consequently, "foo" and "foo (timeout)" do count as duplicate test names,
which should have been detected. This is PR testsuite/29772.
Fix this in CheckTestNames::_check_duplicates, such that we get:
...
FAIL: $exp: foo
FAIL: $exp: foo (timeout)
DUPLICATE: $exp: foo (timeout)
...
[ One note on the implementation: I used the regexp { \([^()]*\)$}. I don't
know whether that covers all required cases, due to the fact that those are
not unambiguously specified. It might be possible to reverse-engineer that
information by reading or running the "regression analysis tools" mentioned on
the wiki page [1], but I haven't been able to. Regardless, the current regexp
covers a large amount of cases, which IMO should be sufficient to be
acceptable. ]
Doing so shows many new duplicates in the testsuite.
A significant number of those are due to using a message which is a copy of the
command:
...
gdb_test "print (1)"
...
Fix this by handling those cases using test names "gdb-command<print (1)>" and
"gdb-command<print (2)>.
Fix the remaining duplicates manually (split off as a follow-up patch for
readability of this patch).
Tested on x86_64-linux and aarch64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29772
[1] https://sourceware.org/gdb/wiki/GDBTestcaseCookbook#Do_not_use_.22tail_parentheses.22_on_test_messages
I tried to reproduce a problem in test-case gdb.python/py-disasm.exp on a
s390x machine, but when running with target board unix/-m31 I saw that the
required libraries were missing, so I couldn't generate an executable.
However, I realized that I did have an object file, and the test-case should
mostly also work with an object file.
I've renamed gdb.python/py-disasm.exp to gdb.python/py-disasm.exp.tcl and
included it from two new minimal test-case wrappers:
- gdb.python/py-disasm-exec.exp, and
- gdb.python/py-disasm-obj.exp
where the former uses an executable as before, and the latter uses an object
file.
Using an object file required changing the info.read_memory calls in
gdb.python/py-disasm.py:
...
- info.read_memory(1, -info.address + 2)
+ info.read_memory(1, -info.address - 1)
...
because reading from address 2 succeeds. Using address -1 instead does
generate the expected gdb.MemoryError.
Tested on x86_64-linux.
When running test-case gdb.fortran/intrinsics.exp on arm-linux, I get:
...
(gdb) p cmplx (4,4,16)^M
/home/linux/gdb/src/gdb/f-lang.c:1002: internal-error: eval_op_f_cmplx: \
Assertion `kind_arg->code () == TYPE_CODE_COMPLEX' failed.^M
A problem internal to GDB has been detected,^M
further debugging may prove unreliable.^M
----- Backtrace -----^M
FAIL: gdb.fortran/intrinsics.exp: p cmplx (4,4,16) (GDB internal error)
...
The problem is that 16-byte floats are unsupported:
...
$ gfortran test.f90
test.f90:2:17:
2 | REAL(kind=16) :: foo = 1
| 1
Error: Kind 16 not supported for type REAL at (1)
...
and consequently we end up with a builtin_real_s16 and builtin_complex_s16 with
code TYPE_CODE_ERROR.
Fix this by bailing out asap when encountering such a type.
Without this patch we're able to do the rather useless:
...
(gdb) ptype real*16
type = real*16
(gdb) ptype real_16
type = real*16
...
but with this patch we get:
...
(gdb) ptype real*16
unsupported kind 16 for type real*4
(gdb) ptype real_16
unsupported type real*16
...
Tested on arm-linux.
PR fortran/30537
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30537