The three targets that implement gdbarch_adjust_breakpoint_address are
arm, frv, and mips. In each of these targets the adjust breakpoint
address function does some combination of reading the symbol table and
reading memory at the location where the breakpoint could be placed.
The problem is that performing these actions requires that the current
inferior and program space be the ones in which the breakpoint will be
placed, and this is not currently always the case.
Consider a GDB session with multiple inferiors. One inferior might be
a native target while another could be a remote target of a completely
different architecture. Alternatively, if we consider ARM and
AArch64, one native inferior might be AArch64, while a second native
inferior could be ARM.
In these cases it is possible, and valid, for a user to have one
inferior selected, and place a breakpoint in the other inferior by
placing a breakpoint on a particular symbol.
If this happens then currently, when gdbarch_adjust_breakpoint_address
is called, the wrong inferior (and program space) will be selected.
Memory reads and symbol lookups will therefore not return the expected
results, which could lead to breakpoints being placed at the wrong
location.
There are currently two places where gdbarch_adjust_breakpoint_address
is called:
1. In infrun.c, in the function handle_step_into_function. In this
case, I believe that the correct inferior and program space will
already be selected as this is called as part of the stop event
handling, so I don't think we need to worry about this case, and
2. In breakpoint.c, in the function adjust_breakpoint_address, which
is itself called from code_breakpoint::add_location and
watch_command_1.
I don't think we need to worry about the watch_command_1 case: it is
for when a local watch expression is created, which can only be in the
currently selected inferior, so this case should be fine.
The code_breakpoint::add_location case is the one that needs fixing;
this is the path that allows a breakpoint location to be created in an
inferior other than the currently selected one.
To fix the code_breakpoint::add_location case, I propose that we pass
the "correct" program_space (i.e. the program space in which the
breakpoint will be created) to the adjust_breakpoint_address function.
Then in adjust_breakpoint_address we can make use of
switch_to_program_space_and_thread to switch program_space and
inferior before calling gdbarch_adjust_breakpoint_address.
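In rough terms the change looks something like this (a sketch only, not
the exact patch, assuming the existing
scoped_restore_current_pspace_and_thread helper; the real
adjust_breakpoint_address also has to handle the watchpoint types):

  static CORE_ADDR
  adjust_breakpoint_address (struct gdbarch *gdbarch, CORE_ADDR bpaddr,
                             enum bptype bptype,
                             struct program_space *pspace)
  {
    if (!gdbarch_adjust_breakpoint_address_p (gdbarch))
      return bpaddr;

    /* Switch to the program space (and an inferior/thread bound to it)
       in which the breakpoint will be placed, so that the tdep code's
       symbol lookups and memory reads hit the right inferior.  */
    scoped_restore_current_pspace_and_thread restore_pspace_thread;
    switch_to_program_space_and_thread (pspace);

    return gdbarch_adjust_breakpoint_address (gdbarch, bpaddr);
  }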
I discovered this issue while working on a later patch in this
series. This later patch will detect when we cast the result of
gdbarch_tdep to the wrong type.
With this later patch in place I ran gdb.multi/multi-arch.exp on an
AArch64 target. In this situation, two inferiors are created, an
AArch64 inferior, and an ARM inferior.  The test selects the AArch64
inferior and tries to create a breakpoint in the ARM inferior.
As a result of this we end up in arm_adjust_breakpoint_address, which
calls arm_pc_is_thumb. Before this commit the AArch64 inferior would
be current. As a result, all of the checks in arm_pc_is_thumb would
fail (they rely on reading symbols from the current program space),
and so, at the end of arm_pc_is_thumb we would call
arm_frame_is_thumb. However, remember, at this point the current
inferior is the AArch64 inferior, so the current frame is an AArch64
frame.
In arm_frame_is_thumb we call arm_psr_thumb_bit, which calls
gdbarch_tdep and casts the result to arm_gdbarch_tdep. This is wrong:
the tdep field is actually of type aarch64_gdbarch_tdep. After this we
have undefined behaviour.
With this patch in place, we will have switched to a thread in the ARM
program space before calling arm_adjust_breakpoint_address. As a
result, we now succeed in looking up the required symbols in
arm_pc_is_thumb, and so we never call arm_frame_is_thumb.
However, in the worst case scenario, if we did end up calling
arm_frame_is_thumb, as the current inferior should now be the ARM
inferior, the current frame should be an ARM frame, so we still should
not hit undefined behaviour.
I have added an assert to arm_frame_is_thumb.
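The assertion is along these lines (a sketch of the intent, using the
standard frame/gdbarch accessors rather than quoting the actual diff):

  static int
  arm_frame_is_thumb (frame_info *frame)
  {
    /* The CPSR/XPSR based check below only makes sense for an ARM
       frame; assert that instead of silently casting the tdep to the
       wrong type.  */
    gdb_assert (frame != nullptr);
    gdb_assert (gdbarch_bfd_arch_info (get_frame_arch (frame))->arch
                == bfd_arch_arm);

    /* ... existing implementation continues here ...  */
  }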
This commit is similar to the previous commit, but in this case GDB is
actually relying on undefined behaviour.
Consider building GDB for all targets on x86-64/GNU-Linux, then doing
this:
(gdb) show mips mask-address
Zeroing of upper 32 bits of 64-bit addresses is auto.
The 32 bit address mask is set automatically. Currently disabled
(gdb)
The 'show mips mask-address' command ends up in show_mask_address in
mips-tdep.c, and this function does this:
mips_gdbarch_tdep *tdep
= (mips_gdbarch_tdep *) gdbarch_tdep (target_gdbarch ());
Later we might pass TDEP to mips_mask_address_p. However, in my
example above, on an x86-64 native target, the current target
architecture will be an x86-64 gdbarch, and the tdep field within the
gdbarch will be of type i386_gdbarch_tdep, not of type
mips_gdbarch_tdep. As a result the cast above is incorrect, and TDEP
does not point at what it claims to point at.
I also think the current output is a little confusing: we appear to
have two lines that show the same information, just using different
words.
The first line comes from calling deprecated_show_value_hack, while
the second line is printed directly from show_mask_address. However,
both of these lines are printing the same mask_address_var value. I
don't think having the two lines actually adds any value here.
Finally, none of the text in this function is passed through the
internationalisation mechanism.
It would be nice to remove another use of deprecated_show_value_hack
if possible, so this commit does a complete rewrite of
show_mask_address.
After this commit the output of the above example command, still on my
x86-64 native target, is:
(gdb) show mips mask-address
Zeroing of upper 32 bits of 64-bit addresses is "auto" (current architecture is not MIPS).
The 'current architecture is not MIPS' text is only displayed when the
current architecture is not MIPS. If the architecture is mips then we
get the more commonly seen 'currently "on"' or 'currently "off"', like
this:
(gdb) set architecture mips
The target architecture is set to "mips".
(gdb) show mips mask-address
Zeroing of upper 32 bits of 64-bit addresses is "auto" (currently "off").
(gdb)
All the text is passed through the internationalisation mechanism, and
we only call gdbarch_tdep when we know the gdbarch architecture is
bfd_arch_mips.
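The reworked function has roughly the following shape (a sketch, not
the literal patch):

  static void
  show_mask_address (struct ui_file *file, int from_tty,
                     struct cmd_list_element *c, const char *value)
  {
    const char *additional_text = "";
    struct gdbarch *gdbarch = target_gdbarch ();

    if (gdbarch_bfd_arch_info (gdbarch)->arch != bfd_arch_mips)
      additional_text = _(" (current architecture is not MIPS)");
    else
      {
        /* Only now is the cast known to be safe.  */
        mips_gdbarch_tdep *tdep
          = (mips_gdbarch_tdep *) gdbarch_tdep (gdbarch);

        additional_text = (mips_mask_address_p (tdep)
                           ? _(" (currently \"on\")")
                           : _(" (currently \"off\")"));
      }

    gdb_printf (file, _("Zeroing of upper 32 bits of 64-bit addresses "
                        "is \"%s\"%s.\n"),
                value, additional_text);
  }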
This is a small refactor to resolve an issue before it becomes a
problem in a later commit.
Move the fetching of an arm_gdbarch_tdep into a more inner scope
within two functions in arm-tdep.c.
The problem with the current code is that the functions in question
are used as the callbacks for two set/show parameters. These set/show
parameters are available no matter the current architecture, but are
really about controlling an ARM architecture specific setting. And
so, if I build GDB for all targets on an x86-64/GNU-Linux system, I
can still do this:
(gdb) show arm fpu
(gdb) show arm abi
After these calls we end up in show_fp_model and arm_show_abi
respectively, where we unconditionally do this:
arm_gdbarch_tdep *tdep
= (arm_gdbarch_tdep *) gdbarch_tdep (target_gdbarch ());
However, the gdbarch_tdep() result will only be an arm_gdbarch_tdep if
the current architecture is ARM, otherwise the result will actually be
of some other type.
This isn't actually a problem, as in both cases the use of tdep is
guarded by a later check that the gdbarch architecture is
bfd_arch_arm.
This commit just moves the call to gdbarch_tdep() after the
architecture check.
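For example, arm_show_abi ends up with roughly this shape (a sketch of
the structure only; show_fp_model is changed in the same way):

  static void
  arm_show_abi (struct ui_file *file, int from_tty,
                struct cmd_list_element *c, const char *value)
  {
    if (gdbarch_bfd_arch_info (target_gdbarch ())->arch == bfd_arch_arm)
      {
        /* Safe: the current architecture really is ARM, so the tdep
           field really is an arm_gdbarch_tdep.  */
        arm_gdbarch_tdep *tdep
          = (arm_gdbarch_tdep *) gdbarch_tdep (target_gdbarch ());

        /* ... use TDEP to report the ABI in effect ...  */
      }

    /* ... otherwise just report the user-selected setting ...  */
  }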
In a later commit gdbarch_tdep() will be able to spot when we are
casting the result to the wrong type, and this function will trigger
assertion failures if things are not fixed.
There should be no user visible changes after this commit.
All uses of this helper really check whether the register is one of
the alternative SP registers (MSP/MSP_S/MSP_NS/PSP/PSP_S/PSP_NS), with
the ARM_SP_REGNUM case being handled separately.
Signed-off-by: Luis Machado <luis.machado@arm.com>
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
With python 3.11 I noticed:
...
$ gdb -q -batch -ex "maint selftest python"
Running selftest python.
Self test failed: self-test failed at gdb/python/python.c:2246
Ran 1 unit tests, 1 failed
...
In more detail:
...
(gdb) p output
$5 = "Traceback (most recent call last):\n File \"<string>\", line 0, \
in <module>\nKeyboardInterrupt\n"
(gdb) p ref_output
$6 = "Traceback (most recent call last):\n File \"<string>\", line 1, \
in <module>\nKeyboardInterrupt\n"
...
Fix this by also allowing line number 0.
Tested on x86_64-linux.
This should hopefully fix buildbot builder gdb-rawhide-x86_64.
When I changed the initialization of parallel_for_each_debug from 0 to false,
I forgot to change the type from int to bool. Fix this.
Tested by rebuilding on x86_64-linux.
I noticed this code in dw2_debug_names_iterator::next:
...
  case DW_IDX_compile_unit:
    /* Don't crash on bad data.  */
    if (ull >= per_bfd->all_comp_units.size ())
      {
        complaint (_(".debug_names entry has bad CU index %s"
                     " [in module %s]"),
                   pulongest (ull),
                   objfile_name (objfile));
        continue;
      }
    per_cu = per_bfd->get_cu (ull);
    break;
...
This code used to DTRT, before we started keeping both CUs and TUs in
all_comp_units.
Fix by using "per_bfd->all_comp_units.size () - per_bfd->tu_stats.nr_tus"
instead.
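That is, the bounds check becomes something like this (a sketch of the
fix applied to the code quoted above):
...
  case DW_IDX_compile_unit:
    {
      /* Don't crash on bad data.  */
      ULONGEST nr_cus = (per_bfd->all_comp_units.size ()
                         - per_bfd->tu_stats.nr_tus);
      if (ull >= nr_cus)
        {
          complaint (_(".debug_names entry has bad CU index %s"
                       " [in module %s]"),
                     pulongest (ull),
                     objfile_name (objfile));
          continue;
        }
    }
    per_cu = per_bfd->get_cu (ull);
    break;
...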
It's hard to produce a test-case for this, but let's try at least to trigger
the complaint somehow. We start out by creating an exec with .debug_types and
.debug_names:
...
$ gcc -g ~/hello.c -fdebug-types-section
$ gdb-add-index -dwarf-5 a.out
...
and verify that we don't see any complaints:
...
$ gdb -q -batch -iex "set complaints 100" ./a.out
...
We look at the CU and TU table using readelf -w and conclude that we have
nr_cus == 6 and nr_tus == 1.
Now override ull in dw2_debug_names_iterator::next for the DW_IDX_compile_unit
case to 6, and we have:
...
$ gdb -q -batch -iex "set complaints 100" ./a.out
During symbol reading: .debug_names entry has bad CU index 6 [in module a.out]
...
After this, it still crashes because this code in
dw2_debug_names_iterator::next:
...
  /* Skip if already read in.  */
  if (m_per_objfile->symtab_set_p (per_cu))
    goto again;
...
is called with per_cu == nullptr.
Fix this by skipping the entry if per_cu == nullptr.
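Concretely, the added guard is of this shape (sketch):
...
  /* Skip if we couldn't find a CU for this entry (bad data).  */
  if (per_cu == nullptr)
    goto again;

  /* Skip if already read in.  */
  if (m_per_objfile->symtab_set_p (per_cu))
    goto again;
...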
Now revert the fix and observe that the complaint disappears, so we've
confirmed that the fix is required.
A somewhat similar issue for .gdb_index in dw2_symtab_iter_next has been filed
as PR29367.
Tested on x86_64-linux, with native and target board cc-with-debug-names.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29336
A standalone (without SAE) StaticRounding attribute is meaningless, and
indeed all other similar insns have ATTSyntax there instead. I can only
assume this was some strange copy-and-paste mistake.
I clearly screwed up in 6ff00b5e12 ("x86/Intel: correct permitted
operand sizes for AVX512 scatter/gather") giving all AVX512F scatter
insns Dword element size. Update testcases (also their gather parts),
utilizing that there previously were two identical lines each (for no
apparent reason).
Commit 244e19c791 changed a number of variables in display_gdb_index
to count entries rather than words.
PR 29337
* dwarf.c (display_gdb_index): Correct use of cu_list_elements.
The PR29370 testcase is a fuzzed object file with multiple
.trace_abbrev sections. Multiple .trace_abbrev or .debug_abbrev
sections are not a violation of the DWARF standard. The DWARF5
standard even gives an example of multiple .debug_abbrev sections
contained in groups. Caching and lookup of processed abbrevs thus
needs to be done by section and offset rather than base and offset.
(Why base anyway?) Or, since section contents are kept, by a pointer
into the contents.
PR 29370
* dwarf.c (struct abbrev_list): Replace abbrev_base and
abbrev_offset with raw field.
(find_abbrev_list_by_abbrev_offset): Delete.
(find_abbrev_list_by_raw_abbrev): New function.
(process_abbrev_set): Set list->raw and list->next.
(find_and_process_abbrev_set): Replace abbrev list lookup with
new function. Don't set list abbrev_base, abbrev_offset or next.
I'm inclined to think that abbrev caching is counter-productive. The
time taken to search the list of abbrevs converted to internal form is
non-zero, and it's easy to decode the raw abbrevs. It's especially
silly to cache empty lists of decoded abbrevs (happens with zero
padding in .debug_abbrev), or abbrevs as they are displayed when there
is no further use of those abbrevs. This patch stops caching in those
cases.
* dwarf.c (record_abbrev_list_for_cu): Add free_list param.
Put abbrevs on abbrev_lists here.
(new_abbrev_list): Delete function.
(process_abbrev_set): Return newly allocated list. Move
abbrev base, offset and size checking to..
(find_and_process_abbrev_set): ..here, new function. Handle
lookup of cached abbrevs here, and calculate start and end
for process_abbrev_set. Return free_list if newly alloc'd.
(process_debug_info): Consolidate cached list lookup, new list
alloc and processing into find_and_process_abbrev_set call.
Free list when not cached.
(display_debug_abbrev): Similarly.
* dwarf.c: Leading and trailing whitespace fixes.
(free_abbrev_list): New function.
(free_all_abbrevs): Use the above. Free cu_abbrev_map here too.
(process_abbrev_set): Print actual section name on error.
(get_type_abbrev_from_form): Add overflow check.
(free_debug_memory): Don't free cu_abbrev_map here..
(process_debug_info): ..or here. Warn on another case of not
finding a needed abbrev.
elf64-ppc.c:11673:33: error: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 3 has type ‘bfd_vma’ {aka ‘long long unsigned int’} [-Werror=format=]
11673 |   fprintf (stderr, "offset = %#lx:", stub_entry->stub_offset);
      |                              ~~~^    ~~~~~~~~~~~~~~~~~~~~~~~
      |                                 |    |
      |                                 |    bfd_vma {aka long long unsigned int}
      |                                 long unsigned int
      |                                 %#llx
* elf64-ppc.c (dump_stub): Use BFD_VMA_FMT.
Python 3.11 deprecates PySys_SetPath and Py_SetProgramName. The
PyConfig API replaces these and other functions. This commit uses the
PyConfig API to provide equivalent functionality while also preserving
support for older versions of Python, i.e. those before Python 3.8.
A beta version of Python 3.11 is available in Fedora Rawhide. Both
Fedora 35 and Fedora 36 use Python 3.10, while Fedora 34 still used
Python 3.9. I've tested these changes on Fedora 34, Fedora 36, and
rawhide, though complete testing was not possible on rawhide due to
a kernel bug. That being the case, I decided to enable the newer
PyConfig API by testing PY_VERSION_HEX against 0x030a0000. This
corresponds to Python 3.10.
We could try to use the PyConfig API for Python versions as early as 3.8,
but I'm reluctant to do this as there may have been PyConfig related
bugs in earlier versions which have since been fixed. Recent linux
distributions should have support for Python 3.10. This should be
more than adequate for testing the new Python initialization code in
GDB.
Information about the PyConfig API as well as the motivation behind
deprecating the old interface can be found at these links:
https://github.com/python/cpython/issues/88279
https://peps.python.org/pep-0587/
https://docs.python.org/3.11/c-api/init_config.html
The v2 commit also addresses several problems that Simon found in
the v1 version.
In v1, I had used Py_DontWriteBytecodeFlag in the new initialization
code, but Simon pointed out that this global configuration variable
will be deprecated in Python 3.12. This version of the patch no longer
uses Py_DontWriteBytecodeFlag in the new initialization code.
Additionally, both Py_DontWriteBytecodeFlag and Py_IgnoreEnvironmentFlag
will no longer be used when building GDB against Python 3.10 or higher.
While it's true that both of these global configuration variables are
deprecated in Python 3.12, it makes sense to disable their use for
gdb builds against 3.10 and higher since those are the versions for
which the PyConfig API is now being used by GDB. (The PyConfig API
includes different mechanisms for making the same settings afforded
by use of the soon-to-be deprecated global configuration variables.)
Simon also noted that PyConfig_Clear() would not have been called for
one of the failure paths. I've fixed that problem and also made the
rest of the "bail out" code more direct. In particular,
PyConfig_Clear() will always be called, both for success and failure.
The v3 patch addresses some rebase conflicts related to module
initialization. Commit 3acd9a692d ("Make 'import gdb.events' work")
uses PyImport_ExtendInittab instead of PyImport_AppendInittab. That
commit also initializes a struct for each module to import. Both the
initialization and the call to PyImport_ExtendInittab were moved ahead
of the ifdefs to avoid
having to replicate (at least some of) the code three times in various
portions of the ifdefs.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28668
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29287
GDB currently fails to build with libc++, because libc++ is
stricter about which headers "leak" entities they're not guaranteed
to support. The following headers have been added:
* `<iterator>`, to support `std::back_inserter`
* `<utility>`, to support `std::move` and `std::swap`
* `<vector>`, to support `std::vector`
Change-Id: Iaeb15057c5fbb43217df77ce34d4e54446dbcf3d
In all-stop mode, when the target is itself in non-stop mode (like
GNU/Linux), if you use the "step N" (or "stepi/next/nexti N") to step
a thread a number of times:
(gdb) help step
step, s
Step program until it reaches a different source line.
Usage: step [N]
Argument N means step N times (or till program stops for another reason).
... GDB prematurely stops all threads after the first step, and
doesn't re-resume them for the subsequent N-1 steps. It's as if for
the 2nd and subsequent steps, the command was running with
scheduler-locking enabled.
This can be observed with the testcase added by this commit, which
looks like this:
  static pthread_barrier_t barrier;

  static void *
  thread_func (void *arg)
  {
    pthread_barrier_wait (&barrier);
    return NULL;
  }

  int
  main ()
  {
    pthread_t thread;
    int ret;

    pthread_barrier_init (&barrier, NULL, 2);

    /* We run to this line below, and then issue "next 3".  That should
       step over the 3 lines below and land on the return statement.  If
       GDB prematurely stops the thread_func thread after the first of
       the 3 nexts (and never resumes it again), then the join won't
       ever return.  */
    pthread_create (&thread, NULL, thread_func, NULL); /* set break here */
    pthread_barrier_wait (&barrier);
    pthread_join (thread, NULL);
    return 0;
  }
The test hangs and times out without the GDB fix:
(gdb) next 3
[New Thread 0x7ffff7d89700 (LWP 525772)]
FAIL: gdb.threads/step-N-all-progress.exp: non-stop=off: target-non-stop=on: next 3 (timeout)
The problem is a core gdb bug.
When you do "step/stepi/next/nexti N", GDB internally creates a
thread_fsm object and associates it with the stepping thread. For the
stepping commands, the FSM's class is step_command_fsm. That object
is what keeps track of how many steps are left to make. When one step
finishes, handle_inferior_event calls stop_waiting and returns, and
then fetch_inferior_event calls the "should_stop" method of the event
thread's FSM. The implementation of that method decrements the
steps-left counter. If the counter is 0, it returns true and we
proceed to presenting the stop to the user. If it isn't 0 yet, then
the method returns false, indicating to fetch_inferior_event to "keep
going".
Focusing now on when the first step finishes -- we're in "all-stop"
mode, with the target in non-stop mode. When a step finishes,
handle_inferior_event calls stop_waiting, which itself calls
stop_all_threads to stop everything. I.e., after the first step
completes, all threads are stopped, before handle_inferior_event
returns. And after that, now in fetch_inferior_event, we consult the
thread's thread_fsm::should_stop, which as we've seen, for the first
step returns false -- i.e., we need to keep_going for another step.
However, since the target is in non-stop mode, keep_going resumes
_only_ the current thread. All the other threads remain stopped,
inadvertently.
If the target is in non-stop mode, we don't actually need to stop all
threads right after each step finishes, and then re-resume them again.
We can instead defer stopping all threads until all the steps are
completed.
So fix this by delaying the stopping of all threads until after we
called the FSM's "should_stop" method. I.e., move it from
stop_waiting, to handle_inferior_events's callers,
fetch_inferior_event and wait_for_inferior.
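Conceptually, the relevant part of fetch_inferior_event now reads
something like this (a simplified sketch with details elided, not the
verbatim patch):

  thread_info *thr = ecs.event_thread;
  bool should_stop = true;

  if (thr != nullptr && thr->thread_fsm () != nullptr)
    should_stop = thr->thread_fsm ()->should_stop (thr);

  if (!should_stop)
    {
      /* More steps to go: keep_going re-resumes only the stepping
         thread, and the other threads were never stopped.  */
      keep_going (&ecs);
    }
  else
    {
      /* The command is really finished; only now, in all-stop mode on
         a non-stop target, stop all the other threads before
         presenting the stop to the user.  */
      if (!non_stop && exists_non_stop_target ())
        stop_all_threads ();
      /* ... present the stop ...  */
    }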
New test included. Tested on x86-64 GNU/Linux native and gdbserver.
Change-Id: Iaad50dcfea4464c84bdbac853a89df92ade6ae01
Assuming GMSD is a special operand, marked as O_md1, the code:
.set VREG, GMSD
.set REG, VREG
extsw REG, 2
...fails upon attempts to resolve the value of the symbol. This happens
since machine-dependent values are not handled in the giant op switch.
We introduce a custom md_resolve_symbol macro; the ports can use this
macro to customize the behavior when resolve_symbol_value hits an O_md
operand.
Since glibc 2.36 will issue warnings for copy relocation against
protected symbols and non-canonical reference to canonical protected
functions, change the linker to always disallow such relocations.
bfd/
* elf32-i386.c (elf_i386_scan_relocs): Remove check for
elf_has_indirect_extern_access.
* elf64-x86-64.c (elf_x86_64_scan_relocs): Likewise.
(elf_x86_64_relocate_section): Remove check for
elf_has_no_copy_on_protected.
* elfxx-x86.c (elf_x86_allocate_dynrelocs): Check for building
executable instead of elf_has_no_copy_on_protected.
(_bfd_x86_elf_adjust_dynamic_symbol): Disallow copy relocation
against non-copyable protected symbol.
* elfxx-x86.h (SYMBOL_NO_COPYRELOC): Remove check for
elf_has_no_copy_on_protected.
ld/
* testsuite/ld-i386/i386.exp: Expect linker error for PR ld/17709
test.
* testsuite/ld-i386/pr17709.rd: Removed.
* testsuite/ld-i386/pr17709.err: New file.
* testsuite/ld-x86-64/pr17709.rd: Removed.
* testsuite/ld-x86-64/pr17709.err: New file.
* testsuite/ld-x86-64/pr28875-func.err: Updated.
* testsuite/ld-x86-64/x86-64.exp: Expect linker error for PR
ld/17709 test. Add tests for function pointer against protected
function.
Call _bfd_elf_symbol_refs_local_p with local_protected==true. This has
2 noticeable effects for -shared:
* GOT-generating relocations referencing a protected data symbol no
longer lead to a GLOB_DAT (similar to a hidden symbol).
* Direct access relocations (e.g. R_X86_64_PC32) no longer have the
confusing diagnostic below.
__attribute__((visibility("protected"))) void *foo() {
  return (void *)foo;
}
// gcc -fpic -shared -fuse-ld=bfd
relocation R_X86_64_PC32 against protected symbol `foo' can not be used when making a shared object
The new behavior matches arm, aarch64 (commit
83c325007c), and powerpc ports, and other
linkers: gold and ld.lld.
Note: if some code tries to use direct access relocations to take the
address of foo, the pointer equality will break, but the error should be
reported on the executable link, not on the innocent shared object link.
glibc 2.36 will give a warning at relocation resolving time.
With this change, `#define elf_backend_extern_protected_data 1` is no
longer effective. Just remove it.
Remove the test "Run protected-func-1 without PIE" since -fno-pic
address taken operation in the executable doesn't work with protected
symbol in a shared object by default. Similarly, remove
protected-data-1a and protected-data-1b. protected-data-1b can be made
working by removing HAVE_LD_PIE_COPYRELOC from GCC
(https://sourceware.org/pipermail/gcc-patches/2022-June/596678.html).
Teach GDB how to dump memory tags for AArch64 when using the gcore command
and how to read memory tag data back from a core file generated by GDB
(via gcore) or by the Linux kernel.
The format is documented in the Linux Kernel documentation [1].
Each tagged memory range (listed in /proc/<pid>/smaps) gets dumped to its
own PT_AARCH64_MEMTAG_MTE segment. A section named ".memtag" is created for each
of those segments when reading the core file back.
To save a little bit of space, given MTE tags only take 4 bits, the memory tags
are stored packed as 2 tags per byte.
When reading the data back, the tags are unpacked.
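For illustration, the packing and unpacking boil down to this
(standalone sketch, not the actual GDB code; which nibble of a byte
holds the first granule's tag follows the kernel format and is only
illustrative here):

  #include <cstdint>
  #include <cstddef>
  #include <vector>

  /* Pack one 4-bit MTE tag per granule into 2 tags per byte.  */
  static std::vector<uint8_t>
  pack_mte_tags (const std::vector<uint8_t> &tags)
  {
    std::vector<uint8_t> packed ((tags.size () + 1) / 2, 0);
    for (size_t i = 0; i < tags.size (); i++)
      packed[i / 2] |= (tags[i] & 0x0f) << ((i % 2) * 4);
    return packed;
  }

  /* Reverse operation, used when reading tags back from a core file.  */
  static std::vector<uint8_t>
  unpack_mte_tags (const std::vector<uint8_t> &packed, size_t count)
  {
    std::vector<uint8_t> tags (count);
    for (size_t i = 0; i < count; i++)
      tags[i] = (packed[i / 2] >> ((i % 2) * 4)) & 0x0f;
    return tags;
  }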
I've added a new testcase to exercise the feature.
Build-tested with --enable-targets=all and regression tested on aarch64-linux
Ubuntu 20.04.
[1] Documentation/arm64/memory-tagging-extension.rst (Core Dump Support)
The Linux kernel can dump memory tag segments to a core file, one segment
per mapped range. The format and documentation can be found in the Linux
kernel tree [1].
The following patch adjusts bfd and binutils so they can handle this new
segment type and display it accordingly. It also adds code required so GDB
can properly read/dump core file data containing memory tags.
Upon reading, each segment that contains memory tags gets mapped to a
section named "memtag". These sections will be used by GDB to look up the tag
data. There can be multiple such sections with the same name, and they are not
numbered to simplify GDB's handling and lookup.
There is another patch for GDB that enables both reading
and dumping of memory tag segments.
Tested on aarch64-linux Ubuntu 20.04.
[1] Documentation/arm64/memory-tagging-extension.rst (Core Dump Support)
Newer distros carry newer headers that contain MTE definitions. Account
for that fact in the MTE testcases (gdb.arch/aarch64-mte.exp) and define
constants conditionally to prevent compilation failures.
Only check invalid relocation against protected symbol defined in shared
object.
bfd/
PR ld/29377
* elf32-i386.c (elf_i386_scan_relocs): Only check invalid
relocation against protected symbol defined in shared object.
* elf64-x86-64.c (elf_x86_64_scan_relocs): Likewise.
ld/
PR ld/29377
* testsuite/ld-elf/linux-x86.exp: Run PR ld/29377 tests.
* testsuite/ld-elf/pr29377a.c: New file.
* testsuite/ld-elf/pr29377b.c: Likewise.
Currently, Python code can use event registries to detect when gdb
loads a new objfile, and when gdb clears the objfile list. However,
there's no way to detect the removal of an objfile, say when the
inferior calls dlclose.
This patch adds a gdb.free_objfile event registry and arranges for an
event to be emitted in this case.
I noticed that gdb.base/bt-on-fatal-signal.exp was contributing four
core files to the count of unexpected core files:
$ make check TESTS="gdb.base/bt-on-fatal-signal.exp"
=== gdb Summary ===
# of unexpected core files 4
# of expected passes 21
These are GDB core dumps. They are expected, however, because the
whole point of the testcase is to crash GDB with a signal.
Make GDB change its current directory to the output dir just before
crashing, so that the core files end up there. The result is now:
=== gdb Summary ===
# of expected passes 25
and:
$ find . -name "core.*"
./testsuite/outputs/gdb.base/bt-on-fatal-signal/core.gdb.1676506.nelson.1657727692
./testsuite/outputs/gdb.base/bt-on-fatal-signal/core.gdb.1672585.nelson.1657727671
./testsuite/outputs/gdb.base/bt-on-fatal-signal/core.gdb.1674833.nelson.1657727683
./testsuite/outputs/gdb.base/bt-on-fatal-signal/core.gdb.1673709.nelson.1657727676
(Note the test is skipped at the top if on a remote host.)
Change-Id: I79e4fb2e91330279c7a509930b1952194a72e85a
Currently the Ada code assumes that it can distinguish between a
multi-dimensional array and an array of arrays by looking for an
intervening typedef -- that is, for an array of arrays, there will be
a typedef wrapping the innermost array type.
A recent compiler change removes this typedef, which causes a gdb
failure in the internal AdaCore test suite.
This patch handles this case by checking whether the array type in
question has a name.
This patch removes ui_register_input_event_handler and
ui_unregister_input_event_handler, replacing them with methods on
'ui'. It also changes gdb to use these methods everywhere, rather
than sometimes reaching in to the ui to manage the file descriptor
directly.
Update the ARC disassembler to supply style information to the
disassembler output. The output formatting remains unchanged.
opcodes/ChangeLog:
* disassemble.c (disassemble_init_for_target): Set
created_styled_output for ARC based targets.
* arc-dis.c (find_format_from_table): Use fprintf_styled_ftype
instead of fprintf_ftype throughout.
(find_format): Likewise.
(print_flags): Likewise.
(print_insn_arc): Likewise.
Signed-off-by: Claudiu Zissulescu <claziss@gmail.com>
The digits 5, 7, and 9 are missing when parsing an assembly
instruction, leading to errors when those digits are used.
gas/config
* tc-arc.c (md_assemble): Update strspn string with the
missing digits.
Signed-off-by: Claudiu Zissulescu <claziss@synopsys.com>
In commit 9d9dd861e9 ("[gdb/testsuite] Fix regression in
step-indirect-call-thunk.exp with gcc 7") I accidentally committed a duplicate
of supports_gnuc, which caused:
...
DUPLICATE: gdb.base/gdb-caching-proc.exp: supports_gnuc: consistency
...
Fix this by removing the duplicate.
Tested on x86_64-linux.
It is possible that a system might have a python3 executable, but no
python executable. For example, on my Fedora system the python2
package provides /usr/bin/python2, the python3 package provides
/usr/bin/python3, and the python-unversioned-command package provides
/usr/bin/python, which picks between python2 and python3.
It is quite possible to only have python3 available on a system.
Currently, when GDB configures, it looks for a 'python' executable.
If none is found then GDB will be built without Python support, unless
the user configures using --with-python=/usr/bin/python3.
This commit updates GDB's configure.ac script to first look for
'python', and then 'python3'. Now, on a system that only has a
python3 executable, GDB will automatically find and use it in order
to provide Python support; no user supplied configure arguments are
needed.
I've tested this on my local machine by removing the
python-unversioned-command package, confirming that there is no longer
a 'python' executable in my $PATH, and then rebuilding GDB from
scratch. GDB with this patch has python support.
Both forms were missing VexW0 (thus allowing Evex.W=1 to be encoded by
suitable means, which would cause #UD). The memory operand form further
was using the wrong Masking value, thus allowing zeroing-masking to be
encoded for the store form (which would again cause #UD).
This saves quite a number of shift instructions: The "operands" field
can now be retrieved by just masking (no shift), and extracting the
"extension_opcode" field now only requires a (signed) right shift, with
no prereq left one. (Of course there may be architectures where, in a
cross build, there might be no difference at all, e.g. when there are
suitable bitfield extraction insns.)
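As an illustration of why this helps (hypothetical field positions and
widths, not the real i386 template layout):

  /* With "operands" in the least significant bits, extracting it is a
     plain AND.  With "extension_opcode" occupying the top, sign-carrying
     bits of a signed field, one arithmetic right shift recovers it, and
     a value of None (all ones) comes out as -1 with no prior left
     shift.  */
  struct packed_bits { int raw; };

  #define OPERANDS(t)          ((t).raw & 0xf)   /* mask only */
  #define EXTENSION_OPCODE(t)  ((t).raw >> 28)   /* one signed shift */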
When running a task using parallel_for_each, we get the following
distribution:
...
Parallel for: n_elements: 7271
Parallel for: minimum elements per thread: 10
Parallel for: elts_per_thread: 1817
Parallel for: elements on worker thread 0 : 1817
Parallel for: elements on worker thread 1 : 1817
Parallel for: elements on worker thread 2 : 1817
Parallel for: elements on worker thread 3 : 0
Parallel for: elements on main thread : 1820
...
Note that there are 4 active threads, and scheduling elts_per_thread on each
of those handles 4 * 1817 == 7268, leaving 3 "left over" elements.
These leftovers are currently handled in the main thread.
That doesn't seem to matter much for this example, but for say 10 threads and
99 elements, you'd have 9 threads handling 9 elements and 1 thread handling 18
elements.
Instead, distribute the left over elements over the worker threads, such that
we have:
...
Parallel for: elements on worker thread 0 : 1818
Parallel for: elements on worker thread 1 : 1818
Parallel for: elements on worker thread 2 : 1818
Parallel for: elements on worker thread 3 : 0
Parallel for: elements on main thread : 1817
...
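The arithmetic behind the new numbers, as a standalone sketch
(hypothetical names, not the actual gdbsupport code):

  #include <cstdio>

  int
  main ()
  {
    const unsigned n_elements = 7271;
    const unsigned n_threads = 4;   /* 3 worker threads + main thread.  */
    const unsigned base = n_elements / n_threads;       /* 1817 */
    const unsigned left_over = n_elements % n_threads;  /* 3 */

    /* Give one left-over element to each of the first LEFT_OVER threads
       instead of lumping them all onto the last (main) thread.  */
    for (unsigned i = 0; i < n_threads; i++)
      printf ("thread %u: %u elements\n", i, base + (i < left_over ? 1 : 0));

    return 0;
  }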
Tested on x86_64-linux.
Use set_sanitizer_default for ASAN_OPTIONS in lib/gdb.exp.
This allows us to override the default detect_leaks=0 setting, by manually
doing:
...
$ export ASAN_OPTIONS=detect_leaks=1
$ make check
...
Tested on x86_64-linux, by building with -fsanitize=address and running
test-case gdb.dwarf2/gdb-add-index.exp with and without
"export ASAN_OPTIONS=detect_leaks=1".