This changes how bookmarks are allocated and stored, replacing a
linked list with a vector and removing some ALL_* iterator macros.
Regression tested on x86-64 Fedora 34.
It exercises a bug that GDB previously had where it would lose track of
some registers when the inferior changed its vector length.
It also checks that the vg register and the size of the z0-z31 registers
correctly reflect the new vector length.
When the inferior program changes the SVE length, GDB can stop tracking
some registers as it obtains the new gdbarch that corresponds to the
updated length:
Breakpoint 1, do_sve_ioctl_test () at sve-ioctls.c:44
44 res = prctl(PR_SVE_SET_VL, i, 0, 0, 0, 0);
(gdb) print i
$2 = 32
(gdb) info registers
⋮
[ snip registers x0 to x30 ]
⋮
sp 0xffffffffeff0 0xffffffffeff0
pc 0xaaaaaaaaa8ac 0xaaaaaaaaa8ac <do_sve_ioctl_test+112>
cpsr 0x60000000 [ EL=0 BTYPE=0 C Z ]
fpsr 0x0 0
fpcr 0x0 0
vg 0x8 8
tpidr 0xfffff7fcb320 0xfffff7fcb320
(gdb) next
45 if (res < 0) {
(gdb) info registers
⋮
[ snip registers x0 to x30 ]
⋮
sp 0xffffffffeff0 0xffffffffeff0
pc 0xaaaaaaaaa8cc 0xaaaaaaaaa8cc <do_sve_ioctl_test+144>
cpsr 0x200000 [ EL=0 BTYPE=0 SS ]
fpsr 0x0 0
fpcr 0x0 0
vg 0x4 4
(gdb)
Notice that register tpidr disappeared when vg (which holds the vector
length) changed from 8 to 4. The tpidr register is provided by the
org.gnu.gdb.aarch64.tls feature.
This happens because the code that searches for a new gdbarch to match the
new vector length in aarch64_linux_nat_target::thread_architecture doesn't
take into account the features present in the target description associated
with the previous gdbarch. This patch makes it do that.
Since the id member of struct gdbarch_info is now unused, it's removed.
Since commit b2d8657, a per-interpreter event/command loop is no longer
possible.
As Insight uses a GUI that has its own event loop, the gdb and GUI event
loops therefore have to be "merged" (i.e. made to work together). But this is
problematic as gdb_do_one_event is not aware of this alternate event
loop and thus may wait forever.
A solution is to delegate GUI event handling to the gdb event handler.
Insight uses Tcl/Tk for its GUI, and Tcl offers a "notifier" feature to
implement such a delegation. The Tcl notifier spec requires the event wait
function to support a timeout parameter. Unfortunately gdb_do_one_event
does not provide such a parameter.
This timeout cannot be implemented externally with a gdb timer, because
such a timer would itself be an event and could thus cause a legitimate
event to be missed when the timeout is 0.
Tcl implements "idle events" that are (internally) triggered only when no
other event is pending. For this reason, Tcl calls the event wait function
with a 0 timeout quite often.
This patch adds a wait timeout to gdb_do_one_event. Pending events are
first checked as before, without ever entering a wait state. If no pending
event is found during this phase, a timer is created for the given timeout,
re-using the existing timer logic, and the event wait is then performed.
This "internal" timer only limits the wait time and should never be triggered.
It is deleted upon gdb_do_one_event exit.
The new parameter defaults to "no timeout" (-1): as it is used by Insight
only, no call in the gdb source tree needs to be updated.
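The resulting control flow can be sketched as follows (illustrative C only;
the helper names are placeholders, not gdb's actual event-loop primitives):

  #include <stdbool.h>

  /* Placeholder declarations standing in for real event-loop primitives.  */
  extern bool process_one_pending_event (void);  /* true if an event ran */
  extern int create_oneshot_timer (int msec);    /* returns a timer id */
  extern void delete_timer (int id);
  extern void wait_for_something_to_happen (void);

  /* Handle one event, waiting at most MSTIMEOUT milliseconds
     (-1 = wait forever, 0 = poll only).  Returns 1 if an event was
     handled, 0 otherwise.  */
  static int
  do_one_event_sketch (int mstimeout)
  {
    /* Phase 1: drain anything already pending; never blocks.  */
    if (process_one_pending_event ())
      return 1;

    if (mstimeout == 0)
      return 0;

    /* Phase 2: nothing pending, so wait, but bound the wait with an
       internal one-shot timer.  The timer only limits the wait; if it
       fires, its event is discarded rather than reported.  */
    int timer_id = mstimeout > 0 ? create_oneshot_timer (mstimeout) : -1;
    wait_for_something_to_happen ();
    if (timer_id != -1)
      delete_timer (timer_id);

    return process_one_pending_event () ? 1 : 0;
  }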
Emitting this warning for every insn, including ones with actual
errors, is annoying. Introduce a boolean variable to emit the warning
just once, on the first insn after .arch may have changed things, and
move the warning to output_insn(). (I didn't want to go as far as
checking whether the .arch actually turned off the i386 bit, but doing
so would be an option.)
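The warn-once pattern amounts to something like this (an illustrative
sketch, not the actual gas code; the names are made up):

  #include <stdbool.h>
  #include <stdio.h>

  static bool arch_warning_emitted;   /* made-up flag name */

  /* Called from the insn output path: warn at most once until a later
     .arch may change the situation again.  */
  static void
  maybe_warn (void)
  {
    if (!arch_warning_emitted)
      {
        fprintf (stderr, "Warning: ...\n");
        arch_warning_emitted = true;
      }
  }

  /* Called when a .arch directive is seen.  */
  static void
  note_dot_arch (void)
  {
    arch_warning_emitted = false;   /* the next insn may warn again */
  }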
objcopy of broken SHT_GROUP sections shouldn't write garbage.
* elf.c (bfd_elf_set_group_contents): If number of entries is
unexpected, fill out section with zeros.
Fix mmo_get_byte to return a fail-safe value, not just on the first
call with a read error but on subsequent calls too.
* mmo.c (mmo_get_byte): Return the fail-safe value on every
call after a read error.
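The intended behaviour can be illustrated with a standalone sketch (not
mmo.c itself): the error state is sticky, so the fail-safe value comes back
on every later call as well:

  #include <stdbool.h>
  #include <stdio.h>

  static bool read_error_seen;

  /* Return the next byte, or a fail-safe 0 on a read error -- both on
     the first failing call and on every call after that.  */
  static int
  get_byte_sketch (FILE *f)
  {
    if (read_error_seen)
      return 0;

    int c = getc (f);
    if (c == EOF)
      {
        read_error_seen = true;
        return 0;
      }
    return c;
  }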
mmo_get_loc needs to handle arbitrary vma and size chunks. Fuzzers
found that it wasn't working so well when the end of a chunk was
getting close to the address wrap-around point.
* mmo.c (mmo_get_loc): Make "size" unsigned. Avoid arithmetic
overflow when calculating whether range hits an existing chunk.
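The hazard is the usual one when a range may end near the top of the address
space: forming vma + size can wrap. A standalone sketch of an overflow-safe
"does this range hit that chunk" test (not the actual mmo.c code):

  #include <stdbool.h>
  #include <stdint.h>

  /* Does [vma, vma + size) overlap [chunk_vma, chunk_vma + chunk_size)?
     Written so that no sum that could wrap around is ever formed.
     Assumes both sizes are non-zero.  */
  static bool
  range_hits_chunk (uint64_t vma, uint64_t size,
                    uint64_t chunk_vma, uint64_t chunk_size)
  {
    if (vma >= chunk_vma)
      return vma - chunk_vma < chunk_size;  /* our start lies inside the chunk */
    return chunk_vma - vma < size;          /* the chunk's start lies inside us */
  }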
Swap params of is_note, so they are section, segment like others used
in rewrite_elf_program_header. Whitespace fixes, plus wrapping of
overlong lines.
In commit 68e80d96a8, the usage of
___lc_codepage_func was introduced to determine the current encoding.
Prior to version 9.0 of MinGW-w64, the function prototype for
___lc_codepage_func was missing and trying to build BFD caused the
following error:
error: implicit declaration of function ‘___lc_codepage_func’
This changeset adds a conditional definition of
___lc_codepage_func to allow a successful build with MinGW-w64.
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
When displaying operands, invalid opcodes may overflow the operand buffer
due to additional styling characters. Each style is encoded with 3
bytes. Define MAX_OPERAND_BUFFER_SIZE for operand buffer size and
increase it from 100 bytes to 128 bytes to accommodate 9 sets of styles
in an operand.
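The arithmetic behind the new size, sketched (not the exact i386-dis.c hunk):
100 bytes of operand text plus 9 styles at 3 bytes each is 127 bytes, rounded
up to 128.

  #define MAX_OPERAND_BUFFER_SIZE 128   /* was a bare 100 */

  /* Per the ChangeLog below, obuf, staging_area and op_out are then
     sized with the macro instead of 100 (shown schematically).  */
  static char obuf[MAX_OPERAND_BUFFER_SIZE];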
gas/
PR binutils/29483
* testsuite/gas/i386/i386.exp: Run pr29483.
* testsuite/gas/i386/pr29483.d: New file.
* testsuite/gas/i386/pr29483.s: Likewise.
opcodes/
PR binutils/29483
* i386-dis.c (MAX_OPERAND_BUFFER_SIZE): New.
(obuf): Replace 100 with MAX_OPERAND_BUFFER_SIZE.
(staging_area): Likewise.
(op_out): Likewise.
I noticed that the gdb.arch/riscv-unwind-long-insn.exp test was
failing when run on a 64-bit RISC-V target.
The problem was that GDB was failing to stop after a finish command,
and was then running to an unexpected location.
The reason GDB failed to stop at the finish breakpoint was that the
frame-id of the inferior, when we reached the finish breakpoint,
didn't match the expected frame-id that was stored on the breakpoint.
The reason for this mismatch was that the assembler code included in
this test was written with only 32-bit RISC-V in mind; as a result, the
$fp register was being corrupted, and this was causing the frame-id
mismatch.
Specifically, the $fp register would end up being sign-extended from
32 to 64 bits. If the expected $fp value has some significant bits
above bit 31 then the computed and expected frame-ids would not match.
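The corruption is ordinary 32-to-64-bit sign extension; a standalone C
illustration of why the frame-ids then disagree (the address value here is
made up):

  #include <inttypes.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint64_t fp = 0x0000003fffffeff0ULL;   /* a 64-bit frame pointer */
    /* What a 32-bit-only code path effectively does to it.  */
    uint64_t corrupted = (uint64_t) (int64_t) (int32_t) fp;

    printf ("expected fp: %#" PRIx64 "\n", fp);         /* 0x3fffffeff0 */
    printf ("computed fp: %#" PRIx64 "\n", corrupted);  /* 0xffffffffffffeff0 */
    return 0;
  }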
To fix this I propose merging the two .s files into a single .S file,
and making use of preprocessor macros to specialise the file for the
correct size of $fp. There are plenty of existing tests that already
make use of preprocessor macros in assembler files, so I assume this
approach is fine.
Once I'd decided to make use of preprocessor macros to solve the 32/64
bit issue, I figured I might as well merge the two test assembler
files; they only differed by a single instruction.
With this change in place I now see this test fully passing on 32 and
64 bit RISC-V targets.
Commit 9db0d8536d ("gdb/mi: fix breakpoint script field output") fixed
the output of the script key in the MI breakpoint output, from
script={"print 10","continue"}
to
script=["print 10","continue"]
However, it missed updating this test case, which still tests for the
old (broken) form, causing:
FAIL: gdb.mi/mi-break.exp: mi-mode=main: test_breakpoint_commands: breakpoint commands: check that commands are set (unexpected output)
FAIL: gdb.mi/mi-break.exp: mi-mode=separate: test_breakpoint_commands: breakpoint commands: check that commands are set (unexpected output)
Update the test to expect the new form.
Change-Id: I174919d4eea53e96d914ca9bd1cf6f01c8de30b8
When working on windows-nat.c, it's useful to see an error message in
addition to the error number given by GetLastError. This patch moves
strwinerror from gdbserver to gdbsupport, and then updates
windows-nat.c to use it. A couple of minor changes to strwinerror
(constify the return type and use the ARRAY_SIZE macro) are also
included.
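For reference, one common way to turn a GetLastError code into readable text
on Windows looks like this; it is a hedged illustration of the general
approach, not the strwinerror implementation being moved:

  #include <windows.h>
  #include <stdio.h>

  /* Print the last Windows error together with its message text.  */
  static void
  print_last_error (const char *what)
  {
    DWORD err = GetLastError ();
    char buf[256];

    if (FormatMessageA (FORMAT_MESSAGE_FROM_SYSTEM
                        | FORMAT_MESSAGE_IGNORE_INSERTS,
                        NULL, err, 0, buf, sizeof buf, NULL) == 0)
      snprintf (buf, sizeof buf, "(no message available)");
    fprintf (stderr, "%s: error %lu: %s", what, (unsigned long) err, buf);
  }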
This patch, in order of significance:
1) Replaces some macros with inline functions.
2) Those inline functions catch and avoid arithmetic overflows when
comparing addresses.
3) When assigning sections to segments (IS_SECTION_IN_INPUT_SEGMENT)
use bed->want_p_paddr_set_to_zero to decide whether lma vs p_paddr
or vma vs p_vaddr should be tested. When remapping, use the same
test, and use is_note rather than the more restrictive
IS_COREFILE_NOTE.
It's important that the later tests not be more restrictive. If they
are it can lead to the situation triggered by the testcases, where a
section seemingly didn't fit and thus needed a new mapping. It didn't
fit the new mapping either, and this repeated until memory was exhausted.
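The overflow-safe comparison those inline functions perform can be sketched
like this (standalone illustration only; the real bfd helpers take
section/segment pointers and also consult bed->want_p_paddr_set_to_zero):

  #include <stdbool.h>
  #include <stdint.h>

  /* Is [addr, addr + size) contained in [seg_addr, seg_addr + seg_size)?
     Neither sum is formed, so nothing can wrap.  */
  static bool
  is_contained_by_sketch (uint64_t addr, uint64_t size,
                          uint64_t seg_addr, uint64_t seg_size)
  {
    if (addr < seg_addr)
      return false;
    uint64_t offset = addr - seg_addr;
    return offset <= seg_size && size <= seg_size - offset;
  }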
PR 29495
* elf.c (SEGMENT_END, SECTION_SIZE, IS_CONTAINED_BY_VMA): Delete.
(IS_CONTAINED_BY_LMA, IS_NOTE, IS_COREFILE_NOTE): Delete.
(segment_size, segment_end, section_size): New inline function.
(is_contained_by, is_note): Likewise.
(rewrite_elf_program_header): Use new functions.
Now that we can purge templates, let's use this to improve readability a
little by shortening a few of their names, making functionally similar
ones also have identical names in their multiple incarnations.
Many of the vector conversion insns come with X/Y/Z suffixed forms, for
disambiguation purposes in AT&T syntax. All of these groups follow
certain patterns. Introduce "xy" and "xyz" templates to reduce
redundancy.
To facilitate using a uniform name for both AVX and AVX512, further
introduce a means to purge a previously defined template: A standalone
<name> will be recognized to have this effect.
Note that in the course of the conversion VFPCLASSPH is properly split
to separate AT&T and Intel syntax forms, matching VFPCLASSP{S,D} and
yielding the intended "ambiguous operand size" diagnostic in Intel mode.
Many of the vector integer insns come in byte/word element pairs. Most
of these pairs follow certain encoding patterns. Introduce a "bw"
template to reduce redundancy.
Note that in the course of the conversion
- the AVX VPEXTRW template which is not being touched needs to remain
ahead of the new "combined" ones, as (a) this should be tried first
when matching insns against templates and (b) its Load attribute
requires it to be first,
- this adds a benign/meaningless IgnoreSize attribute to the memory form
of PEXTRB; it didn't seem worth avoiding this.
Many of the vector integer insns come in dword/qword element pairs. Most
of these pairs follow certain encoding patterns. Introduce a "dq"
template to reduce redundancy.
Note that in the course of the conversion
- a few otherwise untouched templates are moved, so they end up next to
their siblings,
- drop an unhelpful Cpu64 from the GPR form of VPBROADCASTQ, matching
what we already have for KMOVQ - the diagnostic is better this way for
insns with multiple forms (i.e. the same Cpu64 attributes on {,V}MOVQ,
{,V}PEXTRQ, and {,V}PINSRQ are useful to keep),
- this adds benign/meaningless IgnoreSize attributes to the GPR forms of
KMOVD and VPBROADCASTD; it didn't seem worth avoiding this.
The vast majority of vector FP insns come in single/double pairs. Many
pairs follow certain encoding patterns. Introduce an "sd" template to
reduce redundancy. Similarly, to further cover similarities between
AVX512F and AVX512-FP16, introduce an "sdh" template.
For element-size Disp8 shift generalize i386-gen's broadcast size
determination, allowing Disp8MemShift to be specified without an operand
in the affected templated templates. While doing the adjustment also
eliminate an unhelpful (lost information) diagnostic combined with a use
after free in what is now get_element_size().
Note that in the course of the conversion
- the AVX512F form of VMOVUPD has a stray (leftover) Load attribute
dropped,
- VMOVSH has a benign IgnoreSize added (the attribute is still strictly
necessary for VMOVSD, and necessary for VMOVSS as long as we permit
strange combinations like "-march=i286+avx"),
- VFPCLASSPH is properly split to separate AT&T and Intel syntax forms,
matching VFPCLASSP{S,D}.
This reverts commit 384f368958, which
broke i386-gen's emitting of diagnostics. As a replacement to address
the original issue of newer gcc no longer splicing lines when dropping
the line continuation backslashes, switch to using + as the line
continuation character, doing the line splicing in i386-gen.
2022-08-16 Alan Modra <amodra@gmail.com>
Cunlong Li <shenxiaogll@163.com>
PR 29362
* dwarf.c (free_debug_information): New function, extracted..
(free_debug_memory): ..from here.
(process_debug_info): Use it before clearing out unit
debug_information. Clear all fields.
* objcopy.c (delete_symbol_htabs): New function.
(main): Call it via xatexit.
(copy_archive): Free "dir".
* objdump.c (free_debug_section): Free reloc_info.
When the kernel version is >= V4.x, the characteristic values used to
determine whether it is a signal function call are:
movi r7, 139
trap 0
Registers are saved at (sp + CSKY_SIGINFO_OFFSET + CSKY_SIGINFO_SIZE
+ CSKY_UCONTEXT_SIGCONTEXT + CSKY_SIGCONTEXT_PT_REGS_TLS). The order
is described in csky_linux_rt_sigreturn_init_pt_regs.
I know this target is just a skeleton, but let's not write out relocs
with uninitialised garbage.
* coff-aarch64.c (SWAP_IN_RELOC_OFFSET): Define.
(SWAP_OUT_RELOC_OFFSET): Define.
There's a comment in riscv-tdep.c that explains some of the background
about how we check for the fcsr, fflags, and frm registers within a
riscv target description.
This comment (and the functionality it describes) relates to how QEMU
advertises these registers within its target description.
Unfortunately, QEMU includes these three registers in both the fpu and
csr target description features. To work around this, GDB uses one of
the register declarations and ignores the other; this means the GDB
user sees a single copy of each register, and things just work.
When I originally wrote the comment I thought it didn't matter which
copy of the register GDB selected, the fpu copy or the csr copy, so
long as we just used one of them. The comment reflected this belief.
Upon further investigation, it turns out I was wrong. GDB has to use
the csr copy of the register. If GDB tries to use the register from
the fpu feature then QEMU will return an error when GDB tries to read
or write the register.
Luckily, the code within GDB (currently) will always select the csr
copy of the register, so nothing is broken, but the comment is wrong.
This commit updates the comment to better describe what is actually
going on.
Of course, I should probably also send a patch to QEMU to fix up the
target description that is sent to GDB.
After this commit:
commit 7b7c365c5c
Date: Wed Sep 15 10:10:46 2021 +0200
[bfd] Ensure unique printable names for bfd archs
The printable name field of the default nds32 bfd_arch_info changed
from 'n1h' to 'n1'. As a consequence the generated feature file
within GDB should have been recreated. Recreate it now.
breakpoint::decode_location_spec just asserts if called. It turned
out to be relatively easy to remove this method from breakpoint and
instead move the base implementation to code_breakpoint.
This changes readelf output a little, removing the 0x prefix on hex
output when the value is 0, except in cases where a fixed field
width is shown. %#010x is not a good replacement for 0x%08x.
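The difference is easy to see in plain C: the '#' flag prints no 0x prefix
at all when the value is 0, and the prefix it does print for non-zero values
is counted against the field width:

  #include <stdio.h>

  int
  main (void)
  {
    printf ("[0x%08x]\n", 0u);        /* [0x00000000] */
    printf ("[%#010x]\n", 0u);        /* [0000000000] -- no 0x prefix */
    printf ("[0x%08x]\n", 0x12345u);  /* [0x00012345] */
    printf ("[%#010x]\n", 0x12345u);  /* [0x00012345] -- width includes "0x" */
    return 0;
  }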
This replaces dwarf_vma, dwarf_size_type and dwarf_signed_vma with
uint64_t and int64_t everywhere. The patch also gets rid of
DWARF_VMA_FMT since we can't use that with uint64_t, and all of the
configure support for deciding the flavour of HOST_WIDEST_INT.
dwarf_vmatoa also disappears, replacing most uses with one of
PRIx64, PRId64 or PRIu64. Printing of size_t and ptrdiff_t values
now uses %z and %t rather than casting to unsigned long. Also,
most warning messages that used 0x%lx or similar now use %#lx and a
few that didn't print the 0x hex prefix now also use %#. The patch
doesn't change normal readelf output, except in odd cases where values
previously might have been truncated.
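For example, the resulting style of format string looks like this (an
illustration, not a readelf excerpt):

  #include <inttypes.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint64_t vma = 0x12345678abcdULL;
    size_t n = 42;

    printf ("vma: %#" PRIx64 "\n", vma);  /* no cast to unsigned long needed */
    printf ("n:   %zu\n", n);             /* %z instead of casting size_t */
    return 0;
  }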
This replaces bfd_vma with uint64_t in readelf, defines BFD64
unconditionally, removes tests of BFD64 and sizeof (bfd_vma), and
removes quite a few now unnecessary casts.
Replacing bfd_size_type with dwarf_size_type or uint64_t is mostly
cosmetic. The point of the change is to avoid use of a BFD type
in readelf, where we'd like to keep as independent of BFD as
possible. Also, the patch is a step towards using standard types.
This patch replaces all uses of elf_vma with uint64_t, removes
tests of sizeof (elf_vma), and does a little tidying of
byte_get_little_endian and byte_get_big_endian.