mirror of
https://sourceware.org/git/binutils-gdb.git
synced 2024-11-27 03:51:15 +08:00
cf141dd8cc
This commit aims to address a problem that exists with the current approach to displaced stepping, and which was identified in PR gdb/22921.

Displaced stepping is currently supported on AArch64, ARM, amd64, i386, rs6000 (ppc), and s390. Of these, I believe there is a problem with the current approach that impacts amd64 and ARM, and can lead to random register corruption when the inferior makes use of asynchronous signals and GDB is using displaced stepping.

The problem can be found in displaced_step_buffers::finish in displaced-stepping.c, and is this: after GDB tries to perform a displaced step and the inferior stops, GDB classifies the stop into one of two states: either the displaced step succeeded, or the displaced step failed. If the displaced step succeeded then gdbarch_displaced_step_fixup is called, which has the job of fixing up the state of the current inferior as if the step had not been performed in a displaced manner. This all seems fine.

However, if the displaced step is considered not to have completed then GDB doesn't call gdbarch_displaced_step_fixup; instead GDB remains in displaced_step_buffers::finish and performs only a minimal fixup, which involves adjusting the program counter back to its original value.

The problem here is that, for amd64 and ARM, setting up for a displaced step can involve changing the values in some temporary registers. If the displaced step succeeds then this is fine; after the step, the temporary registers are restored to their original values in the architecture specific code. But if the displaced step does not succeed then the temporary registers are never restored, and they retain their modified values. In this context a temporary register is simply any register that is not otherwise used by the instruction being stepped and that the architecture specific code considers safe to borrow for the lifetime of the instruction being stepped.

In the bug PR gdb/22921, the amd64 instruction being stepped is a rip-relative instruction like this:

  jmp *0x2fe2(%rip)

When we displaced step this instruction we borrow a register and modify the instruction to something like:

  jmp *0x2fe2(%rcx)

with %rcx having its value adjusted to contain the original %rip value. Now, if the displaced step does not succeed, %rcx will be left with a corrupted value. Obviously corrupting any register is bad; in the bug report this problem was spotted because %rcx is used as a function argument register.

And finally, why might a displaced step not succeed? Asynchronous signals provide one reason. GDB sets up for the displaced step and, at that precise moment, the OS delivers a signal (SIGALRM in the bug report); the signal stops the inferior at the address of the displaced instruction. GDB cancels the displaced step, handles the signal, and then tries the displaced step again. It is that first cancellation of the displaced step that causes the problem: in that case GDB (correctly) sees the displaced step as having not completed, and so does not perform the architecture specific fixup, leaving the register corrupted.

The reason why I think AArch64, rs6000, i386, and s390 are not affected by this problem is that I don't believe these architectures make use of any temporary registers, so when a displaced step does not complete successfully, the minimal fixup is sufficient. On amd64 we use at most one temporary register (a sketch of the restore this requires follows below).
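The following sketch only illustrates the clean-up the amd64 code has to guarantee for the borrowed register; the structure and field names (tmp_used, tmp_regno, tmp_save) are hypothetical stand-ins for the real bookkeeping in gdb/amd64-tdep.c.

  /* Hypothetical bookkeeping for one borrowed ("temporary") register.  */
  struct displaced_tmp_state
  {
    int tmp_used;        /* Non-zero if a register was borrowed.  */
    int tmp_regno;       /* Which register was borrowed (e.g. %rcx).  */
    ULONGEST tmp_save;   /* Its original value, saved before the step.  */
  };

  /* Put the saved original value back into the borrowed register.  For
     the fix described above, this must happen whether or not the
     displaced copy of the instruction actually executed.  */

  static void
  restore_borrowed_tmp_reg (struct regcache *regs,
                            const struct displaced_tmp_state *tmp)
  {
    if (tmp->tmp_used)
      regcache_cooked_write_unsigned (regs, tmp->tmp_regno, tmp->tmp_save);
  }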
On ARM, looking at arm_displaced_step_copy_insn_closure, we could modify up to 16 temporary registers, and the instruction being displaced stepped could be expanded into multiple replacement instructions, which increases the chances of this bug triggering.

This commit only aims to address the issue on amd64 for now, though I believe that the approach I'm proposing here might be applicable for ARM too.

What I propose is that we always call gdbarch_displaced_step_fixup. We will now pass an extra argument to gdbarch_displaced_step_fixup, a boolean that indicates whether GDB thinks the displaced step completed successfully or not. When this flag is false it indicates that the displaced step halted for some "other" reason.

On ARM, GDB can potentially read the inferior's program counter in order to figure out how far through the sequence of replacement instructions we got, and from that GDB can figure out what fixup needs to be performed. On targets like amd64 the problem is slightly easier, as displaced stepping only uses a single replacement instruction; if the displaced step didn't complete then GDB knows that the single instruction didn't execute.

The point is that by always calling gdbarch_displaced_step_fixup, each architecture can now ensure that the inferior state is fixed up correctly in all cases, not just the success case. On amd64 this ensures that we always restore the temporary register value, and so bug PR gdb/22921 is resolved.

In order to move all architectures to this new API, I have moved the minimal roll-back version of the code inside the architecture specific fixup functions for AArch64, rs6000, s390, and ARM (a sketch of this minimal roll-back appears after this message). For all of these except ARM I think this is good enough; as no temporaries are used, all that's needed is to restore the program counter anyway. For ARM the minimal code is no worse than what we had before, though I do consider this architecture's displaced-stepping support broken.

I've updated the gdb.arch/amd64-disp-step.exp test to cover the 'jmpq*' instruction that was causing problems in the original bug, and also added support for testing the displaced step in the presence of asynchronous signal delivery.

I've also added two new tests (for amd64 and i386) that check that GDB can correctly handle displaced stepping over a single instruction that branches to itself. I added these tests after a first version of this patch relied too much on checking the program-counter value in order to see if the displaced instruction had executed. This works fine in almost all cases, but when an instruction branches to itself a pure program-counter check is not sufficient. The new tests expose this problem.

Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=22921
Approved-By: Pedro Alves <pedro@palves.net>
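A minimal sketch of what that moved roll-back can look like inside an architecture's fixup hook, using the aarch64_displaced_step_fixup prototype declared in the header below; this shows the shape of the change rather than the exact upstream implementation, assumes GDB's regcache_write_pc helper, and elides the existing success-path fixup.

  /* This would live in gdb/aarch64-tdep.c, which already includes
     "regcache.h" and "aarch64-tdep.h".  */

  void
  aarch64_displaced_step_fixup (struct gdbarch *gdbarch,
                                displaced_step_copy_insn_closure *dsc,
                                CORE_ADDR from, CORE_ADDR to,
                                struct regcache *regs, bool completed_p)
  {
    if (!completed_p)
      {
        /* The displaced copy never ran to completion (for example, a
           signal arrived first), so perform only the minimal roll-back:
           point the thread back at the original instruction.  */
        regcache_write_pc (regs, from);
        return;
      }

    /* ... existing success-path fixup continues here ...  */
  }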
aarch64-tdep.h · 150 lines · 4.2 KiB · C
/* Common target dependent code for GDB on AArch64 systems.

   Copyright (C) 2009-2023 Free Software Foundation, Inc.
   Contributed by ARM Ltd.

   This file is part of GDB.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */

#ifndef AARCH64_TDEP_H
#define AARCH64_TDEP_H

#include "arch/aarch64.h"
#include "displaced-stepping.h"
#include "infrun.h"
#include "gdbarch.h"

/* Forward declarations.  */
struct gdbarch;
struct regset;

/* AArch64 Dwarf register numbering.  */
#define AARCH64_DWARF_X0             0
#define AARCH64_DWARF_SP             31
#define AARCH64_DWARF_PC             32
#define AARCH64_DWARF_RA_SIGN_STATE  34
#define AARCH64_DWARF_V0             64
#define AARCH64_DWARF_SVE_VG         46
#define AARCH64_DWARF_SVE_FFR        47
#define AARCH64_DWARF_SVE_P0         48
#define AARCH64_DWARF_SVE_Z0         96

/* Size of integer registers.  */
#define X_REGISTER_SIZE  8
#define B_REGISTER_SIZE  1
#define H_REGISTER_SIZE  2
#define S_REGISTER_SIZE  4
#define D_REGISTER_SIZE  8
#define Q_REGISTER_SIZE 16

/* Total number of general (X) registers.  */
#define AARCH64_X_REGISTER_COUNT 32
/* Total number of D registers.  */
#define AARCH64_D_REGISTER_COUNT 32

/* The maximum number of modified instructions generated for one
   single-stepped instruction.  */
#define AARCH64_DISPLACED_MODIFIED_INSNS 1

/* Target-dependent structure in gdbarch.  */
struct aarch64_gdbarch_tdep : gdbarch_tdep_base
{
  /* Lowest address at which instructions will appear.  */
  CORE_ADDR lowest_pc = 0;

  /* Offset to PC value in jump buffer.  If this is negative, longjmp
     support will be disabled.  */
  int jb_pc = 0;

  /* And the size of each entry in the buf.  */
  size_t jb_elt_size = 0;

  /* Types for AdvSISD registers.  */
  struct type *vnq_type = nullptr;
  struct type *vnd_type = nullptr;
  struct type *vns_type = nullptr;
  struct type *vnh_type = nullptr;
  struct type *vnb_type = nullptr;
  struct type *vnv_type = nullptr;

  /* syscall record.  */
  int (*aarch64_syscall_record) (struct regcache *regcache,
                                 unsigned long svc_number) = nullptr;

  /* The VQ value for SVE targets, or zero if SVE is not supported.  */
  uint64_t vq = 0;

  /* Returns true if the target supports SVE.  */
  bool has_sve () const
  {
    return vq != 0;
  }

  int pauth_reg_base = 0;
  /* Number of pauth masks.  */
  int pauth_reg_count = 0;
  int ra_sign_state_regnum = 0;

  /* Returns true if the target supports pauth.  */
  bool has_pauth () const
  {
    return pauth_reg_base != -1;
  }

  /* First MTE register.  This is -1 if no MTE registers are available.  */
  int mte_reg_base = 0;

  /* Returns true if the target supports MTE.  */
  bool has_mte () const
  {
    return mte_reg_base != -1;
  }

  /* TLS registers.  This is -1 if the TLS registers are not available.  */
  int tls_regnum_base = 0;
  int tls_register_count = 0;

  bool has_tls() const
  {
    return tls_regnum_base != -1;
  }

  /* The W pseudo-registers.  */
  int w_pseudo_base = 0;
  int w_pseudo_count = 0;
};

const target_desc *aarch64_read_description (const aarch64_features &features);
aarch64_features
aarch64_features_from_target_desc (const struct target_desc *tdesc);

extern int aarch64_process_record (struct gdbarch *gdbarch,
                                   struct regcache *regcache, CORE_ADDR addr);

displaced_step_copy_insn_closure_up
aarch64_displaced_step_copy_insn (struct gdbarch *gdbarch,
                                  CORE_ADDR from, CORE_ADDR to,
                                  struct regcache *regs);

void aarch64_displaced_step_fixup (struct gdbarch *gdbarch,
                                   displaced_step_copy_insn_closure *dsc,
                                   CORE_ADDR from, CORE_ADDR to,
                                   struct regcache *regs, bool completed_p);

bool aarch64_displaced_step_hw_singlestep (struct gdbarch *gdbarch);

#endif /* aarch64-tdep.h */
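Although not part of aarch64-tdep.h itself, a short usage sketch may help show how the tdep structure above is typically consumed; it assumes the gdbarch_tdep<T> accessor and the gdb_printf/pulongest helpers found elsewhere in GDB, and is illustrative only.

  /* Illustrative only: report a couple of the optional AArch64 features
     recorded in the per-gdbarch tdep.  */

  static void
  show_aarch64_features (struct gdbarch *gdbarch)
  {
    aarch64_gdbarch_tdep *tdep = gdbarch_tdep<aarch64_gdbarch_tdep> (gdbarch);

    if (tdep->has_sve ())
      /* VQ counts 128-bit granules, so the SVE vector length is VQ * 16
         bytes.  */
      gdb_printf ("SVE vector length: %s bytes\n",
                  pulongest (tdep->vq * 16));

    if (tdep->has_pauth ())
      gdb_printf ("pauth mask registers: %d starting at regnum %d\n",
                  tdep->pauth_reg_count, tdep->pauth_reg_base);
  }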