binutils-gdb/gdb/displaced-stepping.c

/* Displaced stepping related things.

   Copyright (C) 2020-2022 Free Software Foundation, Inc.
   This file is part of GDB.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */

#include "defs.h"
#include "displaced-stepping.h"
#include "cli/cli-cmds.h"
#include "command.h"
#include "gdbarch.h"
#include "gdbcore.h"
#include "gdbthread.h"
#include "inferior.h"
#include "regcache.h"
#include "target/target.h"

/* Default destructor for displaced_step_copy_insn_closure.  */

displaced_step_copy_insn_closure::~displaced_step_copy_insn_closure ()
  = default;
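
/* When true, print debug output about displaced stepping; backing
   variable for the "set debug displaced" setting.  */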
bool debug_displaced = false;
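
/* Show callback for the "show debug displaced" setting.  */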
static void
show_debug_displaced (struct ui_file *file, int from_tty,
		      struct cmd_list_element *c, const char *value)
{
  gdb_printf (file, _("Displaced stepping debugging is %s.\n"), value);
}
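
/* Prepare a displaced step for THREAD: look for an unused buffer, save
   its original contents, copy the instruction at THREAD's current PC
   into it, and set DISPLACED_PC to the address where the thread should
   resume (the start of the chosen buffer).  */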
displaced_step_prepare_status
displaced_step_buffers::prepare (thread_info *thread, CORE_ADDR &displaced_pc)
{
  gdb_assert (!thread->displaced_step_state.in_progress ());
  /* Sanity check: the thread should not be using a buffer at this point.  */
  for (displaced_step_buffer &buf : m_buffers)
    gdb_assert (buf.current_thread != thread);
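  /* Fetch the thread's register, address-space and architecture context.
     The architecture's maximum instruction length bounds how many bytes of
     each candidate buffer need to be considered below.  */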
  regcache *regcache = get_thread_regcache (thread);
  const address_space *aspace = regcache->aspace ();
  gdbarch *arch = regcache->arch ();
  ULONGEST len = gdbarch_max_insn_length (arch);
  /* Search for an unused buffer.  */
  displaced_step_buffer *buffer = nullptr;
  displaced_step_prepare_status fail_status
    = DISPLACED_STEP_PREPARE_STATUS_CANT;
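  /* CANT is the default failure: the caller should fall back to stepping
     over in-line.  If a suitable buffer exists but is currently busy,
     UNAVAILABLE is reported instead, meaning the step over should simply be
     retried later (see displaced_step_prepare_status in
     displaced-stepping.h).  */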
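  /* Pick the first candidate that is both free and not overlapped by a
     breakpoint.  */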
  for (displaced_step_buffer &candidate : m_buffers)
    {
      bool bp_in_range = breakpoint_in_range_p (aspace, candidate.addr, len);
      bool is_free = candidate.current_thread == nullptr;

      if (!bp_in_range)
        {
          if (is_free)
            {
              buffer = &candidate;
              break;
            }
          else
            {
              /* This buffer would be suitable, but it's used right now.  */
              fail_status = DISPLACED_STEP_PREPARE_STATUS_UNAVAILABLE;
            }
        }
      else
        {
          /* There's a breakpoint set in the scratch pad location range
             (which is usually around the entry point).  We'd either
             install it before resuming, which would overwrite/corrupt the
             scratch pad, or if it was already inserted, this displaced
             step would overwrite it.  The latter is OK in the sense that
             we already assume that no thread is going to execute the code
             in the scratch pad range (after initial startup) anyway, but
             the former is unacceptable.  Simply punt and fall back to
             stepping over this breakpoint in-line.  */
          displaced_debug_printf ("breakpoint set in displaced stepping "
                                  "buffer at %s, can't use.",
                                  paddress (arch, candidate.addr));
        }
    }
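  /* No free buffer was found, so the displaced step cannot be prepared
     right now; report the failure so the caller can queue this thread
     and retry once a buffer is released.  */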
  if (buffer == nullptr)
    return fail_status;

  displaced_debug_printf ("selected buffer at %s",
			  paddress (arch, buffer->addr));

  /* Save the original PC of the thread.  */
  buffer->original_pc = regcache_read_pc (regcache);

  /* Return displaced step buffer address to caller.  */
  displaced_pc = buffer->addr;
/* Save the original contents of the displaced stepping buffer. */
buffer->saved_copy.resize (len);
int status = target_read_memory (buffer->addr,
buffer->saved_copy.data (), len);
if (status != 0)
throw_error (MEMORY_ERROR,
_("Error accessing memory address %s (%s) for "
"displaced-stepping scratch space."),
paddress (arch, buffer->addr), safe_strerror (status));
2020-12-05 05:43:55 +08:00
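  /* Log the bytes just saved out of the scratch buffer; these are the
     original contents that will be restored once the displaced step
     finishes (visible with "set debug displaced on").  */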
  displaced_debug_printf ("saved %s: %s",
                          paddress (arch, buffer->addr),
                          displaced_step_dump_bytes
                            (buffer->saved_copy.data (), len).c_str ());
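
  /* Ask the architecture to copy (and possibly adjust) the instruction
     at BUFFER->ORIGINAL_PC into the scratch buffer.  The returned closure
     holds whatever per-instruction state the gdbarch needs later to fix
     up the thread's registers and memory after the single-step.  */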
  /* Save this in a local variable first, so it's released if code below
     throws.  */
  displaced_step_copy_insn_closure_up copy_insn_closure
    = gdbarch_displaced_step_copy_insn (arch, buffer->original_pc,
                                        buffer->addr, regcache);
if (copy_insn_closure == nullptr)
{
/* The architecture doesn't know how or want to displaced step
	 this instruction or instruction sequence.  Fall back to
	 stepping over the breakpoint in-line.  */
return DISPLACED_STEP_PREPARE_STATUS_CANT;
}
/* Resume execution at the copy. */
regcache_write_pc (regcache, buffer->addr);
/* This marks the buffer as being in use. */
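  /* A buffer whose current_thread is non-NULL is considered busy, so later
     prepare calls for other threads will look for a different, free
     buffer.  */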
buffer->current_thread = thread;
/* Save this, now that we know everything went fine. */
buffer->copy_insn_closure = std::move (copy_insn_closure);
  /* Tell infrun not to try preparing a displaced step again for this
     inferior if all buffers are taken.  */
thread->inf->displaced_step_state.unavailable = true;
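
  /* If at least one buffer is still free, the inferior can prepare more
     displaced steps.  */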
  for (const displaced_step_buffer &buf : m_buffers)
    {
      if (buf.current_thread == nullptr)
        {
          thread->inf->displaced_step_state.unavailable = false;
          break;
        }
    }

  return DISPLACED_STEP_PREPARE_STATUS_OK;
}
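
/* Write LEN bytes from the host buffer MYADDR to target memory at MEMADDR,
   temporarily making PTID the current inferior_ptid while writing.  */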
static void
write_memory_ptid (ptid_t ptid, CORE_ADDR memaddr,
                   const gdb_byte *myaddr, int len)
{
  scoped_restore save_inferior_ptid = make_scoped_restore (&inferior_ptid);
  inferior_ptid = ptid;
  write_memory (memaddr, myaddr, len);
}
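
/* Return true if the instruction copied to the scratch buffer is considered
   to have executed successfully, given that the stepped thread stopped with
   SIGNAL.  A stop caused by a steppable or non-steppable watchpoint means
   the watchpoint fired instead of (or before) the instruction completing.  */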
static bool
displaced_step_instruction_executed_successfully (gdbarch *arch,
                                                  gdb_signal signal)
{
  if (signal != GDB_SIGNAL_TRAP)
    return false;

  if (target_stopped_by_watchpoint ())
    {
      if (gdbarch_have_nonsteppable_watchpoint (arch)
          || target_have_steppable_watchpoint ())
        return false;
    }

  return true;
}
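
/* Fix up state after THREAD, which stopped with signal SIG, finished a
   displaced step: restore the buffer's original contents, and if the copied
   instruction executed successfully, apply the architecture fixup so its
   effects appear at the original PC.  The buffer then becomes free for
   reuse.  */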
displaced_step_finish_status
displaced_step_buffers::finish (gdbarch *arch, thread_info *thread,
                                gdb_signal sig)
gdb: move displaced stepping logic to gdbarch, allow starting concurrent displaced steps Today, GDB only allows a single displaced stepping operation to happen per inferior at a time. There is a single displaced stepping buffer per inferior, whose address is fixed (obtained with gdbarch_displaced_step_location), managed by infrun.c. In the case of the AMD ROCm target [1] (in the context of which this work has been done), it is typical to have thousands of threads (or waves, in SMT terminology) executing the same code, hitting the same breakpoint (possibly conditional) and needing to to displaced step it at the same time. The limitation of only one displaced step executing at a any given time becomes a real bottleneck. To fix this bottleneck, we want to make it possible for threads of a same inferior to execute multiple displaced steps in parallel. This patch builds the foundation for that. In essence, this patch moves the task of preparing a displaced step and cleaning up after to gdbarch functions. This allows using different schemes for allocating and managing displaced stepping buffers for different platforms. The gdbarch decides how to assign a buffer to a thread that needs to execute a displaced step. On the ROCm target, we are able to allocate one displaced stepping buffer per thread, so a thread will never have to wait to execute a displaced step. On Linux, the entry point of the executable if used as the displaced stepping buffer, since we assume that this code won't get used after startup. From what I saw (I checked with a binary generated against glibc and musl), on AMD64 we have enough space there to fit two displaced stepping buffers. A subsequent patch makes AMD64/Linux use two buffers. In addition to having multiple displaced stepping buffers, there is also the idea of sharing displaced stepping buffers between threads. Two threads doing displaced steps for the same PC could use the same buffer at the same time. Two threads stepping over the same instruction (same opcode) at two different PCs may also be able to share a displaced stepping buffer. This is an idea for future patches, but the architecture built by this patch is made to allow this. Now, the implementation details. The main part of this patch is moving the responsibility of preparing and finishing a displaced step to the gdbarch. Before this patch, preparing a displaced step is driven by the displaced_step_prepare_throw function. It does some calls to the gdbarch to do some low-level operations, but the high-level logic is there. The steps are roughly: - Ask the gdbarch for the displaced step buffer location - Save the existing bytes in the displaced step buffer - Ask the gdbarch to copy the instruction into the displaced step buffer - Set the pc of the thread to the beginning of the displaced step buffer Similarly, the "fixup" phase, executed after the instruction was successfully single-stepped, is driven by the infrun code (function displaced_step_finish). The steps are roughly: - Restore the original bytes in the displaced stepping buffer - Ask the gdbarch to fixup the instruction result (adjust the target's registers or memory to do as if the instruction had been executed in its original location) The displaced_step_inferior_state::step_thread field indicates which thread (if any) is currently using the displaced stepping buffer, so it is used by displaced_step_prepare_throw to check if the displaced stepping buffer is free to use or not. 
This patch defers the whole task of preparing a displaced step, and of
cleaning up afterwards, to the gdbarch.  Two main new gdbarch methods are
added, with the following semantics:

- gdbarch_displaced_step_prepare: Prepare for the given thread to execute a
  displaced step of the instruction located at its current PC.  Upon return,
  everything should be ready for GDB to resume the thread (with either a
  single step or continue, as indicated by
  gdbarch_displaced_step_hw_singlestep) to make it displaced step the
  instruction.

- gdbarch_displaced_step_finish: Called when the thread stopped after having
  started a displaced step.  Verify whether the instruction was executed and,
  if so, apply any fixup required to compensate for the fact that the
  instruction was executed at a different place than its original pc.
  Release any resources that were allocated for this displaced step.  Upon
  return, everything should be ready for GDB to resume the thread in its
  "normal" code path.

The displaced_step_prepare_throw function now pretty much just offloads to
gdbarch_displaced_step_prepare, and the displaced_step_finish function
offloads to gdbarch_displaced_step_finish.  The
gdbarch_displaced_step_location method is now unnecessary, so it is removed.
Indeed, the core of GDB doesn't know how many displaced step buffers there
are nor where they are.

To keep the existing behavior for existing architectures, the logic that was
previously implemented in infrun.c for preparing and finishing a displaced
step is moved to displaced-stepping.c, to the displaced_step_buffer class.
Architectures are modified to implement the new gdbarch methods using this
class.  The behavior is not expected to change.

The other important change (which arises from the above) is that the core of
GDB no longer prevents concurrent displaced steps.  Before this patch,
start_step_over walks the global step over chain and tries to initiate a
step over (whether it is in-line or displaced).  It follows these rules:

- if an in-line step is in progress (in any inferior), don't start any other
  step over
- if a displaced step is in progress for an inferior, don't start another
  displaced step for that inferior

After starting a displaced step for a given inferior, it won't start another
displaced step for that inferior.

In the new code, start_step_over simply tries to initiate step overs for all
the threads in the list.  But because threads may be added back to the
global list while start_step_over iterates over it trying to initiate step
overs, start_step_over now begins by stealing the global queue into a local
queue and iterates on the local queue.  In the typical case, each thread
will either:

- have initiated a displaced step and been resumed
- have been added back to the global step over queue by
  displaced_step_prepare_throw, because the gdbarch will have returned that
  there aren't enough resources (i.e. buffers) to initiate a displaced step
  for that thread

Lastly, if start_step_over initiates an in-line step, it stops iterating and
moves whatever threads remain in its local step over queue back to the
global step over queue.  (A minimal sketch of this queue-stealing pattern
follows the ChangeLog below.)

Two other gdbarch methods are added, to handle some slightly annoying corner
cases.  They feel awkwardly specific to these cases, but I don't see any way
around them:

- gdbarch_displaced_step_copy_insn_closure_by_addr: in arm_pc_is_thumb,
  arm-tdep.c wants to get the closure for a given buffer address.

- gdbarch_displaced_step_restore_all_in_ptid: when a process forks (at least
  on Linux), the address space is copied.  If some displaced step buffers
  were in use at the time of the fork, we need to restore the original bytes
  in the child's address space.

These two adjustments are also made in infrun.c:

- prepare_for_detach: there may be multiple threads doing displaced steps
  when we detach, so wait until all of them are done
- handle_inferior_event: when we handle a fork event for a given thread,
  it's possible that other threads are doing a displaced step at the same
  time.  Make sure to restore the displaced step buffer contents in the
  child for them.

[1] https://github.com/ROCm-Developer-Tools/ROCgdb

gdb/ChangeLog:

        * displaced-stepping.h (struct displaced_step_copy_insn_closure): Adjust comments.
        (struct displaced_step_inferior_state) <step_thread, step_gdbarch, step_closure, step_original, step_copy, step_saved_copy>: Remove fields.
        (struct displaced_step_thread_state): New.
        (struct displaced_step_buffer): New.
        * displaced-stepping.c (displaced_step_buffer::prepare): New.
        (write_memory_ptid): Move from infrun.c.
        (displaced_step_instruction_executed_successfully): New, factored out of displaced_step_finish.
        (displaced_step_buffer::finish): New.
        (displaced_step_buffer::copy_insn_closure_by_addr): New.
        (displaced_step_buffer::restore_in_ptid): New.
        * gdbarch.sh (displaced_step_location): Remove.
        (displaced_step_prepare, displaced_step_finish, displaced_step_copy_insn_closure_by_addr, displaced_step_restore_all_in_ptid): New.
        * gdbarch.c: Re-generate.
        * gdbarch.h: Re-generate.
        * gdbthread.h (class thread_info) <displaced_step_state>: New field.
        (thread_step_over_chain_remove): New declaration.
        (thread_step_over_chain_next): New declaration.
        (thread_step_over_chain_length): New declaration.
        * thread.c (thread_step_over_chain_remove): Make non-static.
        (thread_step_over_chain_next): New.
        (global_thread_step_over_chain_next): Use thread_step_over_chain_next.
        (thread_step_over_chain_length): New.
        (global_thread_step_over_chain_enqueue): Add debug print.
        (global_thread_step_over_chain_remove): Add debug print.
        * infrun.h (get_displaced_step_copy_insn_closure_by_addr): Remove.
        * infrun.c (get_displaced_stepping_state): New.
        (displaced_step_in_progress_any_inferior): Remove.
        (displaced_step_in_progress_thread): Adjust.
        (displaced_step_in_progress): Adjust.
        (displaced_step_in_progress_any_thread): New.
        (get_displaced_step_copy_insn_closure_by_addr): Remove.
        (gdbarch_supports_displaced_stepping): Use gdbarch_displaced_step_prepare_p.
        (displaced_step_reset): Change parameter from inferior to thread.
        (displaced_step_prepare_throw): Implement using gdbarch_displaced_step_prepare.
        (write_memory_ptid): Move to displaced-step.c.
        (displaced_step_restore): Remove.
        (displaced_step_finish): Implement using gdbarch_displaced_step_finish.
        (start_step_over): Allow starting more than one displaced step.
        (prepare_for_detach): Handle possibly multiple threads doing displaced steps.
        (handle_inferior_event): Handle possibility that fork event happens while another thread displaced steps.
        * linux-tdep.h (linux_displaced_step_prepare): New.
        (linux_displaced_step_finish): New.
        (linux_displaced_step_copy_insn_closure_by_addr): New.
        (linux_displaced_step_restore_all_in_ptid): New.
        (linux_init_abi): Add supports_displaced_step parameter.
        * linux-tdep.c (struct linux_info) <disp_step_buf>: New field.
        (linux_displaced_step_prepare): New.
        (linux_displaced_step_finish): New.
        (linux_displaced_step_copy_insn_closure_by_addr): New.
        (linux_displaced_step_restore_all_in_ptid): New.
        (linux_init_abi): Add supports_displaced_step parameter, register displaced step methods if true.
        (_initialize_linux_tdep): Register inferior_execd observer.
        * amd64-linux-tdep.c (amd64_linux_init_abi_common): Add supports_displaced_step parameter, adjust call to linux_init_abi.  Remove call to set_gdbarch_displaced_step_location.
        (amd64_linux_init_abi): Adjust call to amd64_linux_init_abi_common.
        (amd64_x32_linux_init_abi): Likewise.
        * aarch64-linux-tdep.c (aarch64_linux_init_abi): Adjust call to linux_init_abi.  Remove call to set_gdbarch_displaced_step_location.
        * arm-linux-tdep.c (arm_linux_init_abi): Likewise.
        * i386-linux-tdep.c (i386_linux_init_abi): Likewise.
        * alpha-linux-tdep.c (alpha_linux_init_abi): Adjust call to linux_init_abi.
        * arc-linux-tdep.c (arc_linux_init_osabi): Likewise.
        * bfin-linux-tdep.c (bfin_linux_init_abi): Likewise.
        * cris-linux-tdep.c (cris_linux_init_abi): Likewise.
        * csky-linux-tdep.c (csky_linux_init_abi): Likewise.
        * frv-linux-tdep.c (frv_linux_init_abi): Likewise.
        * hppa-linux-tdep.c (hppa_linux_init_abi): Likewise.
        * ia64-linux-tdep.c (ia64_linux_init_abi): Likewise.
        * m32r-linux-tdep.c (m32r_linux_init_abi): Likewise.
        * m68k-linux-tdep.c (m68k_linux_init_abi): Likewise.
        * microblaze-linux-tdep.c (microblaze_linux_init_abi): Likewise.
        * mips-linux-tdep.c (mips_linux_init_abi): Likewise.
        * mn10300-linux-tdep.c (am33_linux_init_osabi): Likewise.
        * nios2-linux-tdep.c (nios2_linux_init_abi): Likewise.
        * or1k-linux-tdep.c (or1k_linux_init_abi): Likewise.
        * riscv-linux-tdep.c (riscv_linux_init_abi): Likewise.
        * s390-linux-tdep.c (s390_linux_init_abi_any): Likewise.
        * sh-linux-tdep.c (sh_linux_init_abi): Likewise.
        * sparc-linux-tdep.c (sparc32_linux_init_abi): Likewise.
        * sparc64-linux-tdep.c (sparc64_linux_init_abi): Likewise.
        * tic6x-linux-tdep.c (tic6x_uclinux_init_abi): Likewise.
        * tilegx-linux-tdep.c (tilegx_linux_init_abi): Likewise.
        * xtensa-linux-tdep.c (xtensa_linux_init_abi): Likewise.
        * ppc-linux-tdep.c (ppc_linux_init_abi): Adjust call to linux_init_abi.  Remove call to set_gdbarch_displaced_step_location.
        * arm-tdep.c (arm_pc_is_thumb): Call gdbarch_displaced_step_copy_insn_closure_by_addr instead of get_displaced_step_copy_insn_closure_by_addr.
        * rs6000-aix-tdep.c (rs6000_aix_init_osabi): Adjust calls to clear gdbarch methods.
        * rs6000-tdep.c (struct ppc_inferior_data): New structure.
        (get_ppc_per_inferior): New function.
        (ppc_displaced_step_prepare): New function.
        (ppc_displaced_step_finish): New function.
        (ppc_displaced_step_restore_all_in_ptid): New function.
        (rs6000_gdbarch_init): Register new gdbarch methods.
        * s390-tdep.c (s390_gdbarch_init): Don't call set_gdbarch_displaced_step_location, set new gdbarch methods.

gdb/testsuite/ChangeLog:

        * gdb.arch/amd64-disp-step-avx.exp: Adjust pattern.
        * gdb.threads/forking-threads-plus-breakpoint.exp: Likewise.
        * gdb.threads/non-stop-fair-events.exp: Likewise.

Change-Id: I387cd235a442d0620ec43608fd3dc0097fcbf8c8
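To make the queue-stealing behaviour of start_step_over described above a bit
more concrete, here is a minimal, self-contained sketch of the pattern.  It
is illustrative only, not GDB code: the thread type, step_over_result,
try_initiate_step_over and global_step_over_queue are stand-ins invented for
the example; in GDB the queue is the global step over chain manipulated
through the thread_step_over_chain_* / global_thread_step_over_chain_*
helpers listed in the ChangeLog above.

#include <list>

struct thread;  /* Stand-in for GDB's thread_info.  */

enum class step_over_result { displaced_started, inline_started, deferred };

/* Stand-in for GDB's global step over chain.  */
static std::list<thread *> global_step_over_queue;

/* Stand-in: in GDB this would try to start an in-line or displaced step
   over, possibly finding that no displaced step buffer is free.  */
static step_over_result
try_initiate_step_over (thread *)
{
  return step_over_result::displaced_started;
}

static void
start_step_over_sketch ()
{
  /* Steal the global queue into a local one, so that threads re-enqueued
     while we iterate (for lack of a free buffer) are not visited again in
     this pass.  */
  std::list<thread *> local_queue;
  local_queue.swap (global_step_over_queue);

  while (!local_queue.empty ())
    {
      thread *tp = local_queue.front ();
      local_queue.pop_front ();

      switch (try_initiate_step_over (tp))
        {
        case step_over_result::deferred:
          /* Not enough resources (e.g. no free buffer): put the thread back
             on the global queue so it is retried later.  */
          global_step_over_queue.push_back (tp);
          break;

        case step_over_result::inline_started:
          /* An in-line step over blocks everything else: stop iterating and
             give the remaining threads back to the global queue.  */
          global_step_over_queue.splice (global_step_over_queue.end (),
                                         local_queue);
          return;

        case step_over_result::displaced_started:
          /* The thread has been resumed; nothing more to do here.  */
          break;
        }
    }
}

The sketch deliberately ignores in-line step over bookkeeping and thread
state; it only shows why stealing into a local queue avoids looping forever
when the gdbarch keeps reporting that no buffer is available.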
{
gdb_assert (thread->displaced_step_state.in_progress ());
/* Find the buffer this thread was using. */
displaced_step_buffer *buffer = nullptr;
for (displaced_step_buffer &candidate : m_buffers)
{
if (candidate.current_thread == thread)
{
buffer = &candidate;
break;
}
}
gdb_assert (buffer != nullptr);
  /* Move the copy_insn_closure into a local variable so that it is released
     even if something goes wrong below.  */
  displaced_step_copy_insn_closure_up copy_insn_closure
= std::move (buffer->copy_insn_closure);
gdb_assert (copy_insn_closure != nullptr);
/* Reset BUFFER->CURRENT_THREAD immediately to mark the buffer as available,
in case something goes wrong below. */
buffer->current_thread = nullptr;
  /* Now that a buffer has been freed, tell infrun it can ask us to prepare
     a displaced step again for this inferior.  Do this here in case
     something goes wrong below.  */
thread->inf->displaced_step_state.unavailable = false;
ULONGEST len = gdbarch_max_insn_length (arch);
/* Restore memory of the buffer. */
write_memory_ptid (thread->ptid, buffer->addr,
buffer->saved_copy.data (), len);
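  /* Note the buffer is restored unconditionally, before we even look at
     whether the copied instruction executed: the original bytes must be
     back in place no matter how the step turned out.  */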
displaced_debug_printf ("restored %s %s",
thread->ptid.to_string ().c_str (),
paddress (arch, buffer->addr));
regcache *rc = get_thread_regcache (thread);
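  /* Decide whether the copied instruction actually executed.  Roughly, the
     step counts as successful when the thread stopped with the trace trap
     expected from the single-step, rather than with some other signal
     raised by the copied instruction.  */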
bool instruction_executed_successfully
= displaced_step_instruction_executed_successfully (arch, sig);
if (instruction_executed_successfully)
{
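      /* Fixup phase: ask the gdbarch to adjust the thread's registers (and
	 any memory the instruction touched) so the overall effect is as if
	 the instruction had run at its original PC rather than in the
	 scratch buffer.  The closure carries whatever per-instruction state
	 the copy_insn hook recorded when the step was prepared.  */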
gdbarch_displaced_step_fixup (arch, copy_insn_closure.get (),
buffer->original_pc,
buffer->addr, rc);
return DISPLACED_STEP_FINISH_STATUS_OK;
}
else
{
/* Since the instruction didn't complete, all we can do is relocate the
PC. */
CORE_ADDR pc = regcache_read_pc (rc);
pc = buffer->original_pc + (pc - buffer->addr);
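      /* For example (hypothetical addresses): if the buffer lives at 0x1000,
	 the original instruction was at 0x400500 and the thread stopped at
	 0x1002, the offset into the buffer (2) is carried over and the PC is
	 relocated to 0x400502, the same offset past the original PC.  */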
2020-12-05 05:43:55 +08:00
      regcache_write_pc (rc, pc);
      return DISPLACED_STEP_FINISH_STATUS_NOT_EXECUTED;
    }
}
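
/* Return the copy_insn_closure of the displaced step buffer located at
   ADDR, or nullptr if no buffer at that address is currently in use.  */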
const displaced_step_copy_insn_closure *
displaced_step_buffers::copy_insn_closure_by_addr (CORE_ADDR addr)
{
  for (const displaced_step_buffer &buffer : m_buffers)
    {
      if (addr == buffer.addr)
	return buffer.copy_insn_closure.get ();
    }

  return nullptr;
}
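
/* Restore the original contents of every displaced step buffer that is
   currently in use, in the address space of PTID.  */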
void
displaced_step_buffers::restore_in_ptid (ptid_t ptid)
{
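  /* Restore the original contents of every buffer that is currently in
     use by a displaced-stepping thread.  */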
  for (const displaced_step_buffer &buffer : m_buffers)
    {
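      /* A buffer with no current thread is not being used for a displaced
         step, so there is nothing to restore for it.  */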
      if (buffer.current_thread == nullptr)
        continue;

      regcache *regcache = get_thread_regcache (buffer.current_thread);
      gdbarch *arch = regcache->arch ();
      ULONGEST len = gdbarch_max_insn_length (arch);
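
      /* Write the saved original bytes back into the buffer, in the
         address space of PTID.  */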
      write_memory_ptid (ptid, buffer.addr, buffer.saved_copy.data (), len);

      displaced_debug_printf ("restored in ptid %s %s",
                              ptid.to_string ().c_str (),
                              paddress (arch, buffer.addr));
    }
}
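
/* Register the "set/show debug displaced" commands.  */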
void _initialize_displaced_stepping ();
void
_initialize_displaced_stepping ()
{
  add_setshow_boolean_cmd ("displaced", class_maintenance,
                           &debug_displaced, _("\
Set displaced stepping debugging."), _("\
Show displaced stepping debugging."), _("\
When non-zero, displaced stepping specific debugging is enabled."),
                           NULL,
                           show_debug_displaced,
                           &setdebuglist, &showdebuglist);
}