gdb: Coalesce/aggregate (async) vCont packets/actions
Currently, with "maint set target-non-stop on", that is, when gdb connects with the non-stop/asynchronous variant of the remote protocol, even with "set non-stop off", GDB always sends one vCont packet per thread resumed.  This patch makes GDB aggregate and coalesce vCont packets, so we send vCont packets like "vCont;s:p1.1;c" in non-stop mode too.

Basically, this is done by:

- Adding a new target method target_commit_resume that is called after calling target_resume one or more times.  When resuming a batch of threads, we'll only call target_commit_resume once after calling target_resume for all threads.

- Making the remote target defer sending the actual vCont packet to target_commit_resume.

Special care must be taken to avoid sending a vCont action with a "wildcard" thread-id (all threads of process / all threads) when that would resume threads/processes that should not be resumed.  See remote_commit_resume comments for details.

Unlike all-stop's remote_resume implementation, this handles the case of too many actions resulting in a too-big vCont packet, by flushing the vCont packet and starting a new one.  E.g., imagining that the "c" action in:

  vCont;s:1;c

overflows the packet buffer, we split the actions like:

  vCont;s:1
  vCont;c

Tested on x86_64 Fedora 20, with and without "maint set target-non-stop on".  Also tested with a hack that makes remote_commit_resume flush the vCont packet after every action appended (which caught a few bugs).

gdb/ChangeLog:
2016-10-26  Pedro Alves  <palves@redhat.com>

        * inferior.h (ALL_NON_EXITED_INFERIORS): New macro.
        * infrun.c (do_target_resume): Call target_commit_resume.
        (proceed): Defer target_commit_resume while looping over threads,
        resuming them.  Call target_commit_resume at the end.
        * record-btrace.c (record_btrace_commit_resume): New function.
        (init_record_btrace_ops): Install it as to_commit_resume method.
        * record-full.c (record_full_commit_resume): New function.
        (record_full_wait_1): Call the beneath target's to_commit_resume
        method.
        (init_record_full_ops): Install record_full_commit_resume as
        to_commit_resume method.
        * remote.c (struct private_thread_info) <last_resume_step,
        last_resume_sig, vcont_resumed>: New fields.
        (remote_add_thread): Set the new thread's vcont_resumed flag.
        (demand_private_info): Delete.
        (get_private_info_thread, get_private_info_ptid): New functions.
        (remote_update_thread_list): Adjust.
        (process_initial_stop_replies): Clear the thread's vcont_resumed
        flag.
        (remote_resume): If connected in non-stop mode, record the resume
        request and return early.
        (struct private_inferior): New.
        (struct vcont_builder): New.
        (vcont_builder_restart, vcont_builder_flush)
        (vcont_builder_push_action): New functions.
        (MAX_ACTION_SIZE): New macro.
        (remote_commit_resume): New function.
        (thread_pending_fork_status, is_pending_fork_parent_thread): New
        functions.
        (check_pending_event_prevents_wildcard_vcont_callback)
        (check_pending_events_prevent_wildcard_vcont): New functions.
        (process_stop_reply): Adjust.  Clear the thread's vcont_resumed
        flag.
        (init_remote_ops): Install remote_commit_resume.
        * target-delegates.c: Regenerate.
        * target.c (defer_target_commit_resume): New global.
        (target_commit_resume, make_cleanup_defer_target_commit_resume):
        New functions.
        * target.h (struct target_ops) <to_commit_resume>: New field.
        (target_resume): Update comments.
        (target_commit_resume): New declaration.
parent 5a04c4cf5d
commit 85ad3aaf40
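In caller terms, the new flow is: prepare every resumption with target_resume, then commit the whole batch once with target_commit_resume.  A condensed sketch of the pattern the infrun.c changes below implement (simplified for illustration, not a verbatim excerpt; thread selection and the do_target_resume indirection are omitted):

  /* Batch-resume a set of threads: record each resumption first,
     then commit once.  While the deferral cleanup is active,
     intermediate target_commit_resume calls are no-ops.  */
  struct cleanup *defer = make_cleanup_defer_target_commit_resume ();
  struct thread_info *tp;

  ALL_NON_EXITED_THREADS (tp)
    target_resume (tp->ptid, 0, GDB_SIGNAL_0);  /* remote only records this */

  do_cleanups (defer);      /* re-enable committing */
  target_commit_resume ();  /* remote now sends the coalesced vCont packet(s) */

With the remote target, each target_resume call in the loop just stores the requested action in the thread's private data; the single target_commit_resume at the end is what builds and sends the vCont packet(s).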
gdb/ChangeLog
@@ -1,3 +1,47 @@
+2016-10-26  Pedro Alves  <palves@redhat.com>
+
+        * inferior.h (ALL_NON_EXITED_INFERIORS): New macro.
+        * infrun.c (do_target_resume): Call target_commit_resume.
+        (proceed): Defer target_commit_resume while looping over threads,
+        resuming them.  Call target_commit_resume at the end.
+        * record-btrace.c (record_btrace_commit_resume): New function.
+        (init_record_btrace_ops): Install it as to_commit_resume method.
+        * record-full.c (record_full_commit_resume): New function.
+        (record_full_wait_1): Call the beneath target's to_commit_resume
+        method.
+        (init_record_full_ops): Install record_full_commit_resume as
+        to_commit_resume method.
+        * remote.c (struct private_thread_info) <last_resume_step,
+        last_resume_sig, vcont_resumed>: New fields.
+        (remote_add_thread): Set the new thread's vcont_resumed flag.
+        (demand_private_info): Delete.
+        (get_private_info_thread, get_private_info_ptid): New functions.
+        (remote_update_thread_list): Adjust.
+        (process_initial_stop_replies): Clear the thread's vcont_resumed
+        flag.
+        (remote_resume): If connected in non-stop mode, record the resume
+        request and return early.
+        (struct private_inferior): New.
+        (struct vcont_builder): New.
+        (vcont_builder_restart, vcont_builder_flush)
+        (vcont_builder_push_action): New functions.
+        (MAX_ACTION_SIZE): New macro.
+        (remote_commit_resume): New function.
+        (thread_pending_fork_status, is_pending_fork_parent_thread): New
+        functions.
+        (check_pending_event_prevents_wildcard_vcont_callback)
+        (check_pending_events_prevent_wildcard_vcont): New functions.
+        (process_stop_reply): Adjust.  Clear the thread's vcont_resumed
+        flag.
+        (init_remote_ops): Install remote_commit_resume.
+        * target-delegates.c: Regenerate.
+        * target.c (defer_target_commit_resume): New global.
+        (target_commit_resume, make_cleanup_defer_target_commit_resume):
+        New functions.
+        * target.h (struct target_ops) <to_commit_resume>: New field.
+        (target_resume): Update comments.
+        (target_commit_resume): New declaration.
+
 2016-10-26  Pedro Alves  <palves@redhat.com>
 
         * inferior.c (exit_inferior_1): Free 'priv'.
gdb/inferior.h
@@ -513,6 +513,12 @@ extern struct cleanup *save_current_inferior (void);
 #define ALL_INFERIORS(I) \
 for ((I) = inferior_list; (I); (I) = (I)->next)
 
+/* Traverse all non-exited inferiors.  */
+
+#define ALL_NON_EXITED_INFERIORS(I) \
+ALL_INFERIORS (I) \
+if ((I)->pid != 0)
+
 extern struct inferior *inferior_list;
 
 /* Prune away automatically added inferiors that aren't required
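The new macro is used the same way as ALL_INFERIORS but skips inferiors whose process has exited (pid == 0).  A minimal usage sketch (illustrative only, mirroring how remote.c below walks live inferiors):

  struct inferior *inf;

  ALL_NON_EXITED_INFERIORS (inf)
    {
      /* Only inferiors whose process is still alive reach this point.  */
      printf_filtered (_("inferior %d, pid %d\n"), inf->num, inf->pid);
    }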
gdb/infrun.c
@@ -2366,6 +2366,8 @@ do_target_resume (ptid_t resume_ptid, int step, enum gdb_signal sig)
 target_pass_signals ((int) GDB_SIGNAL_LAST, signal_pass);
 
 target_resume (resume_ptid, step, sig);
+
+target_commit_resume ();
 }
 
 /* Resume the inferior, but allow a QUIT.  This is useful if the user
@@ -2984,6 +2986,7 @@ proceed (CORE_ADDR addr, enum gdb_signal siggnal)
 struct execution_control_state ecss;
 struct execution_control_state *ecs = &ecss;
 struct cleanup *old_chain;
+struct cleanup *defer_resume_cleanup;
 int started;
 
 /* If we're stopped at a fork/vfork, follow the branch set by the
@@ -3125,6 +3128,8 @@ proceed (CORE_ADDR addr, enum gdb_signal siggnal)
 until the target stops again.  */
 tp->prev_pc = regcache_read_pc (regcache);
 
+defer_resume_cleanup = make_cleanup_defer_target_commit_resume ();
+
 started = start_step_over ();
 
 if (step_over_info_valid_p ())
@@ -3189,6 +3194,9 @@ proceed (CORE_ADDR addr, enum gdb_signal siggnal)
 error (_("Command aborted."));
 }
 
+do_cleanups (defer_resume_cleanup);
+target_commit_resume ();
+
 discard_cleanups (old_chain);
 
 /* Tell the event loop to wait for it to stop.  If the target
gdb/record-btrace.c
@@ -2150,6 +2150,16 @@ record_btrace_resume (struct target_ops *ops, ptid_t ptid, int step,
 }
 }
 
+/* The to_commit_resume method of target record-btrace.  */
+
+static void
+record_btrace_commit_resume (struct target_ops *ops)
+{
+if ((execution_direction != EXEC_REVERSE)
+    && !record_btrace_is_replaying (ops, minus_one_ptid))
+ops->beneath->to_commit_resume (ops->beneath);
+}
+
 /* Cancel resuming TP.  */
 
 static void
@@ -2875,6 +2885,7 @@ init_record_btrace_ops (void)
 ops->to_get_unwinder = &record_btrace_to_get_unwinder;
 ops->to_get_tailcall_unwinder = &record_btrace_to_get_tailcall_unwinder;
 ops->to_resume = record_btrace_resume;
+ops->to_commit_resume = record_btrace_commit_resume;
 ops->to_wait = record_btrace_wait;
 ops->to_stop = record_btrace_stop;
 ops->to_update_thread_list = record_btrace_update_thread_list;
gdb/record-full.c
@@ -1002,6 +1002,15 @@ record_full_resume (struct target_ops *ops, ptid_t ptid, int step,
 target_async (1);
 }
 
+/* "to_commit_resume" method for process record target.  */
+
+static void
+record_full_commit_resume (struct target_ops *ops)
+{
+if (!RECORD_FULL_IS_REPLAY)
+ops->beneath->to_commit_resume (ops->beneath);
+}
+
 static int record_full_get_sig = 0;
 
 /* SIGINT signal handler, registered by "to_wait" method.  */
@@ -1172,6 +1181,7 @@ record_full_wait_1 (struct target_ops *ops,
 "target beneath\n");
 ops->beneath->to_resume (ops->beneath, ptid, step,
 GDB_SIGNAL_0);
+ops->beneath->to_commit_resume (ops->beneath);
 continue;
 }
 }
@@ -1975,6 +1985,7 @@ init_record_full_ops (void)
 record_full_ops.to_close = record_full_close;
 record_full_ops.to_async = record_full_async;
 record_full_ops.to_resume = record_full_resume;
+record_full_ops.to_commit_resume = record_full_commit_resume;
 record_full_ops.to_wait = record_full_wait;
 record_full_ops.to_disconnect = record_disconnect;
 record_full_ops.to_detach = record_detach;
456 gdb/remote.c
@@ -452,6 +452,24 @@ struct private_thread_info
 /* This is set to the data address of the access causing the target
    to stop for a watchpoint.  */
 CORE_ADDR watch_data_address;
 
+/* Fields used by the vCont action coalescing implemented in
+   remote_resume / remote_commit_resume.  remote_resume stores each
+   thread's last resume request in these fields, so that a later
+   remote_commit_resume knows which is the proper action for this
+   thread to include in the vCont packet.  */
+
+/* True if the last target_resume call for this thread was a step
+   request, false if a continue request.  */
+int last_resume_step;
+
+/* The signal specified in the last target_resume call for this
+   thread.  */
+enum gdb_signal last_resume_sig;
+
+/* Whether this thread was already vCont-resumed on the remote
+   side.  */
+int vcont_resumed;
 };
 
 static void
@@ -1805,6 +1823,9 @@ remote_add_inferior (int fake_pid_p, int pid, int attached,
 return inf;
 }
 
+static struct private_thread_info *
+  get_private_info_thread (struct thread_info *info);
+
 /* Add thread PTID to GDB's thread list.  Tag it as executing/running
    according to RUNNING.  */
 
@@ -1812,6 +1833,7 @@ static void
 remote_add_thread (ptid_t ptid, int running, int executing)
 {
 struct remote_state *rs = get_remote_state ();
+struct thread_info *thread;
 
 /* GDB historically didn't pull threads in the initial connection
    setup.  If the remote target doesn't even have a concept of
@@ -1820,10 +1842,11 @@ remote_add_thread (ptid_t ptid, int running, int executing)
 might be confusing to the user.  Be silent then, preserving the
 age old behavior.  */
 if (rs->starting_up)
-add_thread_silent (ptid);
+thread = add_thread_silent (ptid);
 else
-add_thread (ptid);
+thread = add_thread (ptid);
+
+get_private_info_thread (thread)->vcont_resumed = executing;
 set_executing (ptid, executing);
 set_running (ptid, running);
 }
@@ -1918,25 +1941,40 @@ remote_notice_new_inferior (ptid_t currthread, int executing)
 }
 }
 
-/* Return the private thread data, creating it if necessary.  */
+/* Return THREAD's private thread data, creating it if necessary.  */
 
 static struct private_thread_info *
-demand_private_info (ptid_t ptid)
+get_private_info_thread (struct thread_info *thread)
 {
+gdb_assert (thread != NULL);
+
+if (thread->priv == NULL)
+{
+struct private_thread_info *priv = XNEW (struct private_thread_info);
+
+thread->private_dtor = free_private_thread_info;
+thread->priv = priv;
+
+priv->core = -1;
+priv->extra = NULL;
+priv->name = NULL;
+priv->name = NULL;
+priv->last_resume_step = 0;
+priv->last_resume_sig = GDB_SIGNAL_0;
+priv->vcont_resumed = 0;
+}
+
+return thread->priv;
+}
+
+/* Return PTID's private thread data, creating it if necessary.  */
+
+static struct private_thread_info *
+get_private_info_ptid (ptid_t ptid)
+{
 struct thread_info *info = find_thread_ptid (ptid);
 
 gdb_assert (info);
 
-if (!info->priv)
-{
-info->priv = XNEW (struct private_thread_info);
-info->private_dtor = free_private_thread_info;
-info->priv->core = -1;
-info->priv->extra = NULL;
-info->priv->name = NULL;
-}
-
-return info->priv;
+return get_private_info_thread (info);
 }
 
 /* Call this function as a result of
@@ -3280,7 +3318,7 @@ remote_update_thread_list (struct target_ops *ops)
 
 remote_notice_new_inferior (item->ptid, executing);
 
-info = demand_private_info (item->ptid);
+info = get_private_info_ptid (item->ptid);
 info->core = item->core;
 info->extra = item->extra;
 item->extra = NULL;
@@ -3913,6 +3951,7 @@ process_initial_stop_replies (int from_tty)
 
 set_executing (event_ptid, 0);
 set_running (event_ptid, 0);
+thread->priv->vcont_resumed = 0;
 }
 
 /* "Notice" the new inferiors before anything related to
@@ -5701,6 +5740,26 @@ remote_resume (struct target_ops *ops,
 {
 struct remote_state *rs = get_remote_state ();
 
+/* When connected in non-stop mode, the core resumes threads
+   individually.  Resuming remote threads directly in target_resume
+   would thus result in sending one packet per thread.  Instead, to
+   minimize roundtrip latency, here we just store the resume
+   request; the actual remote resumption will be done in
+   target_commit_resume / remote_commit_resume, where we'll be able
+   to do vCont action coalescing.  */
+if (target_is_non_stop_p () && execution_direction != EXEC_REVERSE)
+{
+struct private_thread_info *remote_thr;
+
+if (ptid_equal (minus_one_ptid, ptid) || ptid_is_pid (ptid))
+remote_thr = get_private_info_ptid (inferior_ptid);
+else
+remote_thr = get_private_info_ptid (ptid);
+remote_thr->last_resume_step = step;
+remote_thr->last_resume_sig = siggnal;
+return;
+}
+
 /* In all-stop, we can't mark REMOTE_ASYNC_GET_PENDING_EVENTS_TOKEN
    (explained in remote-notif.c:handle_notification) so
    remote_notif_process is not called.  We need find a place where
@@ -5736,6 +5795,283 @@ remote_resume (struct target_ops *ops,
 if (!target_is_non_stop_p ())
 rs->waiting_for_stop_reply = 1;
 }
 
+static void check_pending_events_prevent_wildcard_vcont
+  (int *may_global_wildcard_vcont);
+static int is_pending_fork_parent_thread (struct thread_info *thread);
+
+/* Private per-inferior info for target remote processes.  */
+
+struct private_inferior
+{
+/* Whether we can send a wildcard vCont for this process.  */
+int may_wildcard_vcont;
+};
+
+/* Structure used to track the construction of a vCont packet in the
+   outgoing packet buffer.  This is used to send multiple vCont
+   packets if we have more actions than would fit a single packet.  */
+
+struct vcont_builder
+{
+/* Pointer to the first action.  P points here if no action has been
+   appended yet.  */
+char *first_action;
+
+/* Where the next action will be appended.  */
+char *p;
+
+/* The end of the buffer.  Must never write past this.  */
+char *endp;
+};
+
+/* Prepare the outgoing buffer for a new vCont packet.  */
+
+static void
+vcont_builder_restart (struct vcont_builder *builder)
+{
+struct remote_state *rs = get_remote_state ();
+
+builder->p = rs->buf;
+builder->endp = rs->buf + get_remote_packet_size ();
+builder->p += xsnprintf (builder->p, builder->endp - builder->p, "vCont");
+builder->first_action = builder->p;
+}
+
+/* If the vCont packet being built has any action, send it to the
+   remote end.  */
+
+static void
+vcont_builder_flush (struct vcont_builder *builder)
+{
+struct remote_state *rs;
+
+if (builder->p == builder->first_action)
+return;
+
+rs = get_remote_state ();
+putpkt (rs->buf);
+getpkt (&rs->buf, &rs->buf_size, 0);
+if (strcmp (rs->buf, "OK") != 0)
+error (_("Unexpected vCont reply in non-stop mode: %s"), rs->buf);
+}
+
+/* The largest action is range-stepping, with its two addresses.  This
+   is more than sufficient.  If a new, bigger action is created, it'll
+   quickly trigger a failed assertion in append_resumption (and we'll
+   just bump this).  */
+#define MAX_ACTION_SIZE 200
+
+/* Append a new vCont action in the outgoing packet being built.  If
+   the action doesn't fit the packet along with previous actions, push
+   what we've got so far to the remote end and start over a new vCont
+   packet (with the new action).  */
+
+static void
+vcont_builder_push_action (struct vcont_builder *builder,
+  ptid_t ptid, int step, enum gdb_signal siggnal)
+{
+char buf[MAX_ACTION_SIZE + 1];
+char *endp;
+size_t rsize;
+
+endp = append_resumption (buf, buf + sizeof (buf),
+  ptid, step, siggnal);
+
+/* Check whether this new action would fit in the vCont packet along
+   with previous actions.  If not, send what we've got so far and
+   start a new vCont packet.  */
+rsize = endp - buf;
+if (rsize > builder->endp - builder->p)
+{
+vcont_builder_flush (builder);
+vcont_builder_restart (builder);
+
+/* Should now fit.  */
+gdb_assert (rsize <= builder->endp - builder->p);
+}
+
+memcpy (builder->p, buf, rsize);
+builder->p += rsize;
+*builder->p = '\0';
+}
+
+/* to_commit_resume implementation.  */
+
+static void
+remote_commit_resume (struct target_ops *ops)
+{
+struct remote_state *rs = get_remote_state ();
+struct inferior *inf;
+struct thread_info *tp;
+int any_process_wildcard;
+int may_global_wildcard_vcont;
+struct vcont_builder vcont_builder;
+
+/* If connected in all-stop mode, we'd send the remote resume
+   request directly from remote_resume.  Likewise if
+   reverse-debugging, as there are no defined vCont actions for
+   reverse execution.  */
+if (!target_is_non_stop_p () || execution_direction == EXEC_REVERSE)
+return;
+
+/* Try to send wildcard actions ("vCont;c" or "vCont;c:pPID.-1")
+   instead of resuming all threads of each process individually.
+   However, if any thread of a process must remain halted, we can't
+   send wildcard resumes and must send one action per thread.
+
+   Care must be taken to not resume threads/processes the server
+   side already told us are stopped, but the core doesn't know about
+   yet, because the events are still in the vStopped notification
+   queue.  For example:
+
+     #1 => vCont s:p1.1;c
+     #2 <= OK
+     #3 <= %Stopped T05 p1.1
+     #4 => vStopped
+     #5 <= T05 p1.2
+     #6 => vStopped
+     #7 <= OK
+     #8 (infrun handles the stop for p1.1 and continues stepping)
+     #9 => vCont s:p1.1;c
+
+   The last vCont above would resume thread p1.2 by mistake, because
+   the server has no idea that the event for p1.2 had not been
+   handled yet.
+
+   The server side must similarly ignore resume actions for the
+   thread that has a pending %Stopped notification (and any other
+   threads with events pending), until GDB acks the notification
+   with vStopped.  Otherwise, e.g., the following case is
+   mishandled:
+
+     #1 => g  (or any other packet)
+     #2 <= [registers]
+     #3 <= %Stopped T05 p1.2
+     #4 => vCont s:p1.1;c
+     #5 <= OK
+
+   Above, the server must not resume thread p1.2.  GDB can't know
+   that p1.2 stopped until it acks the %Stopped notification, and
+   since from GDB's perspective all threads should be running, it
+   sends a "c" action.
+
+   Finally, special care must also be given to handling fork/vfork
+   events.  A (v)fork event actually tells us that two processes
+   stopped -- the parent and the child.  Until we follow the fork,
+   we must not resume the child.  Therefore, if we have a pending
+   fork follow, we must not send a global wildcard resume action
+   (vCont;c).  We can still send process-wide wildcards though.  */
+
+/* Start by assuming a global wildcard (vCont;c) is possible.  */
+may_global_wildcard_vcont = 1;
+
+/* And assume every process is individually wildcard-able too.  */
+ALL_NON_EXITED_INFERIORS (inf)
+{
+if (inf->priv == NULL)
+inf->priv = XNEW (struct private_inferior);
+inf->priv->may_wildcard_vcont = 1;
+}
+
+/* Check for any pending events (not reported or processed yet) and
+   disable process and global wildcard resumes appropriately.  */
+check_pending_events_prevent_wildcard_vcont (&may_global_wildcard_vcont);
+
+ALL_NON_EXITED_THREADS (tp)
+{
+/* If a thread of a process is not meant to be resumed, then we
+   can't wildcard that process.  */
+if (!tp->executing)
+{
+tp->inf->priv->may_wildcard_vcont = 0;
+
+/* And if we can't wildcard a process, we can't wildcard
+   everything either.  */
+may_global_wildcard_vcont = 0;
+continue;
+}
+
+/* If a thread is the parent of an unfollowed fork, then we
+   can't do a global wildcard, as that would resume the fork
+   child.  */
+if (is_pending_fork_parent_thread (tp))
+may_global_wildcard_vcont = 0;
+}
+
+/* Now let's build the vCont packet(s).  Actions must be appended
+   from narrower to wider scopes (thread -> process -> global).  If
+   we end up with too many actions for a single packet vcont_builder
+   flushes the current vCont packet to the remote side and starts a
+   new one.  */
+vcont_builder_restart (&vcont_builder);
+
+/* Threads first.  */
+ALL_NON_EXITED_THREADS (tp)
+{
+struct private_thread_info *remote_thr = tp->priv;
+
+if (!tp->executing || remote_thr->vcont_resumed)
+continue;
+
+gdb_assert (!thread_is_in_step_over_chain (tp));
+
+if (!remote_thr->last_resume_step
+    && remote_thr->last_resume_sig == GDB_SIGNAL_0
+    && tp->inf->priv->may_wildcard_vcont)
+{
+/* We'll send a wildcard resume instead.  */
+remote_thr->vcont_resumed = 1;
+continue;
+}
+
+vcont_builder_push_action (&vcont_builder, tp->ptid,
+  remote_thr->last_resume_step,
+  remote_thr->last_resume_sig);
+remote_thr->vcont_resumed = 1;
+}
+
+/* Now check whether we can send any process-wide wildcard.  This is
+   to avoid sending a global wildcard in the case nothing is
+   supposed to be resumed.  */
+any_process_wildcard = 0;
+
+ALL_NON_EXITED_INFERIORS (inf)
+{
+if (inf->priv->may_wildcard_vcont)
+{
+any_process_wildcard = 1;
+break;
+}
+}
+
+if (any_process_wildcard)
+{
+/* If all processes are wildcard-able, then send a single "c"
+   action, otherwise, send an "all (-1) threads of process"
+   continue action for each running process, if any.  */
+if (may_global_wildcard_vcont)
+{
+vcont_builder_push_action (&vcont_builder, minus_one_ptid,
+  0, GDB_SIGNAL_0);
+}
+else
+{
+ALL_NON_EXITED_INFERIORS (inf)
+{
+if (inf->priv->may_wildcard_vcont)
+{
+vcont_builder_push_action (&vcont_builder,
+  pid_to_ptid (inf->pid),
+  0, GDB_SIGNAL_0);
+}
+}
+}
+}
+
+vcont_builder_flush (&vcont_builder);
+}
+
 
 /* Non-stop version of target_stop.  Uses `vCont;t' to stop a remote
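A concrete example of the effect: suppose infrun resumes the three threads of process 1, stepping p1.1 and continuing p1.2 and p1.3, with no pending event or unfollowed fork preventing wildcards.  Before this patch the non-stop remote target sent one packet per thread; with remote_commit_resume the step gets an explicit action and a single wildcard covers the plain continues (packet traces are illustrative):

  Before:
    => vCont;s:p1.1
    => vCont;c:p1.2
    => vCont;c:p1.3

  After:
    => vCont;s:p1.1;c

If some other process had a thread that must remain stopped, the trailing global "c" would instead be replaced with per-process wildcard actions such as "c:p1.-1".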
@@ -6102,7 +6438,7 @@ struct queue_iter_param
 struct stop_reply *output;
 };
 
-/* Determine if THREAD is a pending fork parent thread.  ARG contains
+/* Determine if THREAD_PTID is a pending fork parent thread.  ARG contains
    the pid of the process that owns the threads we want to check, or
    -1 if we want to check all threads.  */
 
@@ -6120,6 +6456,29 @@ is_pending_fork_parent (struct target_waitstatus *ws, int event_pid,
 return 0;
 }
 
+/* Return the thread's pending status used to determine whether the
+   thread is a fork parent stopped at a fork event.  */
+
+static struct target_waitstatus *
+thread_pending_fork_status (struct thread_info *thread)
+{
+if (thread->suspend.waitstatus_pending_p)
+return &thread->suspend.waitstatus;
+else
+return &thread->pending_follow;
+}
+
+/* Determine if THREAD is a pending fork parent thread.  */
+
+static int
+is_pending_fork_parent_thread (struct thread_info *thread)
+{
+struct target_waitstatus *ws = thread_pending_fork_status (thread);
+int pid = -1;
+
+return is_pending_fork_parent (ws, pid, thread->ptid);
+}
+
 /* Check whether EVENT is a fork event, and if it is, remove the
    fork child from the context list passed in DATA.  */
 
@@ -6159,12 +6518,7 @@ remove_new_fork_children (struct threads_listing_context *context)
 fork child threads from the CONTEXT list.  */
 ALL_NON_EXITED_THREADS (thread)
 {
-struct target_waitstatus *ws;
-
-if (thread->suspend.waitstatus_pending_p)
-ws = &thread->suspend.waitstatus;
-else
-ws = &thread->pending_follow;
+struct target_waitstatus *ws = thread_pending_fork_status (thread);
 
 if (is_pending_fork_parent (ws, pid, thread->ptid))
 {
@@ -6182,6 +6536,56 @@ remove_new_fork_children (struct threads_listing_context *context)
 remove_child_of_pending_fork, &param);
 }
 
+/* Check whether EVENT would prevent a global or process wildcard
+   vCont action.  */
+
+static int
+check_pending_event_prevents_wildcard_vcont_callback
+  (QUEUE (stop_reply_p) *q,
+   QUEUE_ITER (stop_reply_p) *iter,
+   stop_reply_p event,
+   void *data)
+{
+struct inferior *inf;
+int *may_global_wildcard_vcont = (int *) data;
+
+if (event->ws.kind == TARGET_WAITKIND_NO_RESUMED
+    || event->ws.kind == TARGET_WAITKIND_NO_HISTORY)
+return 1;
+
+if (event->ws.kind == TARGET_WAITKIND_FORKED
+    || event->ws.kind == TARGET_WAITKIND_VFORKED)
+*may_global_wildcard_vcont = 0;
+
+inf = find_inferior_ptid (event->ptid);
+
+/* This may be the first time we heard about this process.
+   Regardless, we must not do a global wildcard resume, otherwise
+   we'd resume this process too.  */
+*may_global_wildcard_vcont = 0;
+if (inf != NULL)
+inf->priv->may_wildcard_vcont = 0;
+
+return 1;
+}
+
+/* Check whether any event pending in the vStopped queue would prevent
+   a global or process wildcard vCont action.  Clear
+   *may_global_wildcard if we can't do a global wildcard (vCont;c),
+   and clear the event inferior's may_wildcard_vcont flag if we can't
+   do a process-wide wildcard resume (vCont;c:pPID.-1).  */
+
+static void
+check_pending_events_prevent_wildcard_vcont (int *may_global_wildcard)
+{
+struct notif_client *notif = &notif_client_stop;
+
+remote_notif_get_pending_events (notif);
+QUEUE_iterate (stop_reply_p, stop_reply_queue,
+  check_pending_event_prevents_wildcard_vcont_callback,
+  may_global_wildcard);
+}
+
 /* Remove stop replies in the queue if its pid is equal to the given
    inferior's pid.  */
@@ -6812,10 +7216,11 @@ process_stop_reply (struct stop_reply *stop_reply,
 }
 
 remote_notice_new_inferior (ptid, 0);
-remote_thr = demand_private_info (ptid);
+remote_thr = get_private_info_ptid (ptid);
 remote_thr->core = stop_reply->core;
 remote_thr->stop_reason = stop_reply->stop_reason;
 remote_thr->watch_data_address = stop_reply->watch_data_address;
+remote_thr->vcont_resumed = 0;
 }
 
 stop_reply_xfree (stop_reply);
@@ -13114,6 +13519,7 @@ Specify the serial device it is connected to\n\
 remote_ops.to_detach = remote_detach;
 remote_ops.to_disconnect = remote_disconnect;
 remote_ops.to_resume = remote_resume;
+remote_ops.to_commit_resume = remote_commit_resume;
 remote_ops.to_wait = remote_wait;
 remote_ops.to_fetch_registers = remote_fetch_registers;
 remote_ops.to_store_registers = remote_store_registers;
gdb/target-delegates.c
@@ -109,6 +109,28 @@ debug_resume (struct target_ops *self, ptid_t arg1, int arg2, enum gdb_signal ar
 fputs_unfiltered (")\n", gdb_stdlog);
 }
 
+static void
+delegate_commit_resume (struct target_ops *self)
+{
+self = self->beneath;
+self->to_commit_resume (self);
+}
+
+static void
+tdefault_commit_resume (struct target_ops *self)
+{
+}
+
+static void
+debug_commit_resume (struct target_ops *self)
+{
+fprintf_unfiltered (gdb_stdlog, "-> %s->to_commit_resume (...)\n", debug_target.to_shortname);
+debug_target.to_commit_resume (&debug_target);
+fprintf_unfiltered (gdb_stdlog, "<- %s->to_commit_resume (", debug_target.to_shortname);
+target_debug_print_struct_target_ops_p (&debug_target);
+fputs_unfiltered (")\n", gdb_stdlog);
+}
+
 static ptid_t
 delegate_wait (struct target_ops *self, ptid_t arg1, struct target_waitstatus *arg2, int arg3)
 {
@@ -4108,6 +4130,8 @@ install_delegators (struct target_ops *ops)
 ops->to_disconnect = delegate_disconnect;
 if (ops->to_resume == NULL)
 ops->to_resume = delegate_resume;
+if (ops->to_commit_resume == NULL)
+ops->to_commit_resume = delegate_commit_resume;
 if (ops->to_wait == NULL)
 ops->to_wait = delegate_wait;
 if (ops->to_fetch_registers == NULL)
@@ -4413,6 +4437,7 @@ install_dummy_methods (struct target_ops *ops)
 ops->to_detach = tdefault_detach;
 ops->to_disconnect = tdefault_disconnect;
 ops->to_resume = tdefault_resume;
+ops->to_commit_resume = tdefault_commit_resume;
 ops->to_wait = default_target_wait;
 ops->to_fetch_registers = tdefault_fetch_registers;
 ops->to_store_registers = tdefault_store_registers;
@@ -4570,6 +4595,7 @@ init_debug_target (struct target_ops *ops)
 ops->to_detach = debug_detach;
 ops->to_disconnect = debug_disconnect;
 ops->to_resume = debug_resume;
+ops->to_commit_resume = debug_commit_resume;
 ops->to_wait = debug_wait;
 ops->to_fetch_registers = debug_fetch_registers;
 ops->to_store_registers = debug_store_registers;
28 gdb/target.c
@@ -2329,6 +2329,34 @@ target_resume (ptid_t ptid, int step, enum gdb_signal signal)
 clear_inline_frame_state (ptid);
 }
 
+/* If true, target_commit_resume is a nop.  */
+static int defer_target_commit_resume;
+
+/* See target.h.  */
+
+void
+target_commit_resume (void)
+{
+struct target_ops *t;
+
+if (defer_target_commit_resume)
+return;
+
+current_target.to_commit_resume (&current_target);
+}
+
+/* See target.h.  */
+
+struct cleanup *
+make_cleanup_defer_target_commit_resume (void)
+{
+struct cleanup *old_chain;
+
+old_chain = make_cleanup_restore_integer (&defer_target_commit_resume);
+defer_target_commit_resume = 1;
+return old_chain;
+}
+
 void
 target_pass_signals (int numsigs, unsigned char *pass_signals)
 {
45 gdb/target.h
@@ -461,6 +461,8 @@ struct target_ops
 int TARGET_DEBUG_PRINTER (target_debug_print_step),
 enum gdb_signal)
 TARGET_DEFAULT_NORETURN (noprocess ());
+void (*to_commit_resume) (struct target_ops *)
+  TARGET_DEFAULT_IGNORE ();
 ptid_t (*to_wait) (struct target_ops *,
 ptid_t, struct target_waitstatus *,
 int TARGET_DEBUG_PRINTER (target_debug_print_options))
@@ -1328,19 +1330,42 @@ extern void target_detach (const char *, int);
 
 extern void target_disconnect (const char *, int);
 
-/* Resume execution of the target process PTID (or a group of
-   threads).  STEP says whether to hardware single-step or to run free;
-   SIGGNAL is the signal to be given to the target, or GDB_SIGNAL_0 for no
-   signal.  The caller may not pass GDB_SIGNAL_DEFAULT.  A specific
-   PTID means `step/resume only this process id'.  A wildcard PTID
-   (all threads, or all threads of process) means `step/resume
-   INFERIOR_PTID, and let other threads (for which the wildcard PTID
-   matches) resume with their 'thread->suspend.stop_signal' signal
-   (usually GDB_SIGNAL_0) if it is in "pass" state, or with no signal
-   if in "no pass" state.  */
+/* Resume execution (or prepare for execution) of a target thread,
+   process or all processes.  STEP says whether to hardware
+   single-step or to run free; SIGGNAL is the signal to be given to
+   the target, or GDB_SIGNAL_0 for no signal.  The caller may not pass
+   GDB_SIGNAL_DEFAULT.  A specific PTID means `step/resume only this
+   process id'.  A wildcard PTID (all threads, or all threads of
+   process) means `step/resume INFERIOR_PTID, and let other threads
+   (for which the wildcard PTID matches) resume with their
+   'thread->suspend.stop_signal' signal (usually GDB_SIGNAL_0) if it
+   is in "pass" state, or with no signal if in "no pass" state.
+
+   In order to efficiently handle batches of resumption requests,
+   targets may implement this method such that it records the
+   resumption request, but defers the actual resumption to the
+   target_commit_resume method implementation.  See
+   target_commit_resume below.  */
 extern void target_resume (ptid_t ptid, int step, enum gdb_signal signal);
 
+/* Commit a series of resumption requests previously prepared with
+   target_resume calls.
+
+   GDB always calls target_commit_resume after calling target_resume
+   one or more times.  A target may thus use this method in
+   coordination with the target_resume method to batch target-side
+   resumption requests.  In that case, the target doesn't actually
+   resume in its target_resume implementation.  Instead, it prepares
+   the resumption in target_resume, and defers the actual resumption
+   to target_commit_resume.  E.g., the remote target uses this to
+   coalesce multiple resumption requests in a single vCont packet.  */
+extern void target_commit_resume ();
+
+/* Setup to defer target_commit_resume calls, and return a cleanup
+   that reactivates target_commit_resume, if it was previously
+   active.  */
+struct cleanup *make_cleanup_defer_target_commit_resume ();
+
 /* For target_read_memory see target/target.h.  */
 
 /* The default target_ops::to_wait implementation.  */