target.h (struct spec_info_def): New opaque declaration.

2006-03-16  Maxim Kuvyrkov <mkuvyrkov@ispras.ru>

        * target.h (struct spec_info_def): New opaque declaration.
        (struct gcc_target.sched): New fields: adjust_cost_2, h_i_d_extended,
        speculate_insn, needs_block_p, gen_check,
        first_cycle_multipass_dfa_lookahead_guard_spec, set_sched_flags.
        * target-def.h (TARGET_SCHED_ADJUST_COST_2,
        TARGET_SCHED_H_I_D_EXTENDED, TARGET_SCHED_SPECULATE_INSN,
        TARGET_SCHED_NEEDS_BLOCK_P, TARGET_SCHED_GEN_CHECK,
        TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD_SPEC,
        TARGET_SCHED_SET_SCHED_FLAGS): New macros to initialize fields in
        gcc_target.sched.
        (TARGET_SCHED): Use new macros.
        * rtl.h (copy_DEPS_LIST_list): New prototype.
        * sched-int.h (struct sched_info): Change signature of new_ready field,
	adjust all initializations. New fields: add_remove_insn,
        begin_schedule_ready, add_block, advance_target_bb, fix_recovery_cfg,
	region_head_or_leaf_p.
        (struct spec_info_def): New structure declaration.
        (spec_info_t): New typedef.
        (struct haifa_insn_data): New fields: todo_spec, done_spec, check_spec,
        recovery_block, orig_pat.
        (glat_start, glat_end): New variable declarations.
        (TODO_SPEC, DONE_SPEC, CHECK_SPEC, RECOVERY_BLOCK, ORIG_PAT):
	New access macros.
        (enum SCHED_FLAGS): New constants: SCHED_RGN, SCHED_EBB,
        DETACH_LIFE_INFO, USE_GLAT.
        (enum SPEC_SCHED_FLAGS): New enumeration.
        (NOTE_NOT_BB_P): New macro.
        (extend_dependency_caches, xrecalloc, unlink_bb_notes, add_block,
        attach_life_info, debug_spec_status, check_reg_live): New functions.
        (get_block_head_tail): Rename to get_ebb_head_tail and change
        signature; adjust all uses in ddg.c, modulo-sched.c, haifa-sched.c,
        sched-rgn.c, sched-ebb.c.
        (get_dep_weak, ds_merge): Prototype functions from sched-deps.c.
        * ddg.c (get_block_head_tail): Adjust all uses.
        * modulo-sched.c (get_block_head_tail): Adjust all uses.
	(sms_sched_info): Initialize new fields.
	(contributes_to_priority): Removed.
        * haifa-sched.c (params.h): New include.
	(get_block_head_tail): Adjust all uses.
        (ISSUE_POINTS): New macro.
        (glat_start, glat_end): New global variables.
        (spec_info_var, spec_info, added_recovery_block_p, nr_begin_data,
	nr_be_in_data, nr_begin_control, nr_be_in_control, bb_header,
	old_last_basic_block, before_recovery, current_sched_info_var,
	rgn_n_insns, luid): New static variables.
        (insn_cost1): New function.  Move logic from insn_cost to here.
        (find_insn_reg_weight1): New function.  Move logic from
        find_insn_reg_weight to here.
        (reemit_notes, move_insn, max_issue): Change signature.
        (move_insn1): Removed.
        (extend_h_i_d, extend_ready, extend_global, extend_all, init_h_i_d,
        extend_bb): New static functions to support extension of scheduler's
        data structures.
        (generate_recovery_code, process_insn_depend_be_in_spec,
        begin_speculative_block, add_to_speculative_block,
        init_before_recovery, create_recovery_block, create_check_block_twin,
        fix_recovery_deps): New static functions to support
        generation of recovery code.
        (fix_jump_move, find_fallthru_edge, dump_new_block_header,
        restore_bb_notes, move_block_after_check, move_succs): New static
        functions to support ebb scheduling.
        (init_glat, init_glat1, attach_life_info1, free_glat): New static
        functions to support handling of register live information.
        (associate_line_notes_with_blocks, change_pattern, speculate_insn,
	sched_remove_insn, clear_priorities, calc_priorities, bb_note,
	add_jump_dependencies):	New static functions.
        (check_cfg, has_edge_p, check_sched_flags): New static functions for
	consistency checking.
	(debug_spec_status): New function to call from debugger.
	(priority): Added code to handle speculation checks.
	(rank_for_schedule): Added code to distinguish speculative instructions.
	(schedule_insn): Added code to handle speculation checks.
	(unlink_other_notes, rm_line_notes, restore_line_notes, rm_other_notes):
	Fixed to handle ebbs.
        (move_insn): Added code to handle ebb scheduling.
	(max_issue): Added code to use ISSUE_POINTS of instructions.
        (choose_ready): Added code to choose between speculative and
        non-speculative instructions.
        (schedule_block): Added code to handle ebb scheduling and scheduling of
        speculative instructions.
        (sched_init): Initialize new variables.
        (sched_finish): Free new variables.  Print statistics.
        (try_ready): Added code to handle speculative instructions.
        * lists.c (copy_DEPS_LIST_list): New function.
        * sched-deps.c (extend_dependency_caches): New function.  Move logic
        from create_dependency_caches to here.
	(get_dep_weak, ds_merge): Make global.
        * genattr.c (main): Code to output prototype for
        dfa_clear_single_insn_cache.
        * genautomata.c (DFA_CLEAR_SINGLE_INSN_CACHE_FUNC_NAME): New macro.
        (output_dfa_clean_insn_cache_func): Code to output
        dfa_clear_single_insn_cache function.
        * sched-ebb.c (target_n_insns): Remove.  Adjust all users to use
	n_insns.
        (can_schedule_ready_p, fix_basic_block_boundaries, add_missing_bbs):
        Removed.
        (n_insns, dont_calc_deps, ebb_head, ebb_tail, last_bb):
        New static variables.
        (begin_schedule_ready, add_remove_insn, add_block1, advance_target_bb,
	fix_recovery_cfg, ebb_head_or_leaf_p): Implement hooks from
	struct sched_info.
        (ebb_sched_info): Initialize new fields.
	(get_block_head_tail): Adjust all uses.
	(compute_jump_reg_dependencies): Fixed to use glat_start.
	(schedule_ebb): Code to remove unreachable last block.
        (schedule_ebbs): Added code to update register live information.
        * sched-rgn.c (region_sched_info): Initialize new fields.
	(get_block_head_tail): Adjust all uses.
	(last_was_jump): Removed.  Adjust users.
        (begin_schedule_ready, add_remove_insn, insn_points, extend_regions,
	add_block1, fix_recovery_cfg, advance_target_bb, region_head_or_leaf_p):
	Implement new hooks.
        (check_dead_notes1): New static function.
        (struct region): New fields: dont_calc_deps, has_real_ebb.
        (RGN_DONT_CALC_DEPS, RGN_HAS_REAL_EBB): New access macros.
        (BB_TO_BLOCK): Fixed to handle EBBs.
        (EBB_FIRST_BB, EBB_LAST_BB): New macros.
        (ebb_head): New static variable.
        (debug_regions, contributes_to_priority): Fixed to handle EBBs.
        (find_single_block_region, find_rgns, extend_rgns): Initialize
	new fields.
	(compute_dom_prob_ps): New assertion.
        (check_live_1, update_live_1): Fixed to work with glat_start instead of
        global_live_at_start.
	(init_ready_list): New assertions.
	(can_schedule_ready_p): Split update code to begin_schedule_ready.
	(new_ready): Add support for BEGIN_CONTROL speculation.
        (schedule_insns): Fixed code that updates register live information
        to handle EBBs.
        (schedule_region): Fixed to handle EBBs.
	(init_regions): Use extend_regions and check_dead_notes1.
        * params.def (PARAM_MAX_SCHED_INSN_CONFLICT_DELAY,
        PARAM_SCHED_SPEC_PROB_CUTOFF): New parameters.
	* doc/tm.texi (TARGET_SCHED_ADJUST_COST_2, TARGET_SCHED_H_I_D_EXTENDED,
	TARGET_SCHED_SPECULATE_INSN, TARGET_SCHED_NEEDS_BLOCK_P,
	TARGET_SCHED_GEN_CHECK,
	TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD_SPEC,
	TARGET_SCHED_SET_SCHED_FLAGS): Document.
        * doc/invoke.texi (max-sched-insn-conflict-delay,
	sched-spec-prob-cutoff): Document.

From-SVN: r112128
commit 496d7bb032 (parent 63f54b1abd)
Authored and committed by Maxim Kuvyrkov, 2006-03-16 05:27:03 +00:00
18 changed files with 3444 additions and 577 deletions

ChangeLog (diff omitted: it repeats the log message above)

Makefile.in

@ -2515,7 +2515,8 @@ modulo-sched.o : modulo-sched.c $(DDG_H) $(CONFIG_H) $(CONFIG_H) $(SYSTEM_H) \
cfghooks.h $(DF_H) $(GCOV_IO_H) hard-reg-set.h $(TM_H) timevar.h tree-pass.h
haifa-sched.o : haifa-sched.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
$(SCHED_INT_H) $(REGS_H) hard-reg-set.h $(FLAGS_H) insn-config.h $(FUNCTION_H) \
$(INSN_ATTR_H) toplev.h $(RECOG_H) except.h $(TM_P_H) $(TARGET_H) output.h
$(INSN_ATTR_H) toplev.h $(RECOG_H) except.h $(TM_P_H) $(TARGET_H) output.h \
$(PARAMS_H)
sched-deps.o : sched-deps.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
$(RTL_H) $(SCHED_INT_H) $(REGS_H) hard-reg-set.h $(FLAGS_H) insn-config.h \
$(FUNCTION_H) $(INSN_ATTR_H) toplev.h $(RECOG_H) except.h cselib.h \

ddg.c

@ -382,7 +382,7 @@ build_intra_loop_deps (ddg_ptr g)
init_deps (&tmp_deps);
/* Do the intra-block data dependence analysis for the given block. */
get_block_head_tail (g->bb->index, &head, &tail);
get_ebb_head_tail (g->bb, g->bb, &head, &tail);
sched_analyze (&tmp_deps, head, tail);
/* Build intra-loop data dependencies using the scheduler dependency

doc/invoke.texi

@ -6183,6 +6183,15 @@ The maximum number of iterations through CFG to extend regions.
N - do at most N iterations.
The default value is 2.
@item max-sched-insn-conflict-delay
The maximum conflict delay for an insn to be considered for speculative motion.
The default value is 3.
@item sched-spec-prob-cutoff
The minimal probability of speculation success (in percent) required for a
speculative insn to be scheduled.
The default value is 40.
@item max-last-value-rtl
The maximum size measured as number of RTLs that can be recorded in an expression

doc/tm.texi

@ -5838,8 +5838,8 @@ acceptable, you could use the hook to modify them too. See also
@deftypefn {Target Hook} int TARGET_SCHED_ADJUST_PRIORITY (rtx @var{insn}, int @var{priority})
This hook adjusts the integer scheduling priority @var{priority} of
-@var{insn}. It should return the new priority. Reduce the priority to
-execute @var{insn} earlier, increase the priority to execute @var{insn}
+@var{insn}. It should return the new priority. Increase the priority to
+execute @var{insn} earlier, reduce the priority to execute @var{insn}
later. Do not define this hook if you do not need to adjust the
scheduling priorities of insns.
@end deftypefn
@ -6014,6 +6014,70 @@ closer to one another---i.e., closer than the dependence distance; however,
not in cases of "costly dependences", which this hook allows you to define.
@end deftypefn
@deftypefn {Target Hook} int TARGET_SCHED_ADJUST_COST_2 (rtx @var{insn}, int @var{dep_type}, rtx @var{dep_insn}, int @var{cost})
This hook is a modified version of @samp{TARGET_SCHED_ADJUST_COST}. Instead
of passing a dependence as the second parameter, it passes the type of that
dependence. This is useful for calculating the cost of a dependence between
insns that do not have a corresponding link. If
@samp{TARGET_SCHED_ADJUST_COST_2} is defined, it is used instead of
@samp{TARGET_SCHED_ADJUST_COST}.
@end deftypefn
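To make the contract concrete, here is a minimal sketch of a backend
implementation. It is not part of this patch; the function name and the
extra-latency-for-loads heuristic are invented for illustration.

/* Hypothetical TARGET_SCHED_ADJUST_COST_2 implementation.  DEP_TYPE is
   a dependence type (e.g. REG_DEP_TRUE) rather than a dependence link,
   so the hook works even when no corresponding LOG_LINKS entry exists.  */
static int
example_adjust_cost_2 (rtx insn ATTRIBUTE_UNUSED, int dep_type,
                       rtx dep_insn, int cost)
{
  /* Invented heuristic: a true dependence on a load costs one extra cycle.  */
  if (dep_type == REG_DEP_TRUE
      && GET_CODE (PATTERN (dep_insn)) == SET
      && MEM_P (SET_SRC (PATTERN (dep_insn))))
    return cost + 1;

  return cost;
}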
@deftypefn {Target Hook} void TARGET_SCHED_H_I_D_EXTENDED (void)
This hook is called by the insn scheduler after emitting a new instruction to
the instruction stream. The hook notifies a target backend to extend its
per-instruction data structures.
@end deftypefn
@deftypefn {Target Hook} int TARGET_SCHED_SPECULATE_INSN (rtx @var{insn}, int @var{request}, rtx *@var{new_pat})
This hook is called by the insn scheduler when @var{insn} has only
speculative dependencies and therefore can be scheduled speculatively.
The hook is used to check if the pattern of @var{insn} has a speculative
version and, in case of successful check, to generate that speculative
pattern. The hook should return 1 if the instruction has a speculative form,
or -1 if it does not. @var{request} describes the type of requested
speculation. If the return value equals 1 then @var{new_pat} is assigned
the generated speculative pattern.
@end deftypefn
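The shape of a conforming hook, with the target-specific parts reduced to
hypothetical helpers (may_speculate_p and gen_speculative_form do not exist
in this patch):

/* Hypothetical TARGET_SCHED_SPECULATE_INSN implementation.  */
static int
example_speculate_insn (rtx insn, int request, rtx *new_pat)
{
  /* Invented predicate: does INSN have a speculative form for the
     requested speculation type(s)?  */
  if (!may_speculate_p (insn, request))
    return -1;

  /* Invented generator: build the speculative variant of the pattern.  */
  *new_pat = gen_speculative_form (insn, request);
  return 1;
}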
@deftypefn {Target Hook} int TARGET_SCHED_NEEDS_BLOCK_P (rtx @var{insn})
This hook is called by the insn scheduler during generation of recovery code
for @var{insn}. It should return nonzero if the corresponding check
instruction should branch to recovery code, or zero otherwise.
@end deftypefn
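A sketch of the hook for an imaginary target on which only control
speculation requires a branchy check; this assumes the backend includes
sched-int.h for the TODO_SPEC accessor, and a real port would base the
decision on its own speculation model.

/* Hypothetical TARGET_SCHED_NEEDS_BLOCK_P implementation.  */
static int
example_needs_block_p (rtx insn)
{
  /* Branch to recovery code only for control-speculative insns.  */
  return (TODO_SPEC (insn) & BEGIN_CONTROL) != 0;
}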
@deftypefn {Target Hook} rtx TARGET_SCHED_GEN_CHECK (rtx @var{insn}, rtx @var{label}, int @var{mutate_p})
This hook is called by the insn scheduler to generate a pattern for recovery
check instruction. If @var{mutate_p} is zero, then @var{insn} is a
speculative instruction for which the check should be generated.
@var{label} is either a label of a basic block, where recovery code should
be emitted, or a null pointer when the requested check does not branch to
recovery code (a simple check). If @var{mutate_p} is nonzero, then
a pattern for a branchy check corresponding to a simple check denoted by
@var{insn} should be generated. In this case @var{label} cannot be null.
@end deftypefn
@deftypefn {Target Hook} int TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD_SPEC (rtx @var{insn})
This hook is used as a workaround for
@samp{TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD} not being
called on the first instruction of the ready list. The hook is used to
prevent speculative instructions that stand first in the ready list from
being scheduled on the current cycle. For non-speculative instructions,
the hook should always return non-zero. For example, in the ia64 backend
the hook is used to cancel data speculative insns when the ALAT table
is nearly full.
@end deftypefn
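Schematically, such a guard might look as follows; insn_is_speculative_p
and alat_nearly_full_p are invented stand-ins for the target's own tests
(the ia64 hook mentioned above checks its ALAT occupancy).

/* Hypothetical TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD_SPEC
   implementation.  */
static int
example_dfa_lookahead_guard_spec (rtx insn)
{
  /* Non-speculative instructions must always pass the guard.  */
  if (!insn_is_speculative_p (insn))
    return 1;

  /* Invented resource test: hold back speculative insns while the
     speculation-tracking resource is nearly exhausted.  */
  return !alat_nearly_full_p ();
}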
@deftypefn {Target Hook} void TARGET_SCHED_SET_SCHED_FLAGS (unsigned int *@var{flags}, spec_info_t @var{spec_info})
This hook is used by the insn scheduler to find out what features should be
enabled/used. @var{flags} initially may have either the SCHED_RGN or SCHED_EBB
bit set. This denotes the scheduler pass for which the data should be
provided. The target backend should modify @var{flags} by setting or clearing
the bits corresponding to the following features: USE_DEPS_LIST, USE_GLAT,
DETACH_LIFE_INFO, and DO_SPECULATION. For the DO_SPECULATION feature
an additional structure @var{spec_info} should be filled by the target.
The structure describes speculation types that can be used in the scheduler.
@end deftypefn
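A rough sketch of such a hook, using the flag and structure definitions
added to sched-int.h by this patch; the choice to enable data speculation
only in the region scheduler is arbitrary.

/* Hypothetical TARGET_SCHED_SET_SCHED_FLAGS implementation.  */
static void
example_set_sched_flags (unsigned int *flags, spec_info_t spec_info)
{
  /* Speculate only in the region scheduling pass.  */
  if (*flags & SCHED_RGN)
    {
      *flags |= USE_DEPS_LIST | DO_SPECULATION;

      /* Allow data speculation only, and prefer non-speculative insns.  */
      spec_info->mask = BEGIN_DATA;
      spec_info->flags = PREFER_NON_DATA_SPEC;
      spec_info->dump = sched_verbose ? sched_dump : NULL;
    }
}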
@node Sections
@section Dividing the Output into Sections (Texts, Data, @dots{})
@c the above section title is WAY too long. maybe cut the part between

genattr.c

@ -250,6 +250,7 @@ main (int argc, char **argv)
printf (" define_insn_reservation will be changed after\n");
printf (" last call of dfa_start. */\n");
printf ("extern void dfa_clean_insn_cache (void);\n\n");
printf ("extern void dfa_clear_single_insn_cache (rtx);\n\n");
printf ("/* Initiate and finish work with DFA. They should be\n");
printf (" called as the first and the last interface\n");
printf (" functions. */\n");

genautomata.c

@ -6852,6 +6852,8 @@ output_reserved_units_table_name (FILE *f, automaton_t automaton)
#define DFA_CLEAN_INSN_CACHE_FUNC_NAME "dfa_clean_insn_cache"
#define DFA_CLEAR_SINGLE_INSN_CACHE_FUNC_NAME "dfa_clear_single_insn_cache"
#define DFA_START_FUNC_NAME "dfa_start"
#define DFA_FINISH_FUNC_NAME "dfa_finish"
@ -8335,7 +8337,8 @@ output_cpu_unit_reservation_p (void)
fprintf (output_file, " return 0;\n}\n\n");
}
/* The function outputs PHR interface function `dfa_clean_insn_cache'. */
/* The function outputs PHR interface functions `dfa_clean_insn_cache'
and 'dfa_clear_single_insn_cache'. */
static void
output_dfa_clean_insn_cache_func (void)
{
@ -8347,6 +8350,16 @@ output_dfa_clean_insn_cache_func (void)
I_VARIABLE_NAME, I_VARIABLE_NAME,
DFA_INSN_CODES_LENGTH_VARIABLE_NAME, I_VARIABLE_NAME,
DFA_INSN_CODES_VARIABLE_NAME, I_VARIABLE_NAME);
fprintf (output_file,
"void\n%s (rtx %s)\n{\n int %s;\n\n",
DFA_CLEAR_SINGLE_INSN_CACHE_FUNC_NAME, INSN_PARAMETER_NAME,
I_VARIABLE_NAME);
fprintf (output_file,
" %s = INSN_UID (%s);\n if (%s < %s)\n %s [%s] = -1;\n}\n\n",
I_VARIABLE_NAME, INSN_PARAMETER_NAME, I_VARIABLE_NAME,
DFA_INSN_CODES_LENGTH_VARIABLE_NAME, DFA_INSN_CODES_VARIABLE_NAME,
I_VARIABLE_NAME);
}
/* The function outputs PHR interface function `dfa_start'. */
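Stitched together, the code emitted by these fprintf calls amounts to
roughly the following; the identifier spellings come from the
*_VARIABLE_NAME macros, so treat them as an assumption about the generated
insn-automata.c.

void
dfa_clear_single_insn_cache (rtx insn)
{
  int i;

  i = INSN_UID (insn);
  if (i < dfa_insn_codes_length)
    dfa_insn_codes [i] = -1;
}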

haifa-sched.c (file diff suppressed because it is too large)

lists.c

@ -249,4 +249,20 @@ remove_free_INSN_LIST_elem (rtx elem, rtx *listp)
free_INSN_LIST_node (remove_list_elem (elem, listp));
}
/* Create and return a copy of the DEPS_LIST LIST. */
rtx
copy_DEPS_LIST_list (rtx list)
{
rtx res = NULL_RTX, *resp = &res;
while (list)
{
*resp = alloc_DEPS_LIST (XEXP (list, 0), 0, XWINT (list, 2));
PUT_REG_NOTE_KIND (*resp, REG_NOTE_KIND (list));
resp = &XEXP (*resp, 1);
list = XEXP (list, 1);
}
return res;
}
#include "gt-lists.h"

modulo-sched.c

@ -237,12 +237,6 @@ sms_print_insn (rtx insn, int aligned ATTRIBUTE_UNUSED)
return tmp;
}
static int
contributes_to_priority (rtx next, rtx insn)
{
return BLOCK_NUM (next) == BLOCK_NUM (insn);
}
static void
compute_jump_reg_dependencies (rtx insn ATTRIBUTE_UNUSED,
regset cond_exec ATTRIBUTE_UNUSED,
@ -259,12 +253,16 @@ static struct sched_info sms_sched_info =
NULL,
NULL,
sms_print_insn,
contributes_to_priority,
NULL,
compute_jump_reg_dependencies,
NULL, NULL,
NULL, NULL,
0, 0, 0,
NULL, NULL, NULL, NULL, NULL,
#ifdef ENABLE_CHECKING
NULL,
#endif
0
};
@ -314,7 +312,7 @@ const_iteration_count (rtx count_reg, basic_block pre_header,
if (! pre_header)
return NULL_RTX;
get_block_head_tail (pre_header->index, &head, &tail);
get_ebb_head_tail (pre_header, pre_header, &head, &tail);
for (insn = tail; insn != PREV_INSN (head); insn = PREV_INSN (insn))
if (INSN_P (insn) && single_set (insn) &&
@ -794,7 +792,7 @@ loop_single_full_bb_p (struct loop *loop)
/* Make sure that basic blocks other than the header
have only notes labels or jumps. */
get_block_head_tail (bbs[i]->index, &head, &tail);
get_ebb_head_tail (bbs[i], bbs[i], &head, &tail);
for (; head != NEXT_INSN (tail); head = NEXT_INSN (head))
{
if (NOTE_P (head) || LABEL_P (head)
@ -972,7 +970,7 @@ sms_schedule (void)
bb = loop->header;
get_block_head_tail (bb->index, &head, &tail);
get_ebb_head_tail (bb, bb, &head, &tail);
latch_edge = loop_latch_edge (loop);
gcc_assert (loop->single_exit);
if (loop->single_exit->count)
@ -1074,7 +1072,7 @@ sms_schedule (void)
if (dump_file)
print_ddg (dump_file, g);
get_block_head_tail (loop->header->index, &head, &tail);
get_ebb_head_tail (loop->header, loop->header, &head, &tail);
latch_edge = loop_latch_edge (loop);
gcc_assert (loop->single_exit);

params.def

@ -504,6 +504,16 @@ DEFPARAM(PARAM_MAX_SCHED_EXTEND_REGIONS_ITERS,
"The maximum number of iterations through CFG to extend regions",
2, 0, 0)
DEFPARAM(PARAM_MAX_SCHED_INSN_CONFLICT_DELAY,
"max-sched-insn-conflict-delay",
"The maximum conflict delay for an insn to be considered for speculative motion",
3, 1, 10)
DEFPARAM(PARAM_SCHED_SPEC_PROB_CUTOFF,
"sched-spec-prob-cutoff",
"The minimal probability of speculation success (in percents), so that speculative insn will be scheduled.",
40, 0, 100)
DEFPARAM(PARAM_MAX_LAST_VALUE_RTL,
"max-last-value-rtl",
"The maximum number of RTL nodes that can be recorded as combiner's last value",

rtl.h

@ -1758,6 +1758,7 @@ rtx alloc_DEPS_LIST (rtx, rtx, HOST_WIDE_INT);
void remove_free_DEPS_LIST_elem (rtx, rtx *);
void remove_free_INSN_LIST_elem (rtx, rtx *);
rtx remove_list_elem (rtx, rtx *);
rtx copy_DEPS_LIST_list (rtx);
/* regclass.c */

sched-deps.c

@ -112,8 +112,6 @@ static void adjust_add_sorted_back_dep (rtx, rtx, rtx *);
static void adjust_back_add_forw_dep (rtx, rtx *);
static void delete_forw_dep (rtx, rtx);
static dw_t estimate_dep_weak (rtx, rtx);
static dw_t get_dep_weak (ds_t, ds_t);
static ds_t ds_merge (ds_t, ds_t);
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
static void check_dep_status (enum reg_note, ds_t, bool);
@ -1777,19 +1775,35 @@ init_dependency_caches (int luid)
what we consider "very high". */
if (luid / n_basic_blocks > 100 * 5)
{
int i;
cache_size = 0;
extend_dependency_caches (luid, true);
}
}
true_dependency_cache = XNEWVEC (bitmap_head, luid);
anti_dependency_cache = XNEWVEC (bitmap_head, luid);
output_dependency_cache = XNEWVEC (bitmap_head, luid);
/* Create or extend (depending on CREATE_P) dependency caches to
size N. */
void
extend_dependency_caches (int n, bool create_p)
{
if (create_p || true_dependency_cache)
{
int i, luid = cache_size + n;
true_dependency_cache = XRESIZEVEC (bitmap_head, true_dependency_cache,
luid);
output_dependency_cache = XRESIZEVEC (bitmap_head,
output_dependency_cache, luid);
anti_dependency_cache = XRESIZEVEC (bitmap_head, anti_dependency_cache,
luid);
#ifdef ENABLE_CHECKING
forward_dependency_cache = XNEWVEC (bitmap_head, luid);
forward_dependency_cache = XRESIZEVEC (bitmap_head,
forward_dependency_cache, luid);
#endif
if (current_sched_info->flags & DO_SPECULATION)
spec_dependency_cache = XRESIZEVEC (bitmap_head, spec_dependency_cache,
luid);
for (i = 0; i < luid; i++)
for (i = cache_size; i < luid; i++)
{
bitmap_initialize (&true_dependency_cache[i], 0);
bitmap_initialize (&output_dependency_cache[i], 0);
@ -2037,7 +2051,7 @@ delete_back_forw_dep (rtx insn, rtx elem)
}
/* Return weakness of speculative type TYPE in the dep_status DS. */
static dw_t
dw_t
get_dep_weak (ds_t ds, ds_t type)
{
ds = ds & type;
@ -2074,7 +2088,7 @@ set_dep_weak (ds_t ds, ds_t type, dw_t dw)
}
/* Return the join of two dep_statuses DS1 and DS2. */
static ds_t
ds_t
ds_merge (ds_t ds1, ds_t ds2)
{
ds_t ds, t;

sched-ebb.c

@ -43,14 +43,23 @@ Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
#include "target.h"
#include "output.h"
/* The number of insns to be scheduled in total. */
static int target_n_insns;
/* The number of insns scheduled so far. */
static int sched_n_insns;
/* The number of insns to be scheduled in total. */
static int n_insns;
/* Set of blocks that already have their dependencies calculated. */
static bitmap_head dont_calc_deps;
/* Sets of basic blocks that are ebb heads or tails, respectively. */
static bitmap_head ebb_head, ebb_tail;
/* Last basic block in current ebb. */
static basic_block last_bb;
/* Implementations of the sched_info functions for region scheduling. */
static void init_ready_list (void);
static int can_schedule_ready_p (rtx);
static void begin_schedule_ready (rtx, rtx);
static int schedule_more_p (void);
static const char *ebb_print_insn (rtx, int);
static int rank (rtx, rtx);
@ -59,16 +68,22 @@ static void compute_jump_reg_dependencies (rtx, regset, regset, regset);
static basic_block earliest_block_with_similiar_load (basic_block, rtx);
static void add_deps_for_risky_insns (rtx, rtx);
static basic_block schedule_ebb (rtx, rtx);
static basic_block fix_basic_block_boundaries (basic_block, basic_block, rtx,
rtx);
static void add_missing_bbs (rtx, basic_block, basic_block);
static void add_remove_insn (rtx, int);
static void add_block1 (basic_block, basic_block);
static basic_block advance_target_bb (basic_block, rtx);
static void fix_recovery_cfg (int, int, int);
#ifdef ENABLE_CHECKING
static int ebb_head_or_leaf_p (basic_block, int);
#endif
/* Return nonzero if there are more insns that should be scheduled. */
static int
schedule_more_p (void)
{
return sched_n_insns < target_n_insns;
return sched_n_insns < n_insns;
}
/* Add all insns that are initially ready to the ready list READY. Called
@ -77,11 +92,11 @@ schedule_more_p (void)
static void
init_ready_list (void)
{
int n = 0;
rtx prev_head = current_sched_info->prev_head;
rtx next_tail = current_sched_info->next_tail;
rtx insn;
target_n_insns = 0;
sched_n_insns = 0;
#if 0
@ -95,18 +110,74 @@ init_ready_list (void)
for (insn = NEXT_INSN (prev_head); insn != next_tail; insn = NEXT_INSN (insn))
{
try_ready (insn);
target_n_insns++;
n++;
}
gcc_assert (n == n_insns);
}
/* Called after taking INSN from the ready list. Returns nonzero if this
insn can be scheduled, zero if we should silently discard it. */
static int
can_schedule_ready_p (rtx insn ATTRIBUTE_UNUSED)
/* INSN is being scheduled after LAST. Update counters. */
static void
begin_schedule_ready (rtx insn, rtx last)
{
sched_n_insns++;
return 1;
if (BLOCK_FOR_INSN (insn) == last_bb
/* INSN is a jump in the last block, ... */
&& control_flow_insn_p (insn)
/* that is going to be moved over some instructions. */
&& last != PREV_INSN (insn))
{
edge e;
edge_iterator ei;
basic_block bb;
/* An obscure special case, where we do have partially dead
instruction scheduled after last control flow instruction.
In this case we can create new basic block. It is
always exactly one basic block last in the sequence. */
FOR_EACH_EDGE (e, ei, last_bb->succs)
if (e->flags & EDGE_FALLTHRU)
break;
#ifdef ENABLE_CHECKING
gcc_assert (!e || !(e->flags & EDGE_COMPLEX));
gcc_assert (BLOCK_FOR_INSN (insn) == last_bb
&& !RECOVERY_BLOCK (insn)
&& BB_HEAD (last_bb) != insn
&& BB_END (last_bb) == insn);
{
rtx x;
x = NEXT_INSN (insn);
if (e)
gcc_assert (NOTE_P (x) || LABEL_P (x));
else
gcc_assert (BARRIER_P (x));
}
#endif
if (e)
{
bb = split_edge (e);
gcc_assert (NOTE_INSN_BASIC_BLOCK_P (BB_END (bb)));
}
else
bb = create_basic_block (insn, 0, last_bb);
/* split_edge () creates BB before E->DEST. Keep in mind that
this operation extends scheduling region till the end of BB.
Hence, we need to shift NEXT_TAIL, so haifa-sched.c won't go out
of the scheduling region. */
current_sched_info->next_tail = NEXT_INSN (BB_END (bb));
gcc_assert (current_sched_info->next_tail);
add_block (bb, last_bb);
gcc_assert (last_bb == bb);
}
}
/* Return a string that contains the insn uid and optionally anything else
@ -173,9 +244,9 @@ compute_jump_reg_dependencies (rtx insn, regset cond_set, regset used,
it may guard the fallthrough block from using a value that has
conditionally overwritten that of the main codepath. So we
consider that it restores the value of the main codepath. */
bitmap_and (set, e->dest->il.rtl->global_live_at_start, cond_set);
bitmap_and (set, glat_start [e->dest->index], cond_set);
else
bitmap_ior_into (used, e->dest->il.rtl->global_live_at_start);
bitmap_ior_into (used, glat_start [e->dest->index]);
}
/* Used in schedule_insns to initialize current_sched_info for scheduling
@ -184,7 +255,7 @@ compute_jump_reg_dependencies (rtx insn, regset cond_set, regset used,
static struct sched_info ebb_sched_info =
{
init_ready_list,
can_schedule_ready_p,
NULL,
schedule_more_p,
NULL,
rank,
@ -196,143 +267,19 @@ static struct sched_info ebb_sched_info =
NULL, NULL,
0, 1, 0,
0
add_remove_insn,
begin_schedule_ready,
add_block1,
advance_target_bb,
fix_recovery_cfg,
#ifdef ENABLE_CHECKING
ebb_head_or_leaf_p,
#endif
/* We need DETACH_LIFE_INFO to be able to create new basic blocks.
See begin_schedule_ready (). */
SCHED_EBB | USE_GLAT | DETACH_LIFE_INFO
};
/* It is possible that ebb scheduling eliminated some blocks.
Place blocks from FIRST to LAST before BEFORE. */
static void
add_missing_bbs (rtx before, basic_block first, basic_block last)
{
for (; last != first->prev_bb; last = last->prev_bb)
{
before = emit_note_before (NOTE_INSN_BASIC_BLOCK, before);
NOTE_BASIC_BLOCK (before) = last;
BB_HEAD (last) = before;
BB_END (last) = before;
update_bb_for_insn (last);
}
}
/* Fixup the CFG after EBB scheduling. Re-recognize the basic
block boundaries in between HEAD and TAIL and update basic block
structures between BB and LAST. */
static basic_block
fix_basic_block_boundaries (basic_block bb, basic_block last, rtx head,
rtx tail)
{
rtx insn = head;
rtx last_inside = BB_HEAD (bb);
rtx aftertail = NEXT_INSN (tail);
head = BB_HEAD (bb);
for (; insn != aftertail; insn = NEXT_INSN (insn))
{
gcc_assert (!LABEL_P (insn));
/* Create new basic blocks just before first insn. */
if (inside_basic_block_p (insn))
{
if (!last_inside)
{
rtx note;
/* Re-emit the basic block note for newly found BB header. */
if (LABEL_P (insn))
{
note = emit_note_after (NOTE_INSN_BASIC_BLOCK, insn);
head = insn;
last_inside = note;
}
else
{
note = emit_note_before (NOTE_INSN_BASIC_BLOCK, insn);
head = note;
last_inside = insn;
}
}
else
last_inside = insn;
}
/* Control flow instruction terminate basic block. It is possible
that we've eliminated some basic blocks (made them empty).
Find the proper basic block using BLOCK_FOR_INSN and arrange things in
a sensible way by inserting empty basic blocks as needed. */
if (control_flow_insn_p (insn) || (insn == tail && last_inside))
{
basic_block curr_bb = BLOCK_FOR_INSN (insn);
rtx note;
if (!control_flow_insn_p (insn))
curr_bb = last;
if (bb == last->next_bb)
{
edge f;
rtx h;
edge_iterator ei;
/* An obscure special case, where we do have partially dead
instruction scheduled after last control flow instruction.
In this case we can create new basic block. It is
always exactly one basic block last in the sequence. Handle
it by splitting the edge and repositioning the block.
This is somewhat hackish, but at least avoid cut&paste
A safer solution can be to bring the code into sequence,
do the split and re-emit it back in case this will ever
trigger problem. */
FOR_EACH_EDGE (f, ei, bb->prev_bb->succs)
if (f->flags & EDGE_FALLTHRU)
break;
if (f)
{
last = curr_bb = split_edge (f);
h = BB_HEAD (curr_bb);
BB_HEAD (curr_bb) = head;
BB_END (curr_bb) = insn;
/* Edge splitting created misplaced BASIC_BLOCK note, kill
it. */
delete_insn (h);
}
/* It may happen that code got moved past unconditional jump in
case the code is completely dead. Kill it. */
else
{
rtx next = next_nonnote_insn (insn);
delete_insn_chain (head, insn);
/* We keep some notes in the way that may split barrier from the
jump. */
if (BARRIER_P (next))
{
emit_barrier_after (prev_nonnote_insn (head));
delete_insn (next);
}
insn = NULL;
}
}
else
{
BB_HEAD (curr_bb) = head;
BB_END (curr_bb) = insn;
add_missing_bbs (BB_HEAD (curr_bb), bb, curr_bb->prev_bb);
}
note = LABEL_P (head) ? NEXT_INSN (head) : head;
NOTE_BASIC_BLOCK (note) = curr_bb;
update_bb_for_insn (curr_bb);
bb = curr_bb->next_bb;
last_inside = NULL;
if (!insn)
break;
}
}
add_missing_bbs (BB_HEAD (last->next_bb), bb, last);
return bb->prev_bb;
}
/* Returns the earliest block in EBB currently being processed where a
"similar load" 'insn2' is found, and hence LOAD_INSN can move
speculatively into the found block. All the following must hold:
@ -488,29 +435,40 @@ add_deps_for_risky_insns (rtx head, rtx tail)
static basic_block
schedule_ebb (rtx head, rtx tail)
{
int n_insns;
basic_block b;
basic_block first_bb, target_bb;
struct deps tmp_deps;
basic_block first_bb = BLOCK_FOR_INSN (head);
basic_block last_bb = BLOCK_FOR_INSN (tail);
first_bb = BLOCK_FOR_INSN (head);
last_bb = BLOCK_FOR_INSN (tail);
if (no_real_insns_p (head, tail))
return BLOCK_FOR_INSN (tail);
init_deps_global ();
gcc_assert (INSN_P (head) && INSN_P (tail));
/* Compute LOG_LINKS. */
init_deps (&tmp_deps);
sched_analyze (&tmp_deps, head, tail);
free_deps (&tmp_deps);
if (!bitmap_bit_p (&dont_calc_deps, first_bb->index))
{
init_deps_global ();
/* Compute INSN_DEPEND. */
compute_forward_dependences (head, tail);
/* Compute LOG_LINKS. */
init_deps (&tmp_deps);
sched_analyze (&tmp_deps, head, tail);
free_deps (&tmp_deps);
add_deps_for_risky_insns (head, tail);
/* Compute INSN_DEPEND. */
compute_forward_dependences (head, tail);
if (targetm.sched.dependencies_evaluation_hook)
targetm.sched.dependencies_evaluation_hook (head, tail);
add_deps_for_risky_insns (head, tail);
if (targetm.sched.dependencies_evaluation_hook)
targetm.sched.dependencies_evaluation_hook (head, tail);
finish_deps_global ();
}
else
/* Only recovery blocks can have their dependencies already calculated,
and they are always single-block ebbs. */
gcc_assert (first_bb == last_bb);
/* Set priorities. */
current_sched_info->sched_max_insns_priority = 0;
@ -546,10 +504,16 @@ schedule_ebb (rtx head, rtx tail)
schedule_block (). */
rm_other_notes (head, tail);
unlink_bb_notes (first_bb, last_bb);
current_sched_info->queue_must_finish_empty = 1;
schedule_block (-1, n_insns);
target_bb = first_bb;
schedule_block (&target_bb, n_insns);
/* We might pack all instructions into fewer blocks,
so we may have made some of them empty. Can't assert (b == last_bb). */
/* Sanity check: verify that all region insns were scheduled. */
gcc_assert (sched_n_insns == n_insns);
head = current_sched_info->head;
@ -557,10 +521,17 @@ schedule_ebb (rtx head, rtx tail)
if (write_symbols != NO_DEBUG)
restore_line_notes (head, tail);
b = fix_basic_block_boundaries (first_bb, last_bb, head, tail);
finish_deps_global ();
return b;
if (EDGE_COUNT (last_bb->preds) == 0)
/* LAST_BB is unreachable. */
{
gcc_assert (first_bb != last_bb
&& EDGE_COUNT (last_bb->succs) == 0);
last_bb = last_bb->prev_bb;
delete_basic_block (last_bb->next_bb);
}
return last_bb;
}
/* The one entry point in this file. */
@ -570,6 +541,9 @@ schedule_ebbs (void)
{
basic_block bb;
int probability_cutoff;
rtx tail;
sbitmap large_region_blocks, blocks;
int any_large_regions;
if (profile_info && flag_branch_probabilities)
probability_cutoff = PARAM_VALUE (TRACER_MIN_BRANCH_PROBABILITY_FEEDBACK);
@ -590,11 +564,18 @@ schedule_ebbs (void)
compute_bb_for_insn ();
/* Initialize DONT_CALC_DEPS and the ebb_{head, tail} markers. */
bitmap_initialize (&dont_calc_deps, 0);
bitmap_clear (&dont_calc_deps);
bitmap_initialize (&ebb_head, 0);
bitmap_clear (&ebb_head);
bitmap_initialize (&ebb_tail, 0);
bitmap_clear (&ebb_tail);
/* Schedule every region in the subroutine. */
FOR_EACH_BB (bb)
{
rtx head = BB_HEAD (bb);
rtx tail;
for (;;)
{
@ -628,11 +609,71 @@ schedule_ebbs (void)
break;
}
bitmap_set_bit (&ebb_head, BLOCK_NUM (head));
bb = schedule_ebb (head, tail);
bitmap_set_bit (&ebb_tail, bb->index);
}
bitmap_clear (&dont_calc_deps);
gcc_assert (current_sched_info->flags & DETACH_LIFE_INFO);
/* We can create new basic blocks during scheduling, and
attach_life_info () will create regsets for them
(along with attaching existing info back). */
attach_life_info ();
/* Updating register live information. */
allocate_reg_life_data ();
any_large_regions = 0;
large_region_blocks = sbitmap_alloc (last_basic_block);
sbitmap_zero (large_region_blocks);
FOR_EACH_BB (bb)
SET_BIT (large_region_blocks, bb->index);
blocks = sbitmap_alloc (last_basic_block);
sbitmap_zero (blocks);
/* Update life information. For regions consisting of multiple blocks
we've possibly done interblock scheduling that affects global liveness.
For regions consisting of single blocks we need to do only local
liveness. */
FOR_EACH_BB (bb)
{
int bbi;
bbi = bb->index;
if (!bitmap_bit_p (&ebb_head, bbi)
|| !bitmap_bit_p (&ebb_tail, bbi)
/* New blocks (e.g. recovery blocks) should be processed
as parts of large regions. */
|| !glat_start[bbi])
any_large_regions = 1;
else
{
SET_BIT (blocks, bbi);
RESET_BIT (large_region_blocks, bbi);
}
}
/* Updating life info can be done by local propagation over the modified
superblocks. */
update_life_info (blocks, UPDATE_LIFE_LOCAL, 0);
sbitmap_free (blocks);
if (any_large_regions)
{
update_life_info (large_region_blocks, UPDATE_LIFE_GLOBAL, 0);
#ifdef ENABLE_CHECKING
/* !!! We can't check reg_live_info here because destination
registers of COND_EXEC's may be dead before scheduling
(while they should be alive). We don't know why. */
/*check_reg_live ();*/
#endif
}
sbitmap_free (large_region_blocks);
bitmap_clear (&ebb_head);
bitmap_clear (&ebb_tail);
/* Reposition the prologue and epilogue notes in case we moved the
prologue/epilogue insns. */
@ -644,3 +685,77 @@ schedule_ebbs (void)
sched_finish ();
}
/* INSN has been added to/removed from current ebb. */
static void
add_remove_insn (rtx insn ATTRIBUTE_UNUSED, int remove_p)
{
if (!remove_p)
n_insns++;
else
n_insns--;
}
/* BB was added to ebb after AFTER. */
static void
add_block1 (basic_block bb, basic_block after)
{
/* Recovery blocks are always bounded by BARRIERs; therefore,
they always form a single-block EBB, and we can use rec->index
to identify such EBBs. */
if (after == EXIT_BLOCK_PTR)
bitmap_set_bit (&dont_calc_deps, bb->index);
else if (after == last_bb)
last_bb = bb;
}
/* Return next block in ebb chain. For parameter meaning please refer to
sched-int.h: struct sched_info: advance_target_bb. */
static basic_block
advance_target_bb (basic_block bb, rtx insn)
{
if (insn)
{
if (BLOCK_FOR_INSN (insn) != bb
&& control_flow_insn_p (insn)
&& !RECOVERY_BLOCK (insn)
&& !RECOVERY_BLOCK (BB_END (bb)))
{
gcc_assert (!control_flow_insn_p (BB_END (bb))
&& NOTE_INSN_BASIC_BLOCK_P (BB_HEAD (bb->next_bb)));
return bb;
}
else
return 0;
}
else if (bb != last_bb)
return bb->next_bb;
else
gcc_unreachable ();
}
/* Fix internal data after interblock movement of jump instruction.
For parameter meaning please refer to
sched-int.h: struct sched_info: fix_recovery_cfg. */
static void
fix_recovery_cfg (int bbi ATTRIBUTE_UNUSED, int jump_bbi, int jump_bb_nexti)
{
gcc_assert (last_bb->index != bbi);
if (jump_bb_nexti == last_bb->index)
last_bb = BASIC_BLOCK (jump_bbi);
}
#ifdef ENABLE_CHECKING
/* Return nonzero if BB is the first or last (depending on LEAF_P) block in
the current ebb. For more information please refer to
sched-int.h: struct sched_info: region_head_or_leaf_p. */
static int
ebb_head_or_leaf_p (basic_block bb, int leaf_p)
{
if (!leaf_p)
return bitmap_bit_p (&ebb_head, bb->index);
else
return bitmap_bit_p (&ebb_tail, bb->index);
}
#endif /* ENABLE_CHECKING */

sched-int.h

@ -148,10 +148,12 @@ struct sched_info
int (*can_schedule_ready_p) (rtx);
/* Return nonzero if there are more insns that should be scheduled. */
int (*schedule_more_p) (void);
/* Called after an insn has all its dependencies resolved. Return nonzero
if it should be moved to the ready list or the queue, or zero if we
should silently discard it. */
int (*new_ready) (rtx);
/* Called after an insn has all its hard dependencies resolved.
Adjusts the status of the instruction (passed through the second parameter)
to indicate whether the instruction should be moved to the ready list or
the queue, or whether it should be silently discarded (until the next
dependence is resolved). */
ds_t (*new_ready) (rtx, ds_t);
/* Compare priority of two insns. Return a positive number if the second
insn is to be preferred for scheduling, and a negative one if the first
is to be preferred. Zero if they are equally good. */
@ -187,11 +189,73 @@ struct sched_info
/* Maximum priority that has been assigned to an insn. */
int sched_max_insns_priority;
/* Hooks to support speculative scheduling. */
/* Called to notify frontend that instruction is being added (second
parameter == 0) or removed (second parameter == 1). */
void (*add_remove_insn) (rtx, int);
/* Called to notify frontend that instruction is being scheduled.
The first parameter - instruction to scheduled, the second parameter -
last scheduled instruction. */
void (*begin_schedule_ready) (rtx, rtx);
/* Called to notify frontend, that new basic block is being added.
The first parameter - new basic block.
The second parameter - block, after which new basic block is being added,
or EXIT_BLOCK_PTR, if recovery block is being added,
or NULL, if standalone block is being added. */
void (*add_block) (basic_block, basic_block);
/* If the second parameter is not NULL, return a non-null value if the
basic block should be advanced.
If the second parameter is NULL, return the next basic block in EBB.
The first parameter is the current basic block in EBB. */
basic_block (*advance_target_bb) (basic_block, rtx);
/* Called after blocks were rearranged due to movement of jump instruction.
The first parameter - index of basic block, in which jump currently is.
The second parameter - index of basic block, in which jump used
to be.
The third parameter - index of basic block, that follows the second
parameter. */
void (*fix_recovery_cfg) (int, int, int);
#ifdef ENABLE_CHECKING
/* If the second parameter is zero, return nonzero, if block is head of the
region.
If the second parameter is nonzero, return nonzero, if block is leaf of
the region.
global_live_at_start should not change in region heads and
global_live_at_end should not change in region leaves due to scheduling. */
int (*region_head_or_leaf_p) (basic_block, int);
#endif
/* ??? FIXME: should use straight bitfields inside sched_info instead of
this flag field. */
unsigned int flags;
};
/* This structure holds description of the properties for speculative
scheduling. */
struct spec_info_def
{
/* Holds types of allowed speculations: BEGIN_{DATA|CONTROL},
BE_IN_{DATA|CONTROL}. */
int mask;
/* A dump file for additional information on speculative scheduling. */
FILE *dump;
/* Minimal cumulative weakness of a speculative instruction's
dependencies required for the insn to be scheduled. */
dw_t weakness_cutoff;
/* Flags from the enum SPEC_SCHED_FLAGS. */
int flags;
};
typedef struct spec_info_def *spec_info_t;
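Note that weakness_cutoff lives on the dw_t scale used by get_dep_weak,
while the sched-spec-prob-cutoff parameter is given in percent, so the
scheduler presumably converts between the two at initialization time,
roughly as below (a sketch: MAX_DEP_WEAK is the top of the dep-weak range,
and the exact initialization site is an assumption).

/* Sketch: map the percent-based --param onto the dw_t weakness scale.  */
spec_info->weakness_cutoff =
  (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;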
extern struct sched_info *current_sched_info;
/* Indexed by INSN_UID, the collection of all data associated with
@ -256,9 +320,26 @@ struct haifa_insn_data
/* Nonzero if instruction has internal dependence
(e.g. add_dependence was invoked with (insn == elem)). */
unsigned int has_internal_dep : 1;
/* What speculations are necessary to apply to schedule the instruction. */
ds_t todo_spec;
/* What speculations were already applied. */
ds_t done_spec;
/* What speculations are checked by this instruction. */
ds_t check_spec;
/* Recovery block for speculation checks. */
basic_block recovery_block;
/* Original pattern of the instruction. */
rtx orig_pat;
};
extern struct haifa_insn_data *h_i_d;
/* Used only if (current_sched_info->flags & USE_GLAT) != 0.
These regsets store global_live_at_{start, end} information
for each basic block. */
extern regset *glat_start, *glat_end;
/* Accessor macros for h_i_d. There are more in haifa-sched.c and
sched-rgn.c. */
@ -272,6 +353,11 @@ extern struct haifa_insn_data *h_i_d;
#define INSN_COST(INSN) (h_i_d[INSN_UID (INSN)].cost)
#define INSN_REG_WEIGHT(INSN) (h_i_d[INSN_UID (INSN)].reg_weight)
#define HAS_INTERNAL_DEP(INSN) (h_i_d[INSN_UID (INSN)].has_internal_dep)
#define TODO_SPEC(INSN) (h_i_d[INSN_UID (INSN)].todo_spec)
#define DONE_SPEC(INSN) (h_i_d[INSN_UID (INSN)].done_spec)
#define CHECK_SPEC(INSN) (h_i_d[INSN_UID (INSN)].check_spec)
#define RECOVERY_BLOCK(INSN) (h_i_d[INSN_UID (INSN)].recovery_block)
#define ORIG_PAT(INSN) (h_i_d[INSN_UID (INSN)].orig_pat)
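As an illustration of how these fields interact (a sketch, not code from
the patch; the actual pattern replacement is elided):

/* Try to apply pending control speculation to INSN using the new
   target hook and the per-insn fields above.  */
static bool
example_apply_control_spec (rtx insn)
{
  rtx new_pat;

  if (!(TODO_SPEC (insn) & BEGIN_CONTROL))
    return false;

  /* Ask the backend for a control-speculative form of the pattern.  */
  if (targetm.sched.speculate_insn (insn, BEGIN_CONTROL, &new_pat) != 1)
    return false;

  ORIG_PAT (insn) = PATTERN (insn);   /* remember the original pattern */
  DONE_SPEC (insn) |= BEGIN_CONTROL;  /* record the applied speculation */
  /* A real implementation would install NEW_PAT here and later emit a
     check via targetm.sched.gen_check.  */
  return true;
}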
/* DEP_STATUS of the link encapsulates information that is needed for
speculative scheduling. Namely, it is 4 integers in the range
@ -400,9 +486,27 @@ enum SCHED_FLAGS {
/* Perform data or control (or both) speculation.
Results in generation of data and control speculative dependencies.
Requires USE_DEPS_LIST set. */
DO_SPECULATION = USE_DEPS_LIST << 1
DO_SPECULATION = USE_DEPS_LIST << 1,
SCHED_RGN = DO_SPECULATION << 1,
SCHED_EBB = SCHED_RGN << 1,
/* Detach register live information from basic block headers.
This is necessary to invoke functions that change the CFG (e.g. split_edge).
Requires USE_GLAT. */
DETACH_LIFE_INFO = SCHED_EBB << 1,
/* Save register live information from basic block headers to
glat_{start, end} arrays. */
USE_GLAT = DETACH_LIFE_INFO << 1
};
enum SPEC_SCHED_FLAGS {
COUNT_SPEC_IN_CRITICAL_PATH = 1,
PREFER_NON_DATA_SPEC = COUNT_SPEC_IN_CRITICAL_PATH << 1,
PREFER_NON_CONTROL_SPEC = PREFER_NON_DATA_SPEC << 1
};
#define NOTE_NOT_BB_P(NOTE) (NOTE_P (NOTE) && (NOTE_LINE_NUMBER (NOTE) \
!= NOTE_INSN_BASIC_BLOCK))
extern FILE *sched_dump;
extern int sched_verbose;
@ -500,16 +604,19 @@ extern void compute_forward_dependences (rtx, rtx);
extern rtx find_insn_list (rtx, rtx);
extern void init_dependency_caches (int);
extern void free_dependency_caches (void);
extern void extend_dependency_caches (int, bool);
extern enum DEPS_ADJUST_RESULT add_or_update_back_dep (rtx, rtx,
enum reg_note, ds_t);
extern void add_or_update_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
extern void add_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
extern void delete_back_forw_dep (rtx, rtx);
extern dw_t get_dep_weak (ds_t, ds_t);
extern ds_t set_dep_weak (ds_t, ds_t, dw_t);
extern ds_t ds_merge (ds_t, ds_t);
/* Functions in haifa-sched.c. */
extern int haifa_classify_insn (rtx);
extern void get_block_head_tail (int, rtx *, rtx *);
extern void get_ebb_head_tail (basic_block, basic_block, rtx *, rtx *);
extern int no_real_insns_p (rtx, rtx);
extern void rm_line_notes (rtx, rtx);
@ -521,10 +628,18 @@ extern void rm_other_notes (rtx, rtx);
extern int insn_cost (rtx, rtx, rtx);
extern int set_priorities (rtx, rtx);
extern void schedule_block (int, int);
extern void schedule_block (basic_block *, int);
extern void sched_init (void);
extern void sched_finish (void);
extern int try_ready (rtx);
extern void * xrecalloc (void *, size_t, size_t, size_t);
extern void unlink_bb_notes (basic_block, basic_block);
extern void add_block (basic_block, basic_block);
extern void attach_life_info (void);
#ifdef ENABLE_CHECKING
extern void check_reg_live (void);
#endif
#endif /* GCC_SCHED_INT_H */

sched-rgn.c

@ -96,8 +96,15 @@ static bool sched_is_disabled_for_current_region_p (void);
control flow graph edges, in the 'up' direction. */
typedef struct
{
int rgn_nr_blocks; /* Number of blocks in region. */
int rgn_blocks; /* cblocks in the region (actually index in rgn_bb_table). */
/* Number of extended basic blocks in region. */
int rgn_nr_blocks;
/* cblocks in the region (actually index in rgn_bb_table). */
int rgn_blocks;
/* Dependencies for this region are already computed. Basically, this
indicates that this is a recovery block. */
unsigned int dont_calc_deps : 1;
/* This region has at least one non-trivial ebb. */
unsigned int has_real_ebb : 1;
}
region;
@ -125,6 +132,8 @@ static int min_spec_prob;
#define RGN_NR_BLOCKS(rgn) (rgn_table[rgn].rgn_nr_blocks)
#define RGN_BLOCKS(rgn) (rgn_table[rgn].rgn_blocks)
#define RGN_DONT_CALC_DEPS(rgn) (rgn_table[rgn].dont_calc_deps)
#define RGN_HAS_REAL_EBB(rgn) (rgn_table[rgn].has_real_ebb)
#define BLOCK_TO_BB(block) (block_to_bb[block])
#define CONTAINING_RGN(block) (containing_rgn[block])
@ -140,8 +149,15 @@ extern void debug_live (int, int);
static int current_nr_blocks;
static int current_blocks;
/* The mapping from bb to block. */
#define BB_TO_BLOCK(bb) (rgn_bb_table[current_blocks + (bb)])
static int rgn_n_insns;
/* The mapping from ebb to block. */
/* ebb_head [i] is an index in rgn_bb_table, while
EBB_HEAD (i) is a basic block index.
BASIC_BLOCK (EBB_HEAD (i)) is the head of the ebb. */
#define BB_TO_BLOCK(ebb) (rgn_bb_table[ebb_head[ebb]])
#define EBB_FIRST_BB(ebb) BASIC_BLOCK (BB_TO_BLOCK (ebb))
#define EBB_LAST_BB(ebb) BASIC_BLOCK (rgn_bb_table[ebb_head[ebb + 1] - 1])
/* Target info declarations.
@ -244,6 +260,12 @@ static edgeset *pot_split;
/* For every bb, a set of its ancestor edges. */
static edgeset *ancestor_edges;
/* Array of EBB sizes. Currently we can get an ebb only through
splitting of the currently scheduled block; therefore, we don't need
an ebb_head array for every region, and it is sufficient to hold it
only for the current one. */
static int *ebb_head;
static void compute_dom_prob_ps (int);
#define INSN_PROBABILITY(INSN) (SRC_PROB (BLOCK_TO_BB (BLOCK_NUM (INSN))))
@ -381,13 +403,12 @@ debug_regions (void)
rgn_table[rgn].rgn_nr_blocks);
fprintf (sched_dump, ";;\tbb/block: ");
for (bb = 0; bb < rgn_table[rgn].rgn_nr_blocks; bb++)
{
current_blocks = RGN_BLOCKS (rgn);
/* We don't have ebb_head initialized yet, so we can't use
BB_TO_BLOCK (). */
current_blocks = RGN_BLOCKS (rgn);
gcc_assert (bb == BLOCK_TO_BB (BB_TO_BLOCK (bb)));
fprintf (sched_dump, " %d/%d ", bb, BB_TO_BLOCK (bb));
}
for (bb = 0; bb < rgn_table[rgn].rgn_nr_blocks; bb++)
fprintf (sched_dump, " %d/%d ", bb, rgn_bb_table[current_blocks + bb]);
fprintf (sched_dump, "\n\n");
}
@ -409,6 +430,8 @@ find_single_block_region (void)
rgn_bb_table[nr_regions] = bb->index;
RGN_NR_BLOCKS (nr_regions) = 1;
RGN_BLOCKS (nr_regions) = nr_regions;
RGN_DONT_CALC_DEPS (nr_regions) = 0;
RGN_HAS_REAL_EBB (nr_regions) = 0;
CONTAINING_RGN (bb->index) = nr_regions;
BLOCK_TO_BB (bb->index) = 0;
nr_regions++;
@ -852,6 +875,8 @@ find_rgns (void)
rgn_bb_table[idx] = bb->index;
RGN_NR_BLOCKS (nr_regions) = num_bbs;
RGN_BLOCKS (nr_regions) = idx++;
RGN_DONT_CALC_DEPS (nr_regions) = 0;
RGN_HAS_REAL_EBB (nr_regions) = 0;
CONTAINING_RGN (bb->index) = nr_regions;
BLOCK_TO_BB (bb->index) = count = 0;
@ -921,6 +946,8 @@ find_rgns (void)
rgn_bb_table[idx] = bb->index;
RGN_NR_BLOCKS (nr_regions) = 1;
RGN_BLOCKS (nr_regions) = idx++;
RGN_DONT_CALC_DEPS (nr_regions) = 0;
RGN_HAS_REAL_EBB (nr_regions) = 0;
CONTAINING_RGN (bb->index) = nr_regions++;
BLOCK_TO_BB (bb->index) = 0;
}
@ -1152,6 +1179,8 @@ extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
degree[bbn] = -1;
rgn_bb_table[idx] = bbn;
RGN_BLOCKS (nr_regions) = idx++;
RGN_DONT_CALC_DEPS (nr_regions) = 0;
RGN_HAS_REAL_EBB (nr_regions) = 0;
CONTAINING_RGN (bbn) = nr_regions;
BLOCK_TO_BB (bbn) = 0;
@ -1205,6 +1234,8 @@ extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
{
RGN_BLOCKS (nr_regions) = idx;
RGN_NR_BLOCKS (nr_regions) = 1;
RGN_DONT_CALC_DEPS (nr_regions) = 0;
RGN_HAS_REAL_EBB (nr_regions) = 0;
nr_regions++;
}
@ -1254,6 +1285,9 @@ compute_dom_prob_ps (int bb)
edge_iterator in_ei;
edge in_edge;
/* We shouldn't have any real ebbs yet. */
gcc_assert (ebb_head [bb] == bb + current_blocks);
if (IS_RGN_ENTRY (bb))
{
SET_BIT (dom[bb], 0);
@ -1519,8 +1553,14 @@ check_live_1 (int src, rtx x)
{
basic_block b = candidate_table[src].split_bbs.first_member[i];
if (REGNO_REG_SET_P (b->il.rtl->global_live_at_start,
regno + j))
/* We can have split blocks that were recently generated;
such blocks are always outside the current region. */
gcc_assert (glat_start[b->index]
|| CONTAINING_RGN (b->index)
!= CONTAINING_RGN (BB_TO_BLOCK (src)));
if (!glat_start[b->index]
|| REGNO_REG_SET_P (glat_start[b->index],
regno + j))
{
return 0;
}
@ -1534,7 +1574,11 @@ check_live_1 (int src, rtx x)
{
basic_block b = candidate_table[src].split_bbs.first_member[i];
if (REGNO_REG_SET_P (b->il.rtl->global_live_at_start, regno))
gcc_assert (glat_start[b->index]
|| CONTAINING_RGN (b->index)
!= CONTAINING_RGN (BB_TO_BLOCK (src)));
if (!glat_start[b->index]
|| REGNO_REG_SET_P (glat_start[b->index], regno))
{
return 0;
}
@ -1593,8 +1637,7 @@ update_live_1 (int src, rtx x)
{
basic_block b = candidate_table[src].update_bbs.first_member[i];
SET_REGNO_REG_SET (glat_start[b->index], regno + j);
}
}
}
@ -1604,7 +1647,7 @@ update_live_1 (int src, rtx x)
{
basic_block b = candidate_table[src].update_bbs.first_member[i];
SET_REGNO_REG_SET (glat_start[b->index], regno);
}
}
}
@ -1880,25 +1923,35 @@ static int sched_target_n_insns;
static int target_n_insns;
/* The number of insns from the entire region scheduled so far. */
static int sched_n_insns;
/* Implementations of the sched_info functions for region scheduling. */
static void init_ready_list (void);
static int can_schedule_ready_p (rtx);
static void begin_schedule_ready (rtx, rtx);
static ds_t new_ready (rtx, ds_t);
static int schedule_more_p (void);
static const char *rgn_print_insn (rtx, int);
static int rgn_rank (rtx, rtx);
static int contributes_to_priority (rtx, rtx);
static void compute_jump_reg_dependencies (rtx, regset, regset, regset);
/* Functions for speculative scheduling. */
static void add_remove_insn (rtx, int);
static void extend_regions (void);
static void add_block1 (basic_block, basic_block);
static void fix_recovery_cfg (int, int, int);
static basic_block advance_target_bb (basic_block, rtx);
static void check_dead_notes1 (int, sbitmap);
#ifdef ENABLE_CHECKING
static int region_head_or_leaf_p (basic_block, int);
#endif
/* Return nonzero if there are more insns that should be scheduled. */
static int
schedule_more_p (void)
{
return sched_target_n_insns < target_n_insns;
}
/* Add all insns that are initially ready to the ready list READY. Called
@ -1915,7 +1968,6 @@ init_ready_list (void)
target_n_insns = 0;
sched_target_n_insns = 0;
sched_n_insns = 0;
/* Print debugging information. */
if (sched_verbose >= 5)
@ -1946,6 +1998,8 @@ init_ready_list (void)
{
try_ready (insn);
target_n_insns++;
gcc_assert (!(TODO_SPEC (insn) & BEGIN_CONTROL));
}
/* Add to ready list all 'ready' insns in valid source blocks.
@ -1958,7 +2012,8 @@ init_ready_list (void)
rtx src_next_tail;
rtx tail, head;
get_ebb_head_tail (EBB_FIRST_BB (bb_src), EBB_LAST_BB (bb_src),
&head, &tail);
src_next_tail = NEXT_INSN (tail);
src_head = head;
@ -1974,18 +2029,29 @@ init_ready_list (void)
static int
can_schedule_ready_p (rtx insn)
{
/* An interblock motion? */
if (INSN_BB (insn) != target_bb
&& IS_SPECULATIVE_INSN (insn)
&& !check_live (insn, INSN_BB (insn)))
return 0;
else
return 1;
}
/* Updates counters and other information. Split from can_schedule_ready_p ()
because when we schedule an insn speculatively, the insn passed to
can_schedule_ready_p () differs from the one passed to
begin_schedule_ready (). */
static void
begin_schedule_ready (rtx insn, rtx last ATTRIBUTE_UNUSED)
{
/* An interblock motion? */
if (INSN_BB (insn) != target_bb)
{
if (IS_SPECULATIVE_INSN (insn))
{
gcc_assert (check_live (insn, INSN_BB (insn)));
update_live (insn, INSN_BB (insn));
/* For speculative load, mark insns fed by it. */
@ -1995,32 +2061,6 @@ can_schedule_ready_p (rtx insn)
nr_spec++;
}
nr_inter++;
}
else
{
@ -2028,28 +2068,44 @@ can_schedule_ready_p (rtx insn)
sched_target_n_insns++;
}
sched_n_insns++;
}
/* Called after INSN has all its hard dependencies resolved and the speculation
of type TS is enough to overcome them all.
Return nonzero if it should be moved to the ready list or the queue, or zero
if we should silently discard it. */
static ds_t
new_ready (rtx next, ds_t ts)
{
if (INSN_BB (next) != target_bb)
{
int not_ex_free = 0;
/* For speculative insns, before inserting to ready/queue,
check live, exception-free, and issue-delay. */
if (!IS_VALID (INSN_BB (next))
|| CANT_MOVE (next)
|| (IS_SPECULATIVE_INSN (next)
&& ((recog_memoized (next) >= 0
&& min_insn_conflict_delay (curr_state, next, next)
> PARAM_VALUE (PARAM_MAX_SCHED_INSN_CONFLICT_DELAY))
|| RECOVERY_BLOCK (next)
|| !check_live (next, INSN_BB (next))
|| (not_ex_free = !is_exception_free (next, INSN_BB (next),
target_bb)))))
{
if (not_ex_free
/* We are here because is_exception_free () == false.
But we can possibly handle that with control speculation. */
&& current_sched_info->flags & DO_SPECULATION)
/* Here we have a new control-speculative instruction. */
ts = set_dep_weak (ts, BEGIN_CONTROL, MAX_DEP_WEAK);
else
ts = (ts & ~SPECULATIVE) | HARD_DEP;
}
}
return ts;
}
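/* An illustrative restatement (not part of the patch) of new_ready's
   contract in terms of ds_t values; set_dep_weak, MAX_DEP_WEAK,
   SPECULATIVE and HARD_DEP are the ones declared in sched-int.h.  */
#if 0
static void
new_ready_contract (ds_t ts)
{
  /* An insn whose only obstacle is a possible trap becomes
     control-speculative with maximal weakness...  */
  ts = set_dep_weak (ts, BEGIN_CONTROL, MAX_DEP_WEAK);

  /* ...while an insn that can't be moved at all loses its speculative
     bits and becomes a hard dependency.  */
  ts = (ts & ~SPECULATIVE) | HARD_DEP;
}
#endif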
/* Return a string that contains the insn uid and optionally anything else
@ -2112,7 +2168,8 @@ rgn_rank (rtx insn1, rtx insn2)
static int
contributes_to_priority (rtx next, rtx insn)
{
/* NEXT and INSN reside in one ebb. */
return BLOCK_TO_BB (BLOCK_NUM (next)) == BLOCK_TO_BB (BLOCK_NUM (insn));
}
/* INSN is a JUMP_INSN, COND_SET is the set of registers that are
@ -2148,7 +2205,18 @@ static struct sched_info region_sched_info =
NULL, NULL,
0, 0, 0,
add_remove_insn,
begin_schedule_ready,
add_block1,
advance_target_bb,
fix_recovery_cfg,
#ifdef ENABLE_CHECKING
region_head_or_leaf_p,
#endif
SCHED_RGN | USE_GLAT
#ifdef ENABLE_CHECKING
| DETACH_LIFE_INFO
#endif
};
/* Determine if PAT sets a CLASS_LIKELY_SPILLED_P register. */
@ -2447,7 +2515,8 @@ compute_block_backward_dependences (int bb)
tmp_deps = bb_deps[bb];
/* Do the analysis for this block. */
gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
sched_analyze (&tmp_deps, head, tail);
add_branch_dependences (head, tail);
@ -2489,7 +2558,8 @@ debug_dependencies (void)
rtx next_tail;
rtx insn;
gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
next_tail = NEXT_INSN (tail);
fprintf (sched_dump, "\n;; --- Region Dependences --- b %d bb %d \n",
BB_TO_BLOCK (bb), bb);
@ -2576,48 +2646,68 @@ schedule_region (int rgn)
edge_iterator ei;
edge e;
int bb;
int sched_rgn_n_insns = 0;
rgn_n_insns = 0;
/* Set variables for the current region. */
current_nr_blocks = RGN_NR_BLOCKS (rgn);
current_blocks = RGN_BLOCKS (rgn);
/* See comments in add_block1, for what reasons we allocate +1 element. */
ebb_head = xrealloc (ebb_head, (current_nr_blocks + 1) * sizeof (*ebb_head));
for (bb = 0; bb <= current_nr_blocks; bb++)
ebb_head[bb] = current_blocks + bb;
/* Don't schedule region that is marked by
NOTE_DISABLE_SCHED_OF_BLOCK. */
if (sched_is_disabled_for_current_region_p ())
return;
if (!RGN_DONT_CALC_DEPS (rgn))
{
init_deps_global ();
/* Initializations for region data dependence analysis. */
bb_deps = XNEWVEC (struct deps, current_nr_blocks);
for (bb = 0; bb < current_nr_blocks; bb++)
init_deps (bb_deps + bb);
/* Compute LOG_LINKS. */
for (bb = 0; bb < current_nr_blocks; bb++)
compute_block_backward_dependences (bb);
/* Compute INSN_DEPEND. */
for (bb = current_nr_blocks - 1; bb >= 0; bb--)
{
rtx head, tail;
gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
compute_forward_dependences (head, tail);
if (targetm.sched.dependencies_evaluation_hook)
targetm.sched.dependencies_evaluation_hook (head, tail);
}
free_pending_lists ();
finish_deps_global ();
free (bb_deps);
}
else
/* This is a recovery block. It is always a single block region. */
gcc_assert (current_nr_blocks == 1);
/* Set priorities. */
current_sched_info->sched_max_insns_priority = 0;
for (bb = 0; bb < current_nr_blocks; bb++)
{
rtx head, tail;
gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
rgn_n_insns += set_priorities (head, tail);
}
@ -2660,18 +2750,36 @@ schedule_region (int rgn)
/* Compute probabilities, dominators, split_edges. */
for (bb = 0; bb < current_nr_blocks; bb++)
compute_dom_prob_ps (bb);
/* Cleanup ->aux used for EDGE_TO_BIT mapping. */
/* We don't need them anymore. But we want to avoid duplication of
aux fields in the newly created edges. */
FOR_EACH_BB (block)
{
if (CONTAINING_RGN (block->index) != rgn)
continue;
FOR_EACH_EDGE (e, ei, block->succs)
e->aux = NULL;
}
}
/* Now we can schedule all blocks. */
for (bb = 0; bb < current_nr_blocks; bb++)
{
basic_block first_bb, last_bb, curr_bb;
rtx head, tail;
first_bb = EBB_FIRST_BB (bb);
last_bb = EBB_LAST_BB (bb);
get_ebb_head_tail (first_bb, last_bb, &head, &tail);
if (no_real_insns_p (head, tail))
{
gcc_assert (first_bb == last_bb);
continue;
}
current_sched_info->prev_head = PREV_INSN (head);
current_sched_info->next_tail = NEXT_INSN (tail);
@ -2696,26 +2804,29 @@ schedule_region (int rgn)
if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
remove_note (head, note);
}
else
/* This means that the first block in the ebb is empty.
That looks impossible: there should at least be
a recovery check that caused the splitting. */
gcc_unreachable ();
/* Remove remaining note insns from the block, save them in
note_list. These notes are restored at the end of
schedule_block (). */
rm_other_notes (head, tail);
unlink_bb_notes (first_bb, last_bb);
target_bb = bb;
gcc_assert (flag_schedule_interblock || current_nr_blocks == 1);
current_sched_info->queue_must_finish_empty = current_nr_blocks == 1;
curr_bb = first_bb;
schedule_block (&curr_bb, rgn_n_insns);
gcc_assert (EBB_FIRST_BB (bb) == first_bb);
sched_rgn_n_insns += sched_n_insns;
/* Clean up. */
if (current_nr_blocks > 1)
{
@ -2734,29 +2845,16 @@ schedule_region (int rgn)
for (bb = 0; bb < current_nr_blocks; bb++)
{
rtx head, tail;
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
restore_line_notes (head, tail);
}
}
/* Done with this region. */
if (current_nr_blocks > 1)
{
free (prob);
sbitmap_vector_free (dom);
sbitmap_vector_free (pot_split);
@ -2778,10 +2876,11 @@ init_regions (void)
int rgn;
nr_regions = 0;
rgn_table = 0;
rgn_bb_table = 0;
block_to_bb = 0;
containing_rgn = 0;
extend_regions ();
/* Compute regions for scheduling. */
if (reload_completed
@ -2806,6 +2905,8 @@ init_regions (void)
to using the cfg code in flow.c. */
free_dominance_info (CDI_DOMINATORS);
}
RGN_BLOCKS (nr_regions) = RGN_BLOCKS (nr_regions - 1) +
RGN_NR_BLOCKS (nr_regions - 1);
if (CHECK_DEAD_NOTES)
@ -2814,15 +2915,8 @@ init_regions (void)
deaths_in_region = XNEWVEC (int, nr_regions);
/* Remove all death notes from the subroutine. */
for (rgn = 0; rgn < nr_regions; rgn++)
check_dead_notes1 (rgn, blocks);
sbitmap_free (blocks);
}
else
@ -2858,9 +2952,15 @@ schedule_insns (void)
init_regions ();
/* EBB_HEAD is a region-scope structure. But we realloc it for
each region to save time/memory/something else. */
ebb_head = 0;
/* Schedule every region in the subroutine. */
for (rgn = 0; rgn < nr_regions; rgn++)
schedule_region (rgn);
free (ebb_head);
/* Update life analysis for the subroutine. Do single block regions
first so that we can verify that live_at_start didn't change. Then
@ -2875,8 +2975,11 @@ schedule_insns (void)
that live_at_start should change at region heads. Not sure what the
best way to test for this kind of thing... */
if (current_sched_info->flags & DETACH_LIFE_INFO)
/* This flag can be set either by the target or by ENABLE_CHECKING. */
attach_life_info ();
allocate_reg_life_data ();
compute_bb_for_insn ();
any_large_regions = 0;
large_region_blocks = sbitmap_alloc (last_basic_block);
@ -2891,8 +2994,13 @@ schedule_insns (void)
we've possibly done interblock scheduling that affects global liveness.
For regions consisting of single blocks we need to do only local
liveness. */
for (rgn = 0; rgn < nr_regions; rgn++)
if (RGN_NR_BLOCKS (rgn) > 1
/* Or the only block of this region has been split. */
|| RGN_HAS_REAL_EBB (rgn)
/* New blocks (e.g. recovery blocks) should be processed
as parts of large regions. */
|| !glat_start[rgn_bb_table[RGN_BLOCKS (rgn)]])
any_large_regions = 1;
else
{
@ -2904,16 +3012,21 @@ schedule_insns (void)
regs_ever_live, which should not change after reload. */
update_life_info (blocks, UPDATE_LIFE_LOCAL,
(reload_completed ? PROP_DEATH_NOTES
: (PROP_DEATH_NOTES | PROP_REG_INFO)));
if (any_large_regions)
{
update_life_info (large_region_blocks, UPDATE_LIFE_GLOBAL,
(reload_completed ? PROP_DEATH_NOTES
: (PROP_DEATH_NOTES | PROP_REG_INFO)));
#ifdef ENABLE_CHECKING
check_reg_live ();
#endif
}
if (CHECK_DEAD_NOTES)
{
/* Verify the counts of basic block notes in single basic block
regions. */
for (rgn = 0; rgn < nr_regions; rgn++)
if (RGN_NR_BLOCKS (rgn) == 1)
@ -2960,6 +3073,209 @@ schedule_insns (void)
sbitmap_free (blocks);
sbitmap_free (large_region_blocks);
}
/* INSN has been added to/removed from current region. */
static void
add_remove_insn (rtx insn, int remove_p)
{
if (!remove_p)
rgn_n_insns++;
else
rgn_n_insns--;
if (INSN_BB (insn) == target_bb)
{
if (!remove_p)
target_n_insns++;
else
target_n_insns--;
}
}
/* Extend internal data structures. */
static void
extend_regions (void)
{
rgn_table = XRESIZEVEC (region, rgn_table, n_basic_blocks);
rgn_bb_table = XRESIZEVEC (int, rgn_bb_table, n_basic_blocks);
block_to_bb = XRESIZEVEC (int, block_to_bb, last_basic_block);
containing_rgn = XRESIZEVEC (int, containing_rgn, last_basic_block);
}
/* BB was added to ebb after AFTER. */
static void
add_block1 (basic_block bb, basic_block after)
{
extend_regions ();
if (after == 0 || after == EXIT_BLOCK_PTR)
{
int i;
i = RGN_BLOCKS (nr_regions);
/* I - first free position in rgn_bb_table. */
rgn_bb_table[i] = bb->index;
RGN_NR_BLOCKS (nr_regions) = 1;
RGN_DONT_CALC_DEPS (nr_regions) = after == EXIT_BLOCK_PTR;
RGN_HAS_REAL_EBB (nr_regions) = 0;
CONTAINING_RGN (bb->index) = nr_regions;
BLOCK_TO_BB (bb->index) = 0;
nr_regions++;
RGN_BLOCKS (nr_regions) = i + 1;
if (CHECK_DEAD_NOTES)
{
sbitmap blocks = sbitmap_alloc (last_basic_block);
deaths_in_region = xrealloc (deaths_in_region, nr_regions *
sizeof (*deaths_in_region));
check_dead_notes1 (nr_regions - 1, blocks);
sbitmap_free (blocks);
}
}
else
{
int i, pos;
/* We need to fix rgn_table, block_to_bb, containing_rgn
and ebb_head. */
BLOCK_TO_BB (bb->index) = BLOCK_TO_BB (after->index);
/* We extend ebb_head to one more position to
easily find the last position of the last ebb in
the current region. Thus, ebb_head[BLOCK_TO_BB (after) + 1]
is _always_ valid for access. */
i = BLOCK_TO_BB (after->index) + 1;
for (pos = ebb_head[i]; rgn_bb_table[pos] != after->index; pos--);
pos++;
gcc_assert (pos > ebb_head[i - 1]);
/* i - ebb right after "AFTER". */
/* ebb_head[i] - VALID. */
/* Source position: ebb_head[i]
Destination position: ebb_head[i] + 1
Last position:
RGN_BLOCKS (nr_regions) - 1
Number of elements to copy: (last_position) - (source_position) + 1
*/
memmove (rgn_bb_table + pos + 1,
rgn_bb_table + pos,
((RGN_BLOCKS (nr_regions) - 1) - (pos) + 1)
* sizeof (*rgn_bb_table));
rgn_bb_table[pos] = bb->index;
for (; i <= current_nr_blocks; i++)
ebb_head [i]++;
i = CONTAINING_RGN (after->index);
CONTAINING_RGN (bb->index) = i;
RGN_HAS_REAL_EBB (i) = 1;
for (++i; i <= nr_regions; i++)
RGN_BLOCKS (i)++;
/* We don't need to call check_dead_notes1 () because this new block
is just a split of the old. We don't want to count anything twice. */
}
}
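/* A worked example of the shift above (values are illustrative only):
   let rgn_bb_table be {4, 7, 9} with ebb_head = {0, 2, 3}, i.e. ebb 0
   holds blocks 4 and 7, and ebb 1 holds block 9.  Splitting a new block
   with index 11 off right after block 7 gives pos == 2; the memmove
   yields {4, 7, 9, 9}, storing the new index yields {4, 7, 11, 9}, and
   bumping ebb_head gives {0, 3, 4}, so ebb 0 now spans positions 0..2.  */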
/* Fix internal data after interblock movement of jump instruction.
For parameter meaning please refer to
sched-int.h: struct sched_info: fix_recovery_cfg. */
static void
fix_recovery_cfg (int bbi, int check_bbi, int check_bb_nexti)
{
int old_pos, new_pos, i;
BLOCK_TO_BB (check_bb_nexti) = BLOCK_TO_BB (bbi);
for (old_pos = ebb_head[BLOCK_TO_BB (check_bbi) + 1] - 1;
rgn_bb_table[old_pos] != check_bb_nexti;
old_pos--);
gcc_assert (old_pos > ebb_head[BLOCK_TO_BB (check_bbi)]);
for (new_pos = ebb_head[BLOCK_TO_BB (bbi) + 1] - 1;
rgn_bb_table[new_pos] != bbi;
new_pos--);
new_pos++;
gcc_assert (new_pos > ebb_head[BLOCK_TO_BB (bbi)]);
gcc_assert (new_pos < old_pos);
memmove (rgn_bb_table + new_pos + 1,
rgn_bb_table + new_pos,
(old_pos - new_pos) * sizeof (*rgn_bb_table));
rgn_bb_table[new_pos] = check_bb_nexti;
for (i = BLOCK_TO_BB (bbi) + 1; i <= BLOCK_TO_BB (check_bbi); i++)
ebb_head[i]++;
}
/* Return next block in ebb chain. For parameter meaning please refer to
sched-int.h: struct sched_info: advance_target_bb. */
static basic_block
advance_target_bb (basic_block bb, rtx insn)
{
if (insn)
return 0;
gcc_assert (BLOCK_TO_BB (bb->index) == target_bb
&& BLOCK_TO_BB (bb->next_bb->index) == target_bb);
return bb->next_bb;
}
/* Count and remove death notes in region RGN, which consists of blocks
with indices in BLOCKS. */
static void
check_dead_notes1 (int rgn, sbitmap blocks)
{
int b;
sbitmap_zero (blocks);
for (b = RGN_NR_BLOCKS (rgn) - 1; b >= 0; --b)
SET_BIT (blocks, rgn_bb_table[RGN_BLOCKS (rgn) + b]);
deaths_in_region[rgn] = count_or_remove_death_notes (blocks, 1);
}
#ifdef ENABLE_CHECKING
/* Return nonzero, if BB is a head or leaf (depending on LEAF_P) block in
the current region. For more information please refer to
sched-int.h: struct sched_info: region_head_or_leaf_p. */
static int
region_head_or_leaf_p (basic_block bb, int leaf_p)
{
if (!leaf_p)
return bb->index == rgn_bb_table[RGN_BLOCKS (CONTAINING_RGN (bb->index))];
else
{
int i;
edge e;
edge_iterator ei;
i = CONTAINING_RGN (bb->index);
FOR_EACH_EDGE (e, ei, bb->succs)
if (CONTAINING_RGN (e->dest->index) == i
/* except self-loop. */
&& e->dest != bb)
return 0;
return 1;
}
}
#endif /* ENABLE_CHECKING */
#endif
static bool


@ -288,6 +288,14 @@ Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD 0
#define TARGET_SCHED_DFA_NEW_CYCLE 0
#define TARGET_SCHED_IS_COSTLY_DEPENDENCE 0
#define TARGET_SCHED_ADJUST_COST_2 0
#define TARGET_SCHED_H_I_D_EXTENDED 0
#define TARGET_SCHED_SPECULATE_INSN 0
#define TARGET_SCHED_NEEDS_BLOCK_P 0
#define TARGET_SCHED_GEN_CHECK 0
#define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD_SPEC 0
#define TARGET_SCHED_SET_SCHED_FLAGS 0
#define TARGET_SCHED \
{TARGET_SCHED_ADJUST_COST, \
@ -308,7 +316,14 @@ Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD, \
TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD, \
TARGET_SCHED_DFA_NEW_CYCLE, \
TARGET_SCHED_IS_COSTLY_DEPENDENCE}
TARGET_SCHED_IS_COSTLY_DEPENDENCE, \
TARGET_SCHED_ADJUST_COST_2, \
TARGET_SCHED_H_I_D_EXTENDED, \
TARGET_SCHED_SPECULATE_INSN, \
TARGET_SCHED_NEEDS_BLOCK_P, \
TARGET_SCHED_GEN_CHECK, \
TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD_SPEC, \
TARGET_SCHED_SET_SCHED_FLAGS}
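/* For illustration only (hypothetical back end): a target enables one of
   the new hooks by redefining its zero default before its own use of
   TARGET_INITIALIZER, e.g.

     #undef TARGET_SCHED_NEEDS_BLOCK_P
     #define TARGET_SCHED_NEEDS_BLOCK_P xyz_sched_needs_block_p

   where xyz_sched_needs_block_p is the target's implementation.  */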
#define TARGET_VECTORIZE_BUILTIN_MASK_FOR_LOAD 0


@ -51,6 +51,7 @@ Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#include "insn-modes.h"
struct stdarg_info;
struct spec_info_def;
/* The struct used by the secondary_reload target hook. */
typedef struct secondary_reload_info
@ -306,6 +307,58 @@ struct gcc_target
between the already scheduled insn (first parameter) and
the second insn (second parameter). */
bool (* is_costly_dependence) (rtx, rtx, rtx, int, int);
/* Given the current cost, COST, of an insn, INSN, calculate and
return a new cost based on its relationship to DEP_INSN through the
dependence of type DEP_TYPE. The default is to make no adjustment. */
int (* adjust_cost_2) (rtx insn, int, rtx def_insn, int cost);
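/* For instance (hypothetical back end), a target might cancel the
   latency of anti-dependencies; the dep-type encoding is a
   target-convention, so treat this as a sketch only:

     static int
     xyz_adjust_cost_2 (rtx insn, int dep_type, rtx def_insn, int cost)
     {
       if (dep_type == REG_DEP_ANTI)
         return 0;
       return cost;
     }
*/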
/* The following member value is a pointer to a function called
by the insn scheduler. This hook is called to notify the backend
that new instructions were emitted. */
void (* h_i_d_extended) (void);
/* The following member value is a pointer to a function called
by the insn scheduler.
The first parameter is an instruction, the second parameter is the type
of the requested speculation, and the third parameter is a pointer to the
speculative pattern of the corresponding type (set if return value == 1).
It should return
-1, if there is no pattern that will satisfy the requested speculation
type,
0, if the current pattern satisfies the requested speculation type,
1, if pattern of the instruction should be changed to the newly
generated one. */
int (* speculate_insn) (rtx, HOST_WIDE_INT, rtx *);
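/* A caller-side sketch of this protocol (illustrative only):

     rtx spec_pat;
     int res = targetm.sched.speculate_insn (insn, request, &spec_pat);

     if (res < 0)
       ... no suitable pattern exists: keep the dependency hard ...
     else if (res == 0)
       ... the pattern is already speculative: nothing to change ...
     else
       ... replace the pattern of INSN with SPEC_PAT ...
*/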
/* The following member value is a pointer to a function called
by the insn scheduler. It should return true if the check instruction
corresponding to the instruction passed as the parameter needs a
recovery block. */
bool (* needs_block_p) (rtx);
/* The following member value is a pointer to a function called
by the insn scheduler. It should return a pattern for the check
instruction.
The first parameter is a speculative instruction, the second parameter
is the label of the corresponding recovery block (or null, if it is a
simple check). If the mutation of the check is requested (e.g. from
ld.c to chk.a), the third parameter is true; in that case the first
parameter is the previous check. */
rtx (* gen_check) (rtx, rtx, bool);
/* The following member value is a pointer to a function controlling
what insns from the ready insn queue will be considered for the
multipass insn scheduling. If the hook returns zero for the insn
passed as the parameter, the insn will not be chosen to be
issued. This hook is used to discard speculative instructions
that stand at the first position of the ready list. */
bool (* first_cycle_multipass_dfa_lookahead_guard_spec) (rtx);
/* The following member value is a pointer to a function that provides
information about the speculation capabilities of the target.
The parameter is a pointer to spec_info variable. */
void (* set_sched_flags) (struct spec_info_def *);
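/* An illustrative sketch of a target hook (hypothetical back end; the
   field name below is an assumption about spec_info_def as declared in
   sched-int.h by this patch):

     static void
     xyz_set_sched_flags (struct spec_info_def *spec_info)
     {
       spec_info->mask = BEGIN_DATA | BEGIN_CONTROL;
     }
*/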
} sched;
/* Functions relating to vectorization. */