
* basic-block.h (flow_delete_insn, flow_delete_insn_chain): Kill.
	* cfg.c (delete_insn): Rename from ...; use remove_insn; do not
	remove some labels.
	(flow_delete_insn): ... this one.
	(delete_insn_chain): Rename from ...; do not take care of labels.
	(flow_delete_insn_chain): ... this one.
	(flow_delete_block): Remove the insns once BB has been expunged.
	(merge_blocks_nomove): Likewise.
	(try_redirect_by_replacing_jump): Use delete_insn[_chain]; do not
	care about updating BB boundaries.
	(tidy_fallthru_edge): Likewise.
	(commit_one_edge_insertion): Likewise.
	* cfgbuild.c (find_basic_blocks): Likewise.
	(find_basic_blocks_1): Likewise.
	* cfgcleanup.c (merge_blocks_move_predecessor_nojumps): Likewise.
	(try_crossjump_to_edge): Likewise.
	(try_optimize_cfg): Likewise.
	* cse.c (delete_trivially_dead_insns): Likewise.
	* df.c (df_insn_delete): Likewise.
	* doloop.c (doloop_modify): Use delete_related_insns.
	* emit-rtl.c (try_split): Likewise.
	(remove_insn): Update BB boundaries.
	* except.c (connect_post_landing_pads): Use delete_related_insns.
	* flow.c (delete_dead_jumptables): Use delete_insn[_chain]; do not
	care about updating BB boundaries.
	(propagate_block_delete_insn): Likewise.
	(propagate_block_delete_libcall): Likewise.
	* function.c (delete_handlers): Use delete_related_insns.
	(thread_prologue_and_epilogue_insns): Likewise.
	* gcse.c (delete_null_pointer_checks): Use delete_related_insns.
	* genpeep.c (gen_peephole): Use delete_related_insns.
	* ifcvt.c (noce_process_if_block): Use delete_insn; do not care about
	updating BB boundaries.
	(find_cond_trap): Likewise.
	* integrate.c (save_for_inline): Use delete_related_insns.
	(copy_insn_list): Likewise.
	* jump.c (purge_line_number_notes): Likewise.
	(duplicate_loop_exit_test): Likewise.
	(delete_computation): Likewise.
	(delete_related_insns): Rename from ...; use delete_insn.
	(delete_insn): ... this one.
	(redirect_jump): Use delete_related_insns.
	* loop.c (scan_loop): Likewise.
	(move_movables): Likewise.
	(find_and_verify_loops): Likewise.
	(check_dbra_loop): Likewise.
	* recog.c (peephole2_optimize): Likewise.
	* reg-stack.c (delete_insn_for_stacker): Remove.
	(move_for_stack_reg): Use delete_insn.
	* regmove.c (combine_stack_adjustments_for_block): Likewise.
	* reload1.c (delete_address_reloads): Use delete_related_insns.
	(fixup_abnormal_edges): Use delete_insn.
	* reorg.c (emit_delay_sequence): Use delete_related_insns.
	(delete_from_delay_slot): Likewise.
	(delete_scheduled_jump): Likewise.
	(optimize_skip): Likewise.
	(try_merge_delay_insns): Likewise.
	(fill_simple_delay_slots): Likewise.
	(fill_slots_from_thread): Likewise.
	(relax_delay_slots): Likewise.
	(make_return_insns): Likewise.
	(dbr_schedule): Likewise.
	* rtl.h (delete_insn): Rename to delete_related_insns.
	(delete_insn, delete_insn_chain): New prototypes.
	* ssa-ccp.c (ssa_fast_dce): Remove deleting of DEF, as it is done
	by df_insn_delete already.
	* ssa-dce.c (delete_insn_bb): Use delete_insn.
	* ssa.c (convert_from_ssa): Use delete_related_insns.
	* unroll.c (unroll_loop): Likewise.
	(calculate_giv_inc): Likewise.
	(copy_loop_body): Likewise.
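The heart of the rename above is a split in semantics: delete_insn now only unlinks a single insn (via remove_insn) and converts labels it cannot remove into NOTE_INSN_DELETED_LABEL notes, while delete_related_insns keeps the old cascading behaviour. A minimal standalone C model of the label handling (the struct and the _model names here are illustrative, not GCC's RTL types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdbool.h>

/* Simplified stand-in for the RTL insn chain: CODE_LABEL and NOTE model
   GET_CODE (insn) and NOTE_INSN_DELETED_LABEL as used in the diff.  */
enum insn_code { INSN, CODE_LABEL, NOTE };

struct insn
{
  enum insn_code code;
  bool deletable;               /* models can_delete_label_p ()   */
  bool deleted;                 /* models INSN_DELETED_P ()       */
  struct insn *prev, *next;
};

/* Unlink INSN from the chain, as remove_insn does in emit-rtl.c.  */
static void
remove_insn_model (struct insn *insn)
{
  if (insn->prev)
    insn->prev->next = insn->next;
  if (insn->next)
    insn->next->prev = insn->prev;
  insn->prev = insn->next = NULL;
}

/* Model of the new delete_insn: undeletable labels are turned into
   deleted-label notes and stay in the chain; everything else is
   unlinked and marked deleted.  Returns the next insn.  */
static struct insn *
delete_insn_model (struct insn *insn)
{
  struct insn *next = insn->next;
  if (insn->code == CODE_LABEL && !insn->deletable)
    insn->code = NOTE;          /* keep it in the chain as a note  */
  else
    {
      remove_insn_model (insn);
      insn->deleted = true;
    }
  return next;
}
```

The point of the sketch is only the control flow: a referenced label survives deletion as a note, so addresses taken of it stay valid.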

	* i386-protos.h (ix86_libcall_value, ix86_function_value,
	ix86_function_arg_regno_p, ix86_function_arg_boundary,
	ix86_return_in_memory, ix86_function_value): Declare.
	* i386.c (x86_64_int_parameter_registers, x86_64_int_return_registers):
	New static variables.
	(x86_64_reg_class): New enum.
	(x86_64_reg_class_name): New array.
	(classify_argument, examine_argument, construct_container,
	 merge_classes): New static functions.
	(optimization_options): Enable flag_omit_frame_pointer and disable
	flag_pcc_struct_return on 64bit.
	(ix86_libcall_value, ix86_function_value,
	ix86_function_arg_regno_p, ix86_function_arg_boundary,
	ix86_return_in_memory, ix86_function_value): New global functions.
	(init_cumulative_args): Refuse regparm on x86_64, set maybe_vaarg.
	(function_arg_advance): Handle x86_64 passing conventions.
	(function_arg): Likewise.
	* i386.h (FUNCTION_ARG_BOUNDARY): New macro.
	(RETURN_IN_MEMORY): Move offline.
	(FUNCTION_VALUE, LIBCALL_VALUE): Likewise.
	(FUNCTION_VALUE_REGNO_P): New macro.
	(FUNCTION_ARG_REGNO_P): Move offline.
	(struct ix86_args): Add maybe_vaarg.
	* next.h (FUNCTION_VALUE_REGNO_P): Delete.
	* unix.h (FUNCTION_VALUE_REGNO_P): Delete.
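The new x86_64_int_parameter_registers array introduced above encodes the psABI order for integer arguments: RDI, RSI, RDX, RCX, R8, R9. A small illustrative sketch of that ordering (the array of names and the helper are mine, not part of the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Integer argument registers in the order the x86-64 psABI assigns
   them; mirrors x86_64_int_parameter_registers from the diff, mapped
   to register names here for illustration.  */
static const char *const x86_64_int_arg_order[6] =
  { "rdi", "rsi", "rdx", "rcx", "r8", "r9" };

/* Return the register name for the Nth integer argument, or NULL once
   the six registers are exhausted and the argument goes on the stack.  */
static const char *
int_arg_register (int n)
{
  return (n >= 0 && n < 6) ? x86_64_int_arg_order[n] : NULL;
}
```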

From-SVN: r45726
Jan Hubicka 2001-09-21 14:55:18 +02:00 committed by Jan Hubicka
parent f2d3c02aa0
commit 53c170316f
33 changed files with 1014 additions and 375 deletions


@ -1,3 +1,103 @@
Fri Sep 21 14:24:29 CEST 2001 Jan Hubicka <jh@suse.cz>
2001-09-21 Hartmut Penner <hpenner@de.ibm.com>
* s390.md: Changed attributes for scheduling.


@ -303,8 +303,6 @@ extern void remove_fake_edges PARAMS ((void));
extern void add_noreturn_fake_exit_edges PARAMS ((void));
extern void connect_infinite_loops_to_exit PARAMS ((void));
extern int flow_call_edges_add PARAMS ((sbitmap));
extern rtx flow_delete_insn PARAMS ((rtx));
extern void flow_delete_insn_chain PARAMS ((rtx, rtx));
extern edge cached_make_edge PARAMS ((sbitmap *, basic_block,
basic_block, int));
extern edge make_edge PARAMS ((basic_block,

gcc/cfg.c

@ -27,7 +27,7 @@ Software Foundation, 59 Temple Place - Suite 330, Boston, MA
- Initialization/deallocation
init_flow, clear_edges
- CFG aware instruction chain manipulation
flow_delete_insn, flow_delete_insn_chain
delete_insn, delete_insn_chain
- Basic block manipulation
create_basic_block, flow_delete_block, split_block, merge_blocks_nomove
- Infrastructure to determine quickly basic block for instruction.
@ -242,26 +242,35 @@ can_delete_label_p (label)
/* Delete INSN by patching it out. Return the next insn. */
rtx
flow_delete_insn (insn)
delete_insn (insn)
rtx insn;
{
rtx prev = PREV_INSN (insn);
rtx next = NEXT_INSN (insn);
rtx note;
PREV_INSN (insn) = NULL_RTX;
NEXT_INSN (insn) = NULL_RTX;
INSN_DELETED_P (insn) = 1;
if (prev)
NEXT_INSN (prev) = next;
if (next)
PREV_INSN (next) = prev;
else
set_last_insn (prev);
bool really_delete = true;
if (GET_CODE (insn) == CODE_LABEL)
remove_node_from_expr_list (insn, &nonlocal_goto_handler_labels);
{
/* Some labels can't be directly removed from the INSN chain, as they
might be referenced via variables, constant pool etc.
Convert them to the special NOTE_INSN_DELETED_LABEL note. */
if (! can_delete_label_p (insn))
{
const char *name = LABEL_NAME (insn);
really_delete = false;
PUT_CODE (insn, NOTE);
NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED_LABEL;
NOTE_SOURCE_FILE (insn) = name;
}
remove_node_from_expr_list (insn, &nonlocal_goto_handler_labels);
}
if (really_delete)
{
remove_insn (insn);
INSN_DELETED_P (insn) = 1;
}
/* If deleting a jump, decrement the use count of the label. Deleting
the label itself should happen in the normal course of block merging. */
@ -295,7 +304,7 @@ flow_delete_insn (insn)
that must be paired. */
void
flow_delete_insn_chain (start, finish)
delete_insn_chain (start, finish)
rtx start, finish;
{
/* Unchain the insns one by one. It would be quicker to delete all
@ -309,16 +318,8 @@ flow_delete_insn_chain (start, finish)
next = NEXT_INSN (start);
if (GET_CODE (start) == NOTE && !can_delete_note_p (start))
;
else if (GET_CODE (start) == CODE_LABEL
&& ! can_delete_label_p (start))
{
const char *name = LABEL_NAME (start);
PUT_CODE (start, NOTE);
NOTE_LINE_NUMBER (start) = NOTE_INSN_DELETED_LABEL;
NOTE_SOURCE_FILE (start) = name;
}
else
next = flow_delete_insn (start);
next = delete_insn (start);
if (start == finish)
break;
@ -510,19 +511,18 @@ flow_delete_block (b)
end = tmp;
/* Selectively delete the entire chain. */
flow_delete_insn_chain (insn, end);
b->head = NULL;
delete_insn_chain (insn, end);
/* Remove the edges into and out of this block. Note that there may
indeed be edges in, if we are removing an unreachable loop. */
{
while (b->pred != NULL)
remove_edge (b->pred);
while (b->succ != NULL)
remove_edge (b->succ);
while (b->pred != NULL)
remove_edge (b->pred);
while (b->succ != NULL)
remove_edge (b->succ);
b->pred = NULL;
b->succ = NULL;
}
b->pred = NULL;
b->succ = NULL;
/* Remove the basic block from the array, and compact behind it. */
expunge_block (b);
@ -958,10 +958,6 @@ merge_blocks_nomove (a, b)
else if (GET_CODE (NEXT_INSN (a_end)) == BARRIER)
del_first = NEXT_INSN (a_end);
/* Delete everything marked above as well as crap that might be
hanging out between the two blocks. */
flow_delete_insn_chain (del_first, del_last);
/* Normally there should only be one successor of A and that is B, but
partway though the merge of blocks for conditional_execution we'll
be merging a TEST block with THEN and ELSE successors. Free the
@ -977,6 +973,12 @@ merge_blocks_nomove (a, b)
/* B hasn't quite yet ceased to exist. Attempt to prevent mishap. */
b->pred = b->succ = NULL;
expunge_block (b);
/* Delete everything marked above as well as crap that might be
hanging out between the two blocks. */
delete_insn_chain (del_first, del_last);
/* Reassociate the insns of B with A. */
if (!b_empty)
{
@ -993,8 +995,6 @@ merge_blocks_nomove (a, b)
a_end = b_end;
}
a->end = a_end;
expunge_block (b);
}
/* Return label in the head of basic block. Create one if it doesn't exist. */
@ -1055,13 +1055,12 @@ try_redirect_by_replacing_jump (e, target)
/* See if we can create the fallthru edge. */
if (can_fallthru (src, target))
{
src->end = PREV_INSN (kill_from);
if (rtl_dump_file)
fprintf (rtl_dump_file, "Removing jump %i.\n", INSN_UID (insn));
fallthru = 1;
/* Selectively unlink whole insn chain. */
flow_delete_insn_chain (kill_from, PREV_INSN (target->head));
delete_insn_chain (kill_from, PREV_INSN (target->head));
}
/* If this already is simplejump, redirect it. */
else if (simplejump_p (insn))
@ -1079,14 +1078,14 @@ try_redirect_by_replacing_jump (e, target)
rtx target_label = block_label (target);
rtx barrier;
src->end = emit_jump_insn_before (gen_jump (target_label), kill_from);
emit_jump_insn_after (gen_jump (target_label), kill_from);
JUMP_LABEL (src->end) = target_label;
LABEL_NUSES (target_label)++;
if (rtl_dump_file)
fprintf (rtl_dump_file, "Replacing insn %i by jump %i\n",
INSN_UID (insn), INSN_UID (src->end));
flow_delete_insn_chain (kill_from, insn);
delete_insn_chain (kill_from, insn);
barrier = next_nonnote_insn (src->end);
if (!barrier || GET_CODE (barrier) != BARRIER)
@ -1108,11 +1107,7 @@ try_redirect_by_replacing_jump (e, target)
the potential of changing the code between -g and not -g. */
while (GET_CODE (e->src->end) == NOTE
&& NOTE_LINE_NUMBER (e->src->end) >= 0)
{
rtx prev = PREV_INSN (e->src->end);
flow_delete_insn (e->src->end);
e->src->end = prev;
}
delete_insn (e->src->end);
if (e->dest != target)
redirect_edge_succ (e, target);
@ -1387,28 +1382,17 @@ tidy_fallthru_edge (e, b, c)
q = PREV_INSN (q);
#endif
if (b->head == q)
{
PUT_CODE (q, NOTE);
NOTE_LINE_NUMBER (q) = NOTE_INSN_DELETED;
NOTE_SOURCE_FILE (q) = 0;
}
else
{
q = PREV_INSN (q);
q = PREV_INSN (q);
/* We don't want a block to end on a line-number note since that has
the potential of changing the code between -g and not -g. */
while (GET_CODE (q) == NOTE && NOTE_LINE_NUMBER (q) >= 0)
q = PREV_INSN (q);
}
b->end = q;
/* We don't want a block to end on a line-number note since that has
the potential of changing the code between -g and not -g. */
while (GET_CODE (q) == NOTE && NOTE_LINE_NUMBER (q) >= 0)
q = PREV_INSN (q);
}
/* Selectively unlink the sequence. */
if (q != PREV_INSN (c->head))
flow_delete_insn_chain (NEXT_INSN (q), PREV_INSN (c->head));
delete_insn_chain (NEXT_INSN (q), PREV_INSN (c->head));
e->flags |= EDGE_FALLTHRU;
}
@ -1692,7 +1676,7 @@ commit_one_edge_insertion (e)
emit_barrier_after (last);
if (before)
flow_delete_insn (before);
delete_insn (before);
}
else if (GET_CODE (last) == JUMP_INSN)
abort ();


@ -442,7 +442,7 @@ find_basic_blocks_1 (f)
if (bb_note == NULL_RTX)
bb_note = insn;
else
next = flow_delete_insn (insn);
next = delete_insn (insn);
}
break;
}
@ -581,7 +581,7 @@ find_basic_blocks_1 (f)
if (head != NULL_RTX)
create_basic_block_structure (i++, head, end, bb_note);
else if (bb_note)
flow_delete_insn (bb_note);
delete_insn (bb_note);
if (i != n_basic_blocks)
abort ();


@ -277,7 +277,7 @@ merge_blocks_move_predecessor_nojumps (a, b)
barrier = next_nonnote_insn (a->end);
if (GET_CODE (barrier) != BARRIER)
abort ();
flow_delete_insn (barrier);
delete_insn (barrier);
/* Move block and loop notes out of the chain so that we do not
disturb their order.
@ -337,7 +337,7 @@ merge_blocks_move_successor_nojumps (a, b)
/* There had better have been a barrier there. Delete it. */
if (barrier && GET_CODE (barrier) == BARRIER)
flow_delete_insn (barrier);
delete_insn (barrier);
/* Move block and loop notes out of the chain so that we do not
disturb their order.
@ -901,12 +901,12 @@ try_crossjump_to_edge (mode, e1, e2)
/* Emit the jump insn. */
label = block_label (redirect_to);
src1->end = emit_jump_insn_before (gen_jump (label), newpos1);
emit_jump_insn_after (gen_jump (label), src1->end);
JUMP_LABEL (src1->end) = label;
LABEL_NUSES (label)++;
/* Delete the now unreachable instructions. */
flow_delete_insn_chain (newpos1, last);
delete_insn_chain (newpos1, last);
/* Make sure there is a barrier after the new jump. */
last = next_nonnote_insn (src1->end);
@ -1078,7 +1078,7 @@ try_optimize_cfg (mode)
{
rtx label = b->head;
b->head = NEXT_INSN (b->head);
flow_delete_insn_chain (label, label);
delete_insn_chain (label, label);
if (rtl_dump_file)
fprintf (rtl_dump_file, "Deleted label in block %i.\n",
b->index);


@ -135,6 +135,11 @@ extern enum machine_mode ix86_fp_compare_mode PARAMS ((enum rtx_code));
extern int x86_64_sign_extended_value PARAMS ((rtx));
extern int x86_64_zero_extended_value PARAMS ((rtx));
extern rtx ix86_libcall_value PARAMS ((enum machine_mode));
extern bool ix86_function_value_regno_p PARAMS ((int));
extern bool ix86_function_arg_regno_p PARAMS ((int));
extern int ix86_function_arg_boundary PARAMS ((enum machine_mode, tree));
extern int ix86_return_in_memory PARAMS ((tree));
extern rtx ix86_force_to_memory PARAMS ((enum machine_mode, rtx));
extern void ix86_free_from_memory PARAMS ((enum machine_mode));
@ -160,6 +165,7 @@ extern void init_cumulative_args PARAMS ((CUMULATIVE_ARGS *, tree, rtx));
extern rtx function_arg PARAMS ((CUMULATIVE_ARGS *, enum machine_mode, tree, int));
extern void function_arg_advance PARAMS ((CUMULATIVE_ARGS *, enum machine_mode,
tree, int));
extern rtx ix86_function_value PARAMS ((tree));
extern void ix86_init_builtins PARAMS ((void));
extern void ix86_init_mmx_sse_builtins PARAMS ((void));
extern rtx ix86_expand_builtin PARAMS ((tree, rtx, rtx, enum machine_mode, int));


@ -404,6 +404,12 @@ int const dbx_register_map[FIRST_PSEUDO_REGISTER] =
-1, -1, -1, -1, -1, -1, -1, -1, /* extended SSE registers */
};
static int x86_64_int_parameter_registers[6] = {5 /*RDI*/, 4 /*RSI*/,
1 /*RDX*/, 2 /*RCX*/,
FIRST_REX_INT_REG /*R8 */,
FIRST_REX_INT_REG + 1 /*R9 */};
static int x86_64_int_return_registers[4] = {0 /*RAX*/, 1 /*RDI*/, 5, 4};
/* The "default" register map used in 64bit mode. */
int const dbx64_register_map[FIRST_PSEUDO_REGISTER] =
{
@ -668,6 +674,40 @@ static void ix86_svr3_asm_out_constructor PARAMS ((rtx, int));
static void sco_asm_named_section PARAMS ((const char *, unsigned int));
static void sco_asm_out_constructor PARAMS ((rtx, int));
#endif
/* Register class used for passing given 64bit part of the argument.
These represent classes as documented by the PS ABI, with the exception
of SSESF, SSEDF classes, that are basically SSE class, just gcc will
use SF or DFmode move instead of DImode to avoid reformatting penalties.
Similarly we play games with INTEGERSI_CLASS to use cheaper SImode moves
whenever possible (upper half does contain padding).
*/
enum x86_64_reg_class
{
X86_64_NO_CLASS,
X86_64_INTEGER_CLASS,
X86_64_INTEGERSI_CLASS,
X86_64_SSE_CLASS,
X86_64_SSESF_CLASS,
X86_64_SSEDF_CLASS,
X86_64_SSEUP_CLASS,
X86_64_X87_CLASS,
X86_64_X87UP_CLASS,
X86_64_MEMORY_CLASS
};
const char * const x86_64_reg_class_name[] =
{"no", "integer", "integerSI", "sse", "sseSF", "sseDF", "sseup", "x87", "x87up", "no"};
#define MAX_CLASSES 4
static int classify_argument PARAMS ((enum machine_mode, tree,
enum x86_64_reg_class [MAX_CLASSES],
int));
static int examine_argument PARAMS ((enum machine_mode, tree, int, int *,
int *));
static rtx construct_container PARAMS ((enum machine_mode, tree, int, int, int,
int *, int));
static enum x86_64_reg_class merge_classes PARAMS ((enum x86_64_reg_class,
enum x86_64_reg_class));
/* Initialize the GCC target structure. */
#undef TARGET_ATTRIBUTE_TABLE
@ -974,6 +1014,10 @@ optimization_options (level, size)
if (level > 1)
flag_schedule_insns = 0;
#endif
if (TARGET_64BIT && optimize >= 1)
flag_omit_frame_pointer = 1;
if (TARGET_64BIT)
flag_pcc_struct_return = 0;
}
/* Table of valid machine attributes. */
@ -1236,6 +1280,25 @@ ix86_return_pops_args (fundecl, funtype, size)
/* Argument support functions. */
/* Return true when register may be used to pass function parameters. */
bool
ix86_function_arg_regno_p (regno)
int regno;
{
int i;
if (!TARGET_64BIT)
return regno < REGPARM_MAX || (TARGET_SSE && SSE_REGNO_P (regno));
if (SSE_REGNO_P (regno) && TARGET_SSE)
return true;
/* RAX is used as hidden argument to va_arg functions. */
if (!regno)
return true;
for (i = 0; i < REGPARM_MAX; i++)
if (regno == x86_64_int_parameter_registers[i])
return true;
return false;
}
/* Initialize a variable CUM of type CUMULATIVE_ARGS
for a call to a function whose data type is FNTYPE.
For a library call, FNTYPE is 0. */
@ -1267,13 +1330,15 @@ init_cumulative_args (cum, fntype, libname)
/* Set up the number of registers to use for passing arguments. */
cum->nregs = ix86_regparm;
if (fntype)
cum->sse_nregs = SSE_REGPARM_MAX;
if (fntype && !TARGET_64BIT)
{
tree attr = lookup_attribute ("regparm", TYPE_ATTRIBUTES (fntype));
if (attr)
cum->nregs = TREE_INT_CST_LOW (TREE_VALUE (TREE_VALUE (attr)));
}
cum->maybe_vaarg = false;
/* Determine if this function has variable arguments. This is
indicated by the last argument being 'void_type_mode' if there
@ -1287,9 +1352,16 @@ init_cumulative_args (cum, fntype, libname)
{
next_param = TREE_CHAIN (param);
if (next_param == 0 && TREE_VALUE (param) != void_type_node)
cum->nregs = 0;
{
if (!TARGET_64BIT)
cum->nregs = 0;
cum->maybe_vaarg = true;
}
}
}
if ((!fntype && !libname)
|| (fntype && !TYPE_ARG_TYPES (fntype)))
cum->maybe_vaarg = 1;
if (TARGET_DEBUG_ARG)
fprintf (stderr, ", nregs=%d )\n", cum->nregs);
@ -1297,6 +1369,444 @@ init_cumulative_args (cum, fntype, libname)
return;
}
/* x86-64 register passing implementation. See x86-64 ABI for details. Goal
of this code is to classify each 8 bytes of incoming argument by the register
class and assign registers accordingly. */
/* Return the union class of CLASS1 and CLASS2.
See the x86-64 PS ABI for details. */
static enum x86_64_reg_class
merge_classes (class1, class2)
enum x86_64_reg_class class1, class2;
{
/* Rule #1: If both classes are equal, this is the resulting class. */
if (class1 == class2)
return class1;
/* Rule #2: If one of the classes is NO_CLASS, the resulting class is
the other class. */
if (class1 == X86_64_NO_CLASS)
return class2;
if (class2 == X86_64_NO_CLASS)
return class1;
/* Rule #3: If one of the classes is MEMORY, the result is MEMORY. */
if (class1 == X86_64_MEMORY_CLASS || class2 == X86_64_MEMORY_CLASS)
return X86_64_MEMORY_CLASS;
/* Rule #4: If one of the classes is INTEGER, the result is INTEGER. */
if ((class1 == X86_64_INTEGERSI_CLASS && class2 == X86_64_SSESF_CLASS)
|| (class2 == X86_64_INTEGERSI_CLASS && class1 == X86_64_SSESF_CLASS))
return X86_64_INTEGERSI_CLASS;
if (class1 == X86_64_INTEGER_CLASS || class1 == X86_64_INTEGERSI_CLASS
|| class2 == X86_64_INTEGER_CLASS || class2 == X86_64_INTEGERSI_CLASS)
return X86_64_INTEGER_CLASS;
/* Rule #5: If one of the classes is X87 or X87UP class, MEMORY is used. */
if (class1 == X86_64_X87_CLASS || class1 == X86_64_X87UP_CLASS
|| class2 == X86_64_X87_CLASS || class2 == X86_64_X87UP_CLASS)
return X86_64_MEMORY_CLASS;
/* Rule #6: Otherwise class SSE is used. */
return X86_64_SSE_CLASS;
}
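The merging rules in merge_classes can be exercised in isolation. The following is a self-contained restatement of the function from the diff (a model for checking the psABI rules, not the GCC source itself):

```c
#include <assert.h>

/* The register classes of the x86-64 psABI, as in the diff.  */
enum x86_64_reg_class
{
  X86_64_NO_CLASS, X86_64_INTEGER_CLASS, X86_64_INTEGERSI_CLASS,
  X86_64_SSE_CLASS, X86_64_SSESF_CLASS, X86_64_SSEDF_CLASS,
  X86_64_SSEUP_CLASS, X86_64_X87_CLASS, X86_64_X87UP_CLASS,
  X86_64_MEMORY_CLASS
};

/* Restatement of merge_classes: compute the union class of C1 and C2.  */
static enum x86_64_reg_class
merge_classes_model (enum x86_64_reg_class c1, enum x86_64_reg_class c2)
{
  if (c1 == c2)
    return c1;                                  /* rule #1 */
  if (c1 == X86_64_NO_CLASS)
    return c2;                                  /* rule #2 */
  if (c2 == X86_64_NO_CLASS)
    return c1;
  if (c1 == X86_64_MEMORY_CLASS || c2 == X86_64_MEMORY_CLASS)
    return X86_64_MEMORY_CLASS;                 /* rule #3 */
  if ((c1 == X86_64_INTEGERSI_CLASS && c2 == X86_64_SSESF_CLASS)
      || (c2 == X86_64_INTEGERSI_CLASS && c1 == X86_64_SSESF_CLASS))
    return X86_64_INTEGERSI_CLASS;              /* rule #4, SImode case */
  if (c1 == X86_64_INTEGER_CLASS || c1 == X86_64_INTEGERSI_CLASS
      || c2 == X86_64_INTEGER_CLASS || c2 == X86_64_INTEGERSI_CLASS)
    return X86_64_INTEGER_CLASS;                /* rule #4 */
  if (c1 == X86_64_X87_CLASS || c1 == X86_64_X87UP_CLASS
      || c2 == X86_64_X87_CLASS || c2 == X86_64_X87UP_CLASS)
    return X86_64_MEMORY_CLASS;                 /* rule #5 */
  return X86_64_SSE_CLASS;                      /* rule #6 */
}
```

Notably, mixing x87 with anything else forces the whole 8-byte word to memory, while integer beats SSE.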
/* Classify the argument of type TYPE and mode MODE.
CLASSES will be filled by the register class used to pass each word
of the operand. The number of words is returned. In case the parameter
should be passed in memory, 0 is returned. As a special case for zero
sized containers, classes[0] will be NO_CLASS and 1 is returned.
BIT_OFFSET is used internally for handling records and specifies offset
of the offset in bits modulo 256 to avoid overflow cases.
See the x86-64 PS ABI for details.
*/
static int
classify_argument (mode, type, classes, bit_offset)
enum machine_mode mode;
tree type;
enum x86_64_reg_class classes[MAX_CLASSES];
int bit_offset;
{
int bytes =
(mode == BLKmode) ? int_size_in_bytes (type) : (int) GET_MODE_SIZE (mode);
int words = (bytes + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
if (type && AGGREGATE_TYPE_P (type))
{
int i;
tree field;
enum x86_64_reg_class subclasses[MAX_CLASSES];
/* On x86-64 we pass structures larger than 16 bytes on the stack. */
if (bytes > 16)
return 0;
for (i = 0; i < words; i++)
classes[i] = X86_64_NO_CLASS;
/* Zero sized arrays or structures are NO_CLASS. We return 0 to
signal memory class, so handle it as a special case. */
if (!words)
{
classes[0] = X86_64_NO_CLASS;
return 1;
}
/* Classify each field of record and merge classes. */
if (TREE_CODE (type) == RECORD_TYPE)
{
for (field = TYPE_FIELDS (type); field; field = TREE_CHAIN (field))
{
if (TREE_CODE (field) == FIELD_DECL)
{
int num;
/* Bitfields are always classified as integer. Handle them
early, since later code would consider them to be
misaligned integers. */
if (DECL_BIT_FIELD (field))
{
for (i = int_bit_position (field) / 8 / 8;
i < (int_bit_position (field)
+ tree_low_cst (DECL_SIZE (field), 0)
+ 63) / 8 / 8; i++)
classes[i] =
merge_classes (X86_64_INTEGER_CLASS,
classes[i]);
}
else
{
num = classify_argument (TYPE_MODE (TREE_TYPE (field)),
TREE_TYPE (field), subclasses,
(int_bit_position (field)
+ bit_offset) % 256);
if (!num)
return 0;
for (i = 0; i < num; i++)
{
int pos =
(int_bit_position (field) + bit_offset) / 8 / 8;
classes[i + pos] =
merge_classes (subclasses[i], classes[i + pos]);
}
}
}
}
}
/* Arrays are handled as small records. */
else if (TREE_CODE (type) == ARRAY_TYPE)
{
int num;
num = classify_argument (TYPE_MODE (TREE_TYPE (type)),
TREE_TYPE (type), subclasses, bit_offset);
if (!num)
return 0;
/* The partial classes are now full classes. */
if (subclasses[0] == X86_64_SSESF_CLASS && bytes != 4)
subclasses[0] = X86_64_SSE_CLASS;
if (subclasses[0] == X86_64_INTEGERSI_CLASS && bytes != 4)
subclasses[0] = X86_64_INTEGER_CLASS;
for (i = 0; i < words; i++)
classes[i] = subclasses[i % num];
}
/* Unions are similar to RECORD_TYPE but offset is always 0. */
else if (TREE_CODE (type) == UNION_TYPE)
{
for (field = TYPE_FIELDS (type); field; field = TREE_CHAIN (field))
{
if (TREE_CODE (field) == FIELD_DECL)
{
int num;
num = classify_argument (TYPE_MODE (TREE_TYPE (field)),
TREE_TYPE (field), subclasses,
bit_offset);
if (!num)
return 0;
for (i = 0; i < num; i++)
classes[i] = merge_classes (subclasses[i], classes[i]);
}
}
}
else
abort ();
/* Final merger cleanup. */
for (i = 0; i < words; i++)
{
/* If one class is MEMORY, everything should be passed in
memory. */
if (classes[i] == X86_64_MEMORY_CLASS)
return 0;
/* The X86_64_SSEUP_CLASS should be always preceded by
X86_64_SSE_CLASS. */
if (classes[i] == X86_64_SSEUP_CLASS
&& (i == 0 || classes[i - 1] != X86_64_SSE_CLASS))
classes[i] = X86_64_SSE_CLASS;
/* X86_64_X87UP_CLASS should be preceded by X86_64_X87_CLASS. */
if (classes[i] == X86_64_X87UP_CLASS
&& (i == 0 || classes[i - 1] != X86_64_X87_CLASS))
classes[i] = X86_64_SSE_CLASS;
}
return words;
}
/* Compute alignment needed. We align all types to natural boundaries with
exception of XFmode that is aligned to 64bits. */
if (mode != VOIDmode && mode != BLKmode)
{
int mode_alignment = GET_MODE_BITSIZE (mode);
if (mode == XFmode)
mode_alignment = 128;
else if (mode == XCmode)
mode_alignment = 256;
/* Misaligned fields are always returned in memory. */
if (bit_offset % mode_alignment)
return 0;
}
/* Classification of atomic types. */
switch (mode)
{
case DImode:
case SImode:
case HImode:
case QImode:
case CSImode:
case CHImode:
case CQImode:
if (bit_offset + GET_MODE_BITSIZE (mode) <= 32)
classes[0] = X86_64_INTEGERSI_CLASS;
else
classes[0] = X86_64_INTEGER_CLASS;
return 1;
case CDImode:
case TImode:
classes[0] = classes[1] = X86_64_INTEGER_CLASS;
return 2;
case CTImode:
classes[0] = classes[1] = X86_64_INTEGER_CLASS;
classes[2] = classes[3] = X86_64_INTEGER_CLASS;
return 4;
case SFmode:
if (!(bit_offset % 64))
classes[0] = X86_64_SSESF_CLASS;
else
classes[0] = X86_64_SSE_CLASS;
return 1;
case DFmode:
classes[0] = X86_64_SSEDF_CLASS;
return 1;
case TFmode:
classes[0] = X86_64_X87_CLASS;
classes[1] = X86_64_X87UP_CLASS;
return 2;
case TCmode:
classes[0] = X86_64_X87_CLASS;
classes[1] = X86_64_X87UP_CLASS;
classes[2] = X86_64_X87_CLASS;
classes[3] = X86_64_X87UP_CLASS;
return 4;
case DCmode:
classes[0] = X86_64_SSEDF_CLASS;
classes[1] = X86_64_SSEDF_CLASS;
return 2;
case SCmode:
classes[0] = X86_64_SSE_CLASS;
return 1;
case BLKmode:
return 0;
default:
abort ();
}
}
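The atomic-mode switch above can be summarized for a few common scalars. This sketch models modes as a small enum (hypothetical names, illustration only) and mirrors the first-word class decisions of classify_argument:

```c
#include <assert.h>

/* Illustrative scalar "modes" and the subset of classes they map to.
   This is a model of the switch in classify_argument, not GCC code.  */
enum mode_model { M_SI, M_DI, M_SF, M_DF, M_TF };
enum class_model { C_INTEGERSI, C_INTEGER, C_SSE, C_SSESF, C_SSEDF, C_X87 };

/* Class of the first 8-byte word for MODE at BIT_OFFSET within the
   containing object, following the diff's rules.  */
static enum class_model
first_class (enum mode_model mode, int bit_offset)
{
  switch (mode)
    {
    case M_SI:          /* 32-bit integer (SImode) */
      return bit_offset + 32 <= 32 ? C_INTEGERSI : C_INTEGER;
    case M_DI:          /* 64-bit integer (DImode) */
      return C_INTEGER;
    case M_SF:          /* float (SFmode): SSESF only if 64-bit aligned */
      return (bit_offset % 64) == 0 ? C_SSESF : C_SSE;
    case M_DF:          /* double (DFmode) */
      return C_SSEDF;
    case M_TF:          /* long double (TFmode): X87, then X87UP */
      return C_X87;
    }
  return C_INTEGER;
}
```

The INTEGERSI and SSESF cases are the "cheap move" refinements the comment before the enum describes: they let GCC use SImode/SFmode moves when the value sits in the low half of a word.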
/* Examine the argument and return the number of registers required in each
class. Return 0 if the parameter should be passed in memory. */
static int
examine_argument (mode, type, in_return, int_nregs, sse_nregs)
enum machine_mode mode;
tree type;
int *int_nregs, *sse_nregs;
int in_return;
{
enum x86_64_reg_class class[MAX_CLASSES];
int n = classify_argument (mode, type, class, 0);
*int_nregs = 0;
*sse_nregs = 0;
if (!n)
return 0;
for (n--; n >= 0; n--)
switch (class[n])
{
case X86_64_INTEGER_CLASS:
case X86_64_INTEGERSI_CLASS:
(*int_nregs)++;
break;
case X86_64_SSE_CLASS:
case X86_64_SSESF_CLASS:
case X86_64_SSEDF_CLASS:
(*sse_nregs)++;
break;
case X86_64_NO_CLASS:
case X86_64_SSEUP_CLASS:
break;
case X86_64_X87_CLASS:
case X86_64_X87UP_CLASS:
if (!in_return)
return 0;
break;
case X86_64_MEMORY_CLASS:
abort ();
}
return 1;
}
/* Construct container for the argument used by GCC interface. See
FUNCTION_ARG for the detailed description. */
static rtx
construct_container (mode, type, in_return, nintregs, nsseregs, intreg, sse_regno)
enum machine_mode mode;
tree type;
int in_return;
int nintregs, nsseregs;
int *intreg, sse_regno;
{
enum machine_mode tmpmode;
int bytes =
(mode == BLKmode) ? int_size_in_bytes (type) : (int) GET_MODE_SIZE (mode);
enum x86_64_reg_class class[MAX_CLASSES];
int n;
int i;
int nexps = 0;
int needed_sseregs, needed_intregs;
rtx exp[MAX_CLASSES];
rtx ret;
n = classify_argument (mode, type, class, 0);
if (TARGET_DEBUG_ARG)
{
if (!n)
fprintf (stderr, "Memory class\n");
else
{
fprintf (stderr, "Classes:");
for (i = 0; i < n; i++)
{
fprintf (stderr, " %s", x86_64_reg_class_name[class[i]]);
}
fprintf (stderr, "\n");
}
}
if (!n)
return NULL;
if (!examine_argument (mode, type, in_return, &needed_intregs, &needed_sseregs))
return NULL;
if (needed_intregs > nintregs || needed_sseregs > nsseregs)
return NULL;
/* First construct simple cases. Avoid SCmode, since we want to use
single register to pass this type. */
if (n == 1 && mode != SCmode)
switch (class[0])
{
case X86_64_INTEGER_CLASS:
case X86_64_INTEGERSI_CLASS:
return gen_rtx_REG (mode, intreg[0]);
case X86_64_SSE_CLASS:
case X86_64_SSESF_CLASS:
case X86_64_SSEDF_CLASS:
return gen_rtx_REG (mode, SSE_REGNO (sse_regno));
case X86_64_X87_CLASS:
return gen_rtx_REG (mode, FIRST_STACK_REG);
case X86_64_NO_CLASS:
/* Zero sized array, struct or class. */
return NULL;
default:
abort ();
}
if (n == 2 && class[0] == X86_64_SSE_CLASS && class[1] == X86_64_SSEUP_CLASS)
return gen_rtx_REG (TImode, SSE_REGNO (sse_regno));
if (n == 2
&& class[0] == X86_64_X87_CLASS && class[1] == X86_64_X87UP_CLASS)
return gen_rtx_REG (TFmode, FIRST_STACK_REG);
if (n == 2 && class[0] == X86_64_INTEGER_CLASS
&& class[1] == X86_64_INTEGER_CLASS
&& (mode == CDImode || mode == TImode)
&& intreg[0] + 1 == intreg[1])
return gen_rtx_REG (mode, intreg[0]);
if (n == 4
&& class[0] == X86_64_X87_CLASS && class[1] == X86_64_X87UP_CLASS
&& class[2] == X86_64_X87_CLASS && class[3] == X86_64_X87UP_CLASS)
return gen_rtx_REG (TCmode, FIRST_STACK_REG);
/* Otherwise figure out the entries of the PARALLEL. */
for (i = 0; i < n; i++)
{
switch (class[i])
{
case X86_64_NO_CLASS:
break;
case X86_64_INTEGER_CLASS:
case X86_64_INTEGERSI_CLASS:
/* Merge TImodes on aligned occasions here too. */
if (i * 8 + 8 > bytes)
tmpmode = mode_for_size ((bytes - i * 8) * BITS_PER_UNIT, MODE_INT, 0);
else if (class[i] == X86_64_INTEGERSI_CLASS)
tmpmode = SImode;
else
tmpmode = DImode;
/* We've requested 24 bytes for which we have no mode. Use DImode. */
if (tmpmode == BLKmode)
tmpmode = DImode;
exp [nexps++] = gen_rtx_EXPR_LIST (VOIDmode,
gen_rtx_REG (tmpmode, *intreg),
GEN_INT (i*8));
intreg++;
break;
case X86_64_SSESF_CLASS:
exp [nexps++] = gen_rtx_EXPR_LIST (VOIDmode,
gen_rtx_REG (SFmode,
SSE_REGNO (sse_regno)),
GEN_INT (i*8));
sse_regno++;
break;
case X86_64_SSEDF_CLASS:
exp [nexps++] = gen_rtx_EXPR_LIST (VOIDmode,
gen_rtx_REG (DFmode,
SSE_REGNO (sse_regno)),
GEN_INT (i*8));
sse_regno++;
break;
case X86_64_SSE_CLASS:
if (i < n - 1 && class[i + 1] == X86_64_SSEUP_CLASS)
tmpmode = TImode, i++;
else
tmpmode = DImode;
exp [nexps++] = gen_rtx_EXPR_LIST (VOIDmode,
gen_rtx_REG (tmpmode,
SSE_REGNO (sse_regno)),
GEN_INT (i*8));
sse_regno++;
break;
default:
abort ();
}
}
ret = gen_rtx_PARALLEL (mode, rtvec_alloc (nexps));
for (i = 0; i < nexps; i++)
XVECEXP (ret, 0, i) = exp [i];
return ret;
}
/* Update the data in CUM to advance over an argument
of mode MODE and data type TYPE.
(TYPE is null for libcalls where that information may not be available.) */
@ -1316,27 +1826,45 @@ function_arg_advance (cum, mode, type, named)
fprintf (stderr,
"function_adv (sz=%d, wds=%2d, nregs=%d, mode=%s, named=%d)\n\n",
words, cum->words, cum->nregs, GET_MODE_NAME (mode), named);
if (TARGET_SSE && mode == TImode)
if (TARGET_64BIT)
{
cum->sse_words += words;
cum->sse_nregs -= 1;
cum->sse_regno += 1;
if (cum->sse_nregs <= 0)
int int_nregs, sse_nregs;
if (!examine_argument (mode, type, 0, &int_nregs, &sse_nregs))
cum->words += words;
else if (sse_nregs <= cum->sse_nregs && int_nregs <= cum->nregs)
{
cum->sse_nregs = 0;
cum->sse_regno = 0;
cum->nregs -= int_nregs;
cum->sse_nregs -= sse_nregs;
cum->regno += int_nregs;
cum->sse_regno += sse_nregs;
}
else
cum->words += words;
}
else
{
cum->words += words;
cum->nregs -= words;
cum->regno += words;
if (cum->nregs <= 0)
if (TARGET_SSE && mode == TImode)
{
cum->nregs = 0;
cum->regno = 0;
cum->sse_words += words;
cum->sse_nregs -= 1;
cum->sse_regno += 1;
if (cum->sse_nregs <= 0)
{
cum->sse_nregs = 0;
cum->sse_regno = 0;
}
}
else
{
cum->words += words;
cum->nregs -= words;
cum->regno += words;
if (cum->nregs <= 0)
{
cum->nregs = 0;
cum->regno = 0;
}
}
}
return;
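The new 64-bit advance rule above reduces to: consume the argument's register quota only when both the integer and SSE quotas still fit, otherwise spill it to the stack. A minimal model, with hypothetical field and function names (`fits` stands in for examine_argument succeeding):

```c
/* Toy model of the 64-bit function_arg_advance path above.  Names are
   hypothetical, not GCC's CUMULATIVE_ARGS. */
struct cum_model { int words, nregs, sse_nregs, regno, sse_regno; };

static void advance64 (struct cum_model *c, int fits,
                       int int_nregs, int sse_nregs, int words)
{
  if (fits && int_nregs <= c->nregs && sse_nregs <= c->sse_nregs)
    {
      c->nregs -= int_nregs;      /* argument passed in registers */
      c->sse_nregs -= sse_nregs;
      c->regno += int_nregs;
      c->sse_regno += sse_nregs;
    }
  else
    c->words += words;            /* argument passed on the stack */
}
```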
@ -1367,28 +1895,44 @@ function_arg (cum, mode, type, named)
(mode == BLKmode) ? int_size_in_bytes (type) : (int) GET_MODE_SIZE (mode);
int words = (bytes + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
/* Handle a hidden AL argument containing the number of registers for varargs
x86-64 functions. For the i386 ABI just return constm1_rtx to avoid
any AL settings. */
if (mode == VOIDmode)
return constm1_rtx;
switch (mode)
{
/* For now, pass fp/complex values on the stack. */
default:
break;
case BLKmode:
case DImode:
case SImode:
case HImode:
case QImode:
if (words <= cum->nregs)
ret = gen_rtx_REG (mode, cum->regno);
break;
case TImode:
if (cum->sse_nregs)
ret = gen_rtx_REG (mode, cum->sse_regno);
break;
if (TARGET_64BIT)
return GEN_INT (cum->maybe_vaarg
? (cum->sse_nregs < 0
? SSE_REGPARM_MAX
: cum->sse_regno)
: -1);
else
return constm1_rtx;
}
if (TARGET_64BIT)
ret = construct_container (mode, type, 0, cum->nregs, cum->sse_nregs,
&x86_64_int_parameter_registers [cum->regno],
cum->sse_regno);
else
switch (mode)
{
/* For now, pass fp/complex values on the stack. */
default:
break;
case BLKmode:
case DImode:
case SImode:
case HImode:
case QImode:
if (words <= cum->nregs)
ret = gen_rtx_REG (mode, cum->regno);
break;
case TImode:
if (cum->sse_nregs)
ret = gen_rtx_REG (mode, cum->sse_regno);
break;
}
if (TARGET_DEBUG_ARG)
{
@ -1406,6 +1950,119 @@ function_arg (cum, mode, type, named)
return ret;
}
/* Gives the alignment boundary, in bits, of an argument with the specified mode
and type. */
int
ix86_function_arg_boundary (mode, type)
enum machine_mode mode;
tree type;
{
int align;
if (!TARGET_64BIT)
return PARM_BOUNDARY;
if (type)
align = TYPE_ALIGN (type);
else
align = GET_MODE_ALIGNMENT (mode);
if (align < PARM_BOUNDARY)
align = PARM_BOUNDARY;
if (align > 128)
align = 128;
return align;
}
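On 64-bit targets, ix86_function_arg_boundary thus clamps the natural alignment into the range [PARM_BOUNDARY, 128] bits. The helper below restates just that clamp; the 64-bit PARM_BOUNDARY value of 64 bits is an assumption:

```c
#define PARM_BOUNDARY_BITS 64   /* assumed 64-bit PARM_BOUNDARY */

/* Restatement of the clamp in ix86_function_arg_boundary above:
   raise to PARM_BOUNDARY, cap at 128 bits. */
static int arg_boundary_bits (int natural_align_bits)
{
  int align = natural_align_bits;
  if (align < PARM_BOUNDARY_BITS)
    align = PARM_BOUNDARY_BITS;
  if (align > 128)
    align = 128;
  return align;
}
```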
/* Return true if N is a possible register number of function value. */
bool
ix86_function_value_regno_p (regno)
int regno;
{
if (!TARGET_64BIT)
{
return ((regno) == 0
|| ((regno) == FIRST_FLOAT_REG && TARGET_FLOAT_RETURNS_IN_80387)
|| ((regno) == FIRST_SSE_REG && TARGET_SSE));
}
return ((regno) == 0 || (regno) == FIRST_FLOAT_REG
|| ((regno) == FIRST_SSE_REG && TARGET_SSE)
|| ((regno) == FIRST_FLOAT_REG && TARGET_FLOAT_RETURNS_IN_80387));
}
/* Define how to find the value returned by a function.
VALTYPE is the data type of the value (as a tree).
If the precise function being called is known, FUNC is its FUNCTION_DECL;
otherwise, FUNC is 0. */
rtx
ix86_function_value (valtype)
tree valtype;
{
if (TARGET_64BIT)
{
rtx ret = construct_container (TYPE_MODE (valtype), valtype, 1,
REGPARM_MAX, SSE_REGPARM_MAX,
x86_64_int_return_registers, 0);
/* For zero-sized structures, construct_container returns NULL, but we need
to keep the rest of the compiler happy by returning a meaningful value. */
if (!ret)
ret = gen_rtx_REG (TYPE_MODE (valtype), 0);
return ret;
}
else
return gen_rtx_REG (TYPE_MODE (valtype), VALUE_REGNO (TYPE_MODE (valtype)));
}
/* Return nonzero if TYPE is returned in memory. */
int
ix86_return_in_memory (type)
tree type;
{
int needed_intregs, needed_sseregs;
if (TARGET_64BIT)
{
return !examine_argument (TYPE_MODE (type), type, 1,
&needed_intregs, &needed_sseregs);
}
else
{
if (TYPE_MODE (type) == BLKmode
|| (VECTOR_MODE_P (TYPE_MODE (type))
&& int_size_in_bytes (type) == 8)
|| (int_size_in_bytes (type) > 12 && TYPE_MODE (type) != TImode
&& TYPE_MODE (type) != TFmode
&& !VECTOR_MODE_P (TYPE_MODE (type))))
return 1;
return 0;
}
}
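For the 32-bit branch of ix86_return_in_memory, the conditions can be restated as a pure predicate over the mode facts; the flag parameters below are illustrative stand-ins for the mode/size queries:

```c
/* Boolean restatement of the 32-bit ix86_return_in_memory branch above;
   mode properties are passed in as flags for illustration only. */
static int return_in_memory_32 (int is_blkmode, int is_vector,
                                int size_bytes, int is_ti_or_tf_mode)
{
  if (is_blkmode)
    return 1;                               /* BLKmode: always memory */
  if (is_vector && size_bytes == 8)
    return 1;                               /* 8-byte vectors: memory */
  if (size_bytes > 12 && !is_ti_or_tf_mode && !is_vector)
    return 1;                               /* large non-TImode/TFmode values */
  return 0;
}
```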
/* Define how to find the value returned by a library function
assuming the value has mode MODE. */
rtx
ix86_libcall_value (mode)
enum machine_mode mode;
{
if (TARGET_64BIT)
{
switch (mode)
{
case SFmode:
case SCmode:
case DFmode:
case DCmode:
return gen_rtx_REG (mode, FIRST_SSE_REG);
case TFmode:
case TCmode:
return gen_rtx_REG (mode, FIRST_FLOAT_REG);
default:
return gen_rtx_REG (mode, 0);
}
}
else
return gen_rtx_REG (mode, VALUE_REGNO (mode));
}
/* Return nonzero if OP is a general operand representable on x86_64. */


@ -720,6 +720,12 @@ extern int ix86_arch;
#define LOCAL_ALIGNMENT(TYPE, ALIGN) ix86_local_alignment (TYPE, ALIGN)
/* If defined, a C expression that gives the alignment boundary, in
bits, of an argument with the specified mode and type. If it is
not defined, `PARM_BOUNDARY' is used for all arguments. */
#define FUNCTION_ARG_BOUNDARY(MODE,TYPE) ix86_function_arg_boundary (MODE, TYPE)
/* Set this non-zero if move instructions will actually fail to work
when given unaligned data. */
#define STRICT_ALIGNMENT 0
@ -1062,10 +1068,7 @@ extern int ix86_arch;
`DEFAULT_PCC_STRUCT_RETURN' to indicate this. */
#define RETURN_IN_MEMORY(TYPE) \
((TYPE_MODE (TYPE) == BLKmode) \
|| (VECTOR_MODE_P (TYPE_MODE (TYPE)) && int_size_in_bytes (TYPE) == 8)\
|| (int_size_in_bytes (TYPE) > 12 && TYPE_MODE (TYPE) != TImode \
&& TYPE_MODE (TYPE) != TFmode && ! VECTOR_MODE_P (TYPE_MODE (TYPE))))
ix86_return_in_memory (TYPE)
/* Define the classes of registers for register constraints in the
@ -1517,14 +1520,16 @@ enum reg_class
If the precise function being called is known, FUNC is its FUNCTION_DECL;
otherwise, FUNC is 0. */
#define FUNCTION_VALUE(VALTYPE, FUNC) \
gen_rtx_REG (TYPE_MODE (VALTYPE), \
VALUE_REGNO (TYPE_MODE (VALTYPE)))
ix86_function_value (VALTYPE)
#define FUNCTION_VALUE_REGNO_P(N) \
ix86_function_value_regno_p (N)
/* Define how to find the value returned by a library function
assuming the value has mode MODE. */
#define LIBCALL_VALUE(MODE) \
gen_rtx_REG (MODE, VALUE_REGNO (MODE))
ix86_libcall_value (MODE)
/* Define the size of the result block used for communication between
untyped_call and untyped_return. The block contains a DImode value
@ -1533,7 +1538,7 @@ enum reg_class
#define APPLY_RESULT_SIZE (8+108)
/* 1 if N is a possible register number for function argument passing. */
#define FUNCTION_ARG_REGNO_P(N) ((N) < REGPARM_MAX)
#define FUNCTION_ARG_REGNO_P(N) ix86_function_arg_regno_p (N)
/* Define a data type for recording info about an argument list
during the scan of that argument list. This data type should
@ -1548,6 +1553,7 @@ typedef struct ix86_args {
int sse_words; /* # sse words passed so far */
int sse_nregs; /* # sse registers available for passing */
int sse_regno; /* next available sse register number */
int maybe_vaarg; /* true for calls to possibly variadic functions. */
} CUMULATIVE_ARGS;
/* Initialize a variable CUM of type CUMULATIVE_ARGS


@ -42,11 +42,6 @@ Boston, MA 02111-1307, USA. */
((MODE) == SFmode || (MODE) == DFmode || (MODE) == XFmode \
? FIRST_FLOAT_REG : 0)
/* 1 if N is a possible register number for a function value. */
#undef FUNCTION_VALUE_REGNO_P
#define FUNCTION_VALUE_REGNO_P(N) ((N) == 0 || (N)== FIRST_FLOAT_REG)
#ifdef REAL_VALUE_TO_TARGET_LONG_DOUBLE
#undef ASM_OUTPUT_LONG_DOUBLE
#define ASM_OUTPUT_LONG_DOUBLE(FILE,VALUE) \


@ -77,11 +77,6 @@ Boston, MA 02111-1307, USA. */
: (MODE) == TImode || VECTOR_MODE_P (MODE) ? FIRST_SSE_REG \
: 0)
/* 1 if N is a possible register number for a function value. */
#define FUNCTION_VALUE_REGNO_P(N) \
((N) == 0 || ((N)== FIRST_FLOAT_REG && TARGET_FLOAT_RETURNS_IN_80387))
/* Output code to add DELTA to the first argument, and then jump to FUNCTION.
Used for C++ multiple inheritance. */
#define ASM_OUTPUT_MI_THUNK(FILE, THUNK_FNDECL, DELTA, FUNCTION) \


@ -7646,7 +7646,7 @@ delete_trivially_dead_insns (insns, nreg, preserve_basic_blocks)
if (! live_insn)
{
count_reg_usage (insn, counts, NULL_RTX, -1);
delete_insn (insn);
delete_related_insns (insn);
}
if (find_reg_note (insn, REG_LIBCALL, NULL_RTX))
@ -7687,9 +7687,7 @@ delete_trivially_dead_insns (insns, nreg, preserve_basic_blocks)
if (! live_insn)
{
count_reg_usage (insn, counts, NULL_RTX, -1);
if (insn == bb->end)
bb->end = PREV_INSN (insn);
flow_delete_insn (insn);
delete_insn (insn);
}
if (find_reg_note (insn, REG_LIBCALL, NULL_RTX))


@ -2593,13 +2593,9 @@ df_insn_delete (df, bb, insn)
/* We should not be deleting the NOTE_INSN_BASIC_BLOCK or label. */
if (insn == bb->head)
abort ();
if (insn == bb->end)
bb->end = PREV_INSN (insn);
/* Delete the insn. */
PUT_CODE (insn, NOTE);
NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
NOTE_SOURCE_FILE (insn) = 0;
delete_insn (insn);
df_insn_modify (df, bb, insn);


@ -423,7 +423,7 @@ doloop_modify (loop, iterations, iterations_max,
/* Discard original jump to continue loop. The original compare
result may still be live, so it cannot be discarded explicitly. */
delete_insn (jump_insn);
delete_related_insns (jump_insn);
counter_reg = XEXP (condition, 0);
if (GET_CODE (counter_reg) == PLUS)


@ -2613,7 +2613,7 @@ try_split (pat, trial, last)
tem = emit_insn_after (seq, trial);
delete_insn (trial);
delete_related_insns (trial);
if (has_barrier)
emit_barrier_after (tem);
@ -2873,6 +2873,8 @@ remove_insn (insn)
{
rtx next = NEXT_INSN (insn);
rtx prev = PREV_INSN (insn);
basic_block bb;
if (prev)
{
NEXT_INSN (prev) = next;
@ -2921,6 +2923,21 @@ remove_insn (insn)
if (stack == 0)
abort ();
}
if (basic_block_for_insn
&& (unsigned int)INSN_UID (insn) < basic_block_for_insn->num_elements
&& (bb = BLOCK_FOR_INSN (insn)))
{
if (bb->head == insn)
{
/* Never delete the basic block note without deleting the whole basic
block. */
if (GET_CODE (insn) == NOTE)
abort ();
bb->head = next;
}
if (bb->end == insn)
bb->end = prev;
}
}
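The boundary bookkeeping added to remove_insn amounts to: unlink the insn from the chain, and if it was the head or the end of its basic block, slide that boundary to the neighbour. A toy doubly-linked-list model (structure names are illustrative, not GCC's rtx/basic_block):

```c
#include <stddef.h>

/* Toy model of remove_insn's basic-block boundary update above. */
struct node { struct node *prev, *next; struct blk *bb; };
struct blk { struct node *head, *end; };

static void remove_node (struct node *n)
{
  if (n->prev)
    n->prev->next = n->next;   /* unlink from the chain */
  if (n->next)
    n->next->prev = n->prev;
  if (n->bb)
    {
      if (n->bb->head == n)
        n->bb->head = n->next; /* block head moves forward */
      if (n->bb->end == n)
        n->bb->end = n->prev;  /* block end moves backward */
    }
}
```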
/* Delete all insns made since FROM.


@ -1850,7 +1850,7 @@ connect_post_landing_pads ()
seq = get_insns ();
end_sequence ();
emit_insns_before (seq, region->resume);
flow_delete_insn (region->resume);
delete_insn (region->resume);
}
}


@ -771,7 +771,7 @@ delete_noop_moves (f)
next = NEXT_INSN (insn);
if (INSN_P (insn) && noop_move_p (insn))
{
/* Do not call flow_delete_insn here to not confuse backward
/* Do not call delete_insn here to not confuse backward
pointers of LIBCALL block. */
PUT_CODE (insn, NOTE);
NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
@ -802,8 +802,8 @@ delete_dead_jumptables ()
{
if (rtl_dump_file)
fprintf (rtl_dump_file, "Dead jumptable %i removed\n", INSN_UID (insn));
flow_delete_insn (NEXT_INSN (insn));
flow_delete_insn (insn);
delete_insn (NEXT_INSN (insn));
delete_insn (insn);
next = NEXT_INSN (next);
}
}
@ -1323,6 +1323,7 @@ propagate_block_delete_insn (bb, insn)
rtx insn;
{
rtx inote = find_reg_note (insn, REG_LABEL, NULL_RTX);
bool purge = false;
/* If the insn referred to a label, and that label was attached to
an ADDR_VEC, it's safe to delete the ADDR_VEC. In fact, it's
@ -1360,16 +1361,15 @@ propagate_block_delete_insn (bb, insn)
for (i = 0; i < len; i++)
LABEL_NUSES (XEXP (XVECEXP (pat, diff_vec_p, i), 0))--;
flow_delete_insn (next);
delete_insn (next);
}
}
if (bb->end == insn)
{
bb->end = PREV_INSN (insn);
purge_dead_edges (bb);
}
flow_delete_insn (insn);
purge = true;
delete_insn (insn);
if (purge)
purge_dead_edges (bb);
}
/* Delete dead libcalls for propagate_block. Return the insn
@ -1383,10 +1383,7 @@ propagate_block_delete_libcall (bb, insn, note)
rtx first = XEXP (note, 0);
rtx before = PREV_INSN (first);
if (insn == bb->end)
bb->end = before;
flow_delete_insn_chain (first, insn);
delete_insn_chain (first, insn);
return before;
}


@ -4140,7 +4140,7 @@ delete_handlers ()
|| (nonlocal_goto_stack_level != 0
&& reg_mentioned_p (nonlocal_goto_stack_level,
PATTERN (insn))))
delete_insn (insn);
delete_related_insns (insn);
}
}
}
@ -7300,7 +7300,7 @@ thread_prologue_and_epilogue_insns (f)
if (simplejump_p (jump))
{
emit_return_into_block (bb, epilogue_line_note);
flow_delete_insn (jump);
delete_insn (jump);
}
/* If we have a conditional jump, we can try to replace


@ -5456,7 +5456,7 @@ delete_null_pointer_checks (f)
if (delete_list)
{
for (i = 0; i < VARRAY_ACTIVE_SIZE (delete_list); i++)
delete_insn (VARRAY_RTX (delete_list, i));
delete_related_insns (VARRAY_RTX (delete_list, i));
VARRAY_FREE (delete_list);
}
@ -6836,7 +6836,7 @@ replace_store_insn (reg, del, bb)
fprintf(gcse_file, "\n");
}
delete_insn (del);
delete_related_insns (del);
}


@ -119,7 +119,7 @@ gen_peephole (peep)
printf (" if (want_jump && GET_CODE (ins1) != JUMP_INSN)\n");
printf (" {\n");
printf (" rtx insn2 = emit_jump_insn_before (PATTERN (ins1), ins1);\n");
printf (" delete_insn (ins1);\n");
printf (" delete_related_insns (ins1);\n");
printf (" ins1 = ins2;\n");
printf (" }\n");
#endif


@ -1765,9 +1765,7 @@ noce_process_if_block (test_bb, then_bb, else_bb, join_bb)
success:
/* The original sets may now be killed. */
if (insn_a == then_bb->end)
then_bb->end = PREV_INSN (insn_a);
flow_delete_insn (insn_a);
delete_insn (insn_a);
/* Several special cases here: First, we may have reused insn_b above,
in which case insn_b is now NULL. Second, we want to delete insn_b
@ -1776,17 +1774,12 @@ noce_process_if_block (test_bb, then_bb, else_bb, join_bb)
the TEST block, it may in fact be loading data needed for the comparison.
We'll let life_analysis remove the insn if it's really dead. */
if (insn_b && else_bb)
{
if (insn_b == else_bb->end)
else_bb->end = PREV_INSN (insn_b);
flow_delete_insn (insn_b);
}
delete_insn (insn_b);
/* The new insns will have been inserted before cond_earliest. We should
be able to remove the jump with impunity, but the condition itself may
have been modified by gcse to be shared across basic blocks. */
test_bb->end = PREV_INSN (jump);
flow_delete_insn (jump);
delete_insn (jump);
/* If we used a temporary, fix it up now. */
if (orig_x != x)
@ -2189,11 +2182,9 @@ find_cond_trap (test_bb, then_edge, else_edge)
emit_insn_before (seq, cond_earliest);
test_bb->end = PREV_INSN (jump);
flow_delete_insn (jump);
delete_insn (jump);
trap_bb->end = PREV_INSN (trap);
flow_delete_insn (trap);
delete_insn (trap);
/* Merge the blocks! */
if (trap_bb != then_bb && ! else_bb)


@ -460,7 +460,7 @@ save_for_inline (fndecl)
for basic_block structures on already freed obstack. */
for (insn = get_insns (); insn ; insn = NEXT_INSN (insn))
if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) == NOTE_INSN_BASIC_BLOCK)
delete_insn (insn);
delete_related_insns (insn);
/* If there are insns that copy parms from the stack into pseudo registers,
those insns are not copied. `expand_inline_function' must
@ -1492,13 +1492,13 @@ copy_insn_list (insns, map, static_chain_value)
#ifdef HAVE_cc0
/* If the previous insn set cc0 for us, delete it. */
if (only_sets_cc0_p (PREV_INSN (copy)))
delete_insn (PREV_INSN (copy));
delete_related_insns (PREV_INSN (copy));
#endif
/* If this is now a no-op, delete it. */
if (map->last_pc_value == pc_rtx)
{
delete_insn (copy);
delete_related_insns (copy);
copy = 0;
}
else


@ -182,7 +182,7 @@ purge_line_number_notes (f)
&& NOTE_SOURCE_FILE (insn) == NOTE_SOURCE_FILE (last_note)
&& NOTE_LINE_NUMBER (insn) == NOTE_LINE_NUMBER (last_note))
{
delete_insn (insn);
delete_related_insns (insn);
continue;
}
@ -529,7 +529,7 @@ duplicate_loop_exit_test (loop_start)
/* Mark the exit code as the virtual top of the converted loop. */
emit_note_before (NOTE_INSN_LOOP_VTOP, exitcode);
delete_insn (next_nonnote_insn (loop_start));
delete_related_insns (next_nonnote_insn (loop_start));
/* Clean up. */
if (reg_map)
@ -1710,24 +1710,24 @@ delete_computation (insn)
delete_prior_computation (note, insn);
}
delete_insn (insn);
delete_related_insns (insn);
}
/* Delete insn INSN from the chain of insns and update label ref counts.
May delete some following insns as a consequence; may even delete
a label elsewhere and insns that follow it.
/* Delete insn INSN from the chain of insns and update label ref counts
and delete insns now unreachable.
Returns the first insn after INSN that was not deleted. */
Returns the first insn after INSN that was not deleted.
Use of this function is deprecated. Use delete_insn instead, followed by a
cfg_cleanup pass to delete unreachable code if needed. */
rtx
delete_insn (insn)
delete_related_insns (insn)
register rtx insn;
{
register rtx next = NEXT_INSN (insn);
register rtx prev = PREV_INSN (insn);
register int was_code_label = (GET_CODE (insn) == CODE_LABEL);
register int dont_really_delete = 0;
rtx note;
rtx next = NEXT_INSN (insn), prev = PREV_INSN (insn);
while (next && INSN_DELETED_P (next))
next = NEXT_INSN (next);
@ -1736,58 +1736,13 @@ delete_insn (insn)
if (INSN_DELETED_P (insn))
return next;
if (was_code_label)
remove_node_from_expr_list (insn, &nonlocal_goto_handler_labels);
/* Don't delete user-declared labels. When optimizing, convert them
to special NOTEs instead. When not optimizing, leave them alone. */
if (was_code_label && LABEL_NAME (insn) != 0)
{
if (optimize)
{
const char *name = LABEL_NAME (insn);
PUT_CODE (insn, NOTE);
NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED_LABEL;
NOTE_SOURCE_FILE (insn) = name;
}
dont_really_delete = 1;
}
else
/* Mark this insn as deleted. */
INSN_DELETED_P (insn) = 1;
delete_insn (insn);
/* If instruction is followed by a barrier,
delete the barrier too. */
if (next != 0 && GET_CODE (next) == BARRIER)
{
INSN_DELETED_P (next) = 1;
next = NEXT_INSN (next);
}
/* Patch out INSN (and the barrier if any) */
if (! dont_really_delete)
{
if (prev)
{
NEXT_INSN (prev) = next;
if (GET_CODE (prev) == INSN && GET_CODE (PATTERN (prev)) == SEQUENCE)
NEXT_INSN (XVECEXP (PATTERN (prev), 0,
XVECLEN (PATTERN (prev), 0) - 1)) = next;
}
if (next)
{
PREV_INSN (next) = prev;
if (GET_CODE (next) == INSN && GET_CODE (PATTERN (next)) == SEQUENCE)
PREV_INSN (XVECEXP (PATTERN (next), 0, 0)) = prev;
}
if (prev && NEXT_INSN (prev) == 0)
set_last_insn (prev);
}
delete_insn (next);
/* If deleting a jump, decrement the count of the label,
and delete the label if it is now unused. */
@ -1796,12 +1751,12 @@ delete_insn (insn)
{
rtx lab = JUMP_LABEL (insn), lab_next;
if (--LABEL_NUSES (lab) == 0)
if (LABEL_NUSES (lab) == 0)
{
/* This can delete NEXT or PREV,
either directly if NEXT is JUMP_LABEL (INSN),
or indirectly through more levels of jumps. */
delete_insn (lab);
delete_related_insns (lab);
/* I feel a little doubtful about this loop,
but I see no clean and sure alternative way
@ -1820,7 +1775,7 @@ delete_insn (insn)
We may not be able to kill the label immediately preceding
just yet, as it might be referenced in code leading up to
the tablejump. */
delete_insn (lab_next);
delete_related_insns (lab_next);
}
}
@ -1835,8 +1790,8 @@ delete_insn (insn)
int len = XVECLEN (pat, diff_vec_p);
for (i = 0; i < len; i++)
if (--LABEL_NUSES (XEXP (XVECEXP (pat, diff_vec_p, i), 0)) == 0)
delete_insn (XEXP (XVECEXP (pat, diff_vec_p, i), 0));
if (LABEL_NUSES (XEXP (XVECEXP (pat, diff_vec_p, i), 0)) == 0)
delete_related_insns (XEXP (XVECEXP (pat, diff_vec_p, i), 0));
while (next && INSN_DELETED_P (next))
next = NEXT_INSN (next);
return next;
@ -1848,8 +1803,8 @@ delete_insn (insn)
if (REG_NOTE_KIND (note) == REG_LABEL
/* This could also be a NOTE_INSN_DELETED_LABEL note. */
&& GET_CODE (XEXP (note, 0)) == CODE_LABEL)
if (--LABEL_NUSES (XEXP (note, 0)) == 0)
delete_insn (XEXP (note, 0));
if (LABEL_NUSES (XEXP (note, 0)) == 0)
delete_related_insns (XEXP (note, 0));
while (prev && (INSN_DELETED_P (prev) || GET_CODE (prev) == NOTE))
prev = PREV_INSN (prev);
@ -1863,7 +1818,7 @@ delete_insn (insn)
&& GET_CODE (NEXT_INSN (insn)) == JUMP_INSN
&& (GET_CODE (PATTERN (NEXT_INSN (insn))) == ADDR_VEC
|| GET_CODE (PATTERN (NEXT_INSN (insn))) == ADDR_DIFF_VEC))
next = delete_insn (NEXT_INSN (insn));
next = delete_related_insns (NEXT_INSN (insn));
/* If INSN was a label, delete insns following it if now unreachable. */
@ -1886,7 +1841,7 @@ delete_insn (insn)
deletion of unreachable code, after a different label.
As long as the value from this recursive call is correct,
this invocation functions correctly. */
next = delete_insn (next);
next = delete_related_insns (next);
}
}
@ -2128,7 +2083,7 @@ redirect_jump (jump, nlabel, delete_unused)
emit_note_after (NOTE_INSN_FUNCTION_END, nlabel);
if (olabel && --LABEL_NUSES (olabel) == 0 && delete_unused)
delete_insn (olabel);
delete_related_insns (olabel);
return 1;
}


@ -1047,7 +1047,7 @@ scan_loop (loop, flags)
if (update_end && GET_CODE (update_end) == CODE_LABEL
&& --LABEL_NUSES (update_end) == 0)
delete_insn (update_end);
delete_related_insns (update_end);
}
@ -1774,7 +1774,7 @@ move_movables (loop, movables, threshold, insn_count)
= gen_rtx_EXPR_LIST (VOIDmode, r1,
gen_rtx_EXPR_LIST (VOIDmode, r2,
regs_may_share));
delete_insn (m->insn);
delete_related_insns (m->insn);
if (new_start == 0)
new_start = i1;
@ -1805,11 +1805,11 @@ move_movables (loop, movables, threshold, insn_count)
{
temp = XEXP (temp, 0);
while (temp != p)
temp = delete_insn (temp);
temp = delete_related_insns (temp);
}
temp = p;
p = delete_insn (p);
p = delete_related_insns (p);
/* simplify_giv_expr expects that it can walk the insns
at m->insn forwards and see this old sequence we are
@ -1936,7 +1936,7 @@ move_movables (loop, movables, threshold, insn_count)
if (temp == fn_address_insn)
fn_address_insn = i1;
REG_NOTES (i1) = REG_NOTES (temp);
delete_insn (temp);
delete_related_insns (temp);
}
if (new_start == 0)
new_start = first;
@ -2031,7 +2031,7 @@ move_movables (loop, movables, threshold, insn_count)
}
temp = p;
delete_insn (p);
delete_related_insns (p);
p = NEXT_INSN (p);
/* simplify_giv_expr expects that it can walk the insns
@ -2110,9 +2110,9 @@ move_movables (loop, movables, threshold, insn_count)
{
for (temp = XEXP (temp, 0); temp != m1->insn;
temp = NEXT_INSN (temp))
delete_insn (temp);
delete_related_insns (temp);
}
delete_insn (m1->insn);
delete_related_insns (m1->insn);
/* Any other movable that loads the same register
MUST be moved. */
@ -2799,7 +2799,7 @@ find_and_verify_loops (f, loops)
if (JUMP_LABEL (insn) != 0
&& (next_real_insn (JUMP_LABEL (insn))
== next_real_insn (insn)))
delete_insn (insn);
delete_related_insns (insn);
}
/* Continue the loop after where the conditional
@ -2809,7 +2809,7 @@ find_and_verify_loops (f, loops)
insn = NEXT_INSN (cond_label);
if (--LABEL_NUSES (cond_label) == 0)
delete_insn (cond_label);
delete_related_insns (cond_label);
/* This loop will be continued with NEXT_INSN (insn). */
insn = PREV_INSN (insn);
@ -7628,7 +7628,7 @@ check_dbra_loop (loop, insn_count)
end_sequence ();
p = loop_insn_emit_before (loop, 0, bl->biv->insn, tem);
delete_insn (bl->biv->insn);
delete_related_insns (bl->biv->insn);
/* Update biv info to reflect its new status. */
bl->biv->insn = p;
@ -7656,9 +7656,9 @@ check_dbra_loop (loop, insn_count)
loop_insn_sink (loop, gen_move_insn (reg, final_value));
/* Delete compare/branch at end of loop. */
delete_insn (PREV_INSN (loop_end));
delete_related_insns (PREV_INSN (loop_end));
if (compare_and_branch == 2)
delete_insn (first_compare);
delete_related_insns (first_compare);
/* Add new compare/branch insn at end of loop. */
start_sequence ();


@ -3128,7 +3128,7 @@ peephole2_optimize (dump_file)
/* Replace the old sequence with the new. */
try = emit_insn_after (try, peep2_insn_data[i].insn);
flow_delete_insn_chain (insn, peep2_insn_data[i].insn);
delete_insn_chain (insn, peep2_insn_data[i].insn);
#ifdef HAVE_conditional_execution
/* With conditional execution, we cannot back up the


@ -245,7 +245,6 @@ static void replace_reg PARAMS ((rtx *, int));
static void remove_regno_note PARAMS ((rtx, enum reg_note,
unsigned int));
static int get_hard_regnum PARAMS ((stack, rtx));
static void delete_insn_for_stacker PARAMS ((rtx));
static rtx emit_pop_insn PARAMS ((rtx, stack, rtx,
enum emit_where));
static void emit_swap_insn PARAMS ((rtx, stack, rtx));
@ -907,19 +906,6 @@ get_hard_regnum (regstack, reg)
return i >= 0 ? (FIRST_STACK_REG + regstack->top - i) : -1;
}
/* Delete INSN from the RTL. Mark the insn, but don't remove it from
the chain of insns. Doing so could confuse block_begin and block_end
if this were the only insn in the block. */
static void
delete_insn_for_stacker (insn)
rtx insn;
{
PUT_CODE (insn, NOTE);
NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
NOTE_SOURCE_FILE (insn) = 0;
}
/* Emit an insn to pop virtual register REG before or after INSN.
REGSTACK is the stack state after INSN and is updated to reflect this
@ -1114,7 +1100,7 @@ move_for_stack_reg (insn, regstack, pat)
{
emit_pop_insn (insn, regstack, src, EMIT_AFTER);
delete_insn_for_stacker (insn);
delete_insn (insn);
return;
}
@ -1123,7 +1109,7 @@ move_for_stack_reg (insn, regstack, pat)
SET_HARD_REG_BIT (regstack->reg_set, REGNO (dest));
CLEAR_HARD_REG_BIT (regstack->reg_set, REGNO (src));
delete_insn_for_stacker (insn);
delete_insn (insn);
return;
}
@ -1140,7 +1126,7 @@ move_for_stack_reg (insn, regstack, pat)
if (find_regno_note (insn, REG_UNUSED, REGNO (dest)))
emit_pop_insn (insn, regstack, dest, EMIT_AFTER);
delete_insn_for_stacker (insn);
delete_insn (insn);
return;
}


@ -2408,7 +2408,7 @@ combine_stack_adjustments_for_block (bb)
-last_sp_adjust))
{
/* It worked! */
flow_delete_insn (last_sp_set);
delete_insn (last_sp_set);
last_sp_set = insn;
last_sp_adjust += this_adjust;
free_csa_memlist (memlist);
@ -2450,7 +2450,7 @@ combine_stack_adjustments_for_block (bb)
{
if (last_sp_set == bb->head)
bb->head = NEXT_INSN (last_sp_set);
flow_delete_insn (last_sp_set);
delete_insn (last_sp_set);
free_csa_memlist (memlist);
memlist = NULL;
@ -2487,12 +2487,9 @@ combine_stack_adjustments_for_block (bb)
break;
if (pending_delete)
flow_delete_insn (pending_delete);
delete_insn (pending_delete);
}
if (pending_delete)
{
bb->end = PREV_INSN (pending_delete);
flow_delete_insn (pending_delete);
}
delete_insn (pending_delete);
}


@ -7744,8 +7744,8 @@ delete_address_reloads (dead_insn, current_insn)
|| (INTVAL (XEXP (SET_SRC (set), 1))
!= -INTVAL (XEXP (SET_SRC (set2), 1))))
return;
delete_insn (prev);
delete_insn (next);
delete_related_insns (prev);
delete_related_insns (next);
}
/* Subfunction of delete_address_reloads: process registers found in X. */
@ -9519,7 +9519,7 @@ fixup_abnormal_edges ()
if (INSN_P (insn))
{
insert_insn_on_edge (PATTERN (insn), e);
flow_delete_insn (insn);
delete_insn (insn);
}
insn = next;
}


@ -455,7 +455,7 @@ emit_delay_sequence (insn, list, length)
We will put the BARRIER back in later. */
if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == BARRIER)
{
delete_insn (NEXT_INSN (insn));
delete_related_insns (NEXT_INSN (insn));
last = get_last_insn ();
had_barrier = 1;
}
@ -590,7 +590,7 @@ delete_from_delay_slot (insn)
list, and rebuild the delay list if non-empty. */
prev = PREV_INSN (seq_insn);
trial = XVECEXP (seq, 0, 0);
delete_insn (seq_insn);
delete_related_insns (seq_insn);
add_insn_after (trial, prev);
if (GET_CODE (trial) == JUMP_INSN
@ -651,14 +651,14 @@ delete_scheduled_jump (insn)
|| FIND_REG_INC_NOTE (trial, 0))
return;
if (PREV_INSN (NEXT_INSN (trial)) == trial)
delete_insn (trial);
delete_related_insns (trial);
else
delete_from_delay_slot (trial);
}
}
#endif
delete_insn (insn);
delete_related_insns (insn);
}
/* Counters for delay-slot filling. */
@@ -762,7 +762,7 @@ optimize_skip (insn)
 delay_list = add_to_delay_list (trial, NULL_RTX);
 next_trial = next_active_insn (trial);
 update_block (trial, trial);
-delete_insn (trial);
+delete_related_insns (trial);
 /* Also, if we are targeting an unconditional
 branch, thread our jump to the target of that branch. Don't
@@ -1510,7 +1510,7 @@ try_merge_delay_insns (insn, thread)
 if (trial == thread)
 thread = next_active_insn (thread);
-delete_insn (trial);
+delete_related_insns (trial);
 INSN_FROM_TARGET_P (next_to_match) = 0;
 }
 else
@@ -1603,7 +1603,7 @@ try_merge_delay_insns (insn, thread)
 else
 {
 update_block (XEXP (merged_insns, 0), thread);
-delete_insn (XEXP (merged_insns, 0));
+delete_related_insns (XEXP (merged_insns, 0));
 }
 }
@@ -2191,7 +2191,7 @@ fill_simple_delay_slots (non_jumps_p)
 delay_list = gen_rtx_INSN_LIST (VOIDmode,
 trial, delay_list);
 update_block (trial, trial);
-delete_insn (trial);
+delete_related_insns (trial);
 if (slots_to_fill == ++slots_filled)
 break;
 continue;
@@ -2329,7 +2329,7 @@ fill_simple_delay_slots (non_jumps_p)
 link_cc0_insns (trial);
 #endif
-delete_insn (trial);
+delete_related_insns (trial);
 if (slots_to_fill == ++slots_filled)
 break;
 continue;
@@ -2488,7 +2488,7 @@ fill_simple_delay_slots (non_jumps_p)
 current_function_epilogue_delay_list);
 mark_end_of_function_resources (trial, 1);
 update_block (trial, trial);
-delete_insn (trial);
+delete_related_insns (trial);
 /* Clear deleted bit so final.c will output the insn. */
 INSN_DELETED_P (trial) = 0;
@@ -2636,7 +2636,7 @@ fill_slots_from_thread (insn, condition, thread, opposite_thread, likely,
 new_thread = thread;
 }
-delete_insn (trial);
+delete_related_insns (trial);
 }
 else
 {
@@ -2710,7 +2710,7 @@ fill_slots_from_thread (insn, condition, thread, opposite_thread, likely,
 if (new_thread == trial)
 new_thread = thread;
 }
-delete_insn (trial);
+delete_related_insns (trial);
 }
 else
 new_thread = next_active_insn (trial);
@@ -2869,7 +2869,7 @@ fill_slots_from_thread (insn, condition, thread, opposite_thread, likely,
 if (recog_memoized (ninsn) < 0
 || (extract_insn (ninsn), ! constrain_operands (1)))
 {
-delete_insn (ninsn);
+delete_related_insns (ninsn);
 return 0;
 }
@@ -2882,7 +2882,7 @@ fill_slots_from_thread (insn, condition, thread, opposite_thread, likely,
 if (new_thread == trial)
 new_thread = thread;
 }
-delete_insn (trial);
+delete_related_insns (trial);
 }
 else
 new_thread = next_active_insn (trial);
@@ -3128,7 +3128,7 @@ relax_delay_slots (first)
 if (invert_jump (insn, label, 1))
 {
-delete_insn (next);
+delete_related_insns (next);
 next = insn;
 }
@@ -3136,7 +3136,7 @@ relax_delay_slots (first)
 --LABEL_NUSES (label);
 if (--LABEL_NUSES (target_label) == 0)
-delete_insn (target_label);
+delete_related_insns (target_label);
 continue;
 }
@@ -3212,7 +3212,7 @@ relax_delay_slots (first)
 INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0;
 trial = PREV_INSN (insn);
-delete_insn (insn);
+delete_related_insns (insn);
 emit_insn_after (pat, trial);
 delete_scheduled_jump (delay_insn);
 continue;
@@ -3325,7 +3325,7 @@ relax_delay_slots (first)
 INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0;
 trial = PREV_INSN (insn);
-delete_insn (insn);
+delete_related_insns (insn);
 emit_insn_after (pat, trial);
 delete_scheduled_jump (delay_insn);
 continue;
@@ -3340,7 +3340,7 @@ relax_delay_slots (first)
 && XVECLEN (pat, 0) == 2
 && rtx_equal_p (PATTERN (next), PATTERN (XVECEXP (pat, 0, 1))))
 {
-delete_insn (insn);
+delete_related_insns (insn);
 continue;
 }
@@ -3384,12 +3384,12 @@ relax_delay_slots (first)
 INSN_FROM_TARGET_P (slot) = ! INSN_FROM_TARGET_P (slot);
 }
-delete_insn (next);
+delete_related_insns (next);
 next = insn;
 }
 if (old_label && --LABEL_NUSES (old_label) == 0)
-delete_insn (old_label);
+delete_related_insns (old_label);
 continue;
 }
 }
@@ -3508,7 +3508,7 @@ make_return_insns (first)
 {
 rtx prev = PREV_INSN (insn);
-delete_insn (insn);
+delete_related_insns (insn);
 for (i = 1; i < XVECLEN (pat, 0); i++)
 prev = emit_insn_after (PATTERN (XVECEXP (pat, 0, i)), prev);
@@ -3527,7 +3527,7 @@ make_return_insns (first)
 /* Now delete REAL_RETURN_LABEL if we never used it. Then try to fill any
 new delay slots we have created. */
 if (--LABEL_NUSES (real_return_label) == 0)
-delete_insn (real_return_label);
+delete_related_insns (real_return_label);
 fill_simple_delay_slots (1);
 fill_simple_delay_slots (0);
@@ -3637,14 +3637,14 @@ dbr_schedule (first, file)
 if (GET_CODE (insn) == INSN && GET_CODE (PATTERN (insn)) == USE
 && INSN_P (XEXP (PATTERN (insn), 0)))
-next = delete_insn (insn);
+next = delete_related_insns (insn);
 }
 /* If we made an end of function label, indicate that it is now
 safe to delete it by undoing our prior adjustment to LABEL_NUSES.
 If it is now unused, delete it. */
 if (end_of_function_label && --LABEL_NUSES (end_of_function_label) == 0)
-delete_insn (end_of_function_label);
+delete_related_insns (end_of_function_label);
 #ifdef HAVE_return
 if (HAVE_return && end_of_function_label != 0)


@@ -1288,7 +1288,7 @@ extern void cleanup_barriers PARAMS ((void));
 /* In jump.c */
 extern void squeeze_notes PARAMS ((rtx *, rtx *));
-extern rtx delete_insn PARAMS ((rtx));
+extern rtx delete_related_insns PARAMS ((rtx));
 extern void delete_jump PARAMS ((rtx));
 extern void delete_barrier PARAMS ((rtx));
 extern rtx get_label_before PARAMS ((rtx));
@@ -1775,6 +1775,8 @@ int force_line_numbers PARAMS ((void));
 void restore_line_number_status PARAMS ((int old_value));
 extern void renumber_insns PARAMS ((FILE *));
 extern void remove_unnecessary_notes PARAMS ((void));
+extern rtx delete_insn PARAMS ((rtx));
+extern void delete_insn_chain PARAMS ((rtx, rtx));
 /* In combine.c */
 extern int combine_instructions PARAMS ((rtx, unsigned int));


@@ -1208,32 +1208,6 @@ ssa_fast_dce (df)
 deleted. */
 df_insn_delete (df, BLOCK_FOR_INSN (def), def);
-if (PHI_NODE_P (def))
-{
-if (def == BLOCK_FOR_INSN (def)->head
-&& def == BLOCK_FOR_INSN (def)->end)
-{
-/* Delete it. */
-PUT_CODE (def, NOTE);
-NOTE_LINE_NUMBER (def) = NOTE_INSN_DELETED;
-}
-else if (def == BLOCK_FOR_INSN (def)->head)
-{
-BLOCK_FOR_INSN (def)->head = NEXT_INSN (def);
-flow_delete_insn (def);
-}
-else if (def == BLOCK_FOR_INSN (def)->end)
-{
-BLOCK_FOR_INSN (def)->end = PREV_INSN (def);
-flow_delete_insn (def);
-}
-else
-flow_delete_insn (def);
-}
-else
-{
-flow_delete_insn (def);
-}
 VARRAY_RTX (ssa_definition, reg) = NULL;
 }
 }


@@ -468,7 +468,6 @@ static void
 delete_insn_bb (insn)
 rtx insn;
 {
-basic_block bb;
 if (!insn)
 abort ();
@@ -480,20 +479,6 @@ delete_insn_bb (insn)
 if (! INSN_P (insn))
 return;
-bb = BLOCK_FOR_INSN (insn);
-if (!bb)
-abort ();
-if (bb->head == bb->end)
-{
-/* Delete the insn by converting it to a note. */
-PUT_CODE (insn, NOTE);
-NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
-return;
-}
-else if (insn == bb->head)
-bb->head = NEXT_INSN (insn);
-else if (insn == bb->end)
-bb->end = PREV_INSN (insn);
 delete_insn (insn);
 }


@@ -2186,7 +2186,7 @@ convert_from_ssa()
 {
 if (insn == BLOCK_END (bb))
 BLOCK_END (bb) = PREV_INSN (insn);
-insn = delete_insn (insn);
+insn = delete_related_insns (insn);
 }
 /* Since all the phi nodes come at the beginning of the
 block, if we find an ordinary insn, we can stop looking


@@ -355,7 +355,7 @@ unroll_loop (loop, insn_count, strength_reduce_p)
 rtx ujump = ujump_to_loop_cont (loop->start, loop->cont);
 if (ujump)
-delete_insn (ujump);
+delete_related_insns (ujump);
 /* If number of iterations is exactly 1, then eliminate the compare and
 branch at the end of the loop since they will never be taken.
@@ -367,31 +367,31 @@ unroll_loop (loop, insn_count, strength_reduce_p)
 if (GET_CODE (last_loop_insn) == BARRIER)
 {
 /* Delete the jump insn. This will delete the barrier also. */
-delete_insn (PREV_INSN (last_loop_insn));
+delete_related_insns (PREV_INSN (last_loop_insn));
 }
 else if (GET_CODE (last_loop_insn) == JUMP_INSN)
 {
 #ifdef HAVE_cc0
 rtx prev = PREV_INSN (last_loop_insn);
 #endif
-delete_insn (last_loop_insn);
+delete_related_insns (last_loop_insn);
 #ifdef HAVE_cc0
 /* The immediately preceding insn may be a compare which must be
 deleted. */
 if (only_sets_cc0_p (prev))
-delete_insn (prev);
+delete_related_insns (prev);
 #endif
 }
 /* Remove the loop notes since this is no longer a loop. */
 if (loop->vtop)
-delete_insn (loop->vtop);
+delete_related_insns (loop->vtop);
 if (loop->cont)
-delete_insn (loop->cont);
+delete_related_insns (loop->cont);
 if (loop_start)
-delete_insn (loop_start);
+delete_related_insns (loop_start);
 if (loop_end)
-delete_insn (loop_end);
+delete_related_insns (loop_end);
 return;
 }
@@ -1291,16 +1291,16 @@ unroll_loop (loop, insn_count, strength_reduce_p)
 && ! (GET_CODE (insn) == CODE_LABEL && LABEL_NAME (insn))
 && ! (GET_CODE (insn) == NOTE
 && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED_LABEL))
-insn = delete_insn (insn);
+insn = delete_related_insns (insn);
 else
 insn = NEXT_INSN (insn);
 }
 /* Can now delete the 'safety' label emitted to protect us from runaway
-delete_insn calls. */
+delete_related_insns calls. */
 if (INSN_DELETED_P (safety_label))
 abort ();
-delete_insn (safety_label);
+delete_related_insns (safety_label);
 /* If exit_label exists, emit it after the loop. Doing the emit here
 forces it to have a higher INSN_UID than any insn in the unrolled loop.
@@ -1315,13 +1315,13 @@ unroll_loop (loop, insn_count, strength_reduce_p)
 {
 /* Remove the loop notes since this is no longer a loop. */
 if (loop->vtop)
-delete_insn (loop->vtop);
+delete_related_insns (loop->vtop);
 if (loop->cont)
-delete_insn (loop->cont);
+delete_related_insns (loop->cont);
 if (loop_start)
-delete_insn (loop_start);
+delete_related_insns (loop_start);
 if (loop_end)
-delete_insn (loop_end);
+delete_related_insns (loop_end);
 }
 if (map->const_equiv_varray)
@@ -1562,7 +1562,7 @@ calculate_giv_inc (pattern, src_insn, regno)
 /* The last insn emitted is not needed, so delete it to avoid confusing
 the second cse pass. This insn sets the giv unnecessarily. */
-delete_insn (get_last_insn ());
+delete_related_insns (get_last_insn ());
 }
 /* Verify that we have a constant as the second operand of the plus. */
@@ -1601,7 +1601,7 @@ calculate_giv_inc (pattern, src_insn, regno)
 src_insn = PREV_INSN (src_insn);
 increment = SET_SRC (PATTERN (src_insn));
 /* Don't need the last insn anymore. */
-delete_insn (get_last_insn ());
+delete_related_insns (get_last_insn ());
 if (GET_CODE (second_part) != CONST_INT
 || GET_CODE (increment) != CONST_INT)
@@ -1620,7 +1620,7 @@ calculate_giv_inc (pattern, src_insn, regno)
 /* The insn loading the constant into a register is no longer needed,
 so delete it. */
-delete_insn (get_last_insn ());
+delete_related_insns (get_last_insn ());
 }
 if (increment_total)
@@ -1644,7 +1644,7 @@ calculate_giv_inc (pattern, src_insn, regno)
 src_insn = PREV_INSN (src_insn);
 pattern = PATTERN (src_insn);
-delete_insn (get_last_insn ());
+delete_related_insns (get_last_insn ());
 goto retry;
 }
@@ -2148,7 +2148,7 @@ copy_loop_body (loop, copy_start, copy_end, map, exit_label, last_iteration,
 #ifdef HAVE_cc0
 /* If the previous insn set cc0 for us, delete it. */
 if (only_sets_cc0_p (PREV_INSN (copy)))
-delete_insn (PREV_INSN (copy));
+delete_related_insns (PREV_INSN (copy));
 #endif
 /* If this is now a no-op, delete it. */
@@ -2159,7 +2159,7 @@ copy_loop_body (loop, copy_start, copy_end, map, exit_label, last_iteration,
 instruction in the loop. */
 if (JUMP_LABEL (copy))
 LABEL_NUSES (JUMP_LABEL (copy))++;
-delete_insn (copy);
+delete_related_insns (copy);
 if (JUMP_LABEL (copy))
 LABEL_NUSES (JUMP_LABEL (copy))--;
 copy = 0;
@@ -2954,7 +2954,7 @@ find_splittable_givs (loop, bl, unroll_type, increment, unroll_number)
 /* We can't use bl->initial_value to compute the initial
 value, because the loop may have been preconditioned.
 We must calculate it from NEW_REG. */
-delete_insn (PREV_INSN (loop->start));
+delete_related_insns (PREV_INSN (loop->start));
 start_sequence ();
 ret = force_operand (v->new_reg, tem);