The following addresses IVOPTs rewriting expressions in its
strip_offset helper without caring about definedness of overflow.
Rather than the earlier attempt of simply using the proper
split_constant_offset from data-reference analysis, the following
adjusts the IVOPTs helper itself, trying to minimize the changes
needed for this fix and possibly easing backports.
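As a rough sketch of the idea (hypothetical code, not the exact rewrite
GCC performs): arithmetic with undefined behavior on signed overflow can
be carried out in the corresponding unsigned type, where wrap-around is
well defined, and converted back afterwards:

  // Hypothetical illustration: recombine a stripped base and offset in
  // unsigned arithmetic so the rewritten expression cannot introduce
  // signed-overflow undefined behavior, then convert back.
  int
  recombine (int base, int offset)
  {
    // 'base + offset' in int would be UB on overflow; the unsigned
    // addition wraps and is well defined.
    return (int) ((unsigned int) base + (unsigned int) offset);
  }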
PR tree-optimization/110243
PR tree-optimization/111336
* tree-ssa-loop-ivopts.cc (strip_offset_1): Rewrite
operations with undefined behavior on overflow to
unsigned arithmetic.
* gcc.dg/torture/pr110243.c: New testcase.
* gcc.dg/torture/pr111336.c: Likewise.
The following fixes the assert in vectorizable_simd_clone_call to
only assert that we have a vector type during transform. Whether we
have one during analysis depends on whether another SLP user has
already decided on the type of a constant/external. When the desired
types end up mismatched, the updating will fail and make vectorization
fail.
PR tree-optimization/111891
* tree-vect-stmts.cc (vectorizable_simd_clone_call): Fix
assert.
* gfortran.dg/pr111891.f90: New testcase.
Accept the architecture configure option and resolve build failures. This is
enough to build binaries, but I've not got a device to test it on, so there
are probably runtime issues to fix. The cache control instructions might be
unsafe (or too conservative), and the kernel metadata might be off. Vector
reductions will need to be reworked for RDNA2. In principle, it would be
better to use wavefrontsize32 for this architecture, but that would mean
switching everything to allow SImode masks, so wavefrontsize64 it is.
The multilib is not included in the default configuration, so either configure
with --with-arch=gfx1030 or include it in --with-multilib-list=gfx1030,....
The majority of this patch has no effect on other devices, but changing from
using scalar writes for the exit value to vector writes means we don't need
the scalar cache write-back instruction anywhere (which doesn't exist in RDNA2).
gcc/ChangeLog:
* config.gcc: Allow --with-arch=gfx1030.
* config/gcn/gcn-hsa.h (NO_XNACK): gfx1030 does not support xnack.
(ASM_SPEC): gfx1030 needs -mattr=+wavefrontsize64 set.
* config/gcn/gcn-opts.h (enum processor_type): Add PROCESSOR_GFX1030.
(TARGET_GFX1030): New.
(TARGET_RDNA2): New.
* config/gcn/gcn-valu.md (@dpp_move<mode>): Disable for RDNA2.
(addc<mode>3<exec_vcc>): Add RDNA2 syntax variant.
(subc<mode>3<exec_vcc>): Likewise.
(<convop><mode><vndi>2_exec): Add RDNA2 alternatives.
(vec_cmp<mode>di): Likewise.
(vec_cmp<u><mode>di): Likewise.
(vec_cmp<mode>di_exec): Likewise.
(vec_cmp<u><mode>di_exec): Likewise.
(vec_cmp<mode>di_dup): Likewise.
(vec_cmp<mode>di_dup_exec): Likewise.
(reduc_<reduc_op>_scal_<mode>): Disable for RDNA2.
(*<reduc_op>_dpp_shr_<mode>): Likewise.
(*plus_carry_dpp_shr_<mode>): Likewise.
(*plus_carry_in_dpp_shr_<mode>): Likewise.
* config/gcn/gcn.cc (gcn_option_override): Recognise gfx1030.
(gcn_global_address_p): RDNA2 only allows smaller offsets.
(gcn_addr_space_legitimate_address_p): Likewise.
(gcn_omp_device_kind_arch_isa): Recognise gfx1030.
(gcn_expand_epilogue): Use VGPRs instead of SGPRs.
(output_file_start): Configure gfx1030.
* config/gcn/gcn.h (TARGET_CPU_CPP_BUILTINS): Add __RDNA2__.
(ASSEMBLER_DIALECT): New.
* config/gcn/gcn.md (rdna): New define_attr.
(enabled): Use "rdna" attribute.
(gcn_return): Remove s_dcache_wb.
(addcsi3_scalar): Add RDNA2 syntax variant.
(addcsi3_scalar_zero): Likewise.
(addptrdi3): Likewise.
(mulsi3): v_mul_lo_i32 should be v_mul_lo_u32 on all ISAs.
(*memory_barrier): Add RDNA2 syntax variant.
(atomic_load<mode>): Add RDNA2 cache control variants, and disable
scalar atomics for RDNA2.
(atomic_store<mode>): Likewise.
(atomic_exchange<mode>): Likewise.
* config/gcn/gcn.opt (gpu_type): Add gfx1030.
* config/gcn/mkoffload.cc (EF_AMDGPU_MACH_AMDGCN_GFX1030): New.
(main): Recognise -march=gfx1030.
* config/gcn/t-omp-device: Add gfx1030 isa.
libgcc/ChangeLog:
* config/gcn/amdgcn_veclib.h (CDNA3_PLUS): Set false for __RDNA2__.
libgomp/ChangeLog:
* plugin/plugin-gcn.c (EF_AMDGPU_MACH_AMDGCN_GFX1030): New.
(isa_hsa_name): Recognise gfx1030.
(isa_code): Likewise.
* team.c (defined): Remove s_endpgm.
The following restricts moving variable shifts to the case where they
are always executed in the loop, as we currently do not have an
efficient way to rewrite them to something that is unconditionally
well-defined, and value range analysis would otherwise compute
invalid ranges for the shift operand.
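A minimal sketch of the problematic shape (hypothetical example, not the
PR testcase): the variable shift is only executed under a guard, so
hoisting it out of the loop would evaluate it unconditionally and let
range analysis wrongly assume the shift count is always valid:

  // Hypothetical example: the loop-invariant shift must stay under its
  // guard; if hoisted, 'm << n' would be evaluated even when n >= 32
  // (undefined behavior), and range analysis would then wrongly assume
  // n < 32 everywhere.
  unsigned
  f (unsigned m, int n, const unsigned *a)
  {
    unsigned sum = 0;
    for (int i = 0; i < 1024; ++i)
      if (n < 32)
        sum += (m << n) + a[i];
    return sum;
  }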
PR tree-optimization/111000
* stor-layout.h (element_precision): Move ...
* tree.h (element_precision): .. here.
* tree-ssa-loop-im.cc (movement_possibility_1): Restrict
motion of shifts and rotates.
* gcc.dg/torture/pr111000.c: New testcase.
This patch introduces an optional hardening pass to catch unexpected
execution flows. Functions are transformed so that basic blocks set a
bit in an automatic array, and (non-exceptional) function exit edges
check that the bits in the array represent an expected execution path
in the CFG.
Functions with multiple exit edges, or with too many blocks, call an
out-of-line checker builtin implemented in libgcc. For simpler
functions, the verification is performed in-line.
-fharden-control-flow-redundancy enables the pass for eligible
functions, --param hardcfr-max-blocks sets a block count limit for
functions to be eligible, and --param hardcfr-max-inline-blocks
tunes the "too many blocks" limit for in-line verification.
-fhardcfr-skip-leaf makes leaf functions non-eligible.
Additional -fhardcfr-check-* options are added to enable checking at
exception escape points, before potential sibcalls, hereby dubbed
returning calls, and before noreturn calls and exception raises. A
notable case is the distinction between noreturn calls expected to
throw and those expected to terminate or loop forever: the default
setting for -fhardcfr-check-noreturn-calls, no-xthrow, performs
checking before the latter, but the former only gets checking in the
exception handler. GCC can only tell them apart through explicit marking
of noreturn functions expected to raise with the newly-introduced
expected_throw attribute and the corresponding ECF_XTHROW flag.
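A hedged usage sketch (the function name and command line below are
illustrative, the attribute spelling follows the new attribute described
above):

  // Hypothetical example: a noreturn function expected to raise rather
  // than terminate.  With the default -fhardcfr-check-noreturn-calls=no-xthrow,
  // no check is emitted before calls to it; checking instead happens at
  // the exception escape point.
  __attribute__ ((noreturn, expected_throw))
  void raise_error (const char *what)
  {
    throw what;
  }

  // Typical compilation (illustrative):
  //   g++ -O2 -fharden-control-flow-redundancy t.C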
for gcc/ChangeLog
* tree-core.h (ECF_XTHROW): New macro.
* tree.cc (set_call_expr): Add expected_throw attribute when
ECF_XTHROW is set.
(build_common_builtin_node): Add ECF_XTHROW to
__cxa_end_cleanup and _Unwind_Resume or _Unwind_SjLj_Resume.
* calls.cc (flags_from_decl_or_type): Check for expected_throw
attribute to set ECF_XTHROW.
* gimple.cc (gimple_build_call_from_tree): Propagate
ECF_XTHROW from decl flags to gimple call...
(gimple_call_flags): ... and back.
* gimple.h (GF_CALL_XTHROW): New gf_mask flag.
(gimple_call_set_expected_throw): New.
(gimple_call_expected_throw_p): New.
* Makefile.in (OBJS): Add gimple-harden-control-flow.o.
* builtins.def (BUILT_IN___HARDCFR_CHECK): New.
* common.opt (fharden-control-flow-redundancy): New.
(-fhardcfr-check-returning-calls): New.
(-fhardcfr-check-exceptions): New.
(-fhardcfr-check-noreturn-calls=*): New.
(Enum hardcfr_check_noreturn_calls): New.
(fhardcfr-skip-leaf): New.
* doc/invoke.texi: Document them.
(hardcfr-max-blocks, hardcfr-max-inline-blocks): New params.
* flag-types.h (enum hardcfr_noret): New.
* gimple-harden-control-flow.cc: New.
* params.opt (-param=hardcfr-max-blocks=): New.
(-param=hardcfr-max-inline-blocks=): New.
* passes.def (pass_harden_control_flow_redundancy): Add.
* tree-pass.h (make_pass_harden_control_flow_redundancy):
Declare.
* doc/extend.texi: Document expected_throw attribute.
for gcc/ada/ChangeLog
* gcc-interface/trans.cc (gigi): Mark __gnat_reraise_zcx with
ECF_XTHROW.
(build_raise_check): Likewise for all rcheck subprograms.
for gcc/c-family/ChangeLog
* c-attribs.cc (handle_expected_throw_attribute): New.
(c_common_attribute_table): Add expected_throw.
for gcc/cp/ChangeLog
* decl.cc (push_throw_library_fn): Mark with ECF_XTHROW.
* except.cc (build_throw): Likewise __cxa_throw,
_ITM_cxa_throw, __cxa_rethrow.
for gcc/testsuite/ChangeLog
* c-c++-common/torture/harden-cfr.c: New.
* c-c++-common/harden-cfr-noret-never-O0.c: New.
* c-c++-common/torture/harden-cfr-noret-never.c: New.
* c-c++-common/torture/harden-cfr-noret-noexcept.c: New.
* c-c++-common/torture/harden-cfr-noret-nothrow.c: New.
* c-c++-common/torture/harden-cfr-noret.c: New.
* c-c++-common/torture/harden-cfr-notail.c: New.
* c-c++-common/torture/harden-cfr-returning.c: New.
* c-c++-common/torture/harden-cfr-tail.c: New.
* c-c++-common/torture/harden-cfr-abrt-always.c: New.
* c-c++-common/torture/harden-cfr-abrt-never.c: New.
* c-c++-common/torture/harden-cfr-abrt-no-xthrow.c: New.
* c-c++-common/torture/harden-cfr-abrt-nothrow.c: New.
* c-c++-common/torture/harden-cfr-abrt.c: New.
* c-c++-common/torture/harden-cfr-always.c: New.
* c-c++-common/torture/harden-cfr-never.c: New.
* c-c++-common/torture/harden-cfr-no-xthrow.c: New.
* c-c++-common/torture/harden-cfr-nothrow.c: New.
* c-c++-common/torture/harden-cfr-bret-always.c: New.
* c-c++-common/torture/harden-cfr-bret-never.c: New.
* c-c++-common/torture/harden-cfr-bret-noopt.c: New.
* c-c++-common/torture/harden-cfr-bret-noret.c: New.
* c-c++-common/torture/harden-cfr-bret-no-xthrow.c: New.
* c-c++-common/torture/harden-cfr-bret-nothrow.c: New.
* c-c++-common/torture/harden-cfr-bret-retcl.c: New.
* c-c++-common/torture/harden-cfr-bret.c: New.
* g++.dg/harden-cfr-throw-always-O0.C: New.
* g++.dg/harden-cfr-throw-returning-O0.C: New.
* g++.dg/torture/harden-cfr-noret-always-no-nothrow.C: New.
* g++.dg/torture/harden-cfr-noret-never-no-nothrow.C: New.
* g++.dg/torture/harden-cfr-noret-no-nothrow.C: New.
* g++.dg/torture/harden-cfr-throw-always.C: New.
* g++.dg/torture/harden-cfr-throw-never.C: New.
* g++.dg/torture/harden-cfr-throw-no-xthrow.C: New.
* g++.dg/torture/harden-cfr-throw-no-xthrow-expected.C: New.
* g++.dg/torture/harden-cfr-throw-nothrow.C: New.
* g++.dg/torture/harden-cfr-throw-nocleanup.C: New.
* g++.dg/torture/harden-cfr-throw-returning.C: New.
* g++.dg/torture/harden-cfr-throw.C: New.
* gcc.dg/torture/harden-cfr-noret-no-nothrow.c: New.
* gcc.dg/torture/harden-cfr-tail-ub.c: New.
* gnat.dg/hardcfr.adb: New.
for libgcc/ChangeLog
* Makefile.in (LIB2ADD): Add hardcfr.c.
* hardcfr.c: New.
This patch tweaks change_insns to also call ::remove_insn to ensure the
underlying RTL insn gets removed from the insn chain in the case of a
deletion.
This avoids leaving NOTE_INSN_DELETED around after deleting insns.
For movement, the RTL insn chain is updated earlier in change_insns with
the call to move_insn. For deletion, it seems reasonable to do it here.
gcc/ChangeLog:
* rtl-ssa/changes.cc (function_info::change_insns): Ensure we call
::remove_insn on deleted insns.
The following amends the {L,R}SHIFT_EXPR documentation to also
document the {L,R}ROTATE_EXPR case.
* doc/generic.texi ({L,R}ROTATE_EXPR): Document.
The following makes sure to rewrite all gather/scatter detected by
dataref analysis plus stmts classified as VMAT_GATHER_SCATTER. Maybe
we need to rewrite all refs; the following covers the cases I have
run into so far.
* tree-vect-loop.cc (update_epilogue_loop_vinfo): Rewrite
both STMT_VINFO_GATHER_SCATTER_P and VMAT_GATHER_SCATTER
stmt refs.
I went a little bit too simple with implementing SLP gather support
for emulated and builtin based gathers. The following fixes the
conflict that appears when running into .MASK_LOAD where we rely
on vect_get_operand_map and the bolted-on STMT_VINFO_GATHER_SCATTER_P
checking wrecks that. The following properly integrates this with
vect_get_operand_map, adding another special index referring to
the vect_check_gather_scatter analyzed offset.
This unbreaks aarch64 (and hopefully riscv); I'll follow up with
more fixes and testsuite coverage for x86, where I think I got
masked gather SLP support wrong.
* tree-vect-slp.cc (off_map, off_op0_map, off_arg2_map,
off_arg3_arg2_map): New.
(vect_get_operand_map): Get flag whether the stmt was
recognized as gather or scatter and use the above
accordingly.
(vect_get_and_check_slp_defs): Adjust.
(vect_build_slp_tree_2): Likewise.
The omp_lock_hint_* parameters were deprecated in favor of
omp_sync_hint_*. While omp.h contained deprecation markers for those,
the omp_lib module only contained them for omp_{g,s}et_nested.
Note: The -Wdeprecated-declarations warning will only become active once
openmp_version / _OPENMP is bumped from 201511 (4.5) to 201811 (5.0).
libgomp/ChangeLog:
* omp_lib.f90.in: Tag omp_lock_hint_* as being deprecated when
_OPENMP >= 201811.
1. Remove "m_" prefix as they are not private members.
2. Rename infos -> local_infos, info -> global_info to clarify their meaning.
Pushed as it is obvious.
gcc/ChangeLog:
* config/riscv/riscv-vsetvl.cc (pre_vsetvl::fuse_local_vsetvl_info): Rename variables.
(pre_vsetvl::pre_global_vsetvl_info): Ditto.
(pre_vsetvl::emit_vsetvl): Ditto.
With the patch enabling the vectorization of early-breaks, we'd like to allow
bitfield lowering in such loops, which requires allowing multiple exits
when doing so. In order to avoid a similar issue to PR107275,
the code that rejects loops with certain types of gimple_stmts was hoisted from
'if_convertible_loop_p_1' to 'get_loop_body_in_if_conv_order', to avoid trying
to lower bitfields in loops we are not going to vectorize anyway.
This also ensures 'ifcvt_local_dce' doesn't accidentally remove statements it
shouldn't as it will never come across them. I made sure to add a comment to
make clear that there is a direct connection between the two and if we were to
enable vectorization of any other gimple statement we should make sure both
handle it.
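A hedged sketch of the kind of loop this enables (invented example,
roughly the shape of the new vect-bitfield-read-* tests):

  // Hypothetical example: a bitfield read inside a loop with an early
  // exit.  Bitfield lowering may now run on such multi-exit loops so
  // that early-break vectorization can consider them.
  struct S { unsigned a : 23; unsigned b : 9; };

  unsigned
  sum_until_zero (struct S *p, unsigned n)
  {
    unsigned sum = 0;
    for (unsigned i = 0; i < n; ++i)
      {
        if (p[i].a == 0)   // second exit besides the latch test
          break;
        sum += p[i].b;     // bitfield read that gets lowered
      }
    return sum;
  }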
gcc/ChangeLog:
* tree-if-conv.cc (if_convertible_loop_p_1): Move check from here ...
(get_loop_body_in_if_conv_order): ... to here.
(if_convertible_loop_p): Remove single_exit check.
(tree_if_conversion): Move single_exit check to if-conversion part and
support multiple exits.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-bitfield-read-1-not.c: New test.
* gcc.dg/vect/vect-bitfield-read-2-not.c: New test.
* gcc.dg/vect/vect-bitfield-read-8.c: New test.
* gcc.dg/vect/vect-bitfield-read-9.c: New test.
Co-Authored-By: Andre Vieira <andre.simoesdiasvieira@arm.com>
The bitfield vectorization support does not currently recognize bitfields inside
gconds. This means they can't be used as conditions for early break
vectorization, which is functionality we require.
This adds support for them by explicitly matching and handling gcond as a
source.
Testcases are added in the testsuite update patch as the only way to get there
is with the early break vectorization. See tests:
- vect-early-break_20.c
- vect-early-break_21.c
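Separately, a minimal sketch of the shape now recognized (hypothetical,
not one of the above tests): the bitfield access feeds the loop's exit
condition directly:

  // Hypothetical example: the bitfield reference appears in the gcond
  // controlling the early exit.
  struct packet { unsigned flags : 8; unsigned len : 24; };

  int
  first_flagged (struct packet *p, int n)
  {
    for (int i = 0; i < n; ++i)
      if (p[i].flags & 1)   // bitfield ref inside the exit condition
        return i;
    return -1;
  }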
gcc/ChangeLog:
* tree-vect-patterns.cc (vect_init_pattern_stmt): Copy STMT_VINFO_TYPE
from original statement.
(vect_recog_bitfield_ref_pattern): Support bitfields in gcond.
Co-Authored-By: Andre Vieira <andre.simoesdiasvieira@arm.com>
This patch aims to fix some scan-asm failures in pr89229-{5,6,7}b.c, since we
emit scalar vmov{s,d} in this case, when trying to use xmm/ymm registers 16+
without AVX512VL but with AVX512F and EVEX512.
If there is no objection to this change of behavior, we would like to resolve
these failures by modifying the testcases.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr89229-5b.c: Modify test.
* gcc.target/i386/pr89229-6b.c: Ditto.
* gcc.target/i386/pr89229-7b.c: Ditto.
The need to initialize edge probabilities has made make_eh_edges
undesirably hard to use. I suppose we don't want make_eh_edges to
initialize the probability of the newly-added edge itself, so that the
caller takes care of it, but identifying the added edge in need of
adjustments is inefficient and cumbersome. Change make_eh_edges so
that it returns the added edge.
for gcc/ChangeLog
* tree-eh.cc (make_eh_edges): Return the new edge.
* tree-eh.h (make_eh_edges): Likewise.
This patch adds checks for attempting to change the active member of a
union by methods other than a member access expression.
To be able to properly distinguish `*(&u.a) = ` from `u.a = `, this
patch redoes the solution for c++/59950 to avoid extraneous *&; it
seems that the only case that needed the workaround was when copying
empty classes.
This patch also ensures that constructors for a union field mark that
field as the active member before entering the call itself; this ensures
that modifications of the field within the constructor's body don't
cause false positives (as these will not appear to be member access
expressions). This means that we no longer need to start the lifetime of
empty union members after the constructor body completes.
As a drive-by fix, this patch also ensures that value-initialised unions
are considered to have activated their initial member for the purpose of
checking stores and accesses, which catches some additional mistakes
pre-C++20.
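A hedged sketch of code now diagnosed during constant evaluation
(invented example):

  // Hypothetical example: changing the active member other than through
  // a member access expression is now detected.
  union U { int a; float b; };

  constexpr int
  bad ()
  {
    U u {};           // value-initialisation makes u.a the active member
    *(&u.b) = 1.0f;   // not a member access expression: diagnosed
    return u.a;
  }

  // constexpr int x = bad ();   // would be rejected in a constant expression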
PR c++/101631
PR c++/102286
gcc/cp/ChangeLog:
* call.cc (build_over_call): Fold more indirect refs for trivial
assignment op.
* class.cc (type_has_non_deleted_trivial_default_ctor): Create.
* constexpr.cc (cxx_eval_call_expression): Start lifetime of
union member before entering constructor.
(cxx_eval_component_reference): Check against first member of
value-initialised union.
(cxx_eval_store_expression): Activate member for
value-initialised union. Check for accessing inactive union
member indirectly.
* cp-tree.h (type_has_non_deleted_trivial_default_ctor):
Forward declare.
gcc/testsuite/ChangeLog:
* g++.dg/cpp1y/constexpr-89336-3.C: Fix union initialisation.
* g++.dg/cpp1y/constexpr-union6.C: New test.
* g++.dg/cpp1y/constexpr-union7.C: New test.
* g++.dg/cpp2a/constexpr-union2.C: New test.
* g++.dg/cpp2a/constexpr-union3.C: New test.
* g++.dg/cpp2a/constexpr-union4.C: New test.
* g++.dg/cpp2a/constexpr-union5.C: New test.
* g++.dg/cpp2a/constexpr-union6.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
Reviewed-by: Jason Merrill <jason@redhat.com>
This patch improves the errors given when casting from void* in C++26
constant evaluation to include the expected type if the types of the
pointed-to objects were
not similar. It also ensures (for all standard modes) that void* casts
are checked even for DECL_ARTIFICIAL declarations, such as
lifetime-extended temporaries, and that the check is only skipped for
cases where we know it's OK (e.g. source_location::current) or have no
other choice (heap-allocated data).
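A minimal sketch of the improved diagnostic's trigger (hypothetical
example):

  // Hypothetical example: under C++26 constant evaluation, casting the
  // void* back to a type that is not similar to the original object's
  // type is an error, and the diagnostic now names the expected type.
  constexpr int
  f ()
  {
    int i = 42;
    void *p = &i;
    float *q = static_cast<float *> (p);   // int is not similar to float: error
    return i + (q != nullptr);
  }

  // constexpr int x = f ();   // would be rejected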
gcc/cp/ChangeLog:
* constexpr.cc (is_std_source_location_current): New.
(cxx_eval_constant_expression): Only ignore cast from void* for
specific cases and improve other diagnostics.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/constexpr-cast4.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
Reviewed-by: Marek Polacek <polacek@redhat.com>
Reviewed-by: Jason Merrill <jason@redhat.com>
This patch is an optimization tweak for cp_fold_r. If we cp_fold_r the
COND_EXPR's op0 first, we may be able to evaluate it to a constant when optimizing (-O).
cp_fold has:
3143 if (callee && DECL_DECLARED_CONSTEXPR_P (callee)
3144 && !flag_no_inline)
...
3151 r = maybe_constant_value (x, /*decl=*/NULL_TREE,
flag_no_inline is 1 for -O0:
1124 if (opts->x_optimize == 0)
1125 {
1126 /* Inlining does not work if not optimizing,
1127 so force it not to be done. */
1128 opts->x_warn_inline = 0;
1129 opts->x_flag_no_inline = 1;
1130 }
but otherwise it's 0 and cp_fold will call maybe_constant_value on calls
to constexpr functions. And if it doesn't, then folding the COND_EXPR
will keep both arms, and we can avoid calling maybe_constant_value.
gcc/cp/ChangeLog:
* cp-gimplify.cc (cp_fold_r): Don't call maybe_constant_value.
This patch enables the compiler to use inbranch simdclones when generating
masked loops in autovectorization.
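A hedged sketch of the situation this enables (hypothetical
declarations):

  // Hypothetical example: with a fully masked loop (e.g. when the target
  // uses partial vectors), the loop mask can now be passed as the mask
  // argument of the inbranch clone of 'scale'.
  #pragma omp declare simd inbranch
  int scale (int x);

  void
  apply (int *a, int n)
  {
    for (int i = 0; i < n; ++i)
      a[i] = scale (a[i]);
  }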
gcc/ChangeLog:
* omp-simd-clone.cc (simd_clone_adjust_argument_types): Make function
compatible with mask parameters in clone.
* tree-vect-stmts.cc (vect_build_all_ones_mask): Allow vector boolean
typed masks.
(vectorizable_simd_clone_call): Enable the use of masked clones in
fully masked loops.
When analyzing a loop and choosing a simdclone to use, it is possible to choose
a simdclone that cannot be used 'inbranch' for a loop that can use partial
vectors. This may lead to the vectorizer deciding to use partial vectors which
are not supported for notinbranch simd clones. This patch fixes that by
disabling the use of partial vectors once a notinbranch simd clone has been
selected.
gcc/ChangeLog:
PR tree-optimization/110485
* tree-vect-stmts.cc (vectorizable_simd_clone_call): Disable partial
vectors usage if a notinbranch simdclone has been selected.
gcc/testsuite/ChangeLog:
* gcc.dg/gomp/pr110485.c: New test.
The vect_get_smallest_scalar_type helper function was using any argument to a
simd clone call when trying to determine the smallest scalar type that would be
vectorized. This included the function pointer type in a MASK_CALL for
instance, and would result in the wrong type being selected. Instead this
patch special-cases simd clone calls and uses only the scalar types of the
original function that get transformed into vector types.
gcc/ChangeLog:
* tree-vect-data-refs.cc (vect_get_smallest_scalar_type): Special case
simd clone calls and only use types that are mapped to vectors.
(simd_clone_call_p): New helper function.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-simd-clone-16f.c: Remove unnecessary differentiation
between targets with different pointer sizes.
* gcc.dg/vect/vect-simd-clone-17f.c: Likewise.
* gcc.dg/vect/vect-simd-clone-18f.c: Likewise.
Teach parloops how to handle a poly NIT and ALT_BOUND, ahead of the changes
to enable non-constant simdlen.
gcc/ChangeLog:
* tree-parloops.cc (try_transform_to_exit_first_loop_alt): Accept
poly NIT and ALT_BOUND.
SVE simd clones need to be compiled with an SVE target enabled, or the argument
types will not be created properly. To achieve this we need to copy
DECL_FUNCTION_SPECIFIC_TARGET from the original function declaration to the
clones. I decided it was probably also a good idea to copy
DECL_FUNCTION_SPECIFIC_OPTIMIZATION in case the original function is meant to
be compiled with specific optimization options.
gcc/ChangeLog:
* tree-parloops.cc (create_loop_fn): Copy specific target and
optimization options to clone.
On merge, reuse a merged node's possibly cached hash code only if both
containers use the same hash functor type and that functor is stateless.
Using a function pointer or std::function as the hash functor prevents
reusing the cached hash code.
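A hedged usage sketch (invented contents) of when the cached hash code
can and cannot be reused:

  #include <cstddef>
  #include <functional>
  #include <string>
  #include <unordered_set>

  int
  main ()
  {
    // Same stateless hash functor type on both sides: merge can reuse
    // the source nodes' cached hash codes.
    std::unordered_set<std::string> a {"x", "y"}, b {"y", "z"};
    a.merge (b);

    // std::function as the hash functor is potentially stateful, so the
    // cached hash codes are not reused and get recomputed on merge.
    using F = std::function<std::size_t (const std::string &)>;
    std::unordered_set<std::string, F> c (8, F (std::hash<std::string> ()));
    std::unordered_set<std::string, F> d (8, F (std::hash<std::string> ()));
    c.insert ("p");
    d.insert ("q");
    c.merge (d);
  }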
libstdc++-v3/ChangeLog
* include/bits/hashtable_policy.h
(_Hash_code_base::_M_hash_code(const _Hash&, const _Hash_node_value<>&)): Remove.
(_Hash_code_base::_M_hash_code<_H2>(const _H2&, const _Hash_node_value<>&)): Remove.
* include/bits/hashtable.h
(_M_src_hash_code<_H2>(const _H2&, const key_type&, const __node_value_type&)): New.
(_M_merge_unique<>, _M_merge_multi<>): Use the latter.
* testsuite/23_containers/unordered_map/modifiers/merge.cc
(test04, test05, test06): New test cases.
In the case of convert_argument, we would return the same expression
back rather than error_mark_node after the error message about
trying to convert to an incomplete type. This causes issues in
the gimplifier when it tries to see if another conversion is needed.
The code here dates back to before the revision history, so it may
simply never have been noticed that we should return error_mark_node.
Bootstrapped and tested on x86_64-linux-gnu with no regressions.
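A hedged C-style reduction of the triggering shape (invented, not the PR
testcase):

  // Hypothetical example: the parameter type is never completed, so the
  // call's argument conversion errors; convert_argument now returns
  // error_mark_node instead of the original expression.
  struct incomplete;
  void f (struct incomplete);

  void
  g (struct incomplete *p)
  {
    f (*p);   // error: the formal parameter has incomplete type
  }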
PR c/100532
gcc/c/ChangeLog:
* c-typeck.cc (convert_argument): After erroring out
about an incomplete type return error_mark_node.
gcc/testsuite/ChangeLog:
* gcc.dg/pr100532-1.c: New test.
In the same way that we don't warn about converting a NULL pointer constant
to a different named address space, we should not warn about converting it
to a different scalar storage order (endianness) either.
This adds the simple check.
Bootstrapped and tested on x86_64-linux-gnu with no regressions.
PR c/104822
gcc/c/ChangeLog:
* c-typeck.cc (convert_for_assignment): Check for null pointer
before warning about an incompatible scalar storage order.
gcc/testsuite/ChangeLog:
* gcc.dg/sso-18.c: New test.
* gcc.dg/sso-19.c: New test.
While checking another change, I noticed that the new permerror overloads
break gettext with "permerror used incompatibly as both
--keyword=permerror:2 --flag=permerror:2:gcc-internal-format and
--keyword=permerror:3 --flag=permerror:3:gcc-internal-format". So let's
change the name.
gcc/ChangeLog:
* diagnostic-core.h (permerror): Rename new overloads...
(permerror_opt): To this.
* diagnostic.cc: Likewise.
gcc/cp/ChangeLog:
* typeck2.cc (check_narrowing): Adjust.
Since these strings are passed to error_at, they should be marked for
translation with G_, like other diagnostic messages, rather than _, which
forces immediate (redundant) translation. The use of N_ is less
problematic, but also imprecise.
gcc/cp/ChangeLog:
* parser.cc (cp_parser_primary_expression): Use G_.
(cp_parser_using_enum): Likewise.
* decl.cc (identify_goto): Likewise.
SPARK RM 6.1.11 introduces a new aspect Side_Effects to denote
those functions which may have output parameters, write global
variables, raise exceptions and not terminate. This adds support
for this aspect and the corresponding pragma in the frontend.
Handling of this aspect in the frontend is very similar to
the handling of aspect Extensions_Visible: both are Boolean
aspects whose expression should be static, they can be specified
on the same entities, with the same rule of inheritance from
overridden to overriding primitives for tagged types.
There is no impact on code generation.
gcc/ada/
* aspects.ads: Add aspect Side_Effects.
* contracts.adb (Add_Pre_Post_Condition)
(Inherit_Subprogram_Contract): Add support for new contract.
* contracts.ads: Update comments.
* einfo-utils.adb (Get_Pragma): Add support.
* einfo-utils.ads (Prag): Update comment.
* errout.ads: Add explain codes.
* par-prag.adb (Prag): Add support.
* sem_ch13.adb (Analyze_Aspect_Specifications)
(Check_Aspect_At_Freeze_Point): Add support.
* sem_ch6.adb (Analyze_Subprogram_Body_Helper)
(Analyze_Subprogram_Declaration): Call new analysis procedure to
check SPARK legality rules.
(Analyze_SPARK_Subprogram_Specification): New procedure to check
SPARK legality rules. Use an explain code for the error.
(Analyze_Subprogram_Specification): Move checks to new subprogram.
This code was effectively dead, as the kind for parameters was set
to E_Void at this point to detect early references.
* sem_ch6.ads (Analyze_Subprogram_Specification): Add new
procedure.
* sem_prag.adb (Analyze_Depends_In_Decl_Part)
(Analyze_Global_In_Decl_Part): Adapt legality check to apply only
to functions without side-effects.
(Analyze_If_Present): Extract functionality in new procedure
Analyze_If_Present_Internal.
(Analyze_If_Present_Internal): New procedure to analyze given
pragma kind.
(Analyze_Pragmas_If_Present): New procedure to analyze given
pragma kind associated with a declaration.
(Analyze_Pragma): Adapt support for Always_Terminates and
Exceptional_Cases. Add support for Side_Effects. Make sure to call
Analyze_If_Present to ensure pragma Side_Effects is analyzed prior
to analyzing pragmas Global and Depends. Use explain codes for the
errors.
* sem_prag.ads (Analyze_Pragmas_If_Present): Add new procedure.
* sem_util.adb (Is_Function_With_Side_Effects): New query function
to determine if a function is a function with side-effects.
* sem_util.ads (Is_Function_With_Side_Effects): Same.
* snames.ads-tmpl: Declare new names for pragma and aspect.
* doc/gnat_rm/implementation_defined_aspects.rst: Document new aspect.
* doc/gnat_rm/implementation_defined_pragmas.rst: Document new pragma.
* gnat_rm.texi: Regenerate.
Rewrite a for loop containing an exit (which violates the GNATcheck
rule Exits_From_Conditional_Loops) to use a while loop
that contains the exit criterion in its condition.
Also, move the special case of the first time through the loop to come
before the loop.
gcc/ada/
* libgnat/s-imagef.adb (Set_Image_Fixed): Refactor loop.
Exempt the GNATcheck rule "Unassigned_OUT_Parameters"
with the rationale "the OUT parameter is assigned by component".
gcc/ada/
* libgnat/s-imguti.adb (Set_Decimal_Digits): Add pragma to exempt
Unassigned_OUT_Parameters.
(Set_Floating_Invalid_Value): Likewise.
Add documentation for the -Q gnatbind switch in the GNAT User's Guide and
improve gnatbind's help output for the switch to emphasize that it adds the
requested number of stacks to the secondary stack pool generated by the
binder.
gcc/ada/
* bindusg.adb (Display): Make it clear -Q adds to the number of
secondary stacks generated by the binder.
* doc/gnat_ugn/building_executable_programs_with_gnat.rst:
Document the -Q gnatbind switch and fix references to old
runtimes.
* gnat-style.texi: Regenerate.
* gnat_rm.texi: Regenerate.
* gnat_ugn.texi: Regenerate.
This patch is intended as a readability improvement. It doesn't
change the behavior of the compiler.
gcc/ada/
* sem_ch3.adb (Constrain_Array): Replace manual list length
computation by call to List_Length.
As noted on the PR, commit r13-1544, the fix for PR53431, did not handle
the specific case of -Wunknown-pragmas, because that warning is issued
during preprocessing, but not by libcpp directly (it comes from the
cb_def_pragma callback). Address that by handling this pragma in
addition to libcpp pragmas in the early pragma handler.
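A minimal sketch of what this makes work (hypothetical vendor pragma):

  // Hypothetical example: with the fix, this suppression also applies to
  // the -Wunknown-pragmas warning issued during (early) preprocessing.
  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wunknown-pragmas"
  #pragma vendor_tuning enable   // no warning
  #pragma GCC diagnostic pop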
gcc/c-family/ChangeLog:
PR c++/89038
* c-pragma.cc (handle_pragma_diagnostic_impl): Handle
-Wunknown-pragmas during early processing.
gcc/testsuite/ChangeLog:
PR c++/89038
* c-c++-common/cpp/Wunknown-pragmas-1.c: New test.
This PR was fixed by r12-4797 and r12-5454. Add test coverage from the PR
that is not represented elsewhere.
gcc/testsuite/ChangeLog:
PR preprocessor/82335
* c-c++-common/cpp/diagnostic-pragma-3.c: New test.
As the testcase shows, when a PHI node dominates the loop there is no new
definition inside the loop. As such there would be no PHI nodes to update.
When we maintain LCSSA form we create an intermediate node in between the two
loops to thread along the value. However, later on when we update the second
loop we don't have any PHI nodes to update and so adjust_phi_and_debug_stmts
does nothing. This leaves us with an incorrect phi node. Normally this does
nothing and just gets ignored. But in the case of the vUSE chain we end up
corrupting the chain.
As such whenever a PHI node's argument dominates the loop, we should remove
the newly created PHI node after edge redirection.
The one exception to this is when the loop has been versioned. In such cases
the versioned loop may not use the value but the second loop can.
When this happens and we add the loop guard, then unless the join block has
the PHI, we can't find the original value for use inside the guard block.
The next refactoring in the series moves the formation of the guard block
inside peeling itself. Here we have all the information and wouldn't
need to re-create it later.
gcc/ChangeLog:
PR tree-optimization/111860
* tree-vect-loop-manip.cc (slpeel_tree_duplicate_loop_to_edge_cfg):
Remove PHI nodes that dominate loop.
gcc/testsuite/ChangeLog:
PR tree-optimization/111860
* gcc.dg/vect/pr111860.c: New test.
The following implements SLP vectorization support for gathers
without relying on IFNs being pattern detected (and supported by
the target). That includes support for emulated gathers but also
the legacy x86 builtin path.
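For reference, a minimal sketch of the kind of loop this targets
(invented code, roughly the shape exercised by the vect-gather-* tests):

  // Hypothetical example: indexed (gather) loads in an SLP group; these
  // can now be handled in SLP, either emulated or via the legacy x86
  // builtins, without a pattern-matched gather internal function.
  void
  g (double *__restrict dst, const double *src, const int *idx, int n)
  {
    for (int i = 0; i < n; ++i)
      {
        dst[2 * i]     = src[idx[2 * i]];
        dst[2 * i + 1] = src[idx[2 * i + 1]];
      }
  }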
PR tree-optimization/111131
* tree-vect-loop.cc (update_epilogue_loop_vinfo): Make
sure to update all gather/scatter stmt DRs, not only those
that eventually got VMAT_GATHER_SCATTER set.
* tree-vect-slp.cc (_slp_oprnd_info::first_gs_info): Add.
(vect_get_and_check_slp_defs): Handle gathers/scatters,
adding the offset as SLP operand and comparing base and scale.
(vect_build_slp_tree_1): Handle gathers.
(vect_build_slp_tree_2): Likewise.
* gcc.dg/vect/vect-gather-1.c: Now expected to vectorize
everywhere.
* gcc.dg/vect/vect-gather-2.c: Expected to not SLP anywhere.
Massage the scale case to more reliably produce a different
one. Scan for the specific messages.
* gcc.dg/vect/vect-gather-3.c: Masked gather is also supported
for AVX2, but not emulated.
* gcc.dg/vect/vect-gather-4.c: Expected to not SLP anywhere.
Massage to more properly ensure this.
* gcc.dg/vect/tsvc/vect-tsvc-s353.c: Expect to vectorize
everywhere.
The following moves the builtin decl gather vectorization path alongside
the internal function and emulated gather vectorization paths,
simplifying the existing function down to generating the call and the
required conversions to the actual argument types. This thereby
exposes that path's unique support for twice the number of offset
or data vector lanes. It also makes the code path handle SLP
in principle (but the SLP build needs adjustments for this; patch coming).
* tree-vect-stmts.cc (vect_build_gather_load_calls): Rename
to ...
(vect_build_one_gather_load_call): ... this. Refactor,
inline widening/narrowing support ...
(vectorizable_load): ... here, do gather vectorization
with builtin decls along other gather vectorization.
The test is trying to check that we don't use q-register stores with
-mstrict-align, so actually check specifically for that.
This is a prerequisite to avoid regressing:
scan-assembler-not "add\tx0, x0, :"
with the upcoming ldp fusion pass, as we change where the ldps are
formed such that a register is used rather than a symbolic (lo_sum)
address for the first load.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/pr71727.c: Adjust scan-assembler-not to
make sure we don't have q-register stores with -mstrict-align.
With the new ldp/stp pass enabled, there is a change in the codegen for
this test as follows:
add x8, sp, 16
ptrue p3.h, mul3
str p3, [x8]
- str x8, [sp, 8]
- str x9, [sp]
+ stp x9, x8, [sp]
ptrue p3.d, vl8
ptrue p2.s, vl7
ptrue p1.h, vl6
i.e. we now form an stp that we were missing previously. This patch
adjusts the scan-assembler such that it should pass whether or not
we form the stp.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve/pcs/args_9.c: Adjust scan-assemblers to
allow for stp.
The test is looking for individual stores which are able to be merged
into stp instructions. The test currently passes -fno-schedule-fusion
-fno-peephole2, presumably to prevent these stores from being turned
into stps, but this is no longer sufficient with the new ldp/stp fusion
pass.
As such, we add --param=aarch64-stp-policy=never to prevent stps being
formed.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/lr_free_1.c: Add
--param=aarch64-stp-policy=never to dg-options.