This patch adds support for signed and unsigned, HImode and SImode highpart
multiplications to the nvptx backend.
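For reference, a minimal sketch of the kind of source these patterns
cover (function names are illustrative, not from the patch):

  /* SImode: the highpart of a 32x32->64 signed multiply should now
     map to a single mul.hi.s32 instruction.  */
  int
  mulhi_s32 (int x, int y)
  {
    return (int) (((long long) x * y) >> 32);
  }

  /* HImode, unsigned: the highpart of a 16x16->32 multiply.  */
  unsigned short
  mulhi_u16 (unsigned short x, unsigned short y)
  {
    return (unsigned short) (((unsigned int) x * y) >> 16);
  }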
This patch has been tested on nvptx-none hosted on x86_64-pc-linux-gnu
with a "make" and "make -k check" with no new failures.
2020-08-04 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog:
* config/nvptx/nvptx.md (smulhi3_highpart, smulsi3_highpart)
(umulhi3_highpart, umulsi3_highpart): New instructions.
gcc/testsuite/ChangeLog:
* gcc.target/nvptx/mul-hi.c: New test.
* gcc.target/nvptx/umul-hi.c: New test.
In r9-4235 I tried to make sure that the template keyword follows
a nested-name-specifier. :: is a valid nested-name-specifier, so
I also have to check 'globalscope' before giving the error.
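A sketch of the kind of code that is now accepted (assumed to resemble
the new test):

  template<typename T> struct S { };
  // 'template' follows the global-scope :: in an
  // elaborated-type-specifier:
  struct ::template S<int> s;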
gcc/cp/ChangeLog:
PR c++/96082
* parser.c (cp_parser_elaborated_type_specifier): Allow
'template' following ::.
gcc/testsuite/ChangeLog:
PR c++/96082
* g++.dg/template/template-keyword3.C: New test.
If we lower a constant string operation in a Binary_expression,
delete the strings. This is safe because constant strings are always
newly allocated.
This is a hack to use much less memory when compiling the new
time/tzdata package, which has a file that contains the sum of over
13,000 constant strings. We don't do this for numeric expressions
because that could cause us to delete an Iota_expression.
We should have a cleaner approach to memory usage some day.
Fixes PR go/96450
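The pattern, sketched as self-contained C++ (the gofrontend's actual
classes differ):

  #include <string>

  struct StringExpr { std::string value; };

  // Fold a constant concatenation and free the operands immediately.
  // Safe here because both operands are always newly allocated; it
  // bounds peak memory when thousands of terms are summed.
  StringExpr *
  fold_concat (StringExpr *left, StringExpr *right)
  {
    StringExpr *result = new StringExpr {left->value + right->value};
    delete left;
    delete right;
    return result;
  }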
Previously, compiling with -march=armv8.1-m.main would tune for
Cortex-M7.
However, the Cortex-M7 only supports up to Armv7e-M. The Cortex-M55 is
the earliest CPU that supports Armv8.1-M Mainline, so it is a more
appropriate tuning target.
This also has the effect of changing the branch cost function used, which
will be necessary to correctly prioritise conditional instructions over branches
in the rest of this patch series.
Regression tested on arm-none-eabi.
gcc/ChangeLog
2020-08-04 Omar Tahir <omar.tahir@arm.com>
* config/arm/arm-cpus.in (armv8.1-m.main): Tune for Cortex-M55.
I noticed that we could leak parser->num_template_parameter_lists with
erroneous specializations. We'd increment, notice a problem and then
bail out. This refactors cp_parser_explicit_specialization to avoid
that code path. A couple of tests get different diagnostics because
of the fix. pr39425 now runs into unbounded template instantiation
and exceeds the implementation limit.
gcc/cp/
* parser.c (cp_parser_explicit_specialization): Refactor
to avoid leak of num_template_parameter_lists value.
gcc/testsuite/
* g++.dg/template/pr39425.C: Adjust errors, (unbounded
template recursion).
* g++.old-deja/g++.pt/spec20.C: Remove fallout diagnostics.
All FP intrinsics have FLAG_FP set by default, but not all of them
raise FP exceptions or read the FPCR register, so we add a global flag
FLAG_AUTO_FP to suppress the flag FLAG_FP.
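A self-contained sketch of the suppression logic (the names are
modeled on the ChangeLog below; the real check lives in
aarch64_call_properties):

  enum { FLAG_FP = 1 << 0, FLAG_AUTO_FP = 1 << 1 };

  // FLAG_FP is implied for any floating-point mode unless the
  // intrinsic opted out via FLAG_AUTO_FP.
  unsigned int
  call_properties (unsigned int d_flags, bool float_mode_p)
  {
    unsigned int flags = d_flags;
    if (!(d_flags & FLAG_AUTO_FP) && float_mode_p)
      flags |= FLAG_FP;
    return flags;
  }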
2020-08-04 Zhiheng Xie <xiezhiheng@huawei.com>
gcc/ChangeLog:
* config/aarch64/aarch64-builtins.c (aarch64_call_properties):
Use FLOAT_MODE_P macro instead of enumerating all floating-point
modes and add global flag FLAG_AUTO_FP.
When looking at the symver attribute documentation in HTML, I noticed
there was no index entry to refer to it by.
2020-08-04 Jakub Jelinek <jakub@redhat.com>
* doc/extend.texi (symver): Add @cindex for symver function attribute.
PR rtl-optimization/60473 is a code quality regression that has
been cured by improvements to register allocation. For the function
in the test case, GCC 4.4, 4.5 and 4.6 generated very poor code
requiring two mov instructions, and GCC 4.7 and 4.8 (when the PR was
filed) produced better but still poor code with one mov instruction.
Since GCC 4.9 (including current mainline), it generates optimal
code with no mov instructions, matching what used to be generated
in GCC 4.1.
2020-08-04 Roger Sayle <roger@nextmovesoftware.com>
gcc/testsuite/ChangeLog
PR rtl-optimization/60473
* gcc.target/i386/pr60473.c: New test.
This transformation is quite straightforward: without overflow, 3*X==15 is
the same as X==5 and 3*X==5 cannot happen. Adding a single_use restriction
for the first case didn't seem necessary, although of course it can
slightly increase register pressure in some cases.
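Worked examples (valid because signed overflow is undefined without
-fwrapv):

  int f (int x) { return 3 * x == 15; }  /* folds to x == 5 */
  int g (int x) { return 3 * x == 5; }   /* folds to 0: no such x */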
2020-08-04 Marc Glisse <marc.glisse@inria.fr>
PR tree-optimization/95433
* match.pd (X * C1 == C2): New transformation.
* gcc.c-torture/execute/pr23135.c: Add -fwrapv to avoid
undefined behavior.
* gcc.dg/tree-ssa/pr95433.c: New file.
Adds code generation for creating a temporary for struct and array
literals and pre-filling it with zeroes before assigning, so that
alignment holes don't cause objects to produce a non-deterministic hash
value. A new field has been added to the expression visitor to track
whether the result is being generated for another literal, so that
memset() is only called once on the top-level literal expression, and
not for nested structs or arrays.
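Why the zero pre-fill matters, sketched in C++ (the patch itself is in
the D front end):

  #include <cstring>

  struct S { char c; int i; };  // 3 padding bytes between 'c' and 'i'

  S make_literal ()
  {
    S tmp;
    std::memset (&tmp, 0, sizeof (tmp));  // make the padding deterministic
    tmp.c = 'a';
    tmp.i = 42;
    return tmp;  // hashing tmp's raw bytes is now reproducible
  }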
gcc/d/ChangeLog:
PR d/96153
* d-tree.h (build_expr): Add literalp argument.
* expr.cc (ExprVisitor): Add literalp_ field.
(ExprVisitor::ExprVisitor): Initialize literalp_.
(ExprVisitor::visit (AssignExp *)): Call memset() on blits where RHS
is a struct literal. Elide assignment if initializer is all zeroes.
(ExprVisitor::visit (CastExp *)): Forward literalp_ to generation of
subexpression.
(ExprVisitor::visit (AddrExp *)): Likewise.
(ExprVisitor::visit (ArrayLiteralExp *)): Use memset() to pre-fill
object with zeroes. Set literalp in subexpressions.
(ExprVisitor::visit (StructLiteralExp *)): Likewise.
(ExprVisitor::visit (TupleExp *)): Set literalp in subexpressions.
(ExprVisitor::visit (VectorExp *)): Likewise.
(ExprVisitor::visit (VectorArrayExp *)): Likewise.
(build_expr): Forward literal_p to ExprVisitor.
gcc/testsuite/ChangeLog:
PR d/96153
* gdc.dg/pr96153.d: New test.
Implement TImode shifts in the backend.
The middle-end support that does it for other architectures doesn't work for
GCN because BITS_PER_WORD==32, meaning that TImode is quad-word, not
double-word.
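Illustrative source that exercises the new patterns:

  __int128 shl (__int128 x, int n) { return x << n; }  /* TImode shift */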
gcc/ChangeLog:
* config/gcn/gcn.md ("<expander>ti3"): New.
This patch preserves the source locations of each node in a member
initializer list so that during processing of the list we can set
input_location appropriately for generally more accurate diagnostic
locations. Since TREE_LIST nodes are tcc_exceptional, they can't have
source locations, so we instead store the location in a dummy
tcc_expression node within the TREE_TYPE of the list node.
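An illustration of the intended effect (assumed, not the exact new
test): the error for the bad mem-initializer below is now reported at
the initializer of 'y' itself.

  struct A
  {
    int *x, y;
    A () : x (0), y ("oops") { }  // error now points at y ("oops")
  };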
gcc/cp/ChangeLog:
PR c++/94024
* init.c (sort_mem_initializers): Preserve TREE_TYPE of the
member initializer list node.
(emit_mem_initializers): Set input_location when performing each
member initialization.
* parser.c (cp_parser_mem_initializer): Attach the source
location of this initializer to a dummy EMPTY_CLASS_EXPR
within the TREE_TYPE of the list node.
* pt.c (tsubst_initializer_list): Preserve TREE_TYPE of the
member initializer list node.
gcc/testsuite/ChangeLog:
PR c++/94024
* g++.dg/diagnostic/mem-init1.C: New test.
This adds a stopgap measure to avoid performing code-hoisting
on mixed type loads when the load we'd insert in the hoisting
position would be a floating point one. This is because certain
targets (hello x87) cannot perform floating point loads without
possibly altering the bit representation, and thus such loads cannot
be used in place of integral loads.
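A hypothetical sketch of the guarded-against pattern (in the spirit of
the PR, not the actual testcase): the same memory is read as an
integer on one path and as a double on the other, so a load hoisted
above the branch must not use the floating-point mode on x87.

  union U { long long i; double d; };

  long long
  read_bits (union U *p, int c)
  {
    if (c)
      return p->i;   /* integral load */
    union U tmp;
    tmp.d = p->d;    /* floating-point load of the same bytes */
    return tmp.i;
  }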
2020-08-04 Richard Biener <rguenther@suse.de>
PR tree-optimization/88240
* tree-ssa-sccvn.h (vn_reference_s::punned): New flag.
* tree-ssa-sccvn.c (vn_reference_insert): Initialize punned.
(vn_reference_insert_pieces): Likewise.
(visit_reference_op_call): Likewise.
(visit_reference_op_load): Track whether a ref was punned.
* tree-ssa-pre.c (do_hoist_insertion): Refuse to perform hoist
insertion on punned floating point loads.
* gcc.target/i386/pr88240.c: New testcase.
This is my attempt at reviving the old patch
https://gcc.gnu.org/pipermail/gcc-patches/2019-January/514632.html
I have followed up on Kyrill's comment upstream in the link above and
am using the recommended option iii) that he mentioned:
"i) Adjust the copy_limit to 256 bits after checking
AARCH64_EXTRA_TUNE_NO_LDP_STP_QREGS in the tuning.
ii) Adjust aarch64_copy_one_block_and_progress_pointers to handle
256-bit moves.
iii) Emit explicit V4SI (or any other 128-bit vector mode) pairs of
ldp/stps. This wouldn't need any adjustments to MD patterns,
but would make aarch64_copy_one_block_and_progress_pointers
more complex as it would now have two paths, where one
handles two adjacent memory addresses in one call."
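An illustrative copy that should now be able to use q-register pairs
(assumed shape, not the new testcase):

  struct S { long long a[8]; };  /* 64 bytes */
  void copy (struct S *d, const struct S *s) { *d = *s; }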
gcc/ChangeLog:
* config/aarch64/aarch64.c (aarch64_gen_store_pair): Add case
for E_V4SImode.
(aarch64_gen_load_pair): Likewise.
(aarch64_copy_one_block_and_progress_pointers): Handle 256 bit copy.
(aarch64_expand_cpymem): Expand copy_limit to 256 bits where
appropriate.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/cpymem-q-reg_1.c: New test.
* gcc.target/aarch64/large_struct_copy_2.c: Update for ldp q regs.
With the pr96628-part1.f90 source and -ftree-slp-vectorize, we run into an
ICE due to the fact that V2DI mode is not handled in nvptx_gen_shuffle.
Fix this by adding handling of V2DI as well as V2SI mode in
nvptx_gen_shuffle.
Build and reg-tested on x86_64 with nvptx accelerator.
gcc/ChangeLog:
PR target/96428
* config/nvptx/nvptx.c (nvptx_gen_shuffle): Handle V2SI/V2DI.
libgomp/ChangeLog:
PR target/96428
* testsuite/libgomp.oacc-fortran/pr96628-part1.f90: New test.
* testsuite/libgomp.oacc-fortran/pr96628-part2.f90: New test.
.VEC_CONVERT is a const internal call, so normally if the lhs is not used,
we'd DCE it far before getting to veclower, but with -O0 (or perhaps
-fno-tree-dce and some other -fno-* options) it can happen.
But as the internal fn needs the lhs to know the type to which the
conversion is done (and I think that is a reasonable representation;
adding some magic extra argument and having to create constants with
that type looks like overkill to me), we should just DCE those calls
ourselves.
During veclower, we can't really remove insns, as the callers would be
upset, so this just replaces it with a GIMPLE_NOP.
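The kind of source that can reach veclower with an unused result at
-O0 (illustrative, not the testcase from the PR):

  typedef int v4si __attribute__ ((vector_size (16)));
  typedef float v4sf __attribute__ ((vector_size (16)));

  void
  f (v4si x)
  {
    (void) __builtin_convertvector (x, v4sf);  /* result unused */
  }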
2020-08-04 Jakub Jelinek <jakub@redhat.com>
PR middle-end/96426
* tree-vect-generic.c (expand_vector_conversion): Replace .VEC_CONVERT
call with GIMPLE_NOP if there is no lhs.
* gcc.c-torture/compile/pr96426.c: New test.
We are less strict about what is and what is not accepted in debug
stmts, so this patch just punts on optimization of a debug stmt rather
than
ICEing.
2020-08-04 Jakub Jelinek <jakub@redhat.com>
PR debug/96354
* gimple-fold.c (maybe_canonicalize_mem_ref_addr): Add IS_DEBUG
argument. Return false instead of gcc_unreachable if it is true and
get_addr_base_and_unit_offset returns NULL.
(fold_stmt_1) <case GIMPLE_DEBUG>: Adjust caller.
* g++.dg/opt/pr96354.C: New test.
2020-08-04 Jakub Jelinek <jakub@redhat.com>
* omp-expand.c (expand_omp_for_init_counts): For triangular loops
compute number of iterations at runtime more efficiently.
(expand_omp_for_init_vars): Adjust immediate dominators.
(extract_omp_for_update_vars): Likewise.
gcc/ChangeLog:
* vr-values.c (test_for_singularity): Use irange API.
(simplify_using_ranges::simplify_cond_using_ranges_1): Do not
special case VR_RANGE.
Anti-ranges of the form ~[MIN,X] are automatically canonicalized to
[X+1,MAX] at creation time; for instance, ~[INT_MIN,9] for int becomes
[10,INT_MAX]. There is no need to handle them specially.
Tested by adding a gcc_unreachable and bootstrapping/testing.
gcc/ChangeLog:
* builtins.c (determine_block_size): Remove ad-hoc range canonicalization.
v5 updates, as per review comments:
1. Move const_rhs out of the loop;
2. Iterate from int size for read_mode.
This patch optimizes the following (it works for char/short/int/void*):
6: r119:TI=[r118:DI+0x10]
7: [r118:DI]=r119:TI
8: r121:DI=[r118:DI+0x8]
=>
6: r119:TI=[r118:DI+0x10]
16: r122:DI=r119:TI#8
The final asm, shown below, avoids the partial load after the full
store (stxv followed by ld):
ld 10,16(3)
mr 9,3
ld 3,24(3)
std 10,0(9)
std 3,8(9)
blr
It could achieve ~25% performance improvement for typical cases on
Power9. Bootstrap and regression tested on Power9-LE.
For AArch64, one ldr is replaced by a mov with this patch:
ldp x2, x3, [x0, 16]
stp x2, x3, [x0]
ldr x0, [x0, 8]
=>
mov x1, x0
ldp x2, x0, [x0, 16]
stp x2, x0, [x1]
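A sketch of the source pattern involved (modeled on the PR, details
assumed):

  struct path { void *mnt; void *dentry; };
  struct nameidata { struct path path, root; };

  void *
  foo (struct nameidata *nd)
  {
    nd->path = nd->root;     /* full-width store */
    return nd->path.dentry;  /* formerly a partial reload from memory;
                                now extracted from the stored register */
  }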
gcc/ChangeLog:
2020-08-04 Xionghu Luo <luoxhu@linux.ibm.com>
PR rtl-optimization/71309
* dse.c (find_shift_sequence): Use subreg of shifted from high part
register to avoid loading from address.
gcc/testsuite/ChangeLog:
2020-08-04 Xionghu Luo <luoxhu@linux.ibm.com>
PR rtl-optimization/71309
* gcc.target/powerpc/pr71309.c: New test.
This accommodates increased space required by use of the xsavec
instruction in the dynamic linker trampoline.
libgcc/ChangeLog:
* config/i386/morestack.S (BACKOFF) [x86_64]: Add 2048 bytes.
The vector_float.c test requires lp64, so it should be skipped
otherwise.
2020-08-03 Segher Boessenkool <segher@kernel.crashing.org>
gcc/testsuite/
* gcc.target/powerpc/vector_float.c: Skip if not lp64.
This is DR 2032 which says that the restrictions regarding template
parameter packs and default arguments apply to variable templates as
well, but we weren't detecting that.
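An example of what is now rejected (assumed to resemble the new test):
in a primary variable template, a template parameter pack must be the
last template parameter.

  template<typename... Ts, typename U>  // error after this patch
  int v = 0;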
gcc/cp/ChangeLog:
DR 2032
PR c++/96218
* pt.c (check_default_tmpl_args): Also consider variable
templates.
gcc/testsuite/ChangeLog:
DR 2032
PR c++/96218
* g++.dg/cpp1y/var-templ67.C: New test.
As mentioned in the PR, the fallback path when LSE is unavailable
writes the wrong registers to memory if the previous content compares
equal to x0, x1: it writes a copy of x0, x1 as they were at the start
of the function, but it should write x2, x3.
2020-08-03 Jakub Jelinek <jakub@redhat.com>
PR target/96402
* config/aarch64/lse.S (__aarch64_cas16_acq_rel): Use x2, x3 instead
of x(tmp0), x(tmp1) in STXP arguments.
* gcc.target/aarch64/pr96402.c: New test.
This prevents a ... token in code examples from being turned into a
single HORIZONTAL ELLIPSIS glyph (e.g. via the HTML &hellip; entity).
gcc/ChangeLog:
* doc/cpp.texi (Variadic Macros): Use the exact ... token in
code examples.
Standalone attach and detach clauses should not create present/release
mappings for Fortran array descriptors (e.g. used when we have a pointer
to an array), both because it is unnecessary and because those mappings
will be incorrectly subject to reference counting. Simply omitting the
mappings means we just use GOMP_MAP_TO_PSET and GOMP_MAP_{ATTACH,DETACH}
mappings for array descriptors.
That requires a tweak in gimplify.c, since we may now see GOMP_MAP_TO_PSET
without a preceding data-movement mapping.
2020-08-03 Julian Brown <julian@codesourcery.com>
Thomas Schwinge <thomas@codesourcery.com>
gcc/fortran/
* trans-openmp.c (gfc_trans_omp_clauses): Don't create present/release
mappings for array descriptors.
gcc/
* gimplify.c (gimplify_omp_target_update): Allow GOMP_MAP_TO_PSET
without a preceding data-movement mapping.
gcc/testsuite/
* gfortran.dg/goacc/attach-descriptor.f90: Update pattern output. Add
scanning of gimplify dump.
libgomp/
* testsuite/libgomp.oacc-fortran/attach-descriptor-1.f90: Don't run for
shared-memory devices. Extend with further checking.
Co-Authored-By: Thomas Schwinge <thomas@codesourcery.com>
Work on the Arm64 port shows that these two macros can be declared
ahead of the version in darwin.h, which needs to override them (for
X86 and PPC this wasn't needed).
gcc/ChangeLog:
* config/darwin.h (ASM_DECLARE_FUNCTION_NAME): UNDEF before
use.
(DEF_MIN_OSX_VERSION): Only define if there's no existing
def.
The common code that selects suitable sections for literals needs
to inspect the machine_mode. For some sub-targets the size of that
mode might be represented as a poly-int.
There was a workaround in place that allowed for cases where the
poly-int had only one component. This removes the workaround and
handles the cases where we care about the machine_mode size.
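A sketch of the shape of the new handling (GCC-internal types; the
exact code is assumed):

  poly_uint64 size = GET_MODE_SIZE (mode);
  unsigned HOST_WIDE_INT csize;
  if (size.is_constant (&csize))
    ; /* pick a mergeable-constant section suited to csize */
  else
    ; /* fall back to the generic read-only data section */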
gcc/ChangeLog:
* config/darwin.c (IN_TARGET_CODE): Remove.
(darwin_mergeable_constant_section): Handle poly-int machine modes.
(machopic_select_rtx_section): Likewise.