This change reworks the condition variables support for VxWorks
to address the very legitimate points raised by Rasmus in
https://gcc.gnu.org/pipermail/gcc/2020-May/232524.html
While some of the issues were taken care of by the use of semFlush,
a few others were indeed calling for adjustments.
We first considered resorting to the condvarLib library available
in VxWorks7. Unfortunately, it is vx7 only and we wanted something working
for at least vx 6.9 as well. It also turned out to require the use of
recursive mutexes for condVarWait, which seemed unnecessarily constraining.
Instead, this change corrects the sequencing logic in a few places and
leverages the semExchange API to ensure the key atomicity requirement on
cond_wait operations.
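As a rough illustration of that sequencing (a sketch only; the exact
semExchange prototype and the error handling here are assumptions to be
checked against the semLib documentation of the target VxWorks version):

  #include <semLib.h>

  /* Hypothetical helper, not the libgcc code: atomically release the
     mutex semaphore and start waiting on the condition semaphore, so a
     signal arriving between the release and the wait cannot be lost.  */
  static int
  sketch_cond_wait (SEM_ID cond_sem, SEM_ID mutex_sem)
  {
    if (semExchange (mutex_sem, cond_sem, WAIT_FOREVER) != OK)
      return -1;
    /* Always take the mutex again before returning, as noted below.  */
    return semTake (mutex_sem, WAIT_FOREVER) == OK ? 0 : -1;
  }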
2020-10-14 Alexandre Oliva <oliva@adacore.com>
libgcc/
* config/gthr-vxworks-thread.c: Include stdlib.h.
(tls_delete_hook): Prototype it.
(__gthread_cond_signal): Return early if no waiters. Consume
signal in case the semaphore got full. Use semInfoGet instead
of kernel-mode-only semInfo.
(__gthread_cond_timedwait): Use semExchange. Always take the
mutex again before returning.
* config/gthr-vxworks-cond.c (__gthread_cond_wait): Likewise.
This fixlet makes sure -mcpu=xscale selects the correct VX_CPU.
2020-10-14 Olivier Hainque <hainque@adacore.com>
gcc/
* config/arm/vxworks.h (TARGET_OS_CPP_BUILTINS): Fix
the VX_CPU selection for -mcpu=xscale on arm-vxworks.
The standard doesn't guarantee that null pointers compare less than
non-null pointers. AddressSanitizer complains about the pptr() > egptr()
comparison in basic_stringbuf::str() when egptr() is null.
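A minimal sketch of the guard this implies, with plain pointers standing in
for the stream buffer pointers (illustrative only, not the libstdc++ code):

  #include <string>

  // Hypothetical helper: when egptr is null (output-only buffer) just take
  // [pbase, pptr); only compare the two pointers when both are non-null.
  static std::string
  make_str (const char* pbase, const char* pptr, const char* egptr)
  {
    if (pptr)
      {
        const char* hi = (!egptr || pptr > egptr) ? pptr : egptr;
        return std::string (pbase, hi);
      }
    return std::string ();
  }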
libstdc++-v3/ChangeLog:
PR libstdc++/97415
* include/std/sstream (basic_stringbuf::str()): Check for
null egptr() before comparing to non-null pptr().
This change reworks CPP_BUILTINS_SPEC for powerpc-vxworks to
prepare for the upcoming addition of 32-bit and 64-bit ports for
VxWorks 7r2.
2020-10-14 Olivier Hainque <hainque@adacore.com>
gcc/
* config/rs6000/vxworks.h (TARGET_OS_CPP_BUILTINS): Accommodate
expectations from different versions of VxWorks, for 32-bit or 64-bit
configurations.
Arrange to inhibit the effects of CPLUSPLUS_CPP_SPEC in gnu-user.h,
which #defines _GNU_SOURCE. That define is invalid for VxWorks, which
may not provide ::mkstemp, for example.
2020-10-14 Olivier Hainque <hainque@adacore.com>
gcc/
* config/vxworks.h: #undef CPLUSPLUS_CPP_SPEC.
This is useful to handle ports where we might arrange to use
different sets of fixed headers for different multilibs, typically
for kernel vs rtp modes.
2020-10-14 Olivier Hainque <hainque@adacore.com>
libgcc/
* config/t-vxworks (LIBGCC2_INCLUDES): Append
$(MULTISUBDIR) to the -I path for fixed headers, as we
arrange to have different sets of such headers for different
multilibs when they are activated.
* config/t-vxworks7: Likewise.
The special vxworks rules for the compilation of libgcc had
-I.../gcc/include and not .../gcc/include-fixed, causing a build
failure of our arm-vxworks7r2 port because of indirect dependencies
on limits.h.
The omission was an oversight and this change adds the missing -I.
2020-10-14 Olivier Hainque <hainque@adacore.com>
libgcc/
* config/t-vxworks: Add include-fixed to include search
paths for libgcc on VxWorks.
* config/t-vxworks7: Likewise.
This is a minor adjustment to the VxWorks-specific macro name
used to guard the header file contents, to make it closer to the
original one and easier to search for.
2020-10-14 Olivier Hainque <hainque@adacore.com>
gcc/
* config/t-vxworks: Adjust the VxWorks alternative LIMITS_H guard
for glimits.h, making it both closer to the previous one and easier to
search for.
DECL_FRIEND_P's meaning has changed over time. It now (almost) means
that the friend function decl has not been met via an explicit decl.
This completes that transition, renaming it to DECL_UNIQUE_FRIEND_P,
so one doesn't think it is the sole indicator of friendliness (plenty
of friends do not have the flag set). This allows reduction in the
complexity of managing the field -- all in duplicate_decls now.
gcc/cp/
* cp-tree.h (struct lang_decl_fn): Adjust context comment.
(DECL_FRIEND_P): Replace with ...
(DECL_UNIQUE_FRIEND_P): ... this. Only for FUNCTION_DECLs.
(DECL_FRIEND_CONTEXT): Adjust.
* class.c (add_implicitly_declared_members): Detect friendly
spaceship from context.
* constraint.cc (remove_constraints): Use a checking assert.
(maybe_substitute_reqs_for): Use DECL_UNIQUE_FRIEND_P.
* decl.c (check_no_redeclaration_friend_default_args):
DECL_UNIQUE_FRIEND_P is significant, not hiddenness.
(duplicate_decls): Adjust DECL_UNIQUE_FRIEND_P clearing.
(redeclaration_error_message): Use DECL_UNIQUE_FRIEND_P.
(start_preparsed_function): Correct in-class friend processing.
Refactor some initializers.
(grokmethod): Directly check friend decl-spec.
* decl2.c (grokfield): Check DECL_UNIQUE_FRIEND_P.
* friend.c (do_friend): Set DECL_UNIQUE_FRIEND_P first, remove
extraneous conditions. Don't re-set it afterwards.
* name-lookup.c (lookup_elaborated_type_1): Simplify revealing
code.
(do_pushtag): Likewise.
* pt.c (optimize_specialization_lookup_p): Check
DECL_UNIQUE_FRIEND_P.
(push_template_decl): Likewise. Drop unneeded friend setting.
(type_dependent_expression_p): Check DECL_UNIQUE_FRIEND_P.
libcc1/
* libcp1plugin.cc (plugin_add_friend): Set DECL_UNIQUE_FRIEND_P.
These builtins have two known issues and this patch fixes one of them.
One issue is that the builtins effectively return two results and
they make the destination addressable until expansion, which means
a stack slot is allocated for them and e.g. with -fstack-protector*
DSE isn't able to optimize that away. I think for that we want to use
the technique of returning complex value; the patch doesn't handle that
though. See PR93990 for that.
The other problem is optimization of successive uses of the builtin
e.g. for arbitrary precision arithmetic additions/subtractions.
As shown in PR93990, combine is able to optimize the case when the first
argument to these builtins is 0 (the first instance when several are used
together), and also the last one if the last one ignores its result (i.e.
the carry/borrow is dead and thrown away in that case).
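For reference, the multi-limb pattern under discussion looks roughly like
this (a hypothetical example, not one of the PR testcases):

  #include <x86intrin.h>

  /* 3-limb addition: the first _addcarry_u64 has carry-in 0 and the last
     one's carry-out is dead, the two cases combine already handled; the
     middle, carry-through use is what this patch optimizes.  */
  void
  add3 (unsigned long long *r, const unsigned long long *a,
        const unsigned long long *b)
  {
    unsigned char c;
    c = _addcarry_u64 (0, a[0], b[0], &r[0]);
    c = _addcarry_u64 (c, a[1], b[1], &r[1]);
    (void) _addcarry_u64 (c, a[2], b[2], &r[2]);
  }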
As shown in this PR, combiner refuses to optimize the rest, where it sees:
(insn 10 9 11 2 (set (reg:QI 88 [ _31 ])
(ltu:QI (reg:CCC 17 flags)
(const_int 0 [0]))) "include/adxintrin.h":69:10 785 {*setcc_qi}
(expr_list:REG_DEAD (reg:CCC 17 flags)
(nil)))
- set pseudo 88 to CF from flags, then some uninteresting insns that
don't modify flags, and finally:
(insn 17 15 18 2 (parallel [
(set (reg:CCC 17 flags)
(compare:CCC (plus:QI (reg:QI 88 [ _31 ])
(const_int -1 [0xffffffffffffffff]))
(reg:QI 88 [ _31 ])))
(clobber (scratch:QI))
]) "include/adxintrin.h":69:10 350 {*addqi3_cconly_overflow_1}
(expr_list:REG_DEAD (reg:QI 88 [ _31 ])
(nil)))
to set CF in flags back to what we saved earlier. The combiner just punts
trying to combine the 10, 17 and following addcarrydi (etc.) instruction,
because
if (i1 && !can_combine_p (i1, i3, i0, NULL, i2, NULL, &i1dest, &i1src))
{
if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, "Can't combine i1 into i3\n");
undo_all ();
return 0;
}
fails - the 3 insns aren't all adjacent and
|| (! all_adjacent
&& (((!MEM_P (src)
|| ! find_reg_note (insn, REG_EQUIV, src))
&& modified_between_p (src, insn, i3))
src (flags hard register) is modified between the first and third insn - in
the second insn.
The following patch optimizes this by optimizing just the two insns,
10 and 17 above, i.e. save CF into pseudo, set CF from that pseudo, into
a nop. The new define_insn_and_split matches how combine simplifies those
two together (except without the ix86_cc_mode change it was choosing CCmode
for the destination instead of CCCmode, so that function had to be changed
too, and the costs adjusted so that the combiner understands it is beneficial).
With this, all the testcases are optimized, so that the:
setc %dl
...
addb $-1, %dl
insns in between the ad[dc][lq] or s[ub]b[lq] instructions are all optimized
away (sure, if something would clobber flags in between they wouldn't, but
there is nothing that can be done about that).
2020-10-14 Jakub Jelinek <jakub@redhat.com>
PR target/97387
* config/i386/i386.md (CC_CCC): New mode iterator.
(*setcc_qi_addqi3_cconly_overflow_1_<mode>): New
define_insn_and_split.
* config/i386/i386.c (ix86_cc_mode): Return CCCmode
for *setcc_qi_addqi3_cconly_overflow_1_<mode> pattern operands.
(ix86_rtx_costs): Return true and *total = 0;
for *setcc_qi_addqi3_cconly_overflow_1_<mode> pattern. Use op0 and
op1 temporaries to simplify COMPARE checks.
* gcc.target/i386/pr97387-1.c: New test.
* gcc.target/i386/pr97387-2.c: New test.
These two tests have started to fail with the old std::string ABI. The
scan-assembler-not checks fail because they match debug info, not code.
Adding -g0 to the test flags fixes them.
libstdc++-v3/ChangeLog:
* testsuite/21_strings/basic_string/modifiers/assign/char/move_assign_optim.cc:
Do not generate debug info.
* testsuite/21_strings/basic_string/modifiers/assign/wchar_t/move_assign_optim.cc:
Likewise.
gcc/ChangeLog:
PR tree-optimization/97396
* gimple-range.cc (gimple_ranger::range_of_phi): Do not call
range_of_ssa_name_with_loop_info with the loop tree root.
gcc/testsuite/ChangeLog:
* gcc.dg/pr97396.c: New test.
This is another tiny piece in some bigger refactoring of
vect_get_and_check_slp_defs. Split out a test that has nothing
to do with def types or commutation.
2020-10-14 Richard Biener <rguenther@suse.de>
* tree-vect-slp.c (vect_get_and_check_slp_defs): Split out
test for compatible operand types.
This fixes spurious complaints about PIC mode not supported
from "gcc --help=...", on VxWorks without -mrtp. The spurious message
is emitted by vxworks_override_options, called with flag_pic == -1
when we're running for --help.
The change simply adjusts the check testing for "we're generating pic code"
to "flag_pic > 0" instead of just "flag_pic". We're not generating code at
all when reaching here with -1.
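Schematically (names hypothetical; only the guard change itself is taken
from the description above):

  /* flag_pic is -1 while the hook runs for --help, before the option
     machinery has settled on a real value, so testing it as a boolean
     triggers the PIC checks even though no code is being generated.  */
  static int
  generating_pic_code_p (int flag_pic)
  {
    return flag_pic > 0;   /* previously: flag_pic != 0, also true for -1 */
  }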
gcc/ChangeLog:
2020-10-14 Olivier Hainque <hainque@adacore.com>
* config/vxworks.c (vxworks_override_options): Guard pic checks with
flag_pic > 0 instead of just flag_pic.
We can end up with { _1, 1.0 } * { 3.0, _2 } which isn't really
profitable. The following adjusts things so we reject more than
one possibly expensive (non-constant and not uniform) vector CTOR
and instead build a CTOR for the scalar operation results.
This also moves a check in vect_get_and_check_slp_defs to a better
place.
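A hypothetical scalar shape (not a testcase from the patch) that would lead
to such a pair of CTORs:

  void
  f (double *out, double x, double y)
  {
    /* SLP would build { x, 1.0 } * { 3.0, y } here: two vector CTORs, each
       with one non-constant element.  This is the case now rejected in
       favour of a CTOR built from the two scalar multiply results.  */
    out[0] = x * 3.0;
    out[1] = 1.0 * y;
  }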
2020-10-14 Richard Biener <rguenther@suse.de>
* tree-vect-slp.c (vect_get_and_check_slp_defs): Move
check for duplicate/interleave of variable size constants
to a place done once and early.
(vect_build_slp_tree_2): Adjust heuristics when to build
a BB SLP node from scalars.
The function gimple_can_duplicate_bb_p currently always returns true.
The presence of can_duplicate_bb_p in tracer.c however suggests that
there are cases when bb's indeed cannot be duplicated.
Move the implementation of can_duplicate_bb_p to gimple_can_duplicate_bb_p.
Bootstrapped and reg-tested on x86_64-linux.
Built x86_64-linux with nvptx accelerator and tested libgomp.
No issues found.
As corner-case check, bootstrapped and reg-tested a patch that makes
gimple_can_duplicate_bb_p always return false, resulting in
PR97333 - "[gimple_can_duplicate_bb_p == false, tree-ssa-threadupdate]
ICE in duplicate_block, at cfghooks.c:1093".
gcc/ChangeLog:
2020-10-09 Tom de Vries <tdevries@suse.de>
* tracer.c (cached_can_duplicate_bb_p, analyze_bb): Use
can_duplicate_block_p.
(can_duplicate_insn_p, can_duplicate_bb_no_insn_iter_p)
(can_duplicate_bb_p): Move and merge ...
* tree-cfg.c (gimple_can_duplicate_bb_p): ... here.
It turns out that pushdecl_with_scope has somewhat strange behaviour,
which probably made more sense way back. Unfortunately making it
somewhat saner turned into a rathole. Instead use a
push_nested_namespace around pushing the alias -- this is similar to
some of the friend handling we already have.
gcc/cp/
* name-lookup.c (push_local_extern_decl_alias): Push into alias's
namespace and use pushdecl.
(do_pushdecl_with_scope): Clarify behaviour.
gcc/testsuite/
* g++.dg/lookup/extern-redecl2.C: New.
There are a lot of very simple constructors for the old string which are
not defined inline. I don't see any reason for this and it probably
makes them less likely to be optimized away. Move the definitions into
the class body.
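The shape of the change, shown on a toy class rather than basic_string
(illustrative only):

  // Before: declared in the header, defined out of line in the .tcc file:
  //   template<typename T> struct box { box(const T& t); T val; };
  //   template<typename T> box<T>::box(const T& t) : val(t) { }
  // After: trivially small constructors defined directly in the class body,
  // so they are implicitly inline:
  template<typename T>
  struct box
  {
    box(const T& t) : val(t) { }
    T val;
  };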
libstdc++-v3/ChangeLog:
* include/bits/basic_string.h (basic_string(const Alloc&))
(basic_string(const basic_string&))
(basic_string(const CharT*, size_type, const Alloc&))
(basic_string(const CharT*, const Alloc&))
(basic_string(size_type, CharT, const Alloc&))
(basic_string(initializer_list<CharT>, const Alloc&))
(basic_string(InputIterator, InputIterator, const Alloc&)):
Define inline in class body.
* include/bits/basic_string.tcc (basic_string(const Alloc&))
(basic_string(const basic_string&))
(basic_string(const CharT*, size_type, const Alloc&))
(basic_string(const CharT*, const Alloc&))
(basic_string(size_type, CharT, const Alloc&))
(basic_string(initializer_list<CharT>, const Alloc&))
(basic_string(InputIterator, InputIterator, const Alloc&)):
Move definitions into class body.
The COW std::string does support some features of C++11 allocators, just
not propagation. Change some comments in the tests to be more precise
about that.
libstdc++-v3/ChangeLog:
* testsuite/21_strings/basic_string/allocator/char/copy.cc: Make
comment more precise about what isn't supported by COW strings.
* testsuite/21_strings/basic_string/allocator/char/copy_assign.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/char/move.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/char/move_assign.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/char/noexcept.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/char/operator_plus.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/char/swap.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/wchar_t/copy.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/wchar_t/copy_assign.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/wchar_t/move.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/wchar_t/move_assign.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/wchar_t/noexcept.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/wchar_t/operator_plus.cc:
Likewise.
* testsuite/21_strings/basic_string/allocator/wchar_t/swap.cc:
Likewise.
These tests were not being run when -D_GLIBCXX_USE_CXX11_ABI=0 was added
to the test flags, but they actually work OK with the old string.
libstdc++-v3/ChangeLog:
* testsuite/21_strings/basic_string/allocator/char/minimal.cc:
Do not require cxx11-abi effective target.
* testsuite/21_strings/basic_string/allocator/wchar_t/minimal.cc:
Likewise.
* testsuite/27_io/basic_fstream/cons/base.cc: Likewise.
The basic_string deduction guides are defined for the old ABI, but the
tests are currently disabled. This is because a single case fails when
using the old ABI, simply because LWG 3706 isn't implemented for the old
ABI. That can be done easily, and the tests can be enabled.
libstdc++-v3/ChangeLog:
* include/bits/basic_string.h [!_GLIBCXX_USE_CXX11_ABI]
(basic_string(const _CharT*, const _Alloc&)): Constrain to
require an allocator-like type to fix CTAD ambiguity (LWG 3706).
* testsuite/21_strings/basic_string/cons/char/deduction.cc:
Remove dg-skip-if.
* testsuite/21_strings/basic_string/cons/wchar_t/deduction.cc:
Likewise.
Local identifiers cannot be the same as a module name. The original
patch by Steve Kargl resulted in name clashes between common block
names and local identifiers. A local identifier can be the same as
a global identifier if that identifier is not a module or a program.
The original patch was modified to reject global identifiers that
represent a module or a program.
2020-10-14 Steven G. Kargl <kargl@gcc.gnu.org>
Mark Eggleston <markeggleston@gcc.gnu.org>
gcc/fortran/ChangeLog:
PR fortran/95614
* decl.c (gfc_get_common): Use gfc_match_common_name instead
of match_common_name.
* decl.c (gfc_bind_idents): Use gfc_match_common_name instead
of match_common_name.
* match.c: Rename match_common_name to gfc_match_common_name.
* match.c (gfc_match_common): Use gfc_match_common_name instead
of match_common_name.
* match.h: Rename match_common_name to gfc_match_common_name.
* resolve.c (resolve_common_vars): Check each symbol in a
common block has a global symbol. If there is a global symbol
issue an error if the symbol type is a module or a program.
2020-10-14 Mark Eggleston <markeggleston@gcc.gnu.org>
gcc/testsuite/ChangeLog:
PR fortran/95614
* gfortran.dg/pr95614_1.f90: New test.
* gfortran.dg/pr95614_2.f90: New test.
* gfortran.dg/pr95614_3.f90: New test.
* gfortran.dg/pr95614_4.f90: New test.
This patch fixes SCC discovery in ipa-modref which is causing misoptimization
of gnat bootstrapped with LTO, PGO and -O3.
I also improved debug info and spotted a wrong parameter to ignore_stores_p
(which is probably quite harmless since we only inline matching functions, but
it is better to be consistent).
PR bootstrap/97350
* ipa-modref.c (ignore_edge): Do not ignore inlined edges.
(ipa_merge_modref_summary_after_inlining): Improve debug output and
fix parameter of ignore_stores_p.
NetBSD does not include the null terminator in its reported socket
length. Port the upstream bugfix for the issue (#6627).
This was likely missed during the usual upstream merge because the gc
and gccgo socket implementations have diverged quite a bit.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/261741
g:70cdb21e579191fe9f0f1d45e328908e59c0179e added handling of misaligned
stores to DECLs/global variables, but it didn't handle PARALLEL values.
Looking at the other parts of this function, PARALLEL values need to be
handled by the emit_group_* functions, so this change adds a check and
uses emit_group_store when storing a PARALLEL value. I also verified that
the change doesn't break the testcase (gcc.target/arm/unaligned-argument-3.c)
added by the original change.
On the riscv64 target, struct S {int a; double b;} is packed into a PARALLEL
value for the return and has TImode when misaligned access is supported.
However, TImode requires 16-byte alignment while the value is only 8-byte
aligned, so it goes through the misaligned-store handling, which then tries
to generate a move instruction from a PARALLEL value.
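A minimal reproducer shape (hypothetical, not the actual pr96759 testcases):

  struct S { int a; double b; };
  struct S g (void);
  struct S s;          /* 8-byte aligned, while TImode wants 16 */

  void
  f (void)
  {
    s = g ();          /* the call result comes back as a PARALLEL */
  }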
Tested on the following targets without introducing new regressions:
- riscv32/riscv64 elf
- x86_64-linux
- arm-eabi
v2 changes:
- Use maybe_emit_group_store instead of emit_group_store.
- Remove push_temp_slots/pop_temp_slots; emit_group_store only requires a
  stack temp slot when dst is a CONCAT or PARALLEL, and
  maybe_emit_group_store will always use a REG for dst if needed.
gcc/ChangeLog:
PR target/96759
* expr.c (expand_assignment): Handle misaligned stores with PARALLEL
value.
gcc/testsuite/ChangeLog:
PR target/96759
* g++.target/riscv/pr96759.C: New.
* gcc.target/riscv/pr96759.c: New.
On AIX, duplication of type descriptors can occur if one is
declared in libgo and one in the Go program being compiled.
The AIX linker isn't able to merge them together as the Linux one does.
One solution is to always load libgo first, but that would need a huge
mechanism in the gcc core. Thus, this patch ensures that the duplication
isn't visible
for the end user.
In reflect and internal/reflectlite, the comparison of rtypes is made on their
name and not only on their addresses.
In reflect, the toType() function uses a canonicalization map to force rtypes
having the same rtype.String() to return the same Type. This can't be done in
internal/reflectlite as it would need the sync package. But, for now, it
doesn't matter as internal/reflectlite is not widely used.
Fixes golang/go#39276
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/260158
This patch implements the omp_get_supported_active_levels runtime routine
from the OpenMP 5.0 specification, which returns the maximum number of
active nested parallel regions supported by this implementation. The
current maximum (set using the omp_set_max_active_levels routine or the
OMP_MAX_ACTIVE_LEVELS environment variable) cannot exceed this number.
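A small usage sketch (not one of the testsuite additions below):

  #include <stdio.h>
  #include <omp.h>

  int
  main (void)
  {
    int supported = omp_get_supported_active_levels ();
    /* Requests beyond the supported maximum are clamped, so the value
       read back never exceeds what the implementation can provide.  */
    omp_set_max_active_levels (supported + 1);
    printf ("supported %d, max now %d\n",
            supported, omp_get_max_active_levels ());
    return 0;
  }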
2020-10-13 Kwok Cheung Yeung <kcy@codesourcery.com>
libgomp/
* env.c (gomp_max_active_levels_var): Initialize to
gomp_supported_active_levels.
(initialize_env): Limit gomp_max_active_levels_var to be at most
equal to gomp_supported_active_levels.
* fortran.c (omp_get_supported_active_levels): Add ialias_redirect.
(omp_get_supported_active_levels_): New.
* icv.c (omp_set_max_active_levels): Limit gomp_max_active_levels_var
to at most equal to gomp_supported_active_levels.
(omp_get_supported_active_levels): New.
* libgomp.h (gomp_supported_active_levels): New.
* libgomp.map (OMP_5.0.1): Add omp_get_supported_active_levels and
omp_get_supported_active_levels_.
* libgomp.texi (omp_get_supported_active_levels): New.
(omp_set_max_active_levels): Update. Add reference to
omp_get_supported_active_levels.
* omp.h.in (omp_get_supported_active_levels): New.
* omp_lib.f90.in (omp_get_supported_active_levels): New.
* omp_lib.h.in (omp_get_supported_active_levels): New.
* testsuite/libgomp.c/lib-2.c (main): Check omp_get_max_active_levels
against omp_get_supported_active_levels.
* testsuite/libgomp.fortran/lib4.f90 (lib4): Likewise.
The following testcases are miscompiled (the first one since my improvements
to rotate discovery on GIMPLE, the other one for many years) because
combiner optimizes nested ROTATEs with narrowing SUBREG in between (i.e.
the outer rotate is performed in shorter precision than the inner one) to
just one ROTATE of the rotated constant. While that (under certain
conditions) can work for shifts, it can't work for rotates: there we can
only do it when both rotates have the same precision.
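A hypothetical shape of the problem (not the actual pr97386 testcases):

  unsigned short
  f (unsigned int x)
  {
    /* A 32-bit rotate, a narrowing to 16 bits, then a 16-bit rotate.
       Folding this into a single 16-bit rotate of the truncated value is
       wrong: the inner rotate brings x's upper bits into the low half
       before the truncation.  */
    unsigned int r32 = (x << 8) | (x >> 24);
    unsigned short t = (unsigned short) r32;
    return (unsigned short) ((t << 3) | (t >> 13));
  }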
2020-10-13 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/97386
* combine.c (simplify_shift_const_1): Don't optimize nested ROTATEs if
they have different modes.
* gcc.c-torture/execute/pr97386-1.c: New test.
* gcc.c-torture/execute/pr97386-2.c: New test.
There's a read of a freed block while accessing the default_slot in
calc_switch_ranges.
default_slot->intersect (def_range);
It seems the default_slot got swiped from under us, and the valgrind
dump indicates the free came from the get_or_insert in the same
function:
irange *&slot = m_edge_table->get_or_insert (e, &existed);
So it looks like the get_or_insert is actually freeing the value of
the previously allocated default_slot. Looking down the chain
from get_or_insert, we see it calls hash_table<>::expand, which
actually does a free while doing a resize of sorts:
if (!m_ggc)
Allocator <value_type> ::data_free (oentries);
else
ggc_free (oentries);
This patch avoids keeping a pointer to the default_slot across multiple
calls to get_or_insert in the loop.
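The hazard generalizes to any growable container: keeping a pointer to an
element across an operation that may reallocate the storage is unsafe.  A
self-contained illustration using std::vector as a stand-in for hash_table
expansion (not the GCC code itself):

  #include <cstdio>
  #include <vector>

  int
  main ()
  {
    std::vector<int> table;
    table.push_back (1);
    int *slot = &table[0];          // like caching default_slot
    for (int i = 0; i < 100; ++i)
      table.push_back (i);          // like get_or_insert: may reallocate
    // 'slot' may now point into freed storage; re-fetch it instead of
    // dereferencing the stale pointer, as the patch does with the slot.
    slot = &table[0];
    std::printf ("%d\n", *slot);
    return 0;
  }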
gcc/ChangeLog:
PR tree-optimization/97379
* gimple-range-edge.cc (outgoing_range::calc_switch_ranges): Do
not save hash slot across calls to hash_table<>::get_or_insert.
Using -O2 made the tests subject to LDRD vs. LDM tuning.
The simplest fix seems to be to use -Os, so that LDM is
unequivocally a win.
gcc/testsuite/
* gcc.target/arm/stack-protector-5.c: Use -Os rather than -O2.
* gcc.target/arm/stack-protector-6.c: Likewise.
This patch adds a tuning structure for Neoverse N2 to allow for further
tuning.
For now it's just a deduplication of the Neoverse N1 struct that it was
reusing but with the SVE width set to 128.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/
* config/aarch64/aarch64.c (neoversen2_tunings): Define.
* config/aarch64/aarch64-cores.def (neoverse-n2): Use it.
This makes the only consumer of STMT_VINFO_SAME_ALIGN_REFS, the
loop peeling for alignment code, use locally computed data and
then removes STMT_VINFO_SAME_ALIGN_REFS and its computation.
It also adjusts the auto_vec<> move CTOR/assignment so you
can write
auto_vec<..> foo = bar.copy ();
and have foo own the generated copy.
2020-10-13 Richard Biener <rguenther@suse.de>
PR tree-optimization/97382
* tree-vectorizer.h (_stmt_vec_info::same_align_refs): Remove.
(STMT_VINFO_SAME_ALIGN_REFS): Likewise.
* tree-vectorizer.c (vec_info::new_stmt_vec_info): Do not
allocate STMT_VINFO_SAME_ALIGN_REFS.
(vec_info::free_stmt_vec_info): Do not release
STMT_VINFO_SAME_ALIGN_REFS.
* tree-vect-data-refs.c (vect_analyze_data_ref_dependences):
Do not compute self and read-read dependences.
(vect_dr_aligned_if_related_peeled_dr_is): New helper.
(vect_dr_aligned_if_peeled_dr_is): Likewise.
(vect_update_misalignment_for_peel): Use it instead of
iterating over STMT_VINFO_SAME_ALIGN_REFS.
(dr_align_group_sort_cmp): New function.
(vect_enhance_data_refs_alignment): Count the number of
same aligned refs here and elide uses of STMT_VINFO_SAME_ALIGN_REFS.
(vect_find_same_alignment_drs): Remove.
(vect_analyze_data_refs_alignment): Do not call it.
* vec.h (auto_vec<T, 0>::auto_vec): Adjust CTOR to take
a vec<>&&, assert it isn't using auto storage.
(auto_vec& operator=): Apply a similar change.
* gcc.dg/vect/no-vfa-vect-dv-2.c: Remove same align dump
scanning.
* gcc.dg/vect/vect-103.c: Likewise.
* gcc.dg/vect/vect-91.c: Likewise.
* gfortran.dg/vect/vect-4.f90: Likewise.
This propagates needed values from the point where the number of iterations
is calculated on composite loops to the places where that information
is needed to use the more efficient square root discovery to compute
the starting iterator values from the logical iteration number.
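For a simple triangular nest with unit steps, recovering the starting
iterators from a logical iteration number looks like this (a simplified
illustration of the idea, not the general formula the runtime uses):

  #include <math.h>

  /* for (i = 0; i < n; i++) for (j = 0; j < i; j++) executes i*(i-1)/2
     iterations before row i, so from a logical iteration number k the row
     follows from a square root and the column by subtraction.  A real
     implementation must guard against floating-point rounding.  */
  static void
  start_iters (long k, long *i, long *j)
  {
    long row = (long) ((1.0 + sqrt (1.0 + 8.0 * (double) k)) / 2.0);
    *i = row;
    *j = k - row * (row - 1) / 2;
  }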
2020-10-13 Jakub Jelinek <jakub@redhat.com>
* omp-low.c (add_taskreg_looptemp_clauses): For triangular loops
with non-constant number of iterations add another 4 _looptemp_
clauses before the (optional) one for lastprivate.
(lower_omp_for_lastprivate): Skip those clauses when looking for
the lastprivate clause.
(lower_omp_for): For triangular loops with non-constant number of
iterations add another 4 _looptemp_ clauses.
* omp-expand.c (expand_omp_for_init_counts): For triangular loops
with non-constant number of iterations set counts[0],
fd->first_inner_iterations, fd->factor and fd->adjn1 from the newly
added _looptemp_ clauses.
(expand_omp_for_init_vars): Initialize the newly added _looptemp_
clauses.
(find_lastprivate_looptemp): New function.
(expand_omp_for_static_nochunk, expand_omp_for_static_chunk,
expand_omp_taskloop_for_outer): Use it instead of manually skipping
_looptemp_ clauses.