Sometimes, nested lambdas of templated functions get no code generation
because they are marked as instantiated outside of all modules being
compiled in the current compilation unit, even though the enclosing
template instances are marked as instantiated inside the current
compilation unit. To fix this, all enclosing templates are now checked in
`function_defined_in_root_p'.
Because of this change, `function_needs_inline_definition_p' has also
been fixed up to only check whether the regular function definition
itself is to be emitted in the current compilation unit.
PR d/108055
gcc/d/ChangeLog:
* decl.cc (function_defined_in_root_p): Check all enclosing template
instances for definition in a root module.
(function_needs_inline_definition_p): Replace call to
function_defined_in_root_p with test for outer module `isRoot'.
gcc/testsuite/ChangeLog:
* gdc.dg/torture/imports/pr108055conv.d: New.
* gdc.dg/torture/imports/pr108055spec.d: New.
* gdc.dg/torture/imports/pr108055write.d: New.
* gdc.dg/torture/pr108055.d: New test.
Enable TARGET_CONST_ANCHOR to allow complex constants to be created via
immediate add/sub. Use a 24-bit range as that enables a 3- or 4-instruction
immediate sequence to be replaced by 2 add/sub instructions. Fix the costing
of add/sub to support 24-bit and 12-bit shifted immediates.
The generated code for the testcase is now the same as or better than LLVM's.
It also results in a small codesize reduction on SPEC.
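For illustration, a hypothetical case (not the committed testcase):

void
f (long long *p)
{
  /* With TARGET_CONST_ANCHOR the second constant can be derived from
     the first with add/sub immediates, since the difference fits in
     the 24-bit anchor range, instead of a fresh mov/movk sequence.  */
  p[0] = 0x123456789abLL;
  p[1] = 0x123456789abLL + 0x1000;
}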
gcc/
* config/aarch64/aarch64.cc (aarch64_rtx_costs): Add correct costs
for 24-bit and 12-bit shifted immediate add/sub.
(TARGET_CONST_ANCHOR): Define.
* config/aarch64/predicates.md (aarch64_pluslong_immediate):
Fix range check.
gcc/testsuite/
* gcc.target/aarch64/movk_3.c: New test.
This optimizes the following sequence

  ((a < b) & c) | ((a >= b) & d)

into

  (a < b ? c : d) & 1

for scalars; for vectors we can omit the & 1.
It also turns

  (-(a < b) & c) | (-(a >= b) & d)

into

  a < b ? c : d
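For the scalar case, a function of the first form (reconstructed to match
the assembly below; the committed test may differ):

int
zoo2 (int a, int b, int c, int d)
{
  return ((a < b) & c) | ((a >= b) & d);
}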
This changes the code generation from
zoo2:
        cmp     w0, w1
        cset    w0, lt
        cset    w1, ge
        and     w0, w0, w2
        and     w1, w1, w3
        orr     w0, w0, w1
        ret

into

        cmp     w0, w1
        csel    w0, w2, w3, lt
        and     w0, w0, 1
        ret
and significantly reduces the number of selects we have to do in the vector
code.
gcc/ChangeLog:
* match.pd: Add new rule.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/if-compare_1.c: New test.
* gcc.target/aarch64/if-compare_2.c: New test.
While writing a patch series I started getting incorrect codegen out of
VEC_PERM on partial struct types.
It turns out that this was happening because the TARGET_CAN_CHANGE_MODE_CLASS
implementation has a slight bug in it. The hook only checked for SIMD to
Partial but never Partial to SIMD. This resulted in incorrect subregs being
generated by the fallback code in VEC_PERM_EXPR expansions.
I have unfortunately not been able to trigger it using a standalone testcase as
the mid-end optimizes away the permute every time I try to describe a permute
that would result in the bug.
The patch now rejects any conversion of partial SIMD struct types, unless
both are partial structures with the same number of registers or one is a
SIMD type whose size is less than 8 bytes.
gcc/ChangeLog:
* config/aarch64/aarch64.cc (aarch64_can_change_mode_class): Restrict
conversions between partial struct types properly.
This implements the new tbranch optab for AArch64. We cannot emit one big
RTL for the final instruction immediately: all comparisons in the AArch64
backend expand to a separate CC compare and a separate test of the operands
for the benefit of ifcvt.
The separate CC compare is needed so ifcvt can produce csel, cset etc. from
the compares. Unlike, say, combine, ifcvt cannot do recog on a parallel with
a clobber. If we emitted the instruction directly, ifcvt would not be able
to, say, make a csel, because we have no patterns which handle zero_extract
and compare (unlike combine, ifcvt cannot transform the extract into an AND).
While you could provide various patterns for this (and I did try), you end
up with broken patterns because you can't add the clobber to the CC register.
If you do, ifcvt recog fails.
For example:
int
f1 (int x)
{
  if (x & 1)
    return 1;
  return x;
}
We lose csel here.
Secondly, the reason the compare with an explicit CC mode is needed is so
that ifcvt can transform the operation into a version that doesn't require
the flags to be set. But it only does so if it knows the explicit usage of
the CC reg.
For instance
int
foo (int a, int b)
{
  return ((a & (1 << 25)) ? 5 : 4);
}
This doesn't require a comparison; the optimal form is:
foo(int, int):
        ubfx    x0, x0, 25, 1
        add     w0, w0, 4
        ret
and no compare is actually needed. If you represent the instruction using an
ANDS instead of a zero_extract then you get close, but you end up with an ands
followed by an add, which is a slower operation.
gcc/ChangeLog:
* config/aarch64/aarch64.md (*tb<optab><mode>1): Rename to...
(*tb<optab><ALLI:mode><GPI:mode>1): ... this.
(tbranch_<code><mode>4): New.
* config/aarch64/iterators.md (ZEROM, zerom): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/tbz_1.c: New test.
This adds a new test-and-branch optab that can be used to do a conditional test
of a bit and branch. This is similar to the cbranch optab but instead can
test any arbitrary bit inside the register.
This patch recognizes boolean comparisons and single bit mask tests.
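A minimal sketch of a single bit mask test that can now use tbz/tbnz
(illustrative; the committed test is tbz_1.c):

void g (void);

void
f (int x)
{
  /* Tests bit 2 and branches; with the tbranch optab the target can
     emit a single tbnz instead of an and followed by cbnz.  */
  if (x & 4)
    g ();
}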
gcc/ChangeLog:
* dojump.cc (do_jump): Pass along value.
(do_jump_by_parts_greater_rtx): Likewise.
(do_jump_by_parts_zero_rtx): Likewise.
(do_jump_by_parts_equality_rtx): Likewise.
(do_compare_rtx_and_jump): Likewise.
(do_compare_and_jump): Likewise.
* dojump.h (do_compare_rtx_and_jump): New.
* optabs.cc (emit_cmp_and_jump_insn_1): Refactor to take optab to check.
(validate_test_and_branch): New.
(emit_cmp_and_jump_insns): Optionally take a value, and when a value is
supplied check if it's suitable for tbranch.
* optabs.def (tbranch_eq$a4, tbranch_ne$a4): New.
* doc/md.texi (tbranch_@var{op}@var{mode}4): Document it.
* optabs.h (emit_cmp_and_jump_insns): New.
* tree.h (tree_zero_one_valued_p): New.
The backend has an existing V2HFmode that is used by pairwise operations.
This mode was however never made fully functional. Amongst other things, it
was never declared as a vector type, which made it unusable from the
mid-end. It is also lacking an implementation for loads/stores, so reload
ICEs if this mode is ever used. This finishes the implementation by
providing the above.
Note that I have created a new iterator VHSDF_P instead of extending VHSDF
because the previous iterator is used in far more things than just load/stores.
It's also used for instance in intrinsics and extending this would force me to
provide support for mangling the type while we never expose it through
intrinsics.
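A sketch of what the completed mode enables from the mid-end, using GNU
vector extensions (illustrative, not one of the committed tests):

typedef _Float16 v2hf __attribute__ ((vector_size (4)));

v2hf
add (v2hf a, v2hf b)
{
  /* With V2HF declared as a vector type this can stay in SIMD
     registers instead of being scalarised.  */
  return a + b;
}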
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (*aarch64_simd_movv2hf): New.
(mov<mode>, movmisalign<mode>, aarch64_dup_lane<mode>,
aarch64_store_lane0<mode>, aarch64_simd_vec_set<mode>,
@aarch64_simd_vec_copy_lane<mode>, vec_set<mode>,
reduc_<optab>_scal_<mode>, reduc_<fmaxmin>_scal_<mode>,
aarch64_reduc_<optab>_internal<mode>, aarch64_get_lane<mode>,
vec_init<mode><Vel>, vec_extract<mode><Vel>): Support V2HF.
(aarch64_simd_dupv2hf): New.
* config/aarch64/aarch64.cc (aarch64_classify_vector_mode):
Add E_V2HFmode.
* config/aarch64/iterators.md (VHSDF_P): New.
(V2F, VMOVE, nunits, Vtype, Vmtype, Vetype, stype, VEL,
Vel, q, vp): Add V2HF.
* config/arm/types.md (neon_fp_reduc_add_h): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve/slp_1.c: Update testcase.
This test fails if chrono::days::rep or chrono::years::rep is a 32-bit
type, because a large days or years value silently overflows a 32-bit
integer when converted to seconds. It would be conforming to implement
chrono::days as chrono::duration<int32_t, ratio<86400>>, but would make
this overflow case more likely. Similarly for chrono::years,
chrono::months and chrono::weeks. This test is here to remind us not to
make that change lightly.
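A sketch of the overflow being guarded against, assuming a hypothetical
32-bit rep (the standard only requires days::rep to be at least 25 bits):

#include <chrono>
#include <cstdint>

// Hypothetical 32-bit days type.
using days32 = std::chrono::duration<std::int32_t, std::ratio<86400>>;

auto
to_seconds (days32 d)   // e.g. d = days32(30000), about 82 years
{
  // 30000 * 86400 = 2592000000 > INT32_MAX, so converting to a
  // 32-bit seconds count silently overflows.
  return std::chrono::duration_cast<
           std::chrono::duration<std::int32_t>> (d);
}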
libstdc++-v3/ChangeLog:
* testsuite/20_util/duration/arithmetic/overflow_c++20.cc: New
test.
Clang now defines an __is_unsigned built-in, and Windows defines an
_Out_ macro. Replace uses of those as identifiers.
There might also be a problem with __is_signed, which we use in several
places.
libstdc++-v3/ChangeLog:
* include/std/chrono (hh_mm_ss): Rename __is_unsigned member to
_S_is_unsigned.
* include/std/format (basic_format_context): Rename _Out_
template parameter to _Out2.
* testsuite/17_intro/names.cc: Add Windows SAL annotation
macros.
This change was approved for C++23 last month.
libstdc++-v3/ChangeLog:
* include/std/stacktrace (operator<<): Only output to narrow
ostreams (LWG 3515).
* testsuite/19_diagnostics/stacktrace/synopsis.cc:
Recent ACLE additions specified the __ARM_FEATURE_PAUTH and
__ARM_FEATURE_BTI macros [1] that the compiler should define when the
pointer authentication and BTI instructions are available (and don't act
as NOPs). We've received requests to enable them in GCC for aarch64,
similar to clang [2].
It's a fairly simple patch and should be non-intrusive at this stage.
Pointer authentication has its own "pauth" feature flag, whereas BTI
depends on an architecture level of Armv8.5-a or later.
Bootstrapped and tested on aarch64-none-linux-gnu.
[1] https://github.com/ARM-software/acle/blob/main/main/acle.md#pointer-authentication
[2] https://reviews.llvm.org/rG7d40baa82b1f272f68de63f3c4f68d970bdcd6ed
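User code can then guard on the macros in the usual ACLE fashion, e.g.:

#ifdef __ARM_FEATURE_PAUTH
/* Pointer-authentication instructions are available (not NOPs).  */
#endif
#ifdef __ARM_FEATURE_BTI
/* BTI instructions are available (not NOPs).  */
#endif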
gcc/ChangeLog:
* config/aarch64/aarch64-c.cc (aarch64_update_cpp_builtins): Define
__ARM_FEATURE_PAUTH and __ARM_FEATURE_BTI when appropriate.
* config/aarch64/aarch64.h (TARGET_BTI): Define.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/acle/bti_def.c: New test.
* gcc.target/aarch64/acle/pauth_def.c: New test.
This reverts the part that substitutes from the definition of an
SSA name to the capture, thus ADDR_EXPR@0 eventually yielding
&y_1->a[i_2] instead of _3. That's because I didn't think of
how to deal with substituting @0 in the result pattern. So
the following re-instantiates the SSA def CONSTRUCTOR handling
and, in the ADDR_EXPR helpers used by match.pd, handles SSA names
defined to ADDR_EXPRs transparently.
* genmatch.cc (dt_simplify::gen): Revert last change.
* match.pd: Revert simplification of CONSTRUCTOR leaf handling.
(&x cmp SSA_NAME): Handle ADDR_EXPR in SSA defs.
* fold-const.cc (split_address_to_core_and_offset): Handle
ADDR_EXPRs in SSA defs.
(address_compare): Likewise.
As seen in the original testcase for PR89317 we are missing
comparison simplification patterns for &p->x != p + 4. Fixed
by making an existing one apply. To make the pattern apply during CCP we
need to change ccp_fold to not only use GENERIC folding of conditions but
also GIMPLE folding.
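A sketch of the shape that now folds (illustrative; see the amended
pr89317.c for the real tests):

struct S { int a[4]; int b; };

int
f (struct S *p)
{
  /* &p->b is p plus four ints, so with the extended pattern (applied
     during CCP via GIMPLE folding) this folds to 0.  */
  return &p->b != (int *) p + 4;
}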
PR tree-optimization/89317
* tree-ssa-ccp.cc (ccp_fold): Handle GIMPLE_COND via
gimple_fold_stmt_to_constant_1.
* match.pd (&a != &a + c): Apply to pointer_plus with non-ADDR_EXPR
base as well.
* gcc.dg/tree-ssa/pr89317.c: Amend.
The visitor for lowering IMPORTED_DECLs did not have an override for
dealing with importing OverloadSet symbols. This has now been
implemented in the code generator.
PR d/108050
gcc/d/ChangeLog:
* decl.cc (DeclVisitor::visit (Import *)): Handle build_import_decl
returning a TREE_LIST.
* imports.cc (ImportVisitor::visit (OverloadSet *)): New override.
gcc/testsuite/ChangeLog:
* gdc.dg/imports/pr108050/mod1.d: New.
* gdc.dg/imports/pr108050/mod2.d: New.
* gdc.dg/imports/pr108050/package.d: New.
* gdc.dg/pr108050.d: New test.
In order to support CR on a line, we need to open files
with newline='\n', as our line endings are supposed to be of UNIX style.
contrib/ChangeLog:
* check_GNU_style.py: Use newline=\n.
* check_GNU_style_lib.py: Simplify.
* gcc-changelog/git_commit.py: Fix issues seen in the
Rust patchset.
* gcc-changelog/git_email.py: Use newline argument.
* gcc-changelog/test_email.py: New test.
* gcc-changelog/test_patches.txt: New test.
* mklog.py: Use newline argument.
As well as removing unnecessary casts, this results in fewer temporaries
being generated during the initial gimple lowering pass. Otherwise the
code generated is identical to the former intrinsic expansion.
gcc/d/ChangeLog:
* intrinsics.cc (expand_intrinsic_bsf): Fix comment.
(expand_intrinsic_bsr): Use BIT_XOR_EXPR instead of MINUS_EXPR.
The PR notices we fail to simplify

  a_4 = &x_3(D)->data;
  b_5 = x_3(D) + 16;
  _1 = b_5 - a_4;

Together with the enabler handling ADDR_EXPR leafs in separate
stmts in match.pd, the suggested patterns work.
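In source form the simplification looks roughly like (an illustrative
layout, not the PR testcase):

struct X { char pad[16]; char data[8]; };

long
f (struct X *x)
{
  char *a = x->data;           /* a_4 = &x_3(D)->data;  */
  char *b = (char *) x + 16;   /* b_5 = x_3(D) + 16;    */
  return b - a;                /* now folds to 0 */
}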
PR tree-optimization/89317
* match.pd ((p + b) - &p->c -> b - offsetof(c)): New patterns.
* gcc.dg/tree-ssa/pr89317.c: New testcase.
The following allows matching ADDR_EXPR for both the invariant
&a.b case and the &p->d case in a separate definition
transparently. This also allows removing the hack we employ
for CONSTRUCTOR, which we handle for example with

  (match vec_same_elem_p
   CONSTRUCTOR@0
   (if (TREE_CODE (@0) == SSA_NAME
        && uniform_vector_p (gimple_assign_rhs1 (SSA_NAME_DEF_STMT (@0))))))

Note CONSTRUCTORs always appear as separate definitions in GIMPLE,
but I continue to play safe, and ADDR_EXPRs are now matched in
both places where previously ADDR_EXPR@0 would have missed
the &p->x case.
This is a prerequisite for the PR89317 fix.
* genmatch.cc (dt_node::gen_kids): Handle ADDR_EXPR in both
the GENERIC and GIMPLE op position.
(dt_simplify::gen): Capture both GENERIC and GIMPLE op
position for ADDR_EXPR and CONSTRUCTOR.
* match.pd: Simplify CONSTRUCTOR leaf handling.
* gcc.dg/tree-ssa/forwprop-3.c: Adjust.
* g++.dg/tree-ssa/pr31146-2.C: Likewise.
The following avoids CSE of &ps->wp to &ps->wp.hwnd confusing
-Wstringop-overflow by making sure to produce addresses to the
biggest container from vectorization. For this I introduce
strip_zero_offset_components, which turns &ps->wp.hwnd into
&(*ps), and use that to base the vector data references on.
That will also work for addresses with variable components;
alternatively, emitting pointer arithmetic via calling
get_inner_reference and gimplifying that would be possible
but likely more intrusive.
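For illustration, with a hypothetical layout:

struct wp_t { void *hwnd; int flags; };
struct ps_t { struct wp_t wp; };

/* &ps->wp.hwnd, &ps->wp and &(*ps) all denote the same address, since
   each component in the path sits at offset zero; the vector data
   references are now based on &(*ps), the biggest container.  */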
This is by no means a complete fix for all of those issues
(avoiding ADDR_EXPRs in favor of pointer arithmetic might be).
Other passes will have similar issues.
In theory that might now cause false negatives.
PR tree-optimization/106904
* tree.h (strip_zero_offset_components): Declare.
* tree.cc (strip_zero_offset_components): Define.
* tree-vect-data-refs.cc (vect_create_addr_base_for_vector_ref):
Strip zero offset components before building the address.
* gcc.dg/Wstringop-overflow-pr106904.c: New testcase.
Seemingly, 's' (in VI that's the 's'ubstitute command) appeared verbatim in
a gfc_error message when doing the '...' to %<...%> replacements in commit
r13-4590-g84f6f8a2a97f88be01e223c9c9dbab801a4f501f
gcc/fortran/
* openmp.cc (gfc_match_omp_context_selector_specification):
Remove spurious 's' in an error message.
gcc/fortran/ChangeLog:
PR fortran/106911
* simplify.cc (gfc_simplify_ishftc): If the SIZE argument is known
to be outside the allowed range, terminate simplification.
gcc/testsuite/ChangeLog:
PR fortran/106911
* gfortran.dg/pr106911.f90: New test.
The following testcase ICEs because the latch bb ends with an
asm goto which has both a fallthrough to the header and one or more labels
in the header too. In that case there is just a single edge out of the
latch block, but the asm goto is still a stmt_ends_bb_p statement, yet
ivopts decides to emit an IV bump at the IP_END position and inserts
it into the same bb as the asm goto after it, which then fails verification
(control flow in the middle of a bb).
The following patch fixes it by splitting the latch -> header edge in that
case and inserting into the newly created bb, where split_edge ->
redirect_edge_and_branch is able to deal with this case correctly.
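The problematic CFG shape is roughly (a minimal illustrative sketch, not
the committed testcase):

void
f (int n)
{
  int i = 0;
  goto head;
 latch:
  /* Latch block ends in an asm goto whose fallthrough and whose label
     both reach the loop header.  */
  asm goto ("" : : : : head);
 head:
  if (i++ < n)
    goto latch;
}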
2022-12-10 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107997
* tree-ssa-loop-ivopts.cc: Include cfganal.h.
(create_new_iv) <case IP_END>: If ip_end_pos bb is non-empty and ends
with a stmt which ends bb, instead of adding iv update after it split
the latch edge and insert iterator into the new latch bb.
* gcc.c-torture/compile/pr107997.c: New test.
This commit enables reverse offload for nvptx such that gomp_target_rev
actually gets called, and it fills in the latter function to do all of
the following: finding the host function to the device func ptr and
copying the arguments to the host, processing the mapping/firstprivate,
calling the host function, copying back the data and freeing as needed.
The data handling is made easier by assuming that all host variables
either existed before (and are in the mapping) or that those are
devices variables not yet available on the host. Thus, the reverse
mapping can do without refcounts etc. Note that the spec disallows
inside a target region device-affecting constructs other than target
plus ancestor device-modifier and it also limits the clauses permitted
on this construct.
For the function addresses, an additional splay tree is used; for
the lookup of mapped variables, the existing splay-tree is used.
Unfortunately, its data structure requires a full walk of the tree;
additionally, the just-mapped variables are recorded in a separate
data structure, requiring an extra lookup. While the lookup is slow, assuming
that only few variables get mapped in each reverse offload construct
and that reverse offload is the exception and not performance critical,
this seems to be acceptable.
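In source form, reverse offload is the nested target construct with the
ancestor device-modifier; a C sketch (the committed tests are Fortran):

extern void host_fn (int);

#pragma omp requires reverse_offload

void
f (void)
{
#pragma omp target
  {
    int v = 42;
    /* The only device-affecting construct allowed here is target with
       the ancestor device-modifier, with a restricted clause set.  */
#pragma omp target device (ancestor : 1) map (to : v)
    host_fn (v);
  }
}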
libgomp/ChangeLog:
* libgomp.h (struct target_mem_desc): Predeclare; move
below after 'reverse_splay_tree_node' and add rev_array
member.
(struct reverse_splay_tree_key_s, reverse_splay_compare): New.
(reverse_splay_tree_node, reverse_splay_tree,
reverse_splay_tree_key): New typedef.
(struct gomp_device_descr): Add mem_map_rev member.
* oacc-host.c (host_dispatch): NULL init .mem_map_rev.
* plugin/plugin-nvptx.c (GOMP_OFFLOAD_get_num_devices): Claim
support for GOMP_REQUIRES_REVERSE_OFFLOAD.
* splay-tree.h (splay_tree_callback_stop): New typedef; like
splay_tree_callback but returning int not void.
(splay_tree_foreach_lazy): Define; like splay_tree_foreach but
taking splay_tree_callback_stop as argument.
* splay-tree.c (splay_tree_foreach_internal_lazy,
splay_tree_foreach_lazy): New; but early exit if callback returns
nonzero.
* target.c: Instantiate splay_tree_c with splay_tree_prefix 'reverse'.
(gomp_map_lookup_rev): New.
(gomp_load_image_to_device): Handle reverse-offload function
lookup table.
(gomp_unload_image_from_device): Free devicep->mem_map_rev.
(struct gomp_splay_tree_rev_lookup_data, gomp_splay_tree_rev_lookup,
gomp_map_rev_lookup, struct cpy_data, gomp_map_cdata_lookup_int,
gomp_map_cdata_lookup): New auxiliary structs and functions for
gomp_target_rev.
(gomp_target_rev): Implement reverse offloading and its mapping.
(gomp_target_init): Init current_device.mem_map_rev.root.
* testsuite/libgomp.fortran/reverse-offload-2.f90: New test.
* testsuite/libgomp.fortran/reverse-offload-3.f90: New test.
* testsuite/libgomp.fortran/reverse-offload-4.f90: New test.
* testsuite/libgomp.fortran/reverse-offload-5.f90: New test.
* testsuite/libgomp.fortran/reverse-offload-5a.f90: New test without
mapping of on-device allocated variables.
Add initial ChangeLog file in libgm2 and gcc/m2.
ChangeLog:
* libgm2: (New directory).
* libgm2/ChangeLog: (New file).
gcc/ChangeLog:
* m2: (New directory).
* m2/ChangeLog: (New file).
Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
Using %qs instead of '%s', or %<=%> instead of '=', looks nicer:
it gives nicer quotes and bold text if the terminal supports it;
otherwise, plain quotes are used.
gcc/fortran/ChangeLog:
* match.cc (gfc_match_member_sep): Use %<...%> in gfc_error.
* openmp.cc (gfc_match_oacc_routine, gfc_match_omp_context_selector,
gfc_match_omp_context_selector_specification,
gfc_match_omp_declare_variant, resolve_omp_clauses): Likewise;
use %qs instead of '%s'.
* primary.cc (match_real_constant, gfc_match_varspec): Likewise.
* resolve.cc (gfc_resolve_formal_arglist, resolve_operator,
resolve_ordinary_assign): Likewise.
Prepare to add changelogs for the Modula2 front end by changing
the contrib git_commit.py script.
contrib/ChangeLog:
* gcc-changelog/git_commit.py (default_changelog_locations):
New entry for gcc/m2. New entry for libgm2.
Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
Functions rs6000_emit_set_const/rs6000_emit_set_long_const are only invoked
from two "define_split"s where the target operand is limited to
gpc_reg_operand or int_reg_operand, so the operand must be REG_P.
And in rs6000_emit_set_const/rs6000_emit_set_long_const, temp rtxs are
created with code like "gen_reg_rtx ({S|D}Imode)", so they must also be
REG_P. So, copy_rtx is not needed for temp and dest.
This patch removes those "copy_rtx" calls from rs6000_emit_set_const and
rs6000_emit_set_long_const.
gcc/ChangeLog:
* config/rs6000/rs6000.cc (rs6000_emit_set_const): Remove copy_rtx.
(rs6000_emit_set_long_const): Likewise.
Similar story to PR103661: we again return a negative number
for __builtin_cpu_supports.
The documentation says:

  int __builtin_cpu_supports (const char *feature)
  This function returns a positive integer if the run-time CPU supports
  feature and returns 0 otherwise.

while we return -2147483648.
Moreover, I noticed "x86-64" is not a valid argument for __builtin_cpu_is,
but it is for __builtin_cpu_supports.
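For instance:

int
use_fast_path (void)
{
  /* Per the documentation this is positive when the feature is
     supported and 0 otherwise; before the fix it could be
     -2147483648.  "x86-64" is valid here, though not for
     __builtin_cpu_is.  */
  return __builtin_cpu_supports ("x86-64") > 0;
}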
PR target/107551
gcc/ChangeLog:
* config/i386/i386-builtins.cc (fold_builtin_cpu): Use same path
as for PR103661.
* doc/extend.texi: Fix "x86-64" use.
gcc/testsuite/ChangeLog:
* gcc.target/i386/builtin_target.c: Add more checks.
This change resolves a naming conflict introduced by the recently added
SUBTARGET_CC1_SPEC to gcc.cc. Some targets (mips and loongarch) already used
a SUBTARGET_CC1_SPEC define. Rename the define used by gcc.cc to OS_CC1_SPEC.
gcc/ChangeLog:
* config/rtems.h (SUBTARGET_CC1_SPEC): Rename to...
(OS_CC1_SPEC): ...this.
* gcc.cc (SUBTARGET_CC1_SPEC): Rename to...
(OS_CC1_SPEC): ...this.
gcc/analyzer/ChangeLog:
PR analyzer/108003
* call-summary.cc
(call_summary_replay::convert_region_from_summary_1): Convert
heap_regs_in_use from auto_sbitmap to auto_bitmap.
* region-model-manager.cc
(region_model_manager::get_or_create_region_for_heap_alloc):
Convert from sbitmap to bitmap.
* region-model-manager.h: Likewise.
* region-model.cc
(region_model::get_or_create_region_for_heap_alloc): Convert from
auto_sbitmap to auto_bitmap.
(region_model::get_referenced_base_regions): Likewise.
* region-model.h: Include "bitmap.h" rather than "sbitmap.h".
(region_model::get_referenced_base_regions): Convert from
auto_sbitmap to auto_bitmap.
gcc/testsuite/ChangeLog:
PR analyzer/108003
* g++.dg/analyzer/pr108003.C: New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
gcc/analyzer/ChangeLog:
* region-model-impl-calls.cc (class kf_memcpy): Rename to...
(class kf_memcpy_memmove): ...this.
(kf_memcpy::impl_call_pre): Rename to...
(kf_memcpy_memmove::impl_call_pre): ...this, and check the src for
poison.
(register_known_functions): Update for above renaming, and
register BUILT_IN_MEMMOVE and BUILT_IN_MEMMOVE_CHK.
gcc/testsuite/ChangeLog:
* gcc.dg/analyzer/memcpy-1.c (test_8a, test_8b): New tests.
* gcc.dg/analyzer/memmove-1.c: New test, based on memcpy-1.c
* gcc.dg/analyzer/out-of-bounds-1.c (test7): Update expected
result for uninit srcBuf.
* gcc.dg/analyzer/out-of-bounds-5.c (test8, test9): Add
dg-warnings for memcpy from uninit src vla.
* gcc.dg/analyzer/pr104308.c (test_memmove_within_uninit):
Expect creation point note to be missing on riscv*-*-*.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
These are not valid in C++11 and cause a warning when preprocessing,
even though they're inside a skipped group.
chrono:2436: warning: missing terminating ' character
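For example (an illustrative literal, not the exact line in <chrono>):

#if __cplusplus >= 201402L
// Digit separators are valid from C++14 onwards.
constexpr long seconds_per_day = 86'400;
#else
constexpr long seconds_per_day = 86400;
#endif
// In C++11 the first group is skipped, yet the preprocessor still
// tokenises it and warns: missing terminating ' character.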
libstdc++-v3/ChangeLog:
PR libstdc++/108015
* include/std/chrono (hh_mm_ss): Remove digit separators.