The commit:
"Add LLL_MUTEX_READ_LOCK [BZ #28537]"
SHA1: d672a98a1a
introduced LLL_MUTEX_READ_LOCK to skip the CAS in the spinlock loop
when the atomic load indicates the CAS would fail. But a "continue"
inside a do-while loop does not skip the evaluation of the loop's
controlling expression, so the CAS is not skipped.
Replace do-while with while and skip LLL_MUTEX_TRYLOCK if
LLL_MUTEX_READ_LOCK fails.
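For illustration, a minimal sketch of the intended pattern (the names
below are illustrative, not the actual glibc macros): the CAS is
attempted only when a relaxed load observes the lock as free.

#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative sketch, not the real LLL_MUTEX_* macros.  In a do-while,
   "continue" jumps to the controlling expression, so a CAS placed there
   still runs; testing the load result in a while condition skips it.  */

static bool
try_cas (atomic_int *lock)
{
  int expected = 0;
  return atomic_compare_exchange_weak (lock, &expected, 1);
}

static void
spin_acquire (atomic_int *lock)
{
  /* The CAS is attempted only when the relaxed load sees the lock free.  */
  while (atomic_load_explicit (lock, memory_order_relaxed) != 0
         || !try_cas (lock))
    ;
}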
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.
I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah. I don't
know why I run into these diagnostics whereas others evidently do not.
remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
This tunable allows applications to register the rseq area instead
of glibc.
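As an illustration, an application might register its own rseq area
roughly as below, assuming glibc registration has been disabled via
the tunable (glibc.pthread.rseq=0); the signature value and the error
handling are assumptions of this sketch, not part of glibc:

#include <errno.h>
#include <linux/rseq.h>      /* struct rseq (kernel UAPI, Linux >= 4.18) */
#include <sys/syscall.h>
#include <unistd.h>

/* Application-chosen signature used to validate rseq abort handlers.  */
#define APP_RSEQ_SIG 0x53053053

/* Each thread registers its own, suitably aligned rseq area.  */
static __thread struct rseq app_rseq_area
  __attribute__ ((aligned (32)));

static int
app_register_rseq (void)
{
  if (syscall (SYS_rseq, &app_rseq_area, sizeof (app_rseq_area),
               0, APP_RSEQ_SIG) == 0)
    return 0;
  /* EBUSY means an rseq area is already registered for this thread
     (for example by glibc when the tunable is left enabled).  */
  return errno == EBUSY ? 0 : -1;
}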
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The rseq area is placed directly into struct pthread. rseq
registration failure is not treated as an error, so it is possible
that threads run with inconsistent registration status.
<sys/rseq.h> is not yet installed as a public header.
Co-Authored-By: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The function was renamed to __atomic_wide_counter_load_relaxed
in commit 8bd336a00a ("nptl: Extract
<bits/atomic_wide_counter.h> from pthread_cond_common.c").
rseq support will use a 32-byte aligned field in struct pthread,
so the whole struct needs to have at least that alignment.
nptl/tst-tls3mod.c uses TCB_ALIGNMENT, therefore include <descr.h>
to obtain the fallback definition.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
__libc_signal_restore_set was in the wrong place: It also ran
when setjmp returned the second time (after pthread_exit or
pthread_cancel). This is observable with blocked pending
signals during thread exit.
Fixes commit b3cae39dcb
("nptl: Start new threads with all signals blocked [BZ #25098]").
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
And make it an installed header. This addresses a few aliasing
violations (which do not seem to result in miscompilation due to
the use of atomics), and also enables use of wide counters in other
parts of the library.
The debug output in nptl/tst-cond22 has been adjusted to print
the 32-bit values instead because it avoids a big-endian/little-endian
difference.
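For reference, a rough sketch of the wide-counter idea (illustrative
names and layout, not the exact <bits/atomic_wide_counter.h>
definitions): a union lets targets with 64-bit atomics load the
counter in one operation, while others operate on the 32-bit halves.

/* Illustrative sketch only.  */
typedef union
{
  unsigned long long int value64;   /* used where 64-bit atomics exist */
  struct
  {
    unsigned int low;               /* used on 32-bit-only targets */
    unsigned int high;
  } value32;
} wide_counter;

/* With 64-bit atomics a relaxed load is a single atomic read; the
   32-bit fallback has to combine the halves with an overflow check.  */
static inline unsigned long long int
wide_counter_load_relaxed (wide_counter *c)
{
  return __atomic_load_n (&c->value64, __ATOMIC_RELAXED);
}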
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Update
commit 49302b8fdf
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Thu Nov 11 06:54:01 2021 -0800
Avoid extra load with CAS in __pthread_mutex_clocklock_common [BZ #28537]
Replace boolean CAS with value CAS to avoid the extra load.
and
commit 0b82747dc4
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Thu Nov 11 06:31:51 2021 -0800
Avoid extra load with CAS in __pthread_mutex_lock_full [BZ #28537]
Replace boolean CAS with value CAS to avoid the extra load.
by moving assignment out of the CAS condition.
CAS instruction is expensive. From the x86 CPU's point of view, getting
a cache line for writing is more expensive than reading. See Appendix
A.2 Spinlock in:
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf
The full compare and swap will grab the cache line exclusive and cause
excessive cache line bouncing.
Add LLL_MUTEX_READ_LOCK to do an atomic load and skip the CAS in the
spinlock loop when the compare would fail, reducing cache line
bouncing on contended locks.
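The resulting pattern is essentially a test-and-test-and-set loop; a
generic sketch (not the glibc macros) of why the read helps:

/* Waiters spin on a plain load, keeping the line in shared state, and
   only the waiter that observes the lock as free attempts the CAS,
   which needs the line in exclusive state.  */
static inline void
spin_lock (int *lock)
{
  for (;;)
    {
      int expected = 0;
      if (__atomic_load_n (lock, __ATOMIC_RELAXED) == 0
          && __atomic_compare_exchange_n (lock, &expected, 1, 0,
                                          __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
        return;
    }
}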
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
The check for waiting for the pidfile to be created looks wrong. At the
point when ACCESS is run, the pidfile will always have been created
and be accessible, as it is created during DO_PREPARE. This means that thread
cancellation may be performed before the pid is written to the pidfile.
This was found to be flaky when testing on my OpenRISC platform.
Fix this by using the semaphore to wait for pidfile pid write
completion.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The choice between the kill vs tgkill system calls is not just about
the TID reuse race, but also about whether the signal is sent to the
whole process (and any thread in it) or to a specific thread.
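A minimal illustration of the distinction (not glibc code): kill
addresses the whole thread group, tgkill one thread in it.

#include <signal.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* kill: the kernel may deliver the signal to any thread in the
   process that does not block it.  */
static int
signal_process (pid_t pid, int sig)
{
  return kill (pid, sig);
}

/* tgkill: the signal is queued for one specific thread.  */
static int
signal_thread (pid_t tgid, pid_t tid, int sig)
{
  return (int) syscall (SYS_tgkill, tgid, tid, sig);
}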
This was caught by the openposix test suite:
LTP: openposix test suite - FAIL: SIGUSR1 is member of new thread pendingset.
<https://gitlab.com/cki-project/kernel-tests/-/issues/764>
Fixes commit 526c3cf11e ("nptl: Fix race
between pthread_kill and thread exit (bug 12889)").
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Linux added FUTEX_LOCK_PI2 to support clock selection
(commit bf22a6976897977b0a3f1aeba6823c959fc4fdae). With the new
flag we can now properly support CLOCK_MONOTONIC for
pthread_mutex_clocklock with priority inheritance. If the kernel
does not support it, EINVAL is returned instead.
The difference is that the futex operation will be issued and the
kernel will advertise the missing support (instead of a hard-coded
error return).
Checked on x86_64-linux-gnu and i686-linux-gnu on Linux 5.14, 5.11,
and 4.15.
This patch uses the new futex PI operation provided by Linux v5.14
when it is required.
The futex_lock_pi64() function is moved to futex-internal.c (since it
is used in two different places and its code size might be large
depending on the kernel configuration), and clockid is added as an
argument.
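A rough sketch of the fallback logic (illustrative only, not glibc's
futex_lock_pi64; it assumes SYS_futex is available and that
FUTEX_LOCK_PI2 may be missing from older headers):

#include <errno.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

#ifndef FUTEX_LOCK_PI2
# define FUTEX_LOCK_PI2 13              /* assumed value from newer headers */
#endif

static int
futex_lock_pi_with_clock (unsigned int *futex_word, clockid_t clockid,
                          const struct timespec *abstime)
{
  /* FUTEX_LOCK_PI2 defaults to CLOCK_MONOTONIC; FUTEX_CLOCK_REALTIME
     selects CLOCK_REALTIME.  */
  unsigned int op = FUTEX_LOCK_PI2 | FUTEX_PRIVATE_FLAG;
  if (clockid == CLOCK_REALTIME)
    op |= FUTEX_CLOCK_REALTIME;
  if (syscall (SYS_futex, futex_word, op, 0, abstime) == 0)
    return 0;
  if (errno == ENOSYS && clockid == CLOCK_REALTIME)
    /* Older kernel: FUTEX_LOCK_PI only supports CLOCK_REALTIME.  */
    return (int) syscall (SYS_futex, futex_word,
                          FUTEX_LOCK_PI | FUTEX_PRIVATE_FLAG, 0, abstime);
  /* ENOSYS with CLOCK_MONOTONIC: the kernel lacks FUTEX_LOCK_PI2, so
     the missing support is reported to the caller.  */
  return -1;
}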
Co-authored-by: Kurt Kanzenbach <kurt@linutronix.de>
As part of the fix for bug 12889, signals are blocked during
thread exit, so that application code cannot run on the thread that
is about to exit. This would cause problems if the application
expected signals to be delivered after the signal handler revealed
the thread to still exist, even though pthread_kill can no longer be
used to send signals to it. However, glibc internally uses the SIGSETXID
signal in a way that is incompatible with signal blocking, due to the
way the setxid handshake delays thread exit until the setxid operation
has completed. With a blocked SIGSETXID, the handshake can never
complete, causing a deadlock.
As a band-aid, restore the previous handshake protocol by not blocking
SIGSETXID during thread exit.
The new test sysdeps/pthread/tst-pthread-setuid-loop.c is based on
a downstream test by Martin Osvald.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The fix for bug 19193 breaks some old applications which appear
to use pthread_kill to probe if a thread is still running, something
that is not supported by POSIX.
A new thread exit lock and flag are introduced. They are used to
detect that the thread is about to exit or has exited in
__pthread_kill_internal, and the signal is not sent in this case.
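A rough sketch of the idea (names and types are made up for
illustration, not the glibc internals): the exit lock makes the flag
check and the signal send atomic with respect to the target thread
marking itself as exiting.

#include <pthread.h>
#include <stdbool.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

struct thread_desc            /* illustrative stand-in for struct pthread */
{
  pthread_mutex_t exit_lock;
  bool exiting;
  pid_t tid;
};

static int
kill_unless_exiting (struct thread_desc *t, int sig)
{
  int ret = 0;
  pthread_mutex_lock (&t->exit_lock);
  if (!t->exiting)
    ret = (int) syscall (SYS_tgkill, getpid (), t->tid, sig);
  pthread_mutex_unlock (&t->exit_lock);
  return ret;
}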
The test sysdeps/pthread/tst-pthread_cancel-select-loop.c is derived
from a downstream test originally written by Marek Polacek.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This closes one remaining race condition related to bug 12889: if
the thread already exited on the kernel side, returning ESRCH
is not correct because that error is reserved for the thread IDs
(pthread_t values) whose lifetime has ended. In case of a
kernel-side exit and a valid thread ID, no signal needs to be sent
and cancellation does not have an effect, so just return 0.
sysdeps/pthread/tst-kill4.c triggers undefined behavior and is
removed with this commit.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
We stopped adding "Contributed by" or similar lines in sources in 2012
in favour of git logs and keeping the Contributors section of the
glibc manual up to date. Removing these lines makes the license
header a bit more consistent across files and also removes the
possibility of error in attribution when license blocks or files are
copied across since the contributed-by lines don't actually reflect
reality in those cases.
Move all "Contributed by" and similar lines (Written by, Test by,
etc.) into a new file CONTRIBUTED-BY to retain record of these
contributions. These contributors are also mentioned in
manual/contrib.texi, so we just maintain this additional record as a
courtesy to the earlier developers.
The following scripts were used to filter a list of files to edit in
place and to clean up the CONTRIBUTED-BY file respectively. These
were not added to the glibc sources because they're not expected to be
of any use in the future given that this is a one-time task:
https://gist.github.com/siddhesh/b5ecac94eabfd72ed2916d6d8157e7dc
https://gist.github.com/siddhesh/15ea1f5e435ace9774f485030695ee02
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
A mapped temporary file and a semaphore are used to synchronize the
pid information written to the created file; the semaphore is updated
once the file contents are flushed.
Checked on x86_64-linux-gnu.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
The test nptl/tst-thread_local1.cc fails to build with GCC mainline
because of changes to which other headers the libstdc++ headers
implicitly include:
tst-thread_local1.cc: In function 'int do_test()':
tst-thread_local1.cc:177:5: error: variable 'std::array<std::pair<const char*, std::function<void(void* (*)(void*))> >, 2> do_thread_X' has initializer but incomplete type
177 | do_thread_X
| ^~~~~~~~~~~
Fix this by adding an explicit include of <array>.
Tested with build-many-glibcs.py for aarch64-linux-gnu.
Remove all malloc hook uses from core malloc functions and move them
into a new library, libc_malloc_debug.so. With this, the hooks no
longer have any effect on the core library.
libc_malloc_debug.so is a malloc interposer that needs to be preloaded
to get the hooks functionality back so that the debugging features that
depend on the hooks, i.e. malloc-check, mcheck and mtrace, work again.
Without the preloaded DSO these debugging features will be nops.
These features will be ported away from hooks in subsequent patches.
Similarly, legacy applications that need hooks functionality need to
preload libc_malloc_debug.so.
The symbols exported by libc_malloc_debug.so are maintained at exactly
the same version as libc.so.
Finally, static binaries will no longer be able to use malloc
debugging features since they cannot preload the debugging DSO.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The clone3 system call (since Linux 5.3) provides a superset of the
functionality of clone and clone2. It also provides a number of API
improvements, including the ability to specify the size of the child's
stack area, which can be used by the kernel to compute the shadow
stack size when allocating the shadow stack. Add:
extern int __clone_internal (struct clone_args *__cl_args,
int (*__func) (void *__arg), void *__arg);
to provide an abstract interface for clone, clone2 and clone3.
1. Simplify stack management for thread creation by passing both stack
base and size to create_thread.
2. Consolidate clone vs clone2 differences into a single file.
3. Call __clone3 if HAVE_CLONE3_WRAPPER is defined. If __clone3 returns
-1 with ENOSYS, fall back to clone or clone2.
4. Use only __clone_internal to clone a thread. Since the stack size
argument for create_thread is now unconditional, always pass stack size
to create_thread.
5. Enable the public clone3 wrapper in the future after it has been
added to all targets.
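For illustration, a fork-like call through clone3 (no new stack, so it
is safe from plain C); SYS_clone3 and struct clone_args require recent
kernel headers, and the fallback shown is an assumption of this sketch
rather than glibc's __clone_internal:

#include <errno.h>
#include <linux/sched.h>        /* struct clone_args (Linux >= 5.3) */
#include <signal.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef SYS_clone3
# define SYS_clone3 435         /* same number on all architectures */
#endif

static pid_t
fork_via_clone3 (void)
{
  struct clone_args args;
  memset (&args, 0, sizeof (args));
  args.exit_signal = SIGCHLD;   /* behave like fork () */
  long ret = syscall (SYS_clone3, &args, sizeof (args));
  if (ret == -1 && errno == ENOSYS)
    return fork ();             /* kernel or sandbox without clone3 */
  return (pid_t) ret;
}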
NB: The sandbox will return ENOSYS for clone3 in both Chromium:
The following revision refers to this bug:
218438259d
commit 218438259dd795456f0a48f67cbe5b4e520db88b
Author: Matthew Denton <mpdenton@chromium.org>
Date: Thu Jun 03 20:06:13 2021
Linux sandbox: return ENOSYS for clone3
Because clone3 uses a pointer argument rather than a flags argument, we
cannot examine the contents with seccomp, which is essential to
preventing sandboxed processes from starting other processes. So, we
won't be able to support clone3 in Chromium. This CL modifies the
BPF policy to return ENOSYS for clone3 so glibc always uses the fallback
to clone.
Bug: 1213452
Change-Id: I7c7c585a319e0264eac5b1ebee1a45be2d782303
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2936184
Reviewed-by: Robert Sesek <rsesek@chromium.org>
Commit-Queue: Matthew Denton <mpdenton@chromium.org>
Cr-Commit-Position: refs/heads/master@{#888980}
[modify] https://crrev.com/218438259dd795456f0a48f67cbe5b4e520db88b/sandbox/linux/seccomp-bpf-helpers/baseline_policy.cc
and Firefox:
https://hg.mozilla.org/integration/autoland/rev/ecb4011a0c76
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
<limits.h> used to be a header file with no declarations.
GCC's libgomp includes it in a #pragma GCC visibility hidden block.
Including <unistd.h> from <limits.h> (indirectly) declares everything
in <unistd.h> with hidden visibility, resulting in linker failures.
This commit avoids C declarations in assembler mode and only declares
__sysconf in <limits.h> (and not the entire contents of <unistd.h>).
The __sysconf symbol is already part of the ABI. PTHREAD_STACK_MIN
is no longer defined for __USE_DYNAMIC_STACK_SIZE && __ASSEMBLER__
because there is no possible definition.
Additionally, PTHREAD_STACK_MIN is now defined by <pthread.h> for
__USE_MISC because this is what developers expect based on the macro
name. It also helps to avoid libgomp linker failures in GCC because
libgomp includes <pthread.h> before its visibility hacks.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The constant PTHREAD_STACK_MIN may be too small for some processors.
Rename _SC_SIGSTKSZ_SOURCE to _DYNAMIC_STACK_SIZE_SOURCE. When
_DYNAMIC_STACK_SIZE_SOURCE or _GNU_SOURCE are defined, define
PTHREAD_STACK_MIN to sysconf(_SC_THREAD_STACK_MIN) which is changed
to MIN (PTHREAD_STACK_MIN, sysconf(_SC_MINSIGSTKSZ)).
Consolidate <bits/local_lim.h> with <bits/pthread_stack_min.h> to
provide a constant target-specific PTHREAD_STACK_MIN value.
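Since PTHREAD_STACK_MIN may now expand to a sysconf call under
_GNU_SOURCE, it is no longer usable as a compile-time constant; a
minimal usage sketch:

#define _GNU_SOURCE
#include <limits.h>
#include <pthread.h>
#include <stdio.h>

int
main (void)
{
  /* May evaluate to sysconf (_SC_THREAD_STACK_MIN) at run time.  */
  long min_stack = PTHREAD_STACK_MIN;
  pthread_attr_t attr;
  pthread_attr_init (&attr);
  pthread_attr_setstacksize (&attr, (size_t) min_stack);
  printf ("PTHREAD_STACK_MIN = %ld\n", min_stack);
  pthread_attr_destroy (&attr);
  return 0;
}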
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
As a result, it is not necessary to specify __attribute__ ((nocommon))
on individual definitions.
GCC 10 defaults to -fno-common on all architectures except ARC,
but this change is compatible with older GCC versions and ARC, too.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This slightly reduces code size, as can be seen below.
__libc_lock_unlock is usually used along with __libc_lock_lock in
the same function. __libc_lock_lock already has an out-of-line
slow path, so this change should not introduce many additional
non-leaf functions.
This change also fixes a link failure in 32-bit Arm thumb mode
because commit 1f9c804fbd
("nptl: Use internal low-level lock type for !IS_IN (libc)")
introduced __libc_do_syscall calls outside of libc.
Before x86-64:
text data bss dec hex filename
1937748 20456 54896 2013100 1eb7ac libc.so.6
25601 856 12768 39225 9939 nss/libnss_db.so.2
40310 952 25144 66406 10366 nss/libnss_files.so.2
After x86-64:
text data bss dec hex filename
1935312 20456 54896 2010664 1eae28 libc.so.6
25559 864 12768 39191 9917 nss/libnss_db.so.2
39764 960 25144 65868 1014c nss/libnss_files.so.2
Before i686:
2110961 11272 39144 2161377 20fae1 libc.so.6
27243 428 12652 40323 9d83 nss/libnss_db.so.2
43062 476 25028 68566 10bd6 nss/libnss_files.so.2
After i686:
2107347 11272 39144 2157763 20ecc3 libc.so.6
26929 432 12652 40013 9c4d nss/libnss_db.so.2
43132 480 25028 68640 10c20 nss/libnss_files.so.2
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The remaining symbols are mostly used by libthread_db.
__pthread_get_minstack has to remain exported even though unused.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Now that there are no internal users anymore, these new symbol
versions can be removed from the public ABI. The compatibility
symbols remain.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The valgrind/helgrind test suite needs a way to make stack deallocation
more prompt, and this feature seems to be generally useful.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
This allows distributions to strip debugging information from
libc.so.6 without impacting the debugging experience.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
librt.so is no longer installed for PTHREAD_IN_LIBC, and tests
are not linked against it. $(librt) is introduced globally for
shared tests that need to be linked for both PTHREAD_IN_LIBC
and !PTHREAD_IN_LIBC.
GLIBC_PRIVATE symbols that were needed during the transition are
removed again.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This commit also moves the aio_misc and aio_sigqueue helpers,
so GLIBC_PRIVATE exports need to be added.
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The pthread_atfork implementation is similar between Linux and Hurd;
only the compat version bits differ. The generic version is placed at
sysdeps/pthread with a common name.
It also fixes an issue with the Hurd license, where the static-only
object did not use LGPL + exception.
Checked on x86_64-linux-gnu, i686-linux-gnu, and with a build for
i686-gnu.
The usage of signals to implement pthread cancellation is an
implementation detail and should not be visible through cancellation
entrypoints.
However, now that pthread_cancel always sends SIGCANCEL, some
entrypoints might be interrupted and return EINTR to the caller
(for instance on sem_wait).
Using SA_RESTART hides this, since the cancellation handler should
either act upon cancellation (if asynchronous cancellation is
enabled) or ignore the internal cancellation signal.
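For reference, installing a handler with SA_RESTART (a generic sketch,
not the internal SIGCANCEL setup) makes most interrupted syscalls
restart instead of returning EINTR:

#include <signal.h>
#include <string.h>

static void
handler (int sig)
{
  (void) sig;                   /* illustrative no-op handler */
}

static int
install_restarting_handler (int sig)
{
  struct sigaction sa;
  memset (&sa, 0, sizeof (sa));
  sa.sa_handler = handler;
  sa.sa_flags = SA_RESTART;     /* restart interruptible syscalls */
  sigemptyset (&sa.sa_mask);
  return sigaction (sig, &sa, NULL);
}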
Checked on x86_64-linux-gnu and i686-linux-gnu.
The (private) symbols __pthread_clock_gettime, __pthread_clock_settime and
__pthread_initialize_minimal haven't been defined by libpthread for some
time.
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one.
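A sketch of the check being described (illustrative helper, not the
glibc internal):

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* A relative timeout whose seconds field fits in 32 bits can be
   passed to the legacy 32-bit time syscall; only larger values need
   the 64-bit variant.  */
static bool
timeout_fits_32bit (const struct timespec *ts)
{
  return ts->tv_sec >= 0 && ts->tv_sec <= INT32_MAX;
}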
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
This mirrors the situation on Hurd. These directories are on
the include search path, so #include <pthreadP.h> works after this
change on both Hurd and nptl.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
These were turned into compat symbols as part of the libpthread
move. It turns out they are used by language run-time libraries
(e.g., the GCC D front end), so it makes sense to preserve them as
external symbols even though they are not declared in any header
file.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
A new build flag, _TIME_BITS, enables the use of the newer 64-bit
time symbols for legacy ABIs (where 32-bit time_t is the default).
The 64-bit time support is only enabled if LFS (_FILE_OFFSET_BITS=64)
is also used.
Unlike LFS support, the y2038 symbols are added only for the
required ABIs (armhf, csky, hppa, i386, m68k, microblaze, mips32,
mips64-n32, nios2, powerpc32, sparc32, s390-32, and sh). The ABIs with
64-bit time support are unchanged, both for symbol and type
redirection.
On Linux the full 64-bit time support requires a minimum kernel
version of v5.1. Otherwise, the 32-bit fallbacks are used and might
result in an error with an overflow return code (EOVERFLOW).
The i686-gnu target does not yet support 64-bit time.
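For example, a program built with -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64
on one of the listed 32-bit ABIs gets a 64-bit time_t; a quick sanity
check (illustrative only):

#include <time.h>

int
main (void)
{
  /* Fails to compile on a legacy 32-bit ABI unless _TIME_BITS=64
     (together with _FILE_OFFSET_BITS=64) is in effect.  */
  _Static_assert (sizeof (time_t) == 8, "expected 64-bit time_t");
  return 0;
}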
This patch exports the following redirections to support 64-bit time:
* libc:
adjtime
adjtimex
clock_adjtime
clock_getres
clock_gettime
clock_nanosleep
clock_settime
cnd_timedwait
ctime
ctime_r
difftime
fstat
fstatat
futimens
futimes
futimesat
getitimer
getrusage
gettimeofday
gmtime
gmtime_r
localtime
localtime_r
lstat_time
lutimes
mktime
msgctl
mtx_timedlock
nanosleep
ntp_gettime
ntp_gettimex
ppoll
pselect
pthread_clockjoin_np
pthread_cond_clockwait
pthread_cond_timedwait
pthread_mutex_clocklock
pthread_mutex_timedlock
pthread_rwlock_clockrdlock
pthread_rwlock_clockwrlock
pthread_rwlock_timedrdlock
pthread_rwlock_timedwrlock
pthread_timedjoin_np
recvmmsg
sched_rr_get_interval
select
sem_clockwait
semctl
semtimedop
sem_timedwait
setitimer
settimeofday
shmctl
sigtimedwait
stat
thrd_sleep
time
timegm
timerfd_gettime
timerfd_settime
timespec_get
utime
utimensat
utimes
wait3
wait4
* librt:
aio_suspend
mq_timedreceive
mq_timedsend
timer_gettime
timer_settime
* libanl:
gai_suspend
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The testcase provided in BZ#19366 may update __nptl_nthreads in the
wrong order, triggering an early process exit because the thread
decrements the value twice.
The issue is that once the thread exits without acting on cancellation,
it decrements '__nptl_nthreads' and then atomically sets EXITING_BIT
in 'cancelhandling' (thus preventing further cancellation handlers
from acting). The issue happens if a SIGCANCEL is received between
checking '__nptl_nthreads' and setting EXITING_BIT. To avoid it, the
'__nptl_nthreads' decrement is moved after setting EXITING_BIT.
It does not fully follow POSIX XSH 2.9.5 Thread Cancellation, under
the heading Thread Cancellation Cleanup Handlers, which states that
when a cancellation request is acted upon, or when a thread calls
pthread_exit(), the thread first disables cancellation by setting its
cancelability state to PTHREAD_CANCEL_DISABLE and its cancelability
type to PTHREAD_CANCEL_DEFERRED. The issue is that
'__pthread_enable_asynccancel' explicitly enables asynchronous
cancellation, so an interrupted syscall within the cancellation
cleanup handlers might see an invalid canceling type (a fix might be
possible with my proposed solution to BZ#12683).
Trying to come up with a test is quite hard since it requires
mimicking the timing issue described above; however, the bug report
reproducer does not exit early anymore.
Checked on x86_64-linux-gnu.
It consolidates the tgkill call and is the first step in making
pthread_cancel async-signal-safe. It also fixes a possible issue
where the 'struct pthread' tid is not read atomically, which might
send an invalid cancellation signal (similar to what
db988e50a8 fixed for pthread_join).
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Now that pthread_kill is provided by libc.so it is possible to
implement the generic POSIX version as
'pthread_kill (pthread_self (), sig)'.
For the Linux implementation, pthread_kill reads the target TID from
the TCB. For raise, this is not possible because it would make raise
fail when issued after vfork (where the resulting process has a
different TID from the parent, but its TCB is not updated as it is
for pthread_create). To make raise use pthread_kill, it is made
usable from vfork by obtaining the target thread ID through the
gettid syscall.
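An illustrative sketch of the approach (not glibc's raise): reading
the TID with the gettid syscall instead of from the TCB keeps the
target correct even in a vfork child.

#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

static int
raise_via_tgkill (int sig)
{
  /* In a vfork child the TCB still holds the parent's TID, but the
     gettid syscall returns the child's own TID.  */
  pid_t pid = (pid_t) syscall (SYS_getpid);
  pid_t tid = (pid_t) syscall (SYS_gettid);
  return (int) syscall (SYS_tgkill, pid, tid, sig);
}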
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Now that the thread cancellation type is not accessed concurrently
anymore, it is possible to move it out of 'cancelhandling'.
By removing the cancel state from the internal thread cancel handling
state, there is no need to check whether the cancelled bit was set in
the CAS operation.
It allows simplifying the cancellation wrappers, and
CANCEL_CANCELED_AND_ASYNCHRONOUS is removed.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Now that the thread cancellation state is not accessed concurrently
anymore, it is possible to move it out of 'cancelhandling'.
The code is also simplified: CANCELLATION_P is replaced with an
internal pthread_testcancel call, and the CANCELSTATE_BIT{MASK} is
removed.
With this behavior, pthread_setcancelstate does not need to act on
cancellation if the cancel type is asynchronous (it is already
handled either by pthread_setcanceltype or by the signal handler).
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
The CANCELING_BITMASK is used as an optimization to avoid sending
the signal when pthread_cancel is called in a concurrent manner.
This requires keeping both the cancellation state and type in a
shared state ('cancelhandling'), since pthread_cancel checks whether
cancellation is enabled and asynchronous to either cancel itself or
send the signal.
It also requires handling CANCELING_BITMASK in
__pthread_disable_asynccancel; however, this incurs the same issues
described in BZ#12683: the cancellation is acted upon even *after*
the syscall returns with user-visible side effects.
This patch removes this optimization and simplifies the pthread
cancellation implementation: pthread_cancel now first checks whether
cancellation is already pending and, if not, always sends the signal
when the target is not the calling thread itself. The SIGCANCEL
handler is also simplified since there is no need to set up a CAS
loop.
It also allows moving both the cancellation state and mode out of
'cancelhandling' (done in subsequent patches).
Checked on x86_64-linux-gnu and aarch64-linux-gnu.