glibc/nptl/pthread_cond_wait.c

/* Copyright (C) 2003-2021 Free Software Foundation, Inc.
   This file is part of the GNU C Library.
   Contributed by Martin Schwidefsky <schwidefsky@de.ibm.com>, 2003.

   The GNU C Library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   The GNU C Library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with the GNU C Library; if not, see
   <https://www.gnu.org/licenses/>.  */
#include <endian.h>
#include <errno.h>
#include <sysdep.h>
#include <futex-internal.h>
#include <pthread.h>
#include <pthreadP.h>
#include <sys/time.h>
#include <atomic.h>
#include <stdint.h>
#include <stdbool.h>
#include <shlib-compat.h>
#include <stap-probe.h>
#include <time.h>
#include "pthread_cond_common.c"
struct _condvar_cleanup_buffer
{
  uint64_t wseq;
  pthread_cond_t *cond;
  pthread_mutex_t *mutex;
  int private;
};
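
/* A sketch (reconstructed, not verbatim from this section) of how the wait
   path later in this file fills the buffer and registers the handler before
   blocking on the futex, so that cancellation can undo the waiter's effect
   on the condvar state:

     struct _pthread_cleanup_buffer buffer;
     struct _condvar_cleanup_buffer cbuffer;
     cbuffer.wseq = wseq;
     cbuffer.cond = cond;
     cbuffer.mutex = mutex;
     cbuffer.private = private;
     __pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer);
*/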
/* Decrease the waiter reference count. */
static void
__condvar_confirm_wakeup (pthread_cond_t *cond, int private)
{
  /* If destruction is pending (i.e., the wake-request flag is nonzero) and we
     are the last waiter (prior value of __wrefs was 1 << 3), then wake any
     threads waiting in pthread_cond_destroy.  Release MO to synchronize with
     these threads.  Don't bother clearing the wake-up request flag.  */
  if ((atomic_fetch_add_release (&cond->__data.__wrefs, -8) >> 2) == 3)
    futex_wake (&cond->__data.__wrefs, INT_MAX, private);
}
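
/* Worked example of the accounting above, assuming the __wrefs layout used
   by this condvar (waiter references counted in steps of 8, i.e. in the bits
   above bit 2; bit 2 is the destruction wake-request flag; bits 1 and 0 hold
   the clock and process-shared flags):

     one waiter, destroy pending:  prior __wrefs = 8 + 4 = 12, 12 >> 2 == 3
       -> this was the last reference; wake pthread_cond_destroy.
     two waiters, destroy pending: prior __wrefs = 16 + 4 = 20, 20 >> 2 == 5
       -> another waiter remains; no wake needed.
     one waiter, no destroy:       prior __wrefs = 8, 8 >> 2 == 2
       -> nobody is waiting in pthread_cond_destroy; no wake needed.  */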
/* Cancel waiting after having registered as a waiter previously. SEQ is our
position and G is our group index.
The goal of cancellation is to make our group smaller if that is still
possible. If we are in a closed group, this is not possible anymore; in
this case, we need to send a replacement signal for the one we effectively
consumed because the signal should have gotten consumed by another waiter
instead; we must not both cancel waiting and consume a signal.
Must not be called while still holding a reference on the group.
Sends a replacement signal iff we effectively consumed one.
On some kinds of timeouts, we may be able to pretend that a signal we
effectively consumed happened before the timeout (i.e., similarly to first
spinning on signals before actually checking whether the timeout has
passed already). Doing this would allow us to skip sending a replacement
signal, but this case might happen rarely because the end of the timeout
must race with someone else sending a signal. Therefore, we don't bother
trying to optimize this. */
static void
__condvar_cancel_waiting (pthread_cond_t *cond, uint64_t seq, unsigned int g,
int private)
{
bool consumed_signal = false;
/* No deadlock with group switching is possible here because we do
not hold a reference on the group. */
__condvar_acquire_lock (cond, private);
uint64_t g1_start = __condvar_load_g1_start_relaxed (cond) >> 1;
if (g1_start > seq)
{
/* Our group is closed, so someone provided enough signals for it.
Thus, we effectively consumed a signal. */
consumed_signal = true;
}
else
{
if (g1_start + __condvar_get_orig_size (cond) <= seq)
{
/* We are in the current G2 and thus cannot have consumed a signal.
Reduce its effective size or handle overflow.  Remember that in
G2, the unsigned int size encodes zero or a negative value (it
can wrap around).  */
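/* Worked example (hypothetical values; assumes 32-bit unsigned int and
   __PTHREAD_COND_MAX_GROUP_SIZE == 1 << 29): if earlier cancellations
   left __g_size[g] at UINT_MAX (i.e., -1), then -1 + (1 << 29) wraps to
   (1 << 29) - 1, which is nonzero, so we may decrement further.  The
   test below fails only once the size reaches
   -__PTHREAD_COND_MAX_GROUP_SIZE, where the sum wraps to exactly 0.  */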
if (cond->__data.__g_size[g] + __PTHREAD_COND_MAX_GROUP_SIZE > 0)
{
cond->__data.__g_size[g]--;
}
else
{
/* Cancellations would overflow the maximum group size. Just
wake up everyone spuriously to create a clean state. This
also means we do not consume a signal someone else sent. */
__condvar_release_lock (cond, private);
__pthread_cond_broadcast (cond);
return;
}
}
else
{
/* We are in the current G1.  If the group's size is zero, someone put
a signal in the group that nobody else but us can consume.  */
if (cond->__data.__g_size[g] == 0)
consumed_signal = true;
else
{
/* Otherwise, we decrease the size of the group. This is
equivalent to atomically putting in a signal just for us and
consuming it right away. We do not consume a signal sent
by someone else. We also cannot have consumed a futex
wake-up because if we were cancelled or timed out in a futex
call, the futex will wake another waiter. */
cond->__data.__g_size[g]--;
}
}
}
__condvar_release_lock (cond, private);
if (consumed_signal)
{
/* We effectively consumed a signal even though we didn't want to.
Therefore, we need to send a replacement signal.
If we wanted to optimize this, we could do what
pthread_cond_signal does right in the critical section above.  */
__pthread_cond_signal (cond);
}
}
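
/* Illustrative sketch, not part of the implementation: the decision made by
   __condvar_cancel_waiting above, reduced to a pure function over the values
   it inspects.  Parameter names are hypothetical simplifications of the
   condvar fields used in the real code.  */
#if 0
#include <stdint.h>

enum cancel_action
{
  CANCEL_CONSUMED_SIGNAL,   /* Must send a replacement signal.  */
  CANCEL_SHRANK_GROUP,      /* Decremented the group's size.  */
  CANCEL_BROADCAST          /* Size would overflow; wake everyone.  */
};

static enum cancel_action
cancel_decision (uint64_t seq, uint64_t g1_start, unsigned int g1_orig_size,
                 unsigned int g_size, unsigned int max_group_size)
{
  /* A closed group has been provided enough signals already.  */
  if (g1_start > seq)
    return CANCEL_CONSUMED_SIGNAL;
  /* Waiters at or past the end of G1 are in G2.  */
  if (g1_start + g1_orig_size <= seq)
    return (g_size + max_group_size > 0)
           ? CANCEL_SHRANK_GROUP : CANCEL_BROADCAST;
  /* We are in G1; size zero means a signal is left just for us.  */
  return g_size == 0 ? CANCEL_CONSUMED_SIGNAL : CANCEL_SHRANK_GROUP;
}
#endif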
/* Wake up any signalers that might be waiting. */
static void
__condvar_dec_grefs (pthread_cond_t *cond, unsigned int g, int private)
{
/* Release MO to synchronize-with the acquire load in
__condvar_quiesce_and_switch_g1. */
if (atomic_fetch_add_release (cond->__data.__g_refs + g, -2) == 3)
{
/* Clear the wake-up request flag before waking up. We do not need more
than relaxed MO and it doesn't matter if we apply this for an aliased
group because we wake all futex waiters right after clearing the
flag. */
atomic_fetch_and_relaxed (cond->__data.__g_refs + g, ~(unsigned int) 1);
futex_wake (cond->__data.__g_refs + g, INT_MAX, private);
}
}
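/* A minimal sketch of the __g_refs encoding used above (illustrative only;
   these helpers do not exist in the sources): the reference count lives in
   the upper bits, so references are added and removed in steps of 2, and
   the LSB is the wake-up request flag set by a signaler that waits for the
   group to quiesce.  A fetch-add of -2 that returns 3 therefore means that
   the last reference was dropped while the flag was set.  */
static inline unsigned int
__condvar_grefs_count_sketch (unsigned int v)
{
  return v >> 1;  /* Number of waiters still using this group's futex.  */
}

static inline unsigned int
__condvar_grefs_wake_flag_sketch (unsigned int v)
{
  return v & 1;   /* 1 iff a signaler asked to be woken when the count drops.  */
}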
/* Clean-up for cancellation of waiters waiting for normal signals. We cancel
our registration as a waiter, confirm we have woken up, and re-acquire the
mutex. */
static void
__condvar_cleanup_waiting (void *arg)
{
struct _condvar_cleanup_buffer *cbuffer =
(struct _condvar_cleanup_buffer *) arg;
pthread_cond_t *cond = cbuffer->cond;
unsigned g = cbuffer->wseq & 1;
__condvar_dec_grefs (cond, g, cbuffer->private);
__condvar_cancel_waiting (cond, cbuffer->wseq >> 1, g, cbuffer->private);
/* FIXME With the current cancellation implementation, it is possible that
a thread is cancelled after it has returned from a syscall. This could
result in a cancelled waiter consuming a futex wake-up that is then
causing another waiter in the same group to not wake up. To work around
this issue until we have fixed cancellation, just add a futex wake-up
conservatively. */
futex_wake (cond->__data.__g_signals + g, 1, cbuffer->private);
__condvar_confirm_wakeup (cond, cbuffer->private);
/* XXX If locking the mutex fails, should we just stop execution? This
might be better than silently ignoring the error. */
__pthread_mutex_cond_lock (cbuffer->mutex);
}
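/* Usage sketch (illustrative only; the real wait path below uses the
   internal cleanup machinery rather than the public API shown here): the
   handler above is meant to wrap the blocking section, so that a waiter
   cancelled while blocked still deregisters itself, conservatively adds a
   futex wake-up, and re-acquires the mutex.  */
static void
__condvar_cleanup_usage_sketch (pthread_cond_t *cond, pthread_mutex_t *mutex,
                                uint64_t wseq, int private)
{
  struct _condvar_cleanup_buffer cbuffer;
  cbuffer.wseq = wseq;
  cbuffer.cond = cond;
  cbuffer.mutex = mutex;
  cbuffer.private = private;
  pthread_cleanup_push (__condvar_cleanup_waiting, &cbuffer);
  /* ... block on the group's __g_signals futex here ... */
  pthread_cleanup_pop (0);
}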
/* This condvar implementation guarantees that all calls to signal and
broadcast and all of the three virtually atomic parts of each call to wait
(i.e., (1) releasing the mutex and blocking, (2) unblocking, and (3) re-
acquiring the mutex) happen in some total order that is consistent with the
happens-before relations in the calling program. However, this order does
not necessarily result in additional happens-before relations being
established (which aligns well with spurious wake-ups being allowed).
All waiters acquire a certain position in a 64b waiter sequence (__wseq).
This sequence determines which waiters are allowed to consume signals.
A broadcast is equal to sending as many signals as there are waiters to unblock.
When a signal arrives, it samples the current value of __wseq with a
relaxed-MO load (i.e., the position the next waiter would get). (This is
sufficient because it is consistent with happens-before; the caller can
enforce stronger ordering constraints by calling signal while holding the
mutex.) Only waiters with a position less than the __wseq value observed
by the signal are eligible to consume this signal.
This would be straightforward to implement if waiters would just spin, but
we need to let them block using futexes. Futexes give no guarantee of
waking in FIFO order, so we cannot reliably wake eligible waiters if we
just use a single futex. Also, futex words are 32b in size, but we need
to distinguish more than 1<<32 states because we need to represent the
order of wake-up (and thus which waiters are eligible to consume signals);
blocking in a futex is not atomic with a waiter determining its position in
the waiter sequence, so we need the futex word to reliably notify waiters
that they should not attempt to block anymore because they have been
already signaled in the meantime. While an ABA issue on a 32b value will
be rare, ignoring it when we are aware of it is not the right thing to do
either.
Therefore, we use a 64b counter to represent the waiter sequence (on
architectures which only support 32b atomics, we use a few bits less).
To deal with the blocking using futexes, we maintain two groups of waiters:
* Group G1 consists of waiters that are all eligible to consume signals;
incoming signals will always signal waiters in this group until all
waiters in G1 have been signaled.
* Group G2 consists of waiters that arrive when a G1 is present and still
contains waiters that have not been signaled. When all waiters in G1
are signaled and a new signal arrives, the new signal will convert G2
into the new G1 and create a new G2 for future waiters.
We cannot allocate new memory because of process-shared condvars, so we
have just two slots of groups that change their role between G1 and G2.
Each has a separate futex word, a number of signals available for
consumption, a size (number of waiters in the group that have not been
signaled), and a reference count.
The group reference count is used to maintain the number of waiters that
are using the group's futex. Before a group can change its role, the
reference count must show that no waiters are using the futex anymore; this
prevents ABA issues on the futex word.
To represent which intervals in the waiter sequence the groups cover (and
thus also which group slot contains G1 or G2), we use a 64b counter to
designate the start position of G1 (inclusive), and a single bit in the
waiter sequence counter to represent which group slot currently contains
G2. This allows us to switch group roles atomically wrt. waiters obtaining
a position in the waiter sequence. The G1 start position allows waiters to
figure out whether they are in a group that has already been completely
signaled (i.e., if the current G1 starts at a later position than the
waiter's position). Waiters cannot determine whether they are currently
in G2 or G1 -- but they do not have to because all they are interested in
is whether there are available signals, and they always start in G2 (whose
group slot they know because of the bit in the waiter sequence). Signalers
will simply fill the right group until it is completely signaled and can
be closed (they do not switch group roles until they really have to, to
decrease the likelihood of having to wait for waiters still holding a
reference on the now-closed G1).
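For example: waiters W1-W3 arrive while slot 0 holds G2 and G1 is closed
(the initial state); the first signal switches the group roles, so W1-W3
become G1 in slot 0 and a new, empty G2 opens in slot 1, which later
waiters W4 and W5 will join. Slot 0 can be reused for a future G2 only
once W1-W3 have consumed their signals and dropped their references on
the slot's futex.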
Signalers maintain the initial size of G1 to be able to determine where
G2 starts (G2 is always open-ended until it becomes G1). They track the
remaining size of a group; when waiters cancel waiting (due to PThreads
cancellation or timeouts), they will decrease this remaining size as well.
To implement condvar destruction requirements (i.e., that
pthread_cond_destroy can be called as soon as all waiters have been
signaled), waiters increment a reference count before starting to wait and
decrement it after they stopped waiting but right before they acquire the
mutex associated with the condvar.
pthread_cond_t thus consists of the following fields (bits that are used
for flags are not part of the primary value of each field, but they are
necessary to make some things atomic or because there was no space for
them elsewhere in the data structure):
__wseq: Waiter sequence counter
* LSB is index of current G2.
* Waiters fetch-add while having acquired the mutex associated with the
condvar. Signalers load it and fetch-xor it concurrently.
__g1_start: Starting position of G1 (inclusive)
* LSB is index of current G2.
* Modified by signalers while having acquired the condvar-internal lock
and observed concurrently by waiters.
__g1_orig_size: Initial size of G1
* The two least-significant bits represent the condvar-internal lock.
* Only accessed while having acquired the condvar-internal lock.
__wrefs: Waiter reference counter.
* Bit 2 is true if waiters should run futex_wake when they remove the
last reference. pthread_cond_destroy uses this as futex word.
* Bit 1 is the clock ID (0 == CLOCK_REALTIME, 1 == CLOCK_MONOTONIC).
* Bit 0 is true iff this is a process-shared condvar.
* Simple reference count used by both waiters and pthread_cond_destroy.
(If the format of __wrefs is changed, update nptl_lock_constants.pysym
and the pretty printers.)
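For example (illustrative arithmetic only): a process-private condvar
initialized for CLOCK_MONOTONIC that currently has three waiters and no
pending destruction has __wrefs == (3 << 3) | (1 << 1) == 26.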
For each of the two groups, we have:
__g_refs: Futex waiter reference count.
* LSB is true if waiters should run futex_wake when they remove the
last reference.
* Reference count used by waiters concurrently with signalers that have
acquired the condvar-internal lock.
__g_signals: The number of signals that can still be consumed.
* Used as a futex word by waiters. Used concurrently by waiters and
signalers.
* LSB is true iff this group has been completely signaled (i.e., it is
closed).
__g_size: Waiters remaining in this group (i.e., which have not been
signaled yet).
* Accessed by signalers and waiters that cancel waiting (both do so only
when having acquired the condvar-internal lock).
* The size of G2 is always zero because it cannot be determined until
the group becomes G1.
* Although this is of unsigned type, we rely on using unsigned overflow
rules to make this hold effectively negative values too (in
particular, when waiters in G2 cancel waiting).
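For example, if one waiter in G2 cancels waiting, that group's __g_size
wraps around to 0xffffffff; when the group later becomes G1, its actual
size is added on top, so the wraparound yields the correct, smaller
count of remaining waiters.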
A PTHREAD_COND_INITIALIZER condvar has all fields set to zero, which yields
a condvar that has G2 starting at position 0 and a G1 that is closed.
Because waiters do not claim ownership of a group right when obtaining a
position in __wseq but only reference count the group when using futexes
to block, it can happen that a group gets closed before a waiter can
increment the reference count. Therefore, waiters have to check whether
their group is already closed using __g1_start. They also have to perform
this check while spinning when trying to grab a signal from __g_signals.
Note that for these checks, using relaxed MO to load __g1_start is
sufficient because if a waiter can see a sufficiently large value, it could
have also consumed a signal in the waiter's group.
Waiters try to grab a signal from __g_signals without holding a reference
count, which can lead to stealing a signal from a more recent group after
their own group was already closed. They cannot always detect whether they
in fact did so because they do not know when they stole, but they can
conservatively add a signal back to the group they stole from; if they
did so unnecessarily, all that happens is a spurious wake-up. To make this
even less likely, __g1_start contains the index of the current G2 too,
which allows waiters to check whether there is aliasing on the group slots;
if there was not, they did not steal from the current G1, which means that
the G1 they stole from must have been already closed and they do not need
to fix anything.
It is essential that the last field in pthread_cond_t is __g_signals[1]:
The previous condvar used a pointer-sized field in pthread_cond_t, so a
PTHREAD_COND_INITIALIZER from that condvar implementation might only
initialize 4 bytes to zero instead of the 8 bytes we need (i.e., 44 bytes
in total instead of the 48 we need). __g_signals[1] is not accessed before
the first group switch (G2 starts at index 0), which will set its value to
zero after a harmless fetch-or whose return value is ignored. This
effectively completes initialization.
Limitations:
* This condvar isn't designed to allow for more than
__PTHREAD_COND_MAX_GROUP_SIZE * (1 << 31) calls to __pthread_cond_wait.
* More than __PTHREAD_COND_MAX_GROUP_SIZE concurrent waiters are not
supported.
* Beyond what is allowed as errors by POSIX or documented, we can also
return the following errors:
* EPERM if MUTEX is a recursive mutex and the caller doesn't own it.
* EOWNERDEAD or ENOTRECOVERABLE when using robust mutexes. Unlike
for other errors, this can happen when we re-acquire the mutex; this
isn't allowed by POSIX (which requires all errors to virtually happen
before we release the mutex or change the condvar state), but there's
nothing we can do really.
* When using PTHREAD_MUTEX_PP_* mutexes, we can also return all errors
returned by __pthread_tpp_change_priority. We will already have
released the mutex in such cases, so the caller cannot expect to own
MUTEX.
Other notes:
* Instead of the normal mutex unlock / lock functions, we use
__pthread_mutex_unlock_usercnt(m, 0) / __pthread_mutex_cond_lock(m)
because those will not change the mutex-internal users count, so that it
can be detected when a condvar is still associated with a particular
mutex because there is a waiter blocked on this condvar using this mutex.
*/
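/* Illustrative recap of the layout described above. This is a sketch,
   not the canonical definition: the real struct __pthread_cond_s (see
   bits/thread-shared-types.h) additionally exposes 32b halves of the
   64b counters for targets without 64b atomics.  */
struct __condvar_layout_sketch
{
  uint64_t __wseq;              /* Waiter sequence; LSB = index of current G2.  */
  uint64_t __g1_start;          /* G1 start position; LSB = index of current G2.  */
  unsigned int __g_refs[2];     /* Per group: futex-waiter refs; LSB = wake flag.  */
  unsigned int __g_size[2];     /* Per group: not-yet-signaled waiters.  */
  unsigned int __g1_orig_size;  /* Initial size of G1; two LSBs = internal lock.  */
  unsigned int __wrefs;         /* Waiter refs; LSBs = destroy/clock/pshared bits.  */
  unsigned int __g_signals[2];  /* Per group: consumable signals; LSB = closed.  */
};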
static __always_inline int
__pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
clockid_t clockid, const struct __timespec64 *abstime)
{
const int maxspin = 0;
int err;
int result = 0;
LIBC_PROBE (cond_wait, 2, cond, mutex);
/* clockid will already have been checked by
__pthread_cond_clockwait or pthread_condattr_setclock, or we
don't use it if abstime is NULL, so we don't need to check it
here. */
/* Acquire a position (SEQ) in the waiter sequence (WSEQ). We use an
atomic operation because signals and broadcasts may update the group
switch without acquiring the mutex. We do not need release MO here
because we do not need to establish any happens-before relation with
signalers (see __pthread_cond_signal); modification order alone
establishes a total order of waiters/signals. We do need acquire MO
to synchronize with group reinitialization in
__condvar_quiesce_and_switch_g1. */
uint64_t wseq = __condvar_fetch_add_wseq_acquire (cond, 2);
/* Find our group's index. We always go into what was G2 when we acquired
our position. */
unsigned int g = wseq & 1;
uint64_t seq = wseq >> 1;
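/* Concretely: if __wseq held 2*N + g before the fetch-add above, we
   occupy position N in group g, and __wseq now holds 2*(N+1) + g.
   Adding 2 never changes bit 0; only a group switch in
   __condvar_quiesce_and_switch_g1 toggles it.  */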
/* Increase the waiter reference count. Relaxed MO is sufficient because
we only need to synchronize when decrementing the reference count. */
unsigned int flags = atomic_fetch_add_relaxed (&cond->__data.__wrefs, 8);
int private = __condvar_get_private (flags);
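/* The low 3 bits of __wrefs are flags (process-shared, clock choice, and
   a wake-on-destruction request), which is why the reference count is
   maintained in units of 8.  __condvar_get_private maps the shared flag
   to the private vs. shared futex variant.  */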
/* Now that we are registered as a waiter, we can release the mutex.
Waiting on the condvar must be atomic with releasing the mutex, so if
the mutex is used to establish a happens-before relation with any
signaler, the waiter must be visible to the latter; thus, we release the
mutex after registering as waiter.
If releasing the mutex fails, we just cancel our registration as a
waiter and confirm that we have woken up. */
err = __pthread_mutex_unlock_usercnt (mutex, 0);
if (__glibc_unlikely (err != 0))
{
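/* Unlocking failed, so undo the registration: __condvar_cancel_waiting
   removes us from our group, and __condvar_confirm_wakeup drops the
   __wrefs reference taken above, waking a pending pthread_cond_destroy
   if we were the last waiter.  */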
__condvar_cancel_waiting (cond, seq, g, private);
__condvar_confirm_wakeup (cond, private);
return err;
}
/* Now wait until a signal is available in our group or it is closed.
Acquire MO so that if we observe a value of zero written after group
switching in __condvar_quiesce_and_switch_g1, we synchronize with that
store and will see the prior update of __g1_start done while switching
groups too. */
unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g);
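/* __g_signals + g uses the same kind of encoding as __wseq: bit 0 is the
   group-closed flag tested below, and the remaining bits count the
   signals still available to this group, in units of 2.  */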
do
{
while (1)
{
/* Spin-wait first.
Note that spinning first without checking whether a timeout
passed might lead to what looks like a spurious wake-up even
though we should return ETIMEDOUT (e.g., if the caller provides
an absolute timeout that is clearly in the past). However,
(1) spurious wake-ups are allowed, (2) it seems unlikely that a
user will (ab)use pthread_cond_wait as a check for whether a
point in time is in the past, and (3) spinning first without
having to compare against the current time seems to be the right
choice from a performance perspective for most use cases. */
unsigned int spin = maxspin;
while (signals == 0 && spin > 0)
{
/* Check that we are not spinning on a group that's already
closed. */
if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1))
goto done;
/* TODO Back off. */
/* Reload signals. See above for MO. */
signals = atomic_load_acquire (cond->__data.__g_signals + g);
spin--;
}
/* If our group will be closed as indicated by the flag on signals,
don't bother grabbing a signal. */
if (signals & 1)
goto done;
/* If there is an available signal, don't block. */
if (signals != 0)
break;
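/* A signal is available; leave the blocking loop and let the enclosing
   do-while loop (below) try to grab it, which consumes one signal by
   decrementing __g_signals + g by 2 with a CAS.  */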
/* No signals available after spinning, so prepare to block.
We first acquire a group reference and use acquire MO for that so
that we synchronize with the dummy read-modify-write in
__condvar_quiesce_and_switch_g1 if we read from that. In turn,
in this case this will make us see the closed flag on __g_signals
that designates a concurrent attempt to reuse the group's slot.
We use acquire MO for the __g_signals check to make the
__g1_start check work (see spinning above).
Note that the group reference acquisition will not mask the
release MO when decrementing the reference count because we use
an atomic read-modify-write operation and thus extend the release
sequence. */
atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
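/* __g_refs + g is encoded like __wrefs: the count is kept in units of 2,
   and bit 0 is a request by __condvar_quiesce_and_switch_g1 to be woken
   once the last reference is dropped (honored in __condvar_dec_grefs).  */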
if (((atomic_load_acquire (cond->__data.__g_signals + g) & 1) != 0)
|| (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)))
{
/* Our group is closed. Wake up any signalers that might be
waiting. */
__condvar_dec_grefs (cond, g, private);
goto done;
}
/* Now block.  */
struct _pthread_cleanup_buffer buffer;
struct _condvar_cleanup_buffer cbuffer;
cbuffer.wseq = wseq;
cbuffer.cond = cond;
cbuffer.mutex = mutex;
cbuffer.private = private;
__pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer);
err = __futex_abstimed_wait_cancelable64 (
cond->__data.__g_signals + g, 0, clockid, abstime, private);
__pthread_cleanup_pop (&buffer, 0);
if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW))
{
__condvar_dec_grefs (cond, g, private);
/* If we timed out, we effectively cancel waiting. Note that
we have decremented __g_refs before cancellation, so that a
deadlock between waiting for quiescence of our group in
__condvar_quiesce_and_switch_g1 and us trying to acquire
the lock during cancellation is not possible. */
__condvar_cancel_waiting (cond, seq, g, private);
result = err;
goto done;
}
else
__condvar_dec_grefs (cond, g, private);
/* Reload signals. See above for MO. */
signals = atomic_load_acquire (cond->__data.__g_signals + g);
}
}
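  /* Illustrative note: __g_signals encodes the number of consumable
     signals in the bits above the LSB and reserves the LSB as the
     group-closed flag.  A value of 6 (binary 110) thus means three
     pending signals in an open group; consuming or adding one signal
     changes the value by 2 and leaves the flag bit intact, which is why
     the exchange below subtracts 2 rather than 1.  */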
/* Try to grab a signal. Use acquire MO so that we see an up-to-date value
of __g1_start below (see spinning above for a similar case). In
particular, if we steal from a more recent group, we will also see a
more recent __g1_start below. */
while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g,
&signals, signals - 2));
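  /* Note on the loop above: atomic_compare_exchange_weak_acquire follows
     the C11 convention of updating SIGNALS with the currently observed
     value when the exchange fails, so each retry works on fresh state.
     A weak CAS may also fail spuriously, which merely causes another
     harmless iteration.  */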
/* We consumed a signal but we could have consumed from a more recent group
that aliased with ours due to being in the same group slot. If this
     might be the case, our group must be closed, as visible through
__g1_start. */
uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
if (seq < (g1_start >> 1))
{
/* We potentially stole a signal from a more recent group but we do not
know which group we really consumed from.
We do not care about groups older than current G1 because they are
closed; we could have stolen from these, but then we just add a
spurious wake-up for the current groups.
We will never steal a signal from current G2 that was really intended
for G2 because G2 never receives signals (until it becomes G1). We
could have stolen a signal from G2 that was conservatively added by a
previous waiter that also thought it stole a signal -- but given that
that signal was added unnecessarily, it's not a problem if we steal
it.
Thus, the remaining case is that we could have stolen from the current
G1, where "current" means the __g1_start value we observed. However,
if the current G1 does not have the same slot index as we do, we did
not steal from it and do not need to undo that. This is the reason
     for putting a bit with G2's index into __g1_start as well.  */
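  /* Illustrative reading of the encoding: if __g1_start is 13 (binary
     1101), G1 started at waiter sequence 6 (13 >> 1) and the current G2
     uses slot 1 (13 & 1), so G1 occupies slot 0 ((13 & 1) ^ 1), which is
     the value compared against our slot index G below.  */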
if (((g1_start & 1) ^ 1) == g)
{
/* We have to conservatively undo our potential mistake of stealing
a signal. We can stop trying to do that when the current G1
changes because other spinning waiters will notice this too and
__condvar_quiesce_and_switch_g1 has checked that there are no
futex waiters anymore before switching G1.
	 Relaxed MO is fine for the __g1_start load because we merely need
	 to be able to observe this fact; we do not have to observe
	 anything else as well.
??? Would it help to spin for a little while to see whether the
current G1 gets closed? This might be worthwhile if the group is
small or close to being closed. */
unsigned int s = atomic_load_relaxed (cond->__data.__g_signals + g);
while (__condvar_load_g1_start_relaxed (cond) == g1_start)
{
/* Try to add a signal. We don't need to acquire the lock
because at worst we can cause a spurious wake-up. If the
group is in the process of being closed (LSB is true), this
has an effect similar to us adding a signal. */
if (((s & 1) != 0)
|| atomic_compare_exchange_weak_relaxed
(cond->__data.__g_signals + g, &s, s + 2))
{
	      /* If we added a signal, we also need to add a wake-up on
		 the futex.  We also need to do that if we skipped adding
		 a signal because the group is being closed, because
		 while __condvar_quiesce_and_switch_g1 could have closed
		 the group, it might still be waiting for futex waiters to
		 leave (and one of those waiters might be the one we stole
		 the signal from, which caused it to block using the
		 futex).  */
futex_wake (cond->__data.__g_signals + g, 1, private);
break;
}
/* TODO Back off. */
}
}
}
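  /* At this point we have either consumed a signal that was intended for
     us, re-added a replacement for a potentially stolen signal, or
     observed that the affected G1 was closed, in which case no waiter can
     still be blocked on the stolen signal.  */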
done:
/* Confirm that we have been woken. We do that before acquiring the mutex
to allow for execution of pthread_cond_destroy while having acquired the
mutex. */
__condvar_confirm_wakeup (cond, private);
/* Woken up; now re-acquire the mutex. If this doesn't fail, return RESULT,
     which is set to ETIMEDOUT if a timeout occurred, or zero otherwise.  */
err = __pthread_mutex_cond_lock (mutex);
/* XXX Abort on errors that are disallowed by POSIX? */
return (err != 0) ? err : result;
}
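/* Canonical caller-side usage (illustrative, not part of this file):

     pthread_mutex_lock (&m);
     while (!predicate)
       pthread_cond_wait (&c, &m);
     ... use shared state ...
     pthread_mutex_unlock (&m);

   The wait releases M atomically with blocking and re-acquires it before
   returning, so spurious and stolen wake-ups only cause another predicate
   check.  */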
/* See __pthread_cond_wait_common. */
int
___pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex)
{
  /* clockid is unused when abstime is NULL.  */
  return __pthread_cond_wait_common (cond, mutex, 0, NULL);
}
versioned_symbol (libc, ___pthread_cond_wait, pthread_cond_wait,
                  GLIBC_2_3_2);
libc_hidden_ver (___pthread_cond_wait, __pthread_cond_wait)
#ifndef SHARED
strong_alias (___pthread_cond_wait, __pthread_cond_wait)
#endif
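
/* Usage sketch (placeholder names lock, cond and ready; not part of
   this file): the predicate must be re-checked in a loop because a
   waiter may wake spuriously or after another thread has already
   consumed the protected state:

     #include <pthread.h>
     #include <stdbool.h>

     static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
     static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
     static bool ready;

     static void
     wait_for_ready (void)
     {
       pthread_mutex_lock (&lock);
       while (!ready)
         pthread_cond_wait (&cond, &lock);  // unlocks; re-locks on wakeup
       pthread_mutex_unlock (&lock);
     }
*/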
/* See __pthread_cond_wait_common. */
int
___pthread_cond_timedwait64 (pthread_cond_t *cond, pthread_mutex_t *mutex,
                             const struct __timespec64 *abstime)
{
  /* Check parameter validity.  This should also tell the compiler that
     it can assume that abstime is not NULL.  */
  if (! valid_nanoseconds (abstime->tv_nsec))
    return EINVAL;
  /* Relaxed MO suffices because the clock ID bit is only modified
     when the condvar is created.  */
  unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs);
  clockid_t clockid = (flags & __PTHREAD_COND_CLOCK_MONOTONIC_MASK)
                      ? CLOCK_MONOTONIC : CLOCK_REALTIME;
  return __pthread_cond_wait_common (cond, mutex, clockid, abstime);
}
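
/* Usage sketch (placeholder names; not part of this file): the clock
   selected above mirrors what the caller configured at creation time
   with pthread_condattr_setclock, so the deadline must be computed
   against that same clock:

     pthread_condattr_t attr;
     pthread_cond_t cond;
     pthread_condattr_init (&attr);
     pthread_condattr_setclock (&attr, CLOCK_MONOTONIC);
     pthread_cond_init (&cond, &attr);
     pthread_condattr_destroy (&attr);

     struct timespec ts;
     clock_gettime (CLOCK_MONOTONIC, &ts);  // same clock as the condvar
     ts.tv_sec += 2;                        // absolute deadline: now + 2s
     // ... pthread_cond_timedwait (&cond, &lock, &ts) under &lock ...
*/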
#if __TIMESIZE == 64
strong_alias (___pthread_cond_timedwait64, ___pthread_cond_timedwait)
#else
strong_alias (___pthread_cond_timedwait64, __pthread_cond_timedwait64)
libc_hidden_def (__pthread_cond_timedwait64)
int
___pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex,
                           const struct timespec *abstime)
{
  struct __timespec64 ts64 = valid_timespec_to_timespec64 (*abstime);

  return __pthread_cond_timedwait64 (cond, mutex, &ts64);
}
#endif /* __TIMESIZE == 64 */
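
/* The widening done by the 32-bit wrapper above amounts to roughly
   the following (a sketch, not the exact internal definition of
   valid_timespec_to_timespec64):

     struct __timespec64 ts64;
     ts64.tv_sec = abstime->tv_sec;    // widen 32-bit time_t to 64 bits
     ts64.tv_nsec = abstime->tv_nsec;
*/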
versioned_symbol (libc, ___pthread_cond_timedwait,
                  pthread_cond_timedwait, GLIBC_2_3_2);
libc_hidden_ver (___pthread_cond_timedwait, __pthread_cond_timedwait)
#ifndef SHARED
strong_alias (___pthread_cond_timedwait, __pthread_cond_timedwait)
#endif
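
/* Caller-side sketch (placeholder names; not part of this file): a
   timed wait distinguishes a timeout from a wakeup via the return
   value, and the predicate is still re-checked under the mutex:

     struct timespec ts;
     clock_gettime (CLOCK_REALTIME, &ts);  // default condvar clock
     ts.tv_sec += 5;                       // absolute deadline: now + 5s

     int err = 0;
     pthread_mutex_lock (&lock);
     while (!ready && err != ETIMEDOUT)
       err = pthread_cond_timedwait (&cond, &lock, &ts);
     pthread_mutex_unlock (&lock);
*/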
/* See __pthread_cond_wait_common. */
int
___pthread_cond_clockwait64 (pthread_cond_t *cond, pthread_mutex_t *mutex,
                             clockid_t clockid,
                             const struct __timespec64 *abstime)
{
  /* Check parameter validity.  This should also tell the compiler that
     it can assume that abstime is not NULL.  */
  if (! valid_nanoseconds (abstime->tv_nsec))
    return EINVAL;

  if (!futex_abstimed_supported_clockid (clockid))
    return EINVAL;

  return __pthread_cond_wait_common (cond, mutex, clockid, abstime);
}
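
/* Illustrative usage sketch, not part of the library: "cond", "mutex"
   and "predicate" below are hypothetical caller-side names.  A caller
   that wants the deadline measured against CLOCK_MONOTONIC, so that
   wall-clock adjustments cannot perturb the wait, might write:

     struct timespec ts;
     clock_gettime (CLOCK_MONOTONIC, &ts);
     ts.tv_sec += 5;    (absolute deadline five seconds from now)

     pthread_mutex_lock (&mutex);
     while (!predicate
            && pthread_cond_clockwait (&cond, &mutex, CLOCK_MONOTONIC,
                                       &ts) != ETIMEDOUT)
       ;
     pthread_mutex_unlock (&mutex);

   A clockid rejected by futex_abstimed_supported_clockid above results
   in EINVAL before any waiting takes place.  */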
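
/* On targets where time_t is already 64 bits wide, the 64-bit-time
   implementation above is the only entry point needed; on legacy ABIs a
   wrapper converting from the 32-bit struct timespec follows below.  */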
#if __TIMESIZE == 64
strong_alias (___pthread_cond_clockwait64, ___pthread_cond_clockwait)
#else
strong_alias (___pthread_cond_clockwait64, __pthread_cond_clockwait64);
libc_hidden_def (__pthread_cond_clockwait64)
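/* 32-bit-time_t entry point: widen the caller's struct timespec into the
   internal 64-bit representation and forward to the implementation.  */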
int
___pthread_cond_clockwait (pthread_cond_t *cond, pthread_mutex_t *mutex,
                           clockid_t clockid,
                           const struct timespec *abstime)
{
  struct __timespec64 ts64 = valid_timespec_to_timespec64 (*abstime);

  return __pthread_cond_clockwait64 (cond, mutex, clockid, &ts64);
}
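
/* A sketch of what the conversion above amounts to (an illustration;
   the actual helper is defined in glibc's internal headers):

     struct __timespec64 ts64;
     ts64.tv_sec = abstime->tv_sec;    (widened to 64 bits)
     ts64.tv_nsec = abstime->tv_nsec;

   so deadlines supplied by legacy 32-bit callers are preserved
   exactly.  */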
#endif /* __TIMESIZE == 64 */
libc_hidden_ver (___pthread_cond_clockwait, __pthread_cond_clockwait)
#ifndef SHARED
strong_alias (___pthread_cond_clockwait, __pthread_cond_clockwait)
#endif
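
/* pthread_cond_clockwait entered libpthread in GLIBC_2.30 and moved into
   libc proper in GLIBC_2.34; the compat symbol below keeps binaries that
   linked against the old libpthread working.  */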
versioned_symbol (libc, ___pthread_cond_clockwait,
                  pthread_cond_clockwait, GLIBC_2_34);
#if OTHER_SHLIB_COMPAT (libpthread, GLIBC_2_30, GLIBC_2_34)
compat_symbol (libpthread, ___pthread_cond_clockwait,
               pthread_cond_clockwait, GLIBC_2_30);
#endif
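
/* A hypothetical way to confirm the resulting symbol versions from a
   built tree (not part of the sources):

     objdump -T libc.so.6 | grep pthread_cond_clockwait

   which should list the symbol at default version GLIBC_2.34, with the
   GLIBC_2.30 compatibility version retained for old binaries.  */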