From 4e15a4db5e65e43271f8d20750d6500ab12632d0 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 9 Aug 2011 15:30:45 -0400 Subject: [PATCH] Documentation improvement and minor code cleanups for the latch facility. Improve the documentation around weak-memory-ordering risks, and do a pass of general editorialization on the comments in the latch code. Make the Windows latch code more like the Unix latch code where feasible; in particular provide the same Assert checks in both implementations. Fix poorly-placed WaitLatch call in syncrep.c. This patch resolves, for the moment, concerns around weak-memory-ordering bugs in latch-related code: we have documented the restrictions and checked that existing calls meet them. In 9.2 I hope that we will install suitable memory barrier instructions in SetLatch/ResetLatch, so that their callers don't need to be quite so careful. --- src/backend/port/unix_latch.c | 166 ++++++++++++++---------------- src/backend/port/win32_latch.c | 51 ++++----- src/backend/replication/syncrep.c | 20 ++-- src/backend/storage/lmgr/proc.c | 2 +- src/include/storage/latch.h | 70 ++++++++++++- 5 files changed, 185 insertions(+), 124 deletions(-) diff --git a/src/backend/port/unix_latch.c b/src/backend/port/unix_latch.c index 9940a42e33c..950a3a40117 100644 --- a/src/backend/port/unix_latch.c +++ b/src/backend/port/unix_latch.c @@ -3,60 +3,6 @@ * unix_latch.c * Routines for inter-process latches * - * A latch is a boolean variable, with operations that let you to sleep - * until it is set. A latch can be set from another process, or a signal - * handler within the same process. - * - * The latch interface is a reliable replacement for the common pattern of - * using pg_usleep() or select() to wait until a signal arrives, where the - * signal handler sets a global variable. Because on some platforms, an - * incoming signal doesn't interrupt sleep, and even on platforms where it - * does there is a race condition if the signal arrives just before - * entering the sleep, the common pattern must periodically wake up and - * poll the global variable. pselect() system call was invented to solve - * the problem, but it is not portable enough. Latches are designed to - * overcome these limitations, allowing you to sleep without polling and - * ensuring a quick response to signals from other processes. - * - * There are two kinds of latches: local and shared. A local latch is - * initialized by InitLatch, and can only be set from the same process. - * A local latch can be used to wait for a signal to arrive, by calling - * SetLatch in the signal handler. A shared latch resides in shared memory, - * and must be initialized at postmaster startup by InitSharedLatch. Before - * a shared latch can be waited on, it must be associated with a process - * with OwnLatch. Only the process owning the latch can wait on it, but any - * process can set it. - * - * There are three basic operations on a latch: - * - * SetLatch - Sets the latch - * ResetLatch - Clears the latch, allowing it to be set again - * WaitLatch - Waits for the latch to become set - * - * The correct pattern to wait for an event is: - * - * for (;;) - * { - * ResetLatch(); - * if (work to do) - * Do Stuff(); - * - * WaitLatch(); - * } - * - * It's important to reset the latch *before* checking if there's work to - * do. Otherwise, if someone sets the latch between the check and the - * ResetLatch call, you will miss it and Wait will block. 
- * - * To wake up the waiter, you must first set a global flag or something - * else that the main loop tests in the "if (work to do)" part, and call - * SetLatch *after* that. SetLatch is designed to return quickly if the - * latch is already set. - * - * - * Implementation - * -------------- - * * The Unix implementation uses the so-called self-pipe trick to overcome * the race condition involved with select() and setting a global flag * in the signal handler. When a latch is set and the current process @@ -65,8 +11,8 @@ * interrupt select() on all platforms, and even on platforms where it * does, a signal that arrives just before the select() call does not * prevent the select() from entering sleep. An incoming byte on a pipe - * however reliably interrupts the sleep, and makes select() to return - * immediately if the signal arrives just before select() begins. + * however reliably interrupts the sleep, and causes select() to return + * immediately even if the signal arrives before select() begins. * * When SetLatch is called from the same process that owns the latch, * SetLatch writes the byte directly to the pipe. If it's owned by another @@ -100,7 +46,7 @@ /* Are we currently in WaitLatch? The signal handler would like to know. */ static volatile sig_atomic_t waiting = false; -/* Read and write end of the self-pipe */ +/* Read and write ends of the self-pipe */ static int selfpipe_readfd = -1; static int selfpipe_writefd = -1; @@ -116,7 +62,7 @@ static void sendSelfPipeByte(void); void InitLatch(volatile Latch *latch) { - /* Initialize the self pipe if this is our first latch in the process */ + /* Initialize the self-pipe if this is our first latch in the process */ if (selfpipe_readfd == -1) initSelfPipe(); @@ -127,13 +73,14 @@ InitLatch(volatile Latch *latch) /* * Initialize a shared latch that can be set from other processes. The latch - * is initially owned by no-one, use OwnLatch to associate it with the + * is initially owned by no-one; use OwnLatch to associate it with the * current process. * * InitSharedLatch needs to be called in postmaster before forking child * processes, usually right after allocating the shared memory block - * containing the latch with ShmemInitStruct. The Unix implementation - * doesn't actually require that, but the Windows one does. + * containing the latch with ShmemInitStruct. (The Unix implementation + * doesn't actually require that, but the Windows one does.) Because of + * this restriction, we have no concurrency issues to worry about here. */ void InitSharedLatch(volatile Latch *latch) @@ -145,23 +92,30 @@ InitSharedLatch(volatile Latch *latch) /* * Associate a shared latch with the current process, allowing it to - * wait on it. + * wait on the latch. * - * Make sure that latch_sigusr1_handler() is called from the SIGUSR1 signal - * handler, as shared latches use SIGUSR1 to for inter-process communication. + * Although there is a sanity check for latch-already-owned, we don't do + * any sort of locking here, meaning that we could fail to detect the error + * if two processes try to own the same latch at about the same time. If + * there is any risk of that, caller must provide an interlock to prevent it. + * + * In any process that calls OwnLatch(), make sure that + * latch_sigusr1_handler() is called from the SIGUSR1 signal handler, + * as shared latches use SIGUSR1 for inter-process communication. 
*/ void OwnLatch(volatile Latch *latch) { Assert(latch->is_shared); - /* Initialize the self pipe if this is our first latch in the process */ + /* Initialize the self-pipe if this is our first latch in this process */ if (selfpipe_readfd == -1) initSelfPipe(); /* sanity check */ if (latch->owner_pid != 0) elog(ERROR, "latch already owned"); + latch->owner_pid = MyProcPid; } @@ -173,25 +127,26 @@ DisownLatch(volatile Latch *latch) { Assert(latch->is_shared); Assert(latch->owner_pid == MyProcPid); + latch->owner_pid = 0; } /* - * Wait for a given latch to be set, postmaster death, or until timeout is - * exceeded. 'wakeEvents' is a bitmask that specifies which of those events + * Wait for a given latch to be set, or for postmaster death, or until timeout + * is exceeded. 'wakeEvents' is a bitmask that specifies which of those events * to wait for. If the latch is already set (and WL_LATCH_SET is given), the * function returns immediately. * - * The 'timeout' is given in microseconds. It must be >= 0 if WL_TIMEOUT - * event is given, otherwise it is ignored. On some platforms, signals cause - * the timeout to be restarted, so beware that the function can sleep for - * several times longer than the specified timeout. + * The 'timeout' is given in microseconds. It must be >= 0 if WL_TIMEOUT flag + * is given. On some platforms, signals cause the timeout to be restarted, + * so beware that the function can sleep for several times longer than the + * specified timeout. * * The latch must be owned by the current process, ie. it must be a * backend-local latch initialized with InitLatch, or a shared latch * associated with the current process by calling OwnLatch. * - * Returns bit field indicating which condition(s) caused the wake-up. Note + * Returns bit mask indicating which condition(s) caused the wake-up. Note * that if multiple wake-up conditions are true, there is no guarantee that * we return all of them in one call, but we will return at least one. Also, * according to the select(2) man page on Linux, select(2) may spuriously @@ -200,7 +155,7 @@ DisownLatch(volatile Latch *latch) * readable, or postmaster has died, even when none of the wake conditions * have been satisfied. That should be rare in practice, but the caller * should not use the return value for anything critical, re-checking the - * situation with PostmasterIsAlive() or read() on a socket if necessary. + * situation with PostmasterIsAlive() or read() on a socket as necessary. */ int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout) @@ -247,12 +202,18 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock, int hifd; /* - * Clear the pipe, and check if the latch is set already. If someone + * Clear the pipe, then check if the latch is set already. If someone * sets the latch between this and the select() below, the setter will * write a byte to the pipe (or signal us and the signal handler will * do that), and the select() will return immediately. + * + * Note: we assume that the kernel calls involved in drainSelfPipe() + * and SetLatch() will provide adequate synchronization on machines + * with weak memory ordering, so that we cannot miss seeing is_set + * if the signal byte is already in the pipe when we drain it. */ drainSelfPipe(); + if ((wakeEvents & WL_LATCH_SET) && latch->is_set) { result |= WL_LATCH_SET; @@ -263,7 +224,10 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock, break; } + /* Must wait ... 
set up the event masks for select() */ FD_ZERO(&input_mask); + FD_ZERO(&output_mask); + FD_SET(selfpipe_readfd, &input_mask); hifd = selfpipe_readfd; @@ -281,7 +245,6 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock, hifd = sock; } - FD_ZERO(&output_mask); if (wakeEvents & WL_SOCKET_WRITEABLE) { FD_SET(sock, &output_mask); @@ -320,21 +283,30 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock, { result |= WL_POSTMASTER_DEATH; } - } while(result == 0); + } while (result == 0); waiting = false; return result; } /* - * Sets a latch and wakes up anyone waiting on it. Returns quickly if the - * latch is already set. + * Sets a latch and wakes up anyone waiting on it. + * + * This is cheap if the latch is already set, otherwise not so much. */ void SetLatch(volatile Latch *latch) { pid_t owner_pid; + /* + * XXX there really ought to be a memory barrier operation right here, + * to ensure that any flag variables we might have changed get flushed + * to main memory before we check/set is_set. Without that, we have to + * require that callers provide their own synchronization for machines + * with weak memory ordering (see latch.h). + */ + /* Quick exit if already set */ if (latch->is_set) return; @@ -346,13 +318,21 @@ SetLatch(volatile Latch *latch) * we're in a signal handler. We use the self-pipe to wake up the select() * in that case. If it's another process, send a signal. * - * Fetch owner_pid only once, in case the owner simultaneously disowns the - * latch and clears owner_pid. XXX: This assumes that pid_t is atomic, - * which isn't guaranteed to be true! In practice, the effective range of - * pid_t fits in a 32 bit integer, and so should be atomic. In the worst - * case, we might end up signaling wrong process if the right one disowns - * the latch just as we fetch owner_pid. Even then, you're very unlucky if - * a process with that bogus pid exists. + * Fetch owner_pid only once, in case the latch is concurrently getting + * owned or disowned. XXX: This assumes that pid_t is atomic, which isn't + * guaranteed to be true! In practice, the effective range of pid_t fits + * in a 32 bit integer, and so should be atomic. In the worst case, we + * might end up signaling the wrong process. Even then, you're very + * unlucky if a process with that bogus pid exists and belongs to + * Postgres; and PG database processes should handle excess SIGUSR1 + * interrupts without a problem anyhow. + * + * Another sort of race condition that's possible here is for a new process + * to own the latch immediately after we look, so we don't signal it. + * This is okay so long as all callers of ResetLatch/WaitLatch follow the + * standard coding convention of waiting at the bottom of their loops, + * not the top, so that they'll correctly process latch-setting events that + * happen before they enter the loop. */ owner_pid = latch->owner_pid; if (owner_pid == 0) @@ -374,11 +354,23 @@ ResetLatch(volatile Latch *latch) Assert(latch->owner_pid == MyProcPid); latch->is_set = false; + + /* + * XXX there really ought to be a memory barrier operation right here, to + * ensure that the write to is_set gets flushed to main memory before we + * examine any flag variables. Otherwise a concurrent SetLatch might + * falsely conclude that it needn't signal us, even though we have missed + * seeing some flag updates that SetLatch was supposed to inform us of. + * For the moment, callers must supply their own synchronization of flag + * variables (see latch.h). 
+ */ } /* - * SetLatch uses SIGUSR1 to wake up the process waiting on the latch. Wake - * up WaitLatch. + * SetLatch uses SIGUSR1 to wake up the process waiting on the latch. + * + * Wake up WaitLatch, if we're waiting. (We might not be, since SIGUSR1 is + * overloaded for multiple purposes.) */ void latch_sigusr1_handler(void) diff --git a/src/backend/port/win32_latch.c b/src/backend/port/win32_latch.c index 8d59ad493f6..eeb85a96cee 100644 --- a/src/backend/port/win32_latch.c +++ b/src/backend/port/win32_latch.c @@ -1,9 +1,10 @@ /*------------------------------------------------------------------------- * * win32_latch.c - * Windows implementation of latches. + * Routines for inter-process latches * - * See unix_latch.c for information on usage. + * See unix_latch.c for header comments for the exported functions; + * the API presented here is supposed to be the same as there. * * The Windows implementation uses Windows events that are inherited by * all postmaster child processes. @@ -24,7 +25,6 @@ #include "miscadmin.h" #include "postmaster/postmaster.h" -#include "replication/walsender.h" #include "storage/latch.h" #include "storage/shmem.h" @@ -89,7 +89,7 @@ WaitLatch(volatile Latch *latch, int wakeEvents, long timeout) } int -WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock, +WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock, long timeout) { DWORD rc; @@ -101,12 +101,15 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock, int pmdeath_eventno = 0; long timeout_ms; - Assert(wakeEvents != 0); - /* Ignore WL_SOCKET_* events if no valid socket is given */ if (sock == PGINVALID_SOCKET) wakeEvents &= ~(WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE); + Assert(wakeEvents != 0); /* must have at least one wake event */ + + if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != MyProcPid) + elog(ERROR, "cannot wait on a latch owned by another process"); + /* Convert timeout to milliseconds for WaitForMultipleObjects() */ if (wakeEvents & WL_TIMEOUT) { @@ -122,8 +125,8 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock, events[0] = latchevent; events[1] = pgwin32_signal_event; numevents = 2; - if (((wakeEvents & WL_SOCKET_READABLE) || - (wakeEvents & WL_SOCKET_WRITEABLE))) + if ((wakeEvents & WL_SOCKET_READABLE) || + (wakeEvents & WL_SOCKET_WRITEABLE)) { int flags = 0; @@ -152,7 +155,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock, */ if (!ResetEvent(latchevent)) elog(ERROR, "ResetEvent failed: error code %d", (int) GetLastError()); - if (latch->is_set && (wakeEvents & WL_LATCH_SET)) + if ((wakeEvents & WL_LATCH_SET) && latch->is_set) { result |= WL_LATCH_SET; /* @@ -171,17 +174,17 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock, else if (rc == WAIT_OBJECT_0 + 1) pgwin32_dispatch_queued_signals(); - else if ((wakeEvents & WL_POSTMASTER_DEATH) && - rc == WAIT_OBJECT_0 + pmdeath_eventno) - { - /* Postmaster died */ - result |= WL_POSTMASTER_DEATH; - } else if (rc == WAIT_TIMEOUT) { result |= WL_TIMEOUT; } - else if ((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != 0 && + else if ((wakeEvents & WL_POSTMASTER_DEATH) && + rc == WAIT_OBJECT_0 + pmdeath_eventno) + { + /* Postmaster died */ + result |= WL_POSTMASTER_DEATH; + } + else if ((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) && rc == WAIT_OBJECT_0 + 2) /* socket is at event slot 2 */ { WSANETWORKEVENTS resEvents; @@ -206,7 +209,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET 
sock, else if (rc != WAIT_OBJECT_0) elog(ERROR, "unexpected return code from WaitForMultipleObjects(): %d", (int) rc); } - while(result == 0); + while (result == 0); /* Clean up the handle we created for the socket */ if (sockevent != WSA_INVALID_EVENT) @@ -231,15 +234,10 @@ SetLatch(volatile Latch *latch) /* * See if anyone's waiting for the latch. It can be the current process if - * we're in a signal handler. Use a local variable here in case the latch - * is just disowned between the test and the SetEvent call, and event - * field set to NULL. + * we're in a signal handler. * - * Fetch handle field only once, in case the owner simultaneously disowns - * the latch and clears handle. This assumes that HANDLE is atomic, which - * isn't guaranteed to be true! In practice, it should be, and in the - * worst case we end up calling SetEvent with a bogus handle, and SetEvent - * will return an error with no harm done. + * Use a local variable here just in case somebody changes the event field + * concurrently (which really should not happen). */ handle = latch->event; if (handle) @@ -256,5 +254,8 @@ SetLatch(volatile Latch *latch) void ResetLatch(volatile Latch *latch) { + /* Only the owner should reset the latch */ + Assert(latch->owner_pid == MyProcPid); + latch->is_set = false; } diff --git a/src/backend/replication/syncrep.c b/src/backend/replication/syncrep.c index 32db2bc4c52..56af4237e80 100644 --- a/src/backend/replication/syncrep.c +++ b/src/backend/replication/syncrep.c @@ -166,13 +166,6 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN) { int syncRepState; - /* - * Wait on latch for up to 60 seconds. This allows us to check for - * postmaster death regularly while waiting. Note that timeout here - * does not necessarily release from loop. - */ - WaitLatch(&MyProc->waitLatch, WL_LATCH_SET | WL_TIMEOUT, 60000000L); - /* Must reset the latch before testing state. */ ResetLatch(&MyProc->waitLatch); @@ -184,6 +177,12 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN) * walsender changes the state to SYNC_REP_WAIT_COMPLETE, it will * never update it again, so we can't be seeing a stale value in that * case. + * + * Note: on machines with weak memory ordering, the acquisition of + * the lock is essential to avoid race conditions: we cannot be sure + * the sender's state update has reached main memory until we acquire + * the lock. We could get rid of this dance if SetLatch/ResetLatch + * contained memory barriers. */ syncRepState = MyProc->syncRepState; if (syncRepState == SYNC_REP_WAITING) @@ -246,6 +245,13 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN) SyncRepCancelWait(); break; } + + /* + * Wait on latch for up to 60 seconds. This allows us to check for + * cancel/die signal or postmaster death regularly while waiting. Note + * that timeout here does not necessarily release from loop. + */ + WaitLatch(&MyProc->waitLatch, WL_LATCH_SET | WL_TIMEOUT, 60000000L); } /* diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c index 5d67b93c5d9..2a79f8247ab 100644 --- a/src/backend/storage/lmgr/proc.c +++ b/src/backend/storage/lmgr/proc.c @@ -332,7 +332,7 @@ InitProcess(void) MyProc->waitLSN.xrecoff = 0; MyProc->syncRepState = SYNC_REP_NOT_WAITING; SHMQueueElemInit(&(MyProc->syncRepLinks)); - OwnLatch((Latch *) &MyProc->waitLatch); + OwnLatch(&MyProc->waitLatch); /* * We might be reusing a semaphore that belonged to a failed process. 
So diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h index df891e68aab..f1a1444a889 100644 --- a/src/include/storage/latch.h +++ b/src/include/storage/latch.h @@ -3,6 +3,67 @@ * latch.h * Routines for interprocess latches * + * A latch is a boolean variable, with operations that let processes sleep + * until it is set. A latch can be set from another process, or a signal + * handler within the same process. + * + * The latch interface is a reliable replacement for the common pattern of + * using pg_usleep() or select() to wait until a signal arrives, where the + * signal handler sets a flag variable. Because on some platforms an + * incoming signal doesn't interrupt sleep, and even on platforms where it + * does there is a race condition if the signal arrives just before + * entering the sleep, the common pattern must periodically wake up and + * poll the flag variable. The pselect() system call was invented to solve + * this problem, but it is not portable enough. Latches are designed to + * overcome these limitations, allowing you to sleep without polling and + * ensuring quick response to signals from other processes. + * + * There are two kinds of latches: local and shared. A local latch is + * initialized by InitLatch, and can only be set from the same process. + * A local latch can be used to wait for a signal to arrive, by calling + * SetLatch in the signal handler. A shared latch resides in shared memory, + * and must be initialized at postmaster startup by InitSharedLatch. Before + * a shared latch can be waited on, it must be associated with a process + * with OwnLatch. Only the process owning the latch can wait on it, but any + * process can set it. + * + * There are three basic operations on a latch: + * + * SetLatch - Sets the latch + * ResetLatch - Clears the latch, allowing it to be set again + * WaitLatch - Waits for the latch to become set + * + * WaitLatch includes a provision for timeouts (which should hopefully not + * be necessary once the code is fully latch-ified) and a provision for + * postmaster child processes to wake up immediately on postmaster death. + * See unix_latch.c for detailed specifications for the exported functions. + * + * The correct pattern to wait for event(s) is: + * + * for (;;) + * { + * ResetLatch(); + * if (work to do) + * Do Stuff(); + * WaitLatch(); + * } + * + * It's important to reset the latch *before* checking if there's work to + * do. Otherwise, if someone sets the latch between the check and the + * ResetLatch call, you will miss it and Wait will incorrectly block. + * + * To wake up the waiter, you must first set a global flag or something + * else that the wait loop tests in the "if (work to do)" part, and call + * SetLatch *after* that. SetLatch is designed to return quickly if the + * latch is already set. + * + * Presently, when using a shared latch for interprocess signalling, the + * flag variable(s) set by senders and inspected by the wait loop must + * be protected by spinlocks or LWLocks, else it is possible to miss events + * on machines with weak memory ordering (such as PPC). This restriction + * will be lifted in future by inserting suitable memory barriers into + * SetLatch and ResetLatch. 
+ * * * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California @@ -51,16 +112,17 @@ extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, extern void SetLatch(volatile Latch *latch); extern void ResetLatch(volatile Latch *latch); -#define TestLatch(latch) (((volatile Latch *) latch)->is_set) +/* beware of memory ordering issues if you use this macro! */ +#define TestLatch(latch) (((volatile Latch *) (latch))->is_set) /* - * Unix implementation uses SIGUSR1 for inter-process signaling, Win32 doesn't - * need this. + * Unix implementation uses SIGUSR1 for inter-process signaling. + * Win32 doesn't need this. */ #ifndef WIN32 extern void latch_sigusr1_handler(void); #else -#define latch_sigusr1_handler() +#define latch_sigusr1_handler() ((void) 0) #endif #endif /* LATCH_H */
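
As an illustration of the coding rules the new latch.h header comment lays down, here is a minimal sketch of a shared-latch waiter and setter. It is not part of the patch: the structure and the names WorkRequestShared, worker_latch and work_pending are hypothetical, invented for the example, and the spinlock around the flag reflects the documented requirement that, until SetLatch/ResetLatch contain memory barriers, flag variables shared between setter and waiter must be protected by spinlocks or LWLocks.

/*
 * Sketch only -- not part of the patch.  The latch/spinlock/flag names are
 * hypothetical; the calls used (ShmemInitStruct, InitSharedLatch, OwnLatch,
 * SetLatch, ResetLatch, WaitLatch, SpinLock*) are the APIs declared in
 * storage/shmem.h, storage/latch.h and storage/spin.h at this point in time.
 */
#include "postgres.h"

#include "storage/latch.h"
#include "storage/shmem.h"
#include "storage/spin.h"

typedef struct WorkRequestShared
{
    Latch       worker_latch;   /* owned by the worker process */
    slock_t     mutex;          /* protects work_pending */
    bool        work_pending;
} WorkRequestShared;

static WorkRequestShared *wrk;  /* points into shared memory */

/* Postmaster, while setting up shared memory, before forking children. */
static void
WorkRequestShmemInit(void)
{
    bool        found;

    wrk = (WorkRequestShared *)
        ShmemInitStruct("WorkRequestShared", sizeof(WorkRequestShared), &found);
    if (!found)
    {
        InitSharedLatch(&wrk->worker_latch);
        SpinLockInit(&wrk->mutex);
        wrk->work_pending = false;
    }
}

/* Worker child process: the canonical wait loop from the header comment. */
static void
WorkerMainLoop(void)
{
    OwnLatch(&wrk->worker_latch);

    for (;;)
    {
        bool        have_work;

        /* Reset the latch *before* checking for work, to avoid lost wakeups. */
        ResetLatch(&wrk->worker_latch);

        SpinLockAcquire(&wrk->mutex);
        have_work = wrk->work_pending;
        wrk->work_pending = false;
        SpinLockRelease(&wrk->mutex);

        if (have_work)
        {
            /* ... do the work ... */
        }

        /* Sleep at the *bottom* of the loop; timeout is in microseconds. */
        (void) WaitLatch(&wrk->worker_latch,
                         WL_LATCH_SET | WL_TIMEOUT,
                         60000000L);
    }
}

/* Any process: publish the flag first, then set the latch. */
static void
RequestWork(void)
{
    SpinLockAcquire(&wrk->mutex);
    wrk->work_pending = true;
    SpinLockRelease(&wrk->mutex);

    SetLatch(&wrk->worker_latch);
}

The ordering here mirrors what the patch does to SyncRepWaitForLSN: the latch is reset before the flag is examined, and the WaitLatch call sits at the bottom of the loop, so a SetLatch that fires between the flag check and the sleep cannot be missed.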
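
The unix_latch.c header refers to the "self-pipe trick" without showing it. The following stand-alone sketch, written against plain POSIX rather than the latch code itself, is one way to illustrate the mechanism described there: the signal handler both sets a flag and writes a byte into a pipe, so a signal arriving just before select() still leaves the pipe readable and the select() returns immediately. Everything below, names included, is an illustration, not PostgreSQL code.

/*
 * Stand-alone illustration of the self-pipe trick described in the
 * unix_latch.c comments.  Compile as an ordinary POSIX program and send it
 * SIGUSR1 to wake it up.
 */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>
#include <unistd.h>

static int  selfpipe_readfd = -1;
static int  selfpipe_writefd = -1;
static volatile sig_atomic_t flag = 0;

/* Signal handler: set the flag, then poke the pipe to wake up select(). */
static void
wakeup_handler(int signo)
{
    char        c = 0;

    (void) signo;
    flag = 1;
    /* If the write fails because the pipe is full, a wakeup byte is already there. */
    (void) write(selfpipe_writefd, &c, 1);
}

static void
init_self_pipe(void)
{
    int         fds[2];

    if (pipe(fds) < 0)
        exit(1);
    selfpipe_readfd = fds[0];
    selfpipe_writefd = fds[1];
    /* Non-blocking on both ends, so neither the handler nor the drain can hang. */
    fcntl(selfpipe_readfd, F_SETFL, O_NONBLOCK);
    fcntl(selfpipe_writefd, F_SETFL, O_NONBLOCK);
}

/* Drain the pipe, check the flag, and only then sleep in select(). */
static void
wait_for_flag(void)
{
    for (;;)
    {
        char        buf[16];
        fd_set      input_mask;

        while (read(selfpipe_readfd, buf, sizeof(buf)) > 0)
            ;                   /* discard any queued wakeup bytes */

        if (flag)
        {
            flag = 0;
            return;
        }

        FD_ZERO(&input_mask);
        FD_SET(selfpipe_readfd, &input_mask);

        /*
         * A signal delivered any time after the drain above leaves a byte in
         * the pipe, so this select() cannot sleep through it.
         */
        (void) select(selfpipe_readfd + 1, &input_mask, NULL, NULL, NULL);
    }
}

int
main(void)
{
    struct sigaction sa;

    init_self_pipe();

    sa.sa_handler = wakeup_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;            /* no SA_RESTART: let select() return early */
    sigaction(SIGUSR1, &sa, NULL);

    printf("pid %d waiting for SIGUSR1...\n", (int) getpid());
    wait_for_flag();
    printf("woken up\n");
    return 0;
}

In the real code the same drain/check/sleep ordering appears in WaitLatchOrSocket, and SetLatch writes the wakeup byte itself when called from the owning process (possibly inside a signal handler), or sends SIGUSR1 when the latch owner is another process.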