Fix race conditions with drop of reused pgstats entries

This fixes a set of race conditions with cumulative statistics where a
shared stats entry could be dropped while it should still be valid, in
the event that it is reused: an entry may refer to a different object
but require the same hash key.  This can happen with various stats
kinds (see the sketch after this list), like:
- Replication slots, which internally compute an index number used in
the hash key, so a slot created just after another one is dropped can
reuse its key even under a different name.
- Stats kinds that use an OID in the object key, where an OID
wraparound can assign the same OID, and hence the same key, to a new
object of the same kind.
- As of PostgreSQL 18, custom pgstats kinds could also be affected,
depending on their implementation.

This issue is fixed by introducing a counter called "generation" in the
shared entries via PgStatShared_HashEntry, initialized at 0 when an
entry is created and incremented each time the same entry is reused, to
avoid concurrency issues on drop when other backends still hold a
reference to it.  This "generation" is copied into the local reference
that a backend holds when looking at an object, then cross-checked with
the shared entry's value so that the entry is not dropped if it has been
reused, even when its "refcount" would otherwise justify the removal.

This problem could show up when a backend shuts down and needs to
discard any entries it still holds, causing statistics to be removed
when they should not be, or even triggering an assertion failure.
Another report involved a failure on a standby after an OID wraparound,
where the startup process would fail with a FATAL on "can only drop
stats once", stopping recovery abruptly.  The buildfarm has also been
sporadically complaining about the problem, but the window is hard to
reach with the in-core tests.

Note that the issue can be reproduced easily by adding a sleep before
dshash_find() in pgstat_release_entry_ref() to enlarge the problematic
window, then repeating test_decoding's isolation test oldest_xmin a
couple of times, as pointed out by Alexander Lakhin.
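
As a rough illustration of that recipe (not part of the commit: the
delay value is arbitrary and pg_usleep() takes microseconds), the
injected sleep in pgstat_release_entry_ref() could look like this,
right before its existing dshash_find() call:

	/*
	 * Illustration only: widen the race window so that a concurrent
	 * reuse of the entry can slip in before we re-locate it.
	 */
	pg_usleep(100 * 1000L);		/* 100ms */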

Reported-by: Alexander Lakhin, Peter Smith
Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/CAA4eK1KxuMVyAryz_Vk5yq3ejgKYcL6F45Hj9ZnMNBS-g+PuZg@mail.gmail.com
Discussion: https://postgr.es/m/17947-b9554521ad963c9c@postgresql.org
Backpatch-through: 15
Michael Paquier 2024-11-15 11:32:18 +09:00
parent 16a2bb0793
commit 154c5b42a3
2 changed files with 54 additions and 5 deletions

@@ -273,6 +273,11 @@ pgstat_init_entry(PgStat_Kind kind,
* further if a longer lived reference is needed.
*/
pg_atomic_init_u32(&shhashent->refcount, 1);
/*
* Initialize "generation" to 0, as freshly created.
*/
pg_atomic_init_u32(&shhashent->generation, 0);
shhashent->dropped = false;
chunk = dsa_allocate0(pgStatLocal.dsa, pgstat_get_kind_info(kind)->shared_size);
@@ -296,6 +301,12 @@ pgstat_reinit_entry(PgStat_Kind kind, PgStatShared_HashEntry *shhashent)
/* mark as not dropped anymore */
pg_atomic_fetch_add_u32(&shhashent->refcount, 1);
/*
* Increment "generation", to let any backend with local references know
* that what they point to is outdated.
*/
pg_atomic_fetch_add_u32(&shhashent->generation, 1);
shhashent->dropped = false;
/* reinitialize content */
@@ -336,6 +347,7 @@ pgstat_acquire_entry_ref(PgStat_EntryRef *entry_ref,
entry_ref->shared_stats = shheader;
entry_ref->shared_entry = shhashent;
entry_ref->generation = pg_atomic_read_u32(&shhashent->generation);
}
/*
@@ -501,7 +513,8 @@ pgstat_get_entry_ref(PgStat_Kind kind, Oid dboid, Oid objoid, bool create,
* case are replication slot stats, where a new slot can be
* created with the same index just after dropping. But oid
* wraparound can lead to other cases as well. We just reset the
* stats to their plain state.
* stats to their plain state, while incrementing its "generation"
* in the shared entry for any remaining local references.
*/
shheader = pgstat_reinit_entry(kind, shhashent);
pgstat_acquire_entry_ref(entry_ref, shhashent, shheader);
@@ -568,10 +581,27 @@ pgstat_release_entry_ref(PgStat_HashKey key, PgStat_EntryRef *entry_ref,
if (!shent)
elog(ERROR, "could not find just referenced shared stats entry");
Assert(pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0);
Assert(entry_ref->shared_entry == shent);
pgstat_free_entry(shent, NULL);
/*
* This entry may have been reinitialized while trying to release
* it, so double-check that it has not been reused while holding a
* lock on its shared entry.
*/
if (pg_atomic_read_u32(&entry_ref->shared_entry->generation) ==
entry_ref->generation)
{
/* Same "generation", so we're OK with the removal */
Assert(pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0);
Assert(entry_ref->shared_entry == shent);
pgstat_free_entry(shent, NULL);
}
else
{
/*
* Shared stats entry has been reinitialized, so do not drop
* its shared entry, only release its lock.
*/
dshash_release_lock(pgStatLocal.shared_hash, shent);
}
}
}

@@ -93,6 +93,19 @@ typedef struct PgStatShared_HashEntry
*/
pg_atomic_uint32 refcount;
/*
* Counter tracking the number of times the entry has been reused.
*
* Set to 0 when the entry is created, and incremented by one each time
* the shared entry is reinitialized with pgstat_reinit_entry().
*
* May only be incremented / decremented while holding at least a shared
* lock on the dshash partition containing the entry. Like refcount, it
* needs to be an atomic variable because multiple backends can increment
* the generation with just a shared lock.
*/
pg_atomic_uint32 generation;
/*
* Pointer to shared stats. The stats entry always starts with
* PgStatShared_Common, embedded in a larger struct containing the
@@ -132,6 +145,12 @@ typedef struct PgStat_EntryRef
*/
PgStatShared_Common *shared_stats;
/*
* Copy of PgStatShared_HashEntry->generation, keeping locally track of
* the shared stats entry "generation" retrieved (number of times reused).
*/
uint32 generation;
/*
* Pending statistics data that will need to be flushed to shared memory
* stats eventually. Each stats kind utilizing pending data defines what