From 17b8887c29e55e2ac2e04a41a87714a4960484fa Mon Sep 17 00:00:00 2001
From: Thomas Munro
Date: Mon, 3 Jul 2023 16:18:20 +1200
Subject: [PATCH] Fix race in SSI interaction with bitmap heap scan.

When performing a bitmap heap scan, we don't want to miss concurrent
writes that occurred after we observed the heap's rs_nblocks, but
before we took predicate locks on index pages.  Therefore, we can't
skip fetching any heap tuples that are referenced by the index,
because we need to test them all with
CheckForSerializableConflictOut().  The old optimization that would
ignore any references to blocks >= rs_nblocks gets in the way of that
requirement, because it means that concurrent writes in that window
are ignored.

Removing that optimization shouldn't affect correctness at any
isolation level, because any new tuples shouldn't be visible to an
MVCC snapshot.  There also shouldn't be any error-causing references
to heap blocks past the end, because we should have held at least an
AccessShareLock on the table before the index scan.  It can't get
smaller while our transaction is running.  For now, though, we'll keep
the optimization at lower isolation levels to avoid making unnecessary
changes in a bug fix.

Back-patch to all supported releases.  In release 11, the code is in a
different place but not fundamentally different.  Fixes one aspect of
bug #17949.

Reported-by: Artem Anisimov
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
---
 src/backend/access/heap/heapam_handler.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index 8d44b93231..0737cd0664 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -2247,9 +2247,11 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,
 	 * Ignore any claimed entries past what we think is the end of the
 	 * relation.  It may have been extended after the start of our scan (we
 	 * only hold an AccessShareLock, and it could be inserts from this
-	 * backend).
+	 * backend).  We don't take this optimization in SERIALIZABLE isolation
+	 * though, as we need to examine all invisible tuples reachable by the
+	 * index.
 	 */
-	if (page >= hscan->rs_nblocks)
+	if (!IsolationIsSerializable() && page >= hscan->rs_nblocks)
 		return false;
 
 	/*
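
[Reviewer note, not part of the patch: below is a minimal standalone
model of the changed guard, for readers who want to see the two
behaviors side by side.  The xact_is_serializable flag and the
may_skip_claimed_block() helper are stand-ins invented for this sketch;
in PostgreSQL the real check lives inline in
heapam_scan_bitmap_next_block() and reads hscan->rs_nblocks, exactly as
in the hunk above.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t BlockNumber;

/* Stand-in for PostgreSQL's IsolationIsSerializable() test. */
static bool xact_is_serializable = false;

static bool
IsolationIsSerializable(void)
{
	return xact_is_serializable;
}

/*
 * May a bitmap-claimed heap block at or beyond the rs_nblocks value
 * sampled at scan start be skipped?  After this patch: only at lower
 * isolation levels.  At SERIALIZABLE the block must be fetched so that
 * every tuple the index references gets tested with
 * CheckForSerializableConflictOut().
 */
static bool
may_skip_claimed_block(BlockNumber page, BlockNumber rs_nblocks)
{
	return !IsolationIsSerializable() && page >= rs_nblocks;
}

int
main(void)
{
	/* Block 100 is claimed by the bitmap, but the scan saw 90 blocks. */
	xact_is_serializable = false;
	printf("lower isolation: skip=%d\n", may_skip_claimed_block(100, 90));
	xact_is_serializable = true;
	printf("SERIALIZABLE:    skip=%d\n", may_skip_claimed_block(100, 90));
	return 0;
}

[Running the model prints skip=1 at lower isolation levels and skip=0 at
SERIALIZABLE, mirroring the window the patch closes between observing
rs_nblocks and taking predicate locks on index pages.]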