Fix oversized memory allocation in Parallel Hash Join

When calculating the maximum allowed number of buckets, take into account
that the value is later rounded up to the next power of 2, so that the
resulting bucket array still fits within MaxAllocSize.

Reported-by: Karen Talarico
Bug: #16925
Discussion: https://postgr.es/m/16925-ec96d83529d0d629%40postgresql.org
Author: Thomas Munro, Andrei Lepikhov, Alexander Korotkov
Reviewed-by: Alena Rybakina
Backpatch-through: 12
commit 714a987bc1
parent 37c5516633
Alexander Korotkov, 2024-01-07 09:03:55 +02:00

@@ -1160,6 +1160,7 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
 			double		dtuples;
 			double		dbuckets;
 			int			new_nbuckets;
+			uint32		max_buckets;
 
 			/*
 			 * We probably also need a smaller bucket array.  How many
@@ -1172,9 +1173,16 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
 			 * array.
 			 */
 			dtuples = (old_batch0->ntuples * 2.0) / new_nbatch;
+			/*
+			 * We need to calculate the maximum number of buckets to
+			 * stay within the MaxAllocSize boundary.  Round the
+			 * maximum number to the previous power of 2 given that
+			 * later we round the number to the next power of 2.
+			 */
+			max_buckets = pg_prevpower2_32((uint32)
+										   (MaxAllocSize / sizeof(dsa_pointer_atomic)));
 			dbuckets = ceil(dtuples / NTUP_PER_BUCKET);
-			dbuckets = Min(dbuckets,
-						   MaxAllocSize / sizeof(dsa_pointer_atomic));
+			dbuckets = Min(dbuckets, max_buckets);
 			new_nbuckets = (int) dbuckets;
 			new_nbuckets = Max(new_nbuckets, 1024);
 			new_nbuckets = pg_nextpower2_32(new_nbuckets);
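
For illustration, a minimal standalone C sketch of the arithmetic behind the
fix (not part of the patch): it assumes an 8-byte dsa_pointer_atomic and uses
toy next_pow2()/prev_pow2() helpers in place of
pg_nextpower2_32()/pg_prevpower2_32(), with MAX_ALLOC standing in for
MaxAllocSize (0x3fffffff bytes).  Under these assumptions the old clamp allows
134217727 buckets, which the later round-up turns into 134217728, i.e. a 1 GB
bucket array that exceeds MaxAllocSize; clamping to the previous power of 2
first makes the final round-up a no-op.

#include <stdint.h>
#include <stdio.h>

/*
 * Stand-ins: MAX_ALLOC for MaxAllocSize, PTR_SIZE for an assumed 8-byte
 * sizeof(dsa_pointer_atomic), next_pow2()/prev_pow2() for
 * pg_nextpower2_32()/pg_prevpower2_32().
 */
#define MAX_ALLOC ((size_t) 0x3fffffff)
#define PTR_SIZE ((size_t) 8)

static uint32_t
next_pow2(uint32_t v)
{
	uint32_t	p = 1;

	while (p < v)
		p <<= 1;
	return p;
}

static uint32_t
prev_pow2(uint32_t v)
{
	uint32_t	p = 1;

	while ((p << 1) <= v)
		p <<= 1;
	return p;
}

int
main(void)
{
	uint32_t	raw_cap = (uint32_t) (MAX_ALLOC / PTR_SIZE);	/* 134217727 */

	/* Old clamp: the final round-up overshoots the allocation limit. */
	uint32_t	old_buckets = next_pow2(raw_cap);	/* 134217728 */

	/* New clamp: round the cap down first, so the round-up is a no-op. */
	uint32_t	new_buckets = next_pow2(prev_pow2(raw_cap));	/* 67108864 */

	printf("old: %u buckets -> %zu bytes (limit %zu bytes)\n",
		   (unsigned) old_buckets, (size_t) old_buckets * PTR_SIZE, MAX_ALLOC);
	printf("new: %u buckets -> %zu bytes (limit %zu bytes)\n",
		   (unsigned) new_buckets, (size_t) new_buckets * PTR_SIZE, MAX_ALLOC);
	return 0;
}

Rounding the cap down, rather than dropping the final pg_nextpower2_32() call,
keeps the bucket count a power of 2, which the hash join code relies on for
bucket addressing.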