Make the overflow guards in ExecChooseHashTableSize be more protective.

The original coding ensured nbuckets and nbatch didn't exceed INT_MAX,
which while not insane on its own terms did nothing to protect subsequent
code like "palloc(nbatch * sizeof(BufFile *))".  Since enormous join size
estimates might well be planner error rather than reality, it seems best
to constrain the initial sizes to be not more than work_mem/sizeof(pointer),
thus ensuring the allocated arrays don't exceed work_mem.  We will allow
nbatch to get bigger than that during subsequent ExecHashIncreaseNumBatches
calls, but we should still guard against integer overflow in those palloc
requests.  Per bug #5145 from Bernt Marius Johnsen.

Although the given test case only seems to fail back to 8.2, previous
releases have variants of this issue, so patch all supported branches.
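
To make the rationale concrete, here is a minimal standalone sketch of the clamping pattern the patch applies. The function name and the byte-budget parameter are invented for illustration; newer branches clamp the initial sizes against work_mem, while this 7.4 backpatch clamps against palloc's MaxAllocSize limit:

	#include <limits.h>
	#include <stddef.h>

	/*
	 * Illustrative sketch, not the patch's exact code: clamp a
	 * planner-derived count so that an array of that many pointers
	 * fits within max_bytes (work_mem for the initial sizing on
	 * newer branches, MaxAllocSize in this 7.4 backpatch).
	 */
	static int
	clamp_ptr_array_count(double estimate, size_t max_bytes)
	{
		size_t		limit = max_bytes / sizeof(void *);

		if (limit < 1)
			limit = 1;			/* never clamp below one entry */
		if (limit > INT_MAX)
			limit = INT_MAX;	/* keep the result representable as int */
		if (estimate < 1.0)
			return 1;			/* match the patch's "if (n <= 0) n = 1" floor */
		if (estimate >= (double) limit)
			return (int) limit;
		return (int) estimate;
	}

The point of clamping rather than failing is that an enormous estimate is more likely a planner error than a real requirement, so capping the allocation is safer than trusting it.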
Author: Tom Lane
Date:   2009-10-30 20:59:23 +00:00
Commit: 7c0e8048c5 (parent: 4ce82365d0)


@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $Header: /cvsroot/pgsql/src/backend/executor/nodeHash.c,v 1.79 2003/08/04 02:39:59 momjian Exp $
+ *	  $Header: /cvsroot/pgsql/src/backend/executor/nodeHash.c,v 1.79.4.1 2009/10/30 20:59:23 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -377,6 +377,8 @@ ExecChooseHashTableSize(double ntuples, int tupwidth,
 	nbuckets = (int) (hash_table_bytes / (bucketsize * FUDGE_FAC));
 	if (nbuckets <= 0)
 		nbuckets = 1;
+	/* Ensure we can allocate an array of nbuckets pointers */
+	nbuckets = Min(nbuckets, MaxAllocSize / sizeof(void *));
 
 	if (totalbuckets <= nbuckets)
 	{
@@ -401,10 +403,10 @@ ExecChooseHashTableSize(double ntuples, int tupwidth,
 		 */
 		dtmp = ceil((inner_rel_bytes - hash_table_bytes) /
 					hash_table_bytes);
-		if (dtmp < INT_MAX)
+		if (dtmp < MaxAllocSize / sizeof(void *))
 			nbatch = (int) dtmp;
 		else
-			nbatch = INT_MAX;
+			nbatch = MaxAllocSize / sizeof(void *);
 		if (nbatch <= 0)
 			nbatch = 1;
 	}
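
The hunks above cover only the initial sizing in ExecChooseHashTableSize; the commit message also calls for guarding the later palloc requests when ExecHashIncreaseNumBatches grows nbatch past the cap. A hedged sketch of what such a guard can look like, with a hypothetical helper and variable names that are not the committed code:

	#include "postgres.h"			/* Size, elog(), repalloc() */
	#include "storage/buffile.h"	/* BufFile */
	#include "utils/memutils.h"		/* MaxAllocSize */

	/*
	 * Hypothetical helper, not the committed code: grow the array of
	 * per-batch temp-file pointers, refusing any request that could
	 * overflow the size computation or exceed palloc's MaxAllocSize.
	 */
	static BufFile **
	grow_batch_files(BufFile **files, int newnbatch)
	{
		if (newnbatch <= 0 ||
			(Size) newnbatch > MaxAllocSize / sizeof(BufFile *))
			elog(ERROR, "invalid hash batch count: %d", newnbatch);
		return (BufFile **) repalloc(files, newnbatch * sizeof(BufFile *));
	}

Checking the count against MaxAllocSize / sizeof(BufFile *) before multiplying is what keeps the size computation itself from overflowing, which is exactly the hazard the INT_MAX-only guard failed to address.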