Mirror of https://git.postgresql.org/git/postgresql.git (synced 2024-12-27 08:39:28 +08:00)

Commit 757c5182f2
1. Integer overflow in internal_size could result in memory corruption during decompression, since a zero-length array would be allocated and then written to. This leads to crashes or corruption when traversing an index that has been populated with sufficiently sparse values. Fix by using int64 for the computations and checking for overflow.

2. Integer overflow in g_int_compress could cause pessimal merge choices, resulting in unnecessarily large ranges (which would in turn trigger issue 1 above). Fix by using int64 again.

3. Even without overflow, array sizes could become large enough to cause unexplained memory allocation errors. Fix by capping the sizes to a safe limit and reporting actual errors pointing at gist__intbig_ops as needed.

4. Large inputs to the compression function always consist of large runs of consecutive integers, and the compression loop was processing these one at a time in an O(N^2) manner with a lot of overhead; the expected runtime of this function could easily exceed six months for a single call as a result. Fix by performing a linear-time first pass, which reduces the worst case to something on the order of seconds (see the sketches after this list).

Backpatch all the way, since this has been wrong forever.

Per bug #15518, reported on IRC by user "dymk"; analysis and patch by me.

Discussion: https://postgr.es/m/15518-799e426c3b4f8358@postgresql.org
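To make the overflow problem in points 1 and 2 concrete, here is a minimal, hypothetical sketch of doing a range-size computation in int64 with an explicit cap, so a wide range can never wrap a 32-bit counter to zero before the allocation. The function name, the limit, and the error handling are illustrative only; they are not the actual _int_gist.c code, which would use ereport() and its own limits.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative cap on total elements; not the real module's limit. */
#define MAX_TOTAL_ELEMS ((int64_t) 1 << 30)

/*
 * Compute how many elements the given ranges expand to, accumulating in
 * int64 so that wide ranges cannot overflow a 32-bit counter.
 */
static int64_t
safe_decompressed_size(const int32_t *lower, const int32_t *upper, int nranges)
{
	int64_t		total = 0;

	for (int i = 0; i < nranges; i++)
	{
		/* each range [lower, upper] expands to upper - lower + 1 values */
		int64_t		span = (int64_t) upper[i] - (int64_t) lower[i] + 1;

		total += span;
		if (total > MAX_TOTAL_ELEMS)
		{
			fprintf(stderr, "compressed array is too wide to decompress\n");
			exit(EXIT_FAILURE);	/* a real extension would ereport(ERROR, ...) */
		}
	}
	return total;				/* now safe to use as an allocation size */
}

int
main(void)
{
	int32_t		lower[] = {0, INT32_MIN};
	int32_t		upper[] = {10, INT32_MAX};

	printf("%lld\n", (long long) safe_decompressed_size(lower, upper, 1));
	/* the second range spans 2^32 values and trips the cap */
	(void) safe_decompressed_size(lower, upper, 2);
	return 0;
}
```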
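Point 4 is about collapsing runs of consecutive integers in a single pass rather than extending the range list one value at a time. The following is a hedged, self-contained illustration of such a linear-time first pass over a sorted input, with made-up names; it is not the g_int_compress internals.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct
{
	int32_t		start;
	int32_t		end;
} Range;

/*
 * Collapse a sorted array of ints into maximal runs of consecutive values,
 * emitting one [start, end] range per run. Runs in O(n) total, whereas
 * growing the range list one element at a time is quadratic overall.
 */
static int
collapse_runs(const int32_t *vals, int n, Range *out)
{
	int			nranges = 0;

	for (int i = 0; i < n;)
	{
		int			j = i;

		/* extend the run while the next value is exactly one greater */
		while (j + 1 < n && vals[j + 1] == vals[j] + 1)
			j++;

		out[nranges].start = vals[i];
		out[nranges].end = vals[j];
		nranges++;
		i = j + 1;
	}
	return nranges;
}

int
main(void)
{
	int32_t		vals[] = {1, 2, 3, 4, 10, 11, 12, 100};
	Range		out[8];
	int			n = collapse_runs(vals, 8, out);

	for (int i = 0; i < n; i++)
		printf("[%d, %d]\n", out[i].start, out[i].end);	/* [1,4] [10,12] [100,100] */
	return 0;
}
```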
contrib/intarray:

bench
data
expected
sql
_int_bool.c
_int_gin.c
_int_gist.c
_int_op.c
_int_selfuncs.c
_int_tool.c
_int.h
_intbig_gist.c
.gitignore
intarray--1.0--1.1.sql
intarray--1.1--1.2.sql
intarray--1.2.sql
intarray--unpackaged--1.0.sql
intarray.control
Makefile