In PLy_output(), when the elog() call in the TRY branch throws an exception
(this can happen when a statement timeout kicks in, for example), the
PyErr_SetString() call in the CATCH branch can cause a segfault: the
Py_XDECREF(so) call before it releases memory that is still in use, because
the sv string that PyErr_SetString() takes as an argument points into memory
owned by so.
Backpatched back to 8.0, where this code was introduced.
I also threw in a couple of volatile declarations for variables that are used
before and after the TRY. I don't think they caused the crash that I
observed, but they could become issues.
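For illustration, the corrected ordering looks roughly like this (a simplified sketch, not the committed code, with error checks omitted; PLy_exc_error is PL/Python's error exception object, and the volatile markings correspond to the declarations mentioned above):

    static void
    output_sketch(int level, PyObject *args)
    {
        PyObject   *volatile so = PyObject_Str(args);       /* new reference */
        char       *volatile sv = PyString_AsString(so);    /* borrows so's buffer */
        MemoryContext oldcontext = CurrentMemoryContext;

        PG_TRY();
        {
            elog(level, "%s", sv);      /* may longjmp, e.g. on statement timeout */
        }
        PG_CATCH();
        {
            MemoryContextSwitchTo(oldcontext);
            FlushErrorState();

            /*
             * Report the error to Python while sv is still valid, *then*
             * release so; the old order (Py_XDECREF first) left sv pointing
             * at freed memory.
             */
            PyErr_SetString(PLy_exc_error, sv);
            Py_XDECREF(so);
            return;
        }
        PG_END_TRY();

        Py_XDECREF(so);
    }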
The original coding ensured nbuckets and nbatch didn't exceed INT_MAX,
which, while not insane on its own terms, did nothing to protect subsequent
code like "palloc(nbatch * sizeof(BufFile *))". Since enormous join size
estimates might well be planner error rather than reality, it seems best
to constrain the initial sizes to be not more than work_mem/sizeof(pointer),
thus ensuring the allocated arrays don't exceed work_mem. We will allow
nbatch to get bigger than that during subsequent ExecHashIncreaseNumBatches
calls, but we should still guard against integer overflow in those palloc
requests. Per bug #5145 from Bernt Marius Johnsen.
Although the given test case only seems to fail back to 8.2, previous
releases have variants of this issue, so patch all supported branches.
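To illustrate the intent, using nbatch as the example (a sketch with made-up function names, not the committed ExecChooseHashTableSize/ExecHashIncreaseNumBatches changes):

    #include "postgres.h"
    #include "miscadmin.h"          /* work_mem */
    #include "storage/buffile.h"    /* BufFile */
    #include "utils/memutils.h"     /* MaxAllocSize */

    static int
    clamp_nbatch(double dbatch)
    {
        /*
         * Don't let the initial estimate imply pointer arrays larger than
         * work_mem (in kilobytes); huge estimates are quite possibly planner
         * error rather than reality.
         */
        double      max_pointers = work_mem * 1024.0 / sizeof(void *);
        int         nbatch;

        dbatch = Min(dbatch, max_pointers);
        nbatch = (int) Min(dbatch, (double) INT_MAX);
        return Max(nbatch, 1);
    }

    /* Later growth must still guard the repalloc request against overflow. */
    static BufFile **
    grow_batch_files(BufFile **files, int nbatch_new)
    {
        if ((Size) nbatch_new > MaxAllocSize / sizeof(BufFile *))
            ereport(ERROR,
                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                     errmsg("too many hash-join batches")));
        return (BufFile **) repalloc(files, nbatch_new * sizeof(BufFile *));
    }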
that it's called within an AfterTriggerBeginQuery/AfterTriggerEndQuery pair.
The RI cascade triggers suppress that overhead on the assumption that they
are always run non-deferred, so it's possible to violate the condition if
someone mistakenly changes pg_trigger to mark such a trigger deferred.
We don't really care about supporting that, but throwing an error instead
of crashing seems desirable. Per report from Marcelo Costa.
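Concretely, the defensive check amounts to something like the following sketch (the state variable and message text are illustrative; the exact names in trigger.c may differ):

    /* error out, rather than crash, if fired outside a query */
    if (afterTriggers == NULL || afterTriggers->query_depth < 0)
        elog(ERROR,
             "AfterTriggerSaveEvent() called outside of query");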
pam_message array contains exactly one PAM_PROMPT_ECHO_OFF message.
Instead, deal with however many messages there are, and don't throw error
for PAM_ERROR_MSG and PAM_TEXT_INFO messages. This logic is borrowed from
openssh 5.2p1, which hopefully has seen more real-world PAM usage than we
have. Per bug #5121 from Ryan Douglas, which turned out to be caused by
the conv_proc being called with zero messages. Apparently that is normal
behavior given the combination of Linux pam_krb5 with MS Active Directory
as the domain controller.
Patch all the way back, since this code has been essentially untouched
since 7.4. (Surprising we've not heard complaints before.)
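In outline, the conversation procedure now looks like this (a simplified sketch modeled on the openssh-style logic, not the exact committed code):

    #include <stdlib.h>
    #include <string.h>
    #include <security/pam_appl.h>

    static int
    passwd_conv_proc(int num_msg, const struct pam_message **msg,
                     struct pam_response **resp, void *appdata_ptr)
    {
        const char *passwd = (const char *) appdata_ptr;
        struct pam_response *reply;
        int         i;

        *resp = NULL;               /* in case of early exit */
        if (num_msg <= 0)
            return PAM_CONV_ERR;    /* nothing we can sensibly answer */

        /* PAM frees this via pam_end(), so use plain calloc, not palloc */
        reply = calloc(num_msg, sizeof(struct pam_response));
        if (reply == NULL)
            return PAM_CONV_ERR;

        for (i = 0; i < num_msg; i++)
        {
            switch (msg[i]->msg_style)
            {
                case PAM_PROMPT_ECHO_OFF:
                    reply[i].resp = strdup(passwd);     /* answer with the password */
                    reply[i].resp_retcode = PAM_SUCCESS;
                    break;
                case PAM_ERROR_MSG:
                case PAM_TEXT_INFO:
                    /* informational only: acknowledge, don't fail the exchange */
                    reply[i].resp = strdup("");
                    reply[i].resp_retcode = PAM_SUCCESS;
                    break;
                default:
                    while (--i >= 0)
                        free(reply[i].resp);
                    free(reply);
                    return PAM_CONV_ERR;
            }
        }

        *resp = reply;
        return PAM_SUCCESS;
    }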
in CREATE OR REPLACE FUNCTION. The original code would update pg_shdepend
as if a new function was being created, even if it wasn't, with two bad
consequences: pg_shdepend might record the wrong owner for the function,
and any dependencies for roles mentioned in the function's ACL would be lost.
The fix is very easy: just don't touch pg_shdepend at all when doing a
function replacement.
Also update the CREATE FUNCTION reference page, which never explained
exactly what does and doesn't change in a function replacement.
In passing, fix the CREATE VIEW reference page similarly; there's no
code bug there, but the docs didn't say what happens.
possibility of shared-inval messages causing a relcache flush while it tries
to fill in missing data in preloaded relcache entries. There are actually
two distinct failure modes here:
1. The flush could delete the next-to-be-processed cache entry, causing
the subsequent hash_seq_search calls to go off into the weeds. This is
the problem reported by Michael Brown, and I believe it also accounts
for bug #5074. The simplest fix is to restart the hashtable scan after
we've read any new data from the catalogs. It appears that pre-8.4
branches have not suffered from this failure, because by chance there were
no other catalogs sharing the same hash chains with the catalogs that
RelationCacheInitializePhase2 had work to do for. However that's obviously
pretty fragile, and it seems possible that derivative versions with
additional system catalogs might be vulnerable, so I'm back-patching this
part of the fix anyway.
2. The flush could delete the *current* cache entry, in which case the
pointer to the newly-loaded data would end up being stored into an
already-deleted Relation struct. As long as that memory hadn't been reused, the only
consequence would be some leaked space in CacheMemoryContext. But it seems
possible that the Relation struct could already have been recycled, in
which case this represents a hard-to-reproduce clobber of cached data
structures, with unforeseeable consequences. The fix here is to pin the
entry while we work on it.
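In sketch form, the combined fix looks about like this (needs_fixup() and load_missing_data() are stand-ins for the real per-entry work):

    HASH_SEQ_STATUS status;
    RelIdCacheEnt *idhentry;

    hash_seq_init(&status, RelationIdCache);

    while ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)
    {
        Relation    relation = idhentry->reldesc;
        bool        restart = false;

        /* fix #2: pin the entry so an inval can't delete it under us */
        RelationIncrementReferenceCount(relation);

        if (needs_fixup(relation))          /* hypothetical predicate */
        {
            load_missing_data(relation);    /* reads catalogs; may fire invals */
            restart = true;
        }

        RelationDecrementReferenceCount(relation);

        /* fix #1: catalog reads may have flushed other entries; rescan */
        if (restart)
        {
            hash_seq_term(&status);
            hash_seq_init(&status, RelationIdCache);
        }
    }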
In passing, also change RelationCacheInitializePhase2 to Assert that
formrdesc() set up the relation's cached TupleDesc (rd_att) with the
correct type OID and hasoids values. This is more appropriate than
silently updating the values, because the original tupdesc might already
have been copied into the catcache. However this part of the patch is
not in HEAD because it fails due to some questionable recent changes in
formrdesc :-(. That will be cleaned up in a subsequent patch.
of checkpoint. The checkpoint has already been written to WAL at that point,
so all data is safe, and we'll retry removing the WAL segment at the next
checkpoint; but if such a failure persists, we won't be able to remove any
other old WAL segments either, and will eventually run out of disk space.
It's better to treat the failure as non-fatal, move on to cleaning up any
other old WAL segments, and continue with the rest of the end-of-checkpoint
cleanup.
We don't normally expect any such failures, but on Windows they can happen
with anti-virus or backup software that locks files without the
FILE_SHARE_DELETE flag.
Also, the loop in pgrename() to retry when the file is locked was broken. If a
file is locked on Windows, you get ERROR_SHARE_VIOLATION, not
ERROR_ACCESS_DENIED, at least on modern versions. Fix that, although I left
the check for ERROR_ACCESS_DENIED in there as well (presumably it was correct
in some environment), and added ERROR_LOCK_VIOLATION to be consistent with
similar checks in pgwin32_open(). Reduce the timeout on the loop from 30s to
10s, on the grounds that since it's been broken, we've effectively had a
timeout of 0s and no-one has complained, so a smaller timeout is actually
closer to the old behavior. A longer timeout would mean that if recycling a
WAL file fails because it's locked for some reason, InstallXLogFileSegment()
will hold ControlFileLock for longer, potentially blocking other backends, so
a long timeout isn't totally harmless.
While we're at it, set errno correctly in pgrename().
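The retry loop now looks roughly like this (a simplified sketch of the Windows branch of pgrename(), not a drop-in replacement):

    #include <windows.h>

    int
    pgrename(const char *from, const char *to)
    {
        int         loops = 0;

        while (!MoveFileEx(from, to, MOVEFILE_REPLACE_EXISTING))
        {
            DWORD       err = GetLastError();

            _dosmaperr(err);    /* make sure errno reflects the failure */

            /* retry only for sharing/locking conflicts */
            if (err != ERROR_SHARE_VIOLATION &&
                err != ERROR_ACCESS_DENIED &&
                err != ERROR_LOCK_VIOLATION)
                return -1;

            if (++loops > 100)  /* time out after 100 * 100ms = 10 sec */
                return -1;
            pg_usleep(100000);  /* wait 100ms, then retry */
        }
        return 0;
    }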
Backpatch to 8.2, which is the oldest version supported on Windows. The xlog.c
changes would make sense on other platforms and thus on older versions as
well, but since there are no such locking issues on other platforms, it's not
worth it.
file handle on it, the file goes into "pending deletion" state where it
still shows up in directory listings but isn't otherwise accessible. That
confuses RemoveOldXLogFiles(), making it think that the file hasn't been
archived yet, when in fact it already was, and was deleted along with its
.done file.
Fix that by renaming the file with ".deleted" extension before deleting it.
Also check the return value of rename() and unlink(), so that if the removal
fails for any reason (e.g. another process is holding the file locked), we
don't delete the .done file until the WAL file is really gone.
Backpatch to 8.2, which is the oldest version supported on Windows.
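In sketch form, the new removal sequence is (the helper name is made up; the real logic lives inside RemoveOldXLogFiles()):

    static void
    remove_wal_segment(const char *segname)
    {
        char        path[MAXPGPATH];
        char        newpath[MAXPGPATH];

        snprintf(path, sizeof(path), XLOGDIR "/%s", segname);
        snprintf(newpath, sizeof(newpath), "%s.deleted", path);

        /*
         * Rename first: a file in Windows' "pending deletion" state keeps
         * its old name visible, and a later RemoveOldXLogFiles() pass would
         * take it for an unarchived segment.
         */
        if (rename(path, newpath) != 0)
        {
            ereport(LOG,
                    (errcode_for_file_access(),
                     errmsg("could not rename old transaction log file \"%s\": %m",
                            path)));
            return;             /* keep the .done file; retry next time */
        }
        if (unlink(newpath) != 0)
        {
            ereport(LOG,
                    (errcode_for_file_access(),
                     errmsg("could not remove old transaction log file \"%s\": %m",
                            newpath)));
            return;
        }

        /* only now is it safe to clean up the archive status (.done) file */
        XLogArchiveCleanup(segname);
    }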
It seems the flex developers have decided to change yyleng from int to size_t.
This has already happened in the latest release of OS X, and will start
happening elsewhere once the next release of flex appears. Rather than trying
to divine how it's declared in any particular build, let's just remove the one
existing not-very-necessary external usage.
Back-patch to all supported branches; not so much because users in the field
are likely to care about building old branches with cutting-edge flex, as
to keep OSX-based buildfarm members from having problems with old branches.
to the Default timezone abbreviation set.
Back-port the current file set to all branches that contain tznames.
This includes adding SGT to the Default set in pre-8.4 releases.
Joachim Wieland
to unload and re-load the library.
The difficulty with unloading a library is that we haven't defined safe
protocols for doing so. In particular, there's no safe mechanism for
removing a function from a "hook" function pointer unless libraries are unloaded
in reverse order of loading. And there's no mechanism at all for undefining
a custom GUC variable, so GUC would be left with a pointer to an old value
that might or might not still be valid, and very possibly wouldn't be in
the same place anymore.
While the unload and reload behavior had some usefulness in easing
development of new loadable libraries, it's of no use whatever to normal
users, so just disabling it isn't giving up that much. Someday we might
care to expend the effort to develop safe unload protocols; but even if
we did, there'd be little certainty that every third-party loadable module
was following them, so some security restrictions would still be needed.
Back-patch to 8.2; before that, LOAD was superuser-only anyway.
Security: unprivileged users could crash the backend. CVE not assigned yet
functions.
This extends the previous patch that forbade SETting these variables inside
security-definer functions. RESET is equally a security hole, since it
would allow regaining privileges of the caller; furthermore it can trigger
Assert failures and perhaps other internal errors, since the code is not
expecting these variables to change in such contexts. The previous patch
did not cover this case because assign hooks don't really have enough
information, so move the responsibility for preventing this into guc.c.
Problem discovered by Heikki Linnakangas.
Security: no CVE assigned yet, extends CVE-2007-6600
(could happen if either postgresql.conf or postmaster.opts is empty).
It's been broken since the C version was written for 8.0, so patch
all the way back.
initdb's copy of the function is broken in the same way, but it's
less important there since the input files should never be empty.
Patch that in HEAD only, and also fix some cosmetic differences that
crept into that copy of the function.
Per report from Corry Haines and Jeff Davis.
#include the version of history.h that is in the same directory as the
readline.h we are using. This avoids problems in some scenarios where both
readline and editline are installed. Report and patch by Zdenek Kotala.
when we reach the post-COPY "pump it dry" error recovery code that was added
2006-11-24. Per a report from Neil Best, there is at least one code path
in which this occurs, leading to an infinite loop in code that's supposed
to be making it more robust, not less so. A reasonable response seems to be
to call PQputCopyEnd() again, so let's try that.
Back-patch to all versions that contain the cleanup loop.
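The pattern, roughly (a sketch of the libpq-level idea, not the exact patched code):

    #include <libpq-fe.h>

    /*
     * Drain any remaining results; if the server still thinks we're in
     * COPY IN, end the copy (again) rather than spinning forever.
     */
    static void
    drain_results(PGconn *conn)
    {
        PGresult   *res;

        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_COPY_IN)
                PQputCopyEnd(conn, "aborted because of earlier error");
            PQclear(res);
        }
    }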
we should ignore NULL array entries, not non-NULL ones. This had the
effect of disabling commit_delay, and could have caused a crash in the
rare race condition the patch was intended to fix.
Bug report and diagnosis by Jeff Janes, in bug #4952.
The original coding was not dealing specially with this file being a symlink,
with the end result that it was not installed in VPATH builds. Oddly enough,
the clean target does know about it ...
last pair of parameter name/value strings, even when there are MAXPARAMS
of them. Aboriginal bug in contrib/xml2, noted while studying bug #4912
(though I'm not sure whether there's something else involved in that
report).
This might be thought a security issue, since it's a potential backend
crash; but considering that untrustworthy users shouldn't be allowed
to get their hands on xslt_process() anyway, it's probably not worth
getting excited about.
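The essential point, as a simplified sketch (the real parameter parsing in xslt_proc.c is different; what matters is the extra array slot and the unconditional terminator):

    #include <string.h>

    #define MAXPARAMS 20            /* pairs of name/value strings */

    /*
     * The array needs one extra slot so the terminating NULL fits even
     * when all MAXPARAMS pairs are supplied.
     */
    static void
    parse_params(const char *params[MAXPARAMS * 2 + 1], char *paramstr)
    {
        int         nstrings = 0;
        char       *tok = strtok(paramstr, ",=");

        while (tok != NULL && nstrings < MAXPARAMS * 2)
        {
            params[nstrings++] = tok;
            tok = strtok(NULL, ",=");
        }
        params[nstrings] = NULL;    /* always room for this, even when full */
    }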
a number of other geometric operators also depend on. It miscalculated the
slope of the perpendicular to the given line segment anytime that slope was
other than 0, infinite, or +/-1. In some cases the error would be masked
because the true closest point on the line segment was one of its endpoints
rather than the intersection point, but in other cases it could give an
arbitrarily bad answer. Per bug #4872 from Nick Roosevelt.
Bug goes clear back to Berkeley days, so patch all supported branches.
Make a couple of cosmetic adjustments while at it.
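For reference, the correct answer can be computed with the standard vector-projection formulation below; this is a self-contained sketch useful for checking results, not the slope-based code in geo_ops.c (which works with the perpendicular slope -1/m):

    typedef struct { double x, y; } Pt;

    /* closest point on segment AB to point P */
    static Pt
    closest_point_on_seg(Pt p, Pt a, Pt b)
    {
        double      dx = b.x - a.x;
        double      dy = b.y - a.y;
        double      len2 = dx * dx + dy * dy;
        double      t;
        Pt          r;

        if (len2 == 0.0)
            return a;           /* degenerate segment */

        /* parameter of the projection of P onto the infinite line AB */
        t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;

        /* clamp to the segment: outside [0,1] the nearest point is an endpoint */
        if (t < 0.0)
            t = 0.0;
        else if (t > 1.0)
            t = 1.0;

        r.x = a.x + t * dx;
        r.y = a.y + t * dy;
        return r;
    }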
eg Japan. Report and fix by Itagaki Takahiro. Also fix CASHDEBUG printout
format for branches with 64-bit money type, and some minor comment cleanup.
Back-patch to 7.4, because it's broken all the way back.
should use GinItemPointerGetBlockNumber/GinItemPointerGetOffsetNumber,
not ItemPointerGetBlockNumber/ItemPointerGetOffsetNumber, because the latter
will Assert() on ip_posid == 0, ie a "Min" pointer. (Thus, ItemPointerIsMin
has never worked at all, but it seems unused at present.) I'm not certain
that the case can occur in normal functioning, but it's blowing up on me
while investigating Tatsuo-san's data corruption problem. In any case it
seems like a problem waiting to bite someone.
Back-patch just in case this really is a problem for somebody in the field.
there are no analyzable attributes or indexes. We also used to report 0 live
and dead tuples for such tables, which messed with autovacuum threshold
calculations.
This fixes bug #4812 reported by George Su. Backpatch back to 8.1.
of AND/OR clause branches that predtest.c would attempt to deal with. As
noted in bug #4721, that change disabled proof attempts for sizes of problems
that people are actually expecting it to work for. The original complaint
it was trying to solve was O(N^2) behavior for long IN-lists, so let's try
applying the limit to just ScalarArrayOpExprs rather than everything.
Another case of "foolish consistency" I fear.
Back-patch to 8.2, same as the previous patch was.
you can end up with an unrecoverable backup if you start a new base backup
right after finishing archive recovery. In that scenario, the redo pointer of
the checkpoint that pg_start_backup() writes points to the XLOG segment where
the timeline-changing end-of-archive-recovery checkpoint is. The beginning
of that segment contains pages with the old timeline ID, and we don't accept
that in recovery unless we find a history file covering the old timeline ID.
If you omit pg_xlog from the base backup and clear the archive directory
before starting the backup, there will be no such history file available.
The bug is present in all versions since PITR was introduced in 8.0, but I'm
back-patching only back to 8.2. Earlier versions didn't have XLOG switch
records, making this fix unfeasible. Given the lack of reports until now,
it doesn't seem worthwhile to spend more effort to fix 8.0 and 8.1.
Per report and suggestion by Mikael Krantz
to make sure that the error code is reset, as a precaution in
case the API doesn't properly reset it on success. This could
be necessary, since in some success cases we check the error value even
though the function didn't fail.
as per my recent proposal. release.sgml itself is now just a stub that should
change rarely; ideally, only once per major release to add a new include line.
Most editing work will occur in the release-N.N.sgml files. To update a back
branch for a minor release, just copy the appropriate release-N.N.sgml
file(s) into the back branch.
This commit doesn't change the end-product documentation at all, only the
source layout. However, it makes it easy to start omitting ancient information
from newer branches' documentation, should we ever decide to do that.