unless (1) the @ isn't quoted and (2) the filename isn't empty. This guards
against unexpectedly treating usernames or other strings in "flat files"
as inclusion requests, as seen in a recent trouble report from Ed L.
The empty-filename case would be guaranteed to misbehave anyway, because our
subsequent path-munging behavior results in trying to read the directory
containing the current input file.
I think this might finally explain the report at
http://archives.postgresql.org/pgsql-bugs/2004-05/msg00132.php
of a crash after printing "authentication file token too long, skipping",
since I was able to duplicate that message (though not a crash) on a
platform where stdio doesn't refuse to read directories. We never got
far in investigating that problem, but now I'm suspicious that the trigger
condition was an @ in the flat password file.
Back-patch to all active branches since the problem can be demonstrated in all
branches except HEAD. The test case, creating a user named "@", doesn't cause
a problem in HEAD since we got rid of the flat password file. Nonetheless it
seems like a good idea to not consider quoted @ as a file inclusion spec,
so I changed HEAD too.
set ferror() but never set feof(). This is known to be the case for recent
glibc when trying to read a directory as a file, and might be true for other
platforms/cases too. Per report from Ed L. (There is more that we ought to
do about his report, but this is one easily identifiable issue.)
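The portable pattern, sketched with plain stdio, is to let getc() report
EOF and then ask ferror() explicitly, instead of waiting for feof() to
become true:

    int         c;

    while ((c = getc(fp)) != EOF)
    {
        /* consume c normally */
    }
    if (ferror(fp))
    {
        /* A read error (e.g. EISDIR) landed us here; report it.
         * On the platforms in question feof(fp) never becomes true. */
    }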
exceed the total number of non-dropped source table fields for
dblink_build_sql_*(). Addresses bug report from Rushabh Lathia.
Backpatch all the way to the 7.3 branch.
matching before recursing instead of after. The DFA match eliminates
unworkable midpoint choices a lot faster than the recursive check, in most
cases, so doing it first can speed things up, particularly in pathological
cases such as one recently exhibited by Michael Glaesemann.
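In outline, the change moves the cheap DFA feasibility test ahead of the
expensive recursive dissection at each candidate midpoint; the names below
are hypothetical stand-ins for the regexec.c internals:

    for (mid = first_mid(begin, end); mid != NULL; mid = next_mid(mid))
    {
        /* cheap check first: can the DFAs match on both sides at all? */
        if (!dfa_matches(left_dfa, begin, mid) ||
            !dfa_matches(right_dfa, mid, end))
            continue;
        /* only then pay for the recursive check of the subexpressions */
        if (dissect(left_subre, begin, mid) == REG_OKAY &&
            dissect(right_subre, mid, end) == REG_OKAY)
            return REG_OKAY;
    }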
In addition, apply some cosmetic changes that were applied upstream (in the
Tcl project) at the same time, in order to sync with upstream version 1.15
of regexec.c.
Upstream apparently intends to backpatch this, so I will too. The
pathological behavior could be unpleasant if encountered in the field,
which seems to justify any risk of introducing new bugs.
Tom Lane, reviewed by Donal K. Fellows of Tcl project
You might think this is unnecessary since that interpreter is never used
to run code --- but it turns out that's wrong. As of Tcl 8.5, the "clock"
command (alone among builtin Tcl commands) is partially implemented by
loaded-on-demand Tcl code, which means that it fails if there's not
unknown-command support, and also that it's impossible to run it directly
in a safe interpreter. The way they get around the latter is that
Tcl_CreateSlave() automatically sets up an alias command that forwards any
execution of "clock" in a safe slave interpreter to its parent interpreter.
Thus, when attempting to execute "clock" in trusted pltcl, the command
actually executes in the "hold" interpreter, where it will fail if
unknown-command support hasn't been introduced by sourcing the standard
init.tcl script, which is done by Tcl_Init(). (This is a pretty dubious
design decision on the Tcl boys' part, if you ask me ... but they didn't.)
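The fix itself is small; roughly this, assuming the interpreter-creation
path in pltcl.c:

    /* Source init.tcl into the master interpreter, so that autoloaded
     * commands such as "clock" can be forwarded to it from safe slaves. */
    hold_interp = Tcl_CreateInterp();
    if (Tcl_Init(hold_interp) == TCL_ERROR)
        elog(ERROR, "could not initialize master Tcl interpreter");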
Back-patch all the way. It's not clear that anyone would try to use ancient
versions of pltcl with a recent Tcl, but it's not clear they wouldn't, either.
Also add a regression test using "clock", in branches that have regression
test support for pltcl.
Per recent trouble report from Kyle Bateman.
of the string". The previous coding treated only -1 that way, and would
produce an invalid result value for other negative values.
We ought to fix it so that 2-parameter bit substring() is a different C
function and the 3-parameter form throws error for negative length, but
that takes a pg_proc change which is impractical in the back branches;
and in any case somebody might be relying on -1 working this way.
So just do this as a back-patchable fix.
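The back-patchable fix amounts to this sketch (variable names are
illustrative, not the actual bitsubstr() code):

    /* Any negative length now means "through the end of the string",
     * generalizing the documented behavior of -1. */
    if (length < 0)
        length = bitlen - start + 1;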
an allegedly immutable index function. It was previously recognized that
we had to prevent such a function from executing SET/RESET ROLE/SESSION
AUTHORIZATION, or it could trivially obtain the privileges of the session
user. However, since there is in general no privilege checking for changes
of session-local state, it is also possible for such a function to change
settings in a way that might subvert later operations in the same session.
Examples include changing search_path to cause an unexpected function to
be called, or replacing an existing prepared statement with another one
that will execute a function of the attacker's choosing.
The present patch secures VACUUM, ANALYZE, and CREATE INDEX/REINDEX against
these threats, which are the same places previously deemed to need protection
against the SET ROLE issue. GUC changes are still allowed, since there are
many useful cases for that, but we prevent security problems by forcing a
rollback of any GUC change after completing the operation. Other cases are
handled by throwing an error if any change is attempted; these include temp
table creation, closing a cursor, and creating or deleting a prepared
statement. (In 7.4, the infrastructure to roll back GUC changes doesn't
exist, so we settle for rejecting changes of "search_path" in these contexts.)
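The GUC-rollback part relies on the guc.c nesting machinery (in branches
that have it); a sketch of the pattern as applied around the vulnerable
operations:

    /* establish a GUC nest level before running user-defined code */
    int         save_nestlevel = NewGUCNestLevel();

    /* ... run the allegedly immutable index function ... */

    /* discard any GUC changes it made, as if they were rolled back */
    AtEOXact_GUC(false, save_nestlevel);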
Original report and patch by Gurjeet Singh, additional analysis by
Tom Lane.
Security: CVE-2009-4136
attacks where an attacker would put <attack>\0<propername> in the field and
trick the validation code into accepting the certificate as being for <attack>.
This is a very low risk attack, since it requires the attacker to trick the
CA into issuing a certificate with an incorrect field, and common
PostgreSQL deployments use private CAs rather than external ones. Also, the
default mode in 8.4 does not do any name validation and is thus not
vulnerable - but the higher security modes are.
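The detection trick, sketched against the OpenSSL API (variable names are
illustrative):

    char        peer_cn[256];
    int         namelen;

    namelen = X509_NAME_get_text_by_NID(X509_get_subject_name(peer_cert),
                                        NID_commonName,
                                        peer_cn, sizeof(peer_cn));

    /* An embedded NUL makes the C-string length shorter than the
     * certified length, so reject the certificate in that case. */
    if (namelen != (int) strlen(peer_cn))
        /* refuse the connection */ ;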
Backpatch all the way. Even though versions 8.3.x and before didn't have
certificate name validation support, they still exposed this field for
the user to perform the validation in the application code, and there
is no way to detect this problem through that API.
Security: CVE-2009-4034
This avoids a useless connection retry and complaint in the postmaster log
when receiving a connection from 8.5 or later libpq.
Backpatch in all supported branches, but of course *not* HEAD.
In VACUUM FULL, an interrupt after the initial transaction has been recorded
as committed can cause postmaster to restart with the following error message:
PANIC: cannot abort transaction NNNN, it was already committed
This problem has been reported many times.
In lazy VACUUM, an interrupt after the table has been truncated by
lazy_truncate_heap causes other backends' relcache to still point to the
removed pages; this can cause future INSERT and UPDATE queries to error out
with the following error message:
could not read block XX of relation 1663/NNN/MMMM: read only 0 of 8192 bytes
The window for this race condition is extremely narrow, but it has been seen
in the wild, involving a cancelled autovacuum process.
The solution for both problems is to inhibit interrupts in both operations
until after the respective transactions have been committed. It's not a
complete solution, because the transaction could theoretically be aborted by
some other error, but it at least fixes the most common causes of both problems.
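A sketch of the holdoff, using the existing miscadmin.h macros (the exact
placement in the committed fix differs between the two code paths):

    HOLD_INTERRUPTS();

    /* the step that must not be abandoned half-done, e.g. truncating
     * the heap, through the commit of the surrounding transaction */
    lazy_truncate_heap(onerel, vacrelstats);

    RESUME_INTERRUPTS();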
The original coding ensured nbuckets and nbatch didn't exceed INT_MAX,
which, while not insane on its own terms, did nothing to protect subsequent
code like "palloc(nbatch * sizeof(BufFile *))". Since enormous join size
estimates might well be planner error rather than reality, it seems best
to constrain the initial sizes to be not more than work_mem/sizeof(pointer),
thus ensuring the allocated arrays don't exceed work_mem. We will allow
nbatch to get bigger than that during subsequent ExecHashIncreaseNumBatches
calls, but we should still guard against integer overflow in those palloc
requests. Per bug #5145 from Bernt Marius Johnsen.
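The new clamp amounts to the following (close to, but not verbatim, the
code in ExecChooseHashTableSize; work_mem is measured in kB):

    /* never size the initial arrays beyond what work_mem can hold */
    long        max_pointers = (work_mem * 1024L) / sizeof(void *);

    nbuckets = Min(nbuckets, max_pointers);
    nbatch = Min(nbatch, max_pointers);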
Although the given test case only seems to fail back to 8.2, previous
releases have variants of this issue, so patch all supported branches.
pam_message array contains exactly one PAM_PROMPT_ECHO_OFF message.
Instead, deal with however many messages there are, and don't throw error
for PAM_ERROR_MSG and PAM_TEXT_INFO messages. This logic is borrowed from
openssh 5.2p1, which hopefully has seen more real-world PAM usage than we
have. Per bug #5121 from Ryan Douglas, which turned out to be caused by
the conv_proc being called with zero messages. Apparently that is normal
behavior given the combination of Linux pam_krb5 with MS Active Directory
as the domain controller.
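The reworked conversation function follows the openssh shape; this sketch
omits PostgreSQL's error reporting but shows the message-handling logic:

    #include <security/pam_appl.h>
    #include <stdlib.h>
    #include <string.h>

    static int
    pam_passwd_conv_proc(int num_msg, const struct pam_message **msg,
                         struct pam_response **resp, void *appdata_ptr)
    {
        const char *passwd = (const char *) appdata_ptr;
        struct pam_response *reply;
        int         i;

        if (num_msg <= 0 || num_msg > PAM_MAX_NUM_MSG)
            return PAM_CONV_ERR;
        if ((reply = calloc(num_msg, sizeof(struct pam_response))) == NULL)
            return PAM_CONV_ERR;

        for (i = 0; i < num_msg; i++)
        {
            switch (msg[i]->msg_style)
            {
                case PAM_PROMPT_ECHO_OFF:
                    reply[i].resp = strdup(passwd);     /* supply password */
                    break;
                case PAM_ERROR_MSG:
                case PAM_TEXT_INFO:
                    /* informational only: acknowledge, don't fail */
                    break;
                default:
                    free(reply);
                    return PAM_CONV_ERR;
            }
        }
        *resp = reply;
        return PAM_SUCCESS;
    }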
Patch all the way back, since this code has been essentially untouched
since 7.4. (Surprising we've not heard complaints before.)
possibility of shared-inval messages causing a relcache flush while it tries
to fill in missing data in preloaded relcache entries. There are actually
two distinct failure modes here:
1. The flush could delete the next-to-be-processed cache entry, causing
the subsequent hash_seq_search calls to go off into the weeds. This is
the problem reported by Michael Brown, and I believe it also accounts
for bug #5074. The simplest fix is to restart the hashtable scan after
we've read any new data from the catalogs. It appears that pre-8.4
branches have not suffered from this failure, because by chance there were
no other catalogs sharing the same hash chains with the catalogs that
RelationCacheInitializePhase2 had work to do for. However that's obviously
pretty fragile, and it seems possible that derivative versions with
additional system catalogs might be vulnerable, so I'm back-patching this
part of the fix anyway.
2. The flush could delete the *current* cache entry, in which case the
pointer to the newly-loaded data would end up being stored into an
already-deleted Relation struct. As long as that struct's memory hadn't been
reused, the only consequence would be some leaked space in
CacheMemoryContext. But it seems
possible that the Relation struct could already have been recycled, in
which case this represents a hard-to-reproduce clobber of cached data
structures, with unforeseeable consequences. The fix here is to pin the
entry while we work on it.
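Both fixes together look roughly like this; needs_fixup() and fix_entry()
are stand-ins for the real "is this preloaded entry still incomplete?"
test and the catalog-reading repair:

    HASH_SEQ_STATUS status;
    RelIdCacheEnt *idhentry;

    restart:
    hash_seq_init(&status, RelationIdCache);
    while ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)
    {
        Relation    relation = idhentry->reldesc;

        if (!needs_fixup(relation))
            continue;

        /* fix for #2: pin the entry so a flush can't delete it under us */
        RelationIncrementReferenceCount(relation);
        fix_entry(relation);        /* reads catalogs, may fire invals */
        RelationDecrementReferenceCount(relation);

        /* fix for #1: those catalog reads may have deleted the scan's
         * next entry, so abandon this scan and start over */
        hash_seq_term(&status);
        goto restart;
    }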
In passing, also change RelationCacheInitializePhase2 to Assert that
formrdesc() set up the relation's cached TupleDesc (rd_att) with the
correct type OID and hasoids values. This is more appropriate than
silently updating the values, because the original tupdesc might already
have been copied into the catcache. However this part of the patch is
not in HEAD because it fails due to some questionable recent changes in
formrdesc :-(. That will be cleaned up in a subsequent patch.
It seems the flex developers have decided to change yyleng from int to size_t.
This has already happened in the latest release of OS X, and will start
happening elsewhere once the next release of flex appears. Rather than trying
to divine how it's declared in any particular build, let's just remove the one
existing not-very-necessary external usage.
Back-patch to all supported branches; not so much because users in the field
are likely to care about building old branches with cutting-edge flex, as
to keep OS X-based buildfarm members from having problems with old branches.
functions.
This extends the previous patch that forbade SETting these variables inside
security-definer functions. RESET is equally a security hole, since it
would allow regaining privileges of the caller; furthermore it can trigger
Assert failures and perhaps other internal errors, since the code is not
expecting these variables to change in such contexts. The previous patch
did not cover this case because assign hooks don't really have enough
information, so move the responsibility for preventing this into guc.c.
Problem discovered by Heikki Linnakangas.
Security: no CVE assigned yet, extends CVE-2007-6600
#include the version of history.h that is in the same directory as the
readline.h we are using. This avoids problems in some scenarios where both
readline and editline are installed. Report and patch by Zdenek Kotala.
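After the patch the include logic has this shape (abbreviated; the HAVE_*
symbols come from configure):

    #if defined(HAVE_READLINE_READLINE_H)
    #include <readline/readline.h>
    #if defined(HAVE_READLINE_HISTORY_H)
    #include <readline/history.h>
    #endif
    #elif defined(HAVE_EDITLINE_READLINE_H)
    #include <editline/readline.h>
    #if defined(HAVE_EDITLINE_HISTORY_H)
    #include <editline/history.h>
    #endif
    #endif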
when we reach the post-COPY "pump it dry" error recovery code that was added
2006-11-24. Per a report from Neil Best, there is at least one code path
in which this occurs, leading to an infinite loop in code that's supposed
to be making it more robust, not less so. A reasonable response seems to be
to call PQputCopyEnd() again, so let's try that.
Back-patch to all versions that contain the cleanup loop.
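In libpq terms the retry amounts to something like this sketch (not the
exact psql code):

    /* If the server still has us in COPY-in mode when we reach the
     * recovery loop, re-send the terminator instead of pumping forever,
     * then drain any remaining results. */
    if (PQresultStatus(res) == PGRES_COPY_IN)
    {
        PQputCopyEnd(conn, NULL);
        while ((res = PQgetResult(conn)) != NULL)
            PQclear(res);
    }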
a number of other geometric operators also depend on. It miscalculated the
slope of the perpendicular to the given line segment anytime that slope was
other than 0, infinite, or +/-1. In some cases the error would be masked
because the true closest point on the line segment was one of its endpoints
rather than the intersection point, but in other cases it could give an
arbitrarily bad answer. Per bug #4872 from Nick Roosevelt.
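The correct construction is the textbook one: the perpendicular must use
the negative reciprocal of the segment's slope. A sketch for a segment
that is neither vertical nor horizontal:

    double      m = (y2 - y1) / (x2 - x1);  /* slope of the segment */
    double      invm = -1.0 / m;            /* slope of the perpendicular */

    /* Intersect the perpendicular through the point with the segment's
     * line; if the intersection falls outside the segment, the closest
     * point is whichever endpoint is nearer. */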
Bug goes clear back to Berkeley days, so patch all supported branches.
Make a couple of cosmetic adjustments while at it.
eg Japan. Report and fix by Itagaki Takahiro. Also fix CASHDEBUG printout
format for branches with 64-bit money type, and some minor comment cleanup.
Back-patch to 7.4, because it's broken all the way back.
as per my recent proposal. release.sgml itself is now just a stub that should
change rarely; ideally, only once per major release to add a new include line.
Most editing work will occur in the release-N.N.sgml files. To update a back
branch for a minor release, just copy the appropriate release-N.N.sgml
file(s) into the back branch.
This commit doesn't change the end-product documentation at all, only the
source layout. However, it makes it easy to start omitting ancient information
from newer branches' documentation, should we ever decide to do that.
part that rounds up to exactly 1.0 second. The previous coding rejected input
like "00:12:57.9999999999999999999999999999", with the exact number of nines
needed to cause failure varying depending on float-timestamp option and
possibly on platform. Obviously this should round up to the next integral
second, if we don't have enough precision to distinguish the value from that.
Per bug #4789 from Robert Kruus.
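For the integer-timestamp case the repair is essentially a carry after
rounding (illustrative, not the exact date.c code):

    fsec = rint(frac * USECS_PER_SEC);      /* round fraction to usec */
    if (fsec >= USECS_PER_SEC)
    {
        /* e.g. 00:12:57.9999999999... rounds up to exactly 00:12:58 */
        fsec -= USECS_PER_SEC;
        tm->tm_sec++;
    }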
In passing, fix a missed check for fractional seconds in one copy of the
"is it greater than 24:00:00" code.
Broken all the way back, so patch all the way back.
aggregate function. By definition, such a sub-SELECT cannot reference any
variables of query levels between itself and the aggregate's semantic level
(else the aggregate would've been assigned to that lower level instead).
So the correct, most efficient implementation is to treat the sub-SELECT as
being a sub-select of that outer query level, not the level the aggregate
syntactically appears in. Not doing so also confuses the heck out of our
parameter-passing logic, as illustrated in bug report from Daniel Grace.
Fortunately, we were already copying the whole Aggref expression up to the
outer query level, so all that's needed is to delay SS_process_sublinks
processing of the sub-SELECT until control returns to the outer level.
This has been broken since we introduced spec-compliant treatment of
outer aggregates in 7.4; so patch all the way back.
interval_eq() considers equal. I'm not sure how that fundamental requirement
escaped us through multiple revisions of this hash function, but there it is;
it's been wrong since interval_hash was first written for PG 7.1.
Per bug #4748 from Roman Kononov.
Backpatch to all supported releases.
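The repaired function reduces an interval to the same 64-bit span that
interval comparison uses before hashing it, so values interval_eq()
considers equal can no longer hash differently; roughly (integer-datetimes
case):

    TimeOffset  span = interval->time;

    span += interval->month * INT64CONST(30) * USECS_PER_DAY;
    span += interval->day * INT64CONST(24) * USECS_PER_HOUR;
    return hash_any((unsigned char *) &span, sizeof(span));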
This patch changes the contents of hash indexes for interval columns. That's
no particular problem for PG 8.4, since we've broken on-disk compatibility
of hash indexes already; but it will require a migration warning note in
the next minor releases of all existing branches: "if you have any hash
indexes on columns of type interval, REINDEX them after updating".
temporary tables of other sessions; that is unsafe because of the way our
buffer management works. Per report from Stuart Bishop.
This is redundant with the bufmgr.c checks in HEAD, but not at all redundant
in the back branches.
format codes are misapplied to a numeric argument. (The code still produces
a pretty bogus error message in such cases, but I'll settle for stopping the
crash for now.) Per bug #4700 from Sergey Burladyan.
Problem exists in all supported branches, so patch all the way back.
In HEAD, also clean up some ugly coding in the nearby cache management
code.