This is required on Windows due to the special locale
handling for UTF8 that doesn't change the full environment.
Fixes crash with translated error messages per bugs 4180
and 4196.
Tom Lane
called before, not after, calling the assign_hook if any. This is because
push_old_value might fail (due to palloc out-of-memory), and in that case
there would be no stack entry to tell transaction abort to undo the GUC
assignment. Of course the actual assignment to the GUC variable hasn't
happened yet --- but the assign_hook might have altered subsidiary state.
Without a stack entry we won't call it again to make it undo such actions.
So this is necessary to make the world safe for assign_hooks with side
effects. Per a discussion a couple weeks ago with Magnus.
Back-patch to 8.0. 7.x did not have the problem because it did not have
allocatable stacks of GUC values.
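For illustration only, here is a minimal standalone sketch of that ordering
principle; it is not the actual guc.c code, and the names here (the
push_old_value stand-in, the hook, set_value) are invented: record the undo
entry first, so a failure while recording changes nothing, and any later
side effect is always covered by a stack entry.

    #include <stdio.h>

    /* Hypothetical stand-ins for a GUC variable and its undo stack. */
    static int guc_value = 0;
    static int undo_stack[16];
    static int undo_depth = 0;

    /* May fail (think palloc out-of-memory); if it does, nothing has changed yet. */
    static int push_old_value(int oldval)
    {
        if (undo_depth >= 16)
            return -1;
        undo_stack[undo_depth++] = oldval;
        return 0;
    }

    /* Stand-in for an assign_hook that alters subsidiary state. */
    static void assign_hook(int newval)
    {
        printf("hook: adjusting subsidiary state for %d\n", newval);
    }

    static int set_value(int newval)
    {
        /* 1. Record the undo information FIRST ... */
        if (push_old_value(guc_value) != 0)
            return -1;
        /* 2. ... so the hook's side effects are always covered by a stack entry. */
        assign_hook(newval);
        /* 3. Only then perform the assignment itself. */
        guc_value = newval;
        return 0;
    }

    int main(void)
    {
        set_value(42);
        printf("value = %d, undo entries = %d\n", guc_value, undo_depth);
        return 0;
    }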
cases. Recent buildfarm experience shows that it is sometimes possible
to execute several SQL commands in less time than the granularity of
Windows' not-very-high-resolution gettimeofday(), leading to a failure
because the tests expect the value of now() to change and it doesn't.
Also, it was recognized some time ago that the same area of the tests
could fail if local midnight passes between the insertion and the checking
of the values for 'yesterday', 'tomorrow', etc. Clean all this up per
ideas from myself and Greg Stark.
There remains a window for failure if the transaction block is entered
exactly at local midnight (so that 'now' and 'today' have the same value),
but that seems low-probability enough to live with.
Since the point of this change is mostly to eliminate buildfarm noise,
back-patch to all versions we are still actively testing.
to go beyond 10MB, as demonstrated by Gavin Sherry's example of dropping a
schema with ~25000 objects. The really bogus thing about the limit was that
it was enforced when a state file was read in, not when it was written,
so you could end up with a prepared transaction that you couldn't commit or
abort, and the only recourse was to shut down the server and remove the file
by hand.
Raise the limit to MaxAllocSize, and enforce it also when a state file is
written. We could've removed the limit altogether, but reading in a file
larger than MaxAllocSize would fail anyway because we read it into a
palloc'd buffer.
Backpatch down to 8.1, where 2PC and this issue were introduced.
varoattno along with varattno. This resulted in having Vars that were not
seen as equal(), causing inheritance of the "same" constraint from different
parent relations to fail. An example is
create table pp1 (f1 int check (f1>0));
create table cc1 (f2 text, f3 int) inherits (pp1);
create table cc2 (f4 float) inherits (pp1, cc1);
Backpatch as far as 7.4. (The test case still fails in 7.4, for reasons
that I don't feel like investigating at the moment.)
This is a backpatch commit only. The fix will be applied in HEAD as part
of the upcoming pg_constraint patch.
parameter. This fixes bug 4137 reported by Wojciech Strzalka, where a WAL
file is deleted too early when starting the recovery of a warm standby server.
Also add a sanity check in pg_standby so that it will refuse to delete anything
earlier than the file being restored, and improve the debug message in case
nothing is deleted.
Simon Riggs. Backpatch to 8.3, which is where %r was introduced.
a user-supplied TID is out of range for the relation. This is needed to
preserve compatibility with our pre-8.3 behavior, and it is sensible anyway
since if the query were implemented by brute force rather than optimized
into a TidScan, the behavior for a non-existent TID would be zero rows out,
never an error. Per gripe from Gurjeet Singh.
which is a global variable not a function, and so the probe failed on machines
where the linker makes a distinction (cf. Red Hat bug #444317). Probe for
an actual function instead.
binary search instead of linear search when checking child-transaction XIDs.
Per example from Robert Treat, the speed of TransactionIdIsCurrentTransactionId
is significantly more important in 8.3 than it was in prior releases, so
this seems worth taking back-patching risk for.
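A rough standalone sketch of the idea (not the actual
TransactionIdIsCurrentTransactionId code; the names are invented): with the
child XIDs kept in sorted order, membership becomes a binary search rather
than a linear scan.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t XidLike;   /* stand-in for TransactionId */

    /* Binary search over a sorted array of child XIDs: O(log n) per probe
     * instead of the old O(n) linear scan. */
    static bool
    child_xid_is_member(XidLike xid, const XidLike *xids, int nxids)
    {
        int low = 0;
        int high = nxids - 1;

        while (low <= high)
        {
            int      mid = low + (high - low) / 2;
            XidLike  probe = xids[mid];

            if (probe == xid)
                return true;
            if (probe < xid)
                low = mid + 1;
            else
                high = mid - 1;
        }
        return false;
    }

    int main(void)
    {
        XidLike children[] = {100, 105, 230, 231, 400};

        printf("%d %d\n",
               child_xid_is_member(230, children, 5),
               child_xid_is_member(232, children, 5));
        return 0;
    }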
checked to see if it's been initialized to all non-nulls. The implicit NOT
NULL constraint was not being checked during the ALTER (in fact, not even if
there was an explicit NOT NULL too), because ATExecAddColumn neglected to
set the flag needed to make the test happen. This has been broken since
the capability was first added, in 8.0.
Brendan Jurd, per a report from Kaloyan Iliev.
<craig@postnewspapers.com.au>.
It was my mistake: I missed the limit on the number of concurrently held
locks. GIN no longer uses continuous locking, but it still keeps the buffers
pinned to prevent interference with vacuum's deletion algorithm.
output is not of the same type that's needed for the IN comparison (ie,
where the parser inserted an implicit coercion above the subselect result).
We should record the coerced expression, not just a raw Var referencing
the subselect output, as the quantity that needs to be unique-ified if
we choose to implement the IN as Unique followed by a plain join.
As of 8.3 this error was causing crashes, as seen in bug #4113 from Javier
Hernandez, because the executor was being told to hash or sort the raw
subselect output column using operators appropriate to the coerced type.
In prior versions there was no crash because the executor chose the
hash or sort operators for itself based on the column type it saw.
However, that's still not really right, because what's unique for one data
type might not be unique for another. In corner cases we could get multiple
outputs of a row that should appear only once, as demonstrated by the
regression test case included in this commit.
However, this patch doesn't apply cleanly to 8.2 or before, and the code
involved has shifted enough over time that I'm hesitant to try to back-patch.
Given the lack of complaints from the field about such corner cases, I think
the bug may not be important enough to risk breaking other things with a
back-patch.
version ones, to make it clear to users just browsing the notes
that there are a lot more changes available from whatever version
they are at than what's in the minor version release notes.
UPDATE/SHARE couldn't occur as a subquery in a query with a non-SELECT
top-level operation. Symptoms included outright failure (as in report from
Mark Mielke) and silently neglecting to take the requested row locks.
Back-patch to 8.3, because the visible failure in the INSERT ... SELECT case
is a regression from 8.2. I'm a bit hesitant to back-patch further given the
lack of field complaints.
I never understood why the original authors of GiST in pgsql chose such a
strange signature for the 'same' method:
bool *sameFn(Datum a, Datum b, bool* result)
instead of the simple, logical
bool sameFn(Datum a, Datum b)
Changing it now would break every existing GiST extension, so we have lived
with it and will continue to.
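For context, a hedged sketch of what the existing convention looks like from
an opclass author's point of view (the key type and function here are
hypothetical, not taken from any real opclass): the answer is returned
through the third pointer argument, and the Datum return value is
essentially ignored.

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    /* Hypothetical GiST key type, for illustration only. */
    typedef struct MyKey
    {
        int32   lower;
        int32   upper;
    } MyKey;

    PG_FUNCTION_INFO_V1(my_same);

    Datum
    my_same(PG_FUNCTION_ARGS)
    {
        MyKey  *a = (MyKey *) PG_GETARG_POINTER(0);
        MyKey  *b = (MyKey *) PG_GETARG_POINTER(1);
        bool   *result = (bool *) PG_GETARG_POINTER(2);

        /* The answer goes into *result, not into the return value. */
        *result = (a->lower == b->lower && a->upper == b->upper);
        PG_RETURN_POINTER(result);
    }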
file; the idea is that we should clean up as much as we can, even if there's
some problem removing one file. Make the error messages a bit less misleading,
too. In passing, const-ify function arguments.
place to prevent reusing relation OIDs before next checkpoint, and DROP
DATABASE. First, if a database was dropped, bgwriter would still try to unlink
the files that the rmtree() call by the DROP DATABASE command has already
deleted, or is just about to delete. Second, if a database is dropped, and
another database is created with the same OID, bgwriter would in the worst
case delete a relation in the new database that happened to get the same OID
as a dropped relation in the old database.
To fix these race conditions:
- make rmtree() ignore ENOENT errors. This fixes the 1st race condition.
- make ForgetDatabaseFsyncRequests forget unlink requests as well.
- force a checkpoint in dropdb on all platforms
Since ForgetDatabaseFsyncRequests() is asynchronous, the 2nd change isn't
enough on its own to fix the problem of dropping and creating a database with
same OID, but forcing a checkpoint on DROP DATABASE makes it sufficient.
Per Tom Lane's bug report and proposal. Backpatch to 8.3.
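As a standalone illustration of the first fix (not the actual rmtree()
code): during cleanup, a file that is already gone is treated as success
rather than an error, since DROP DATABASE's own removal may have gotten
there first.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Remove a file, treating "already gone" as success; someone else may
     * have deleted it between our readdir() and our unlink(). */
    static int
    remove_if_exists(const char *path)
    {
        if (unlink(path) < 0 && errno != ENOENT)
        {
            fprintf(stderr, "could not remove \"%s\": %s\n",
                    path, strerror(errno));
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return remove_if_exists("/tmp/some_missing_file") == 0 ? 0 : 1;
    }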
we had several code paths where a physical tlist could be used for the input
to a Sort node, which is a dumb idea because any unneeded table columns will
increase the volume of data the sort has to push around.
(Unfortunately the easy-looking fix of calling disuse_physical_tlist during
make_sort_xxx doesn't work because in most cases we're already committed to
the current input tlist --- it's been marked with sort column numbers, or
we've built grouping column numbers using it, etc. The tlist has to be
selected properly at the calling level before we start constructing sort-col
information. This is easy enough to do, we were just failing to take the
point into consideration.)
Back-patch to 8.3. I believe the problem probably exists clear back to 7.4
when the physical tlist optimization was added, but I'm afraid to back-patch
further than 8.3 without a great deal more study than I want to put into it.
The code in this area has drifted a lot over time. The real-world importance
of these code paths is uncertain anyway --- I think in many cases we'd
probably prefer hash-based methods.
corrupted. (Neither is very important if SIGTERM is used to shut down the
whole database cluster together, but there's a problem if someone tries to
SIGTERM individual backends.) To do this, introduce new infrastructure
macros PG_ENSURE_ERROR_CLEANUP/PG_END_ENSURE_ERROR_CLEANUP that take care
of transiently pushing an on_shmem_exit cleanup hook. Also use this method
for createdb cleanup --- that wasn't a shared-memory-corruption problem,
but SIGTERM abort of createdb could leave orphaned files lying around.
Backpatch as far as 8.2. The shmem corruption cases don't exist in 8.1,
and the createdb usage doesn't seem important enough to risk backpatching
further.
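A hedged usage sketch of the new macros, assuming backend context; the
cleanup callback and the work inside the guarded block are invented
placeholders, not code from any real caller. The cleanup hook is pushed for
the duration of the guarded block and popped again afterwards.

    /* Sketch only: assumes PostgreSQL backend headers. */
    #include "postgres.h"
    #include "storage/ipc.h"

    static void
    my_cleanup(int code, Datum arg)
    {
        /* Undo whatever shared state or on-disk files the guarded block created. */
        (void) code;
        (void) arg;
    }

    static void
    do_guarded_work(void)
    {
        PG_ENSURE_ERROR_CLEANUP(my_cleanup, (Datum) 0);
        {
            /* Work that must not leave shared memory (or the filesystem) in a
             * bad state if this backend errors out or is hit by SIGTERM. */
        }
        PG_END_ENSURE_ERROR_CLEANUP(my_cleanup, (Datum) 0);
    }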
it is trying to build a relcache entry for. This is an oversight in my 8.2
patch that tried to ensure we always took a lock on a relation before trying
to build its relcache entry. The implication is that if someone committed a
reindex of a critical system index at about the same time that some other
backend were starting up without a valid pg_internal.init file, the second one
might PANIC due to not seeing any valid version of the index's pg_class row.
Improbable case, but definitely not impossible.
results to contain uninitialized, unpredictable values. While this was okay
as far as the datatypes themselves were concerned, it's a problem for the
parser because occurrences of the "same" literal might not be recognized as
equal by datumIsEqual (and hence not by equal()). It seems sufficient to fix
this in the input functions since the only critical use of equal() is in the
parser's comparisons of ORDER BY and DISTINCT expressions.
Per a trouble report from Marc Cousin.
Patch all the way back. Interestingly, array_in did not have the bug before
8.2, which may explain why the issue went unnoticed for so long.
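The hazard is easy to demonstrate outside the server. This standalone
illustration (not PostgreSQL code) shows two values that are equal
field-by-field yet differ under a byte-wise comparison such as
datumIsEqual's, because the padding bytes were never initialized:

    #include <stdio.h>
    #include <string.h>

    /* A struct with internal padding between the char and the int. */
    typedef struct
    {
        char  tag;
        int   value;
    } Padded;

    int main(void)
    {
        Padded a, b;

        /* Simulate stale memory contents, then fill only the fields,
         * leaving the padding bytes as-is. */
        memset(&a, 0xAA, sizeof(a));
        memset(&b, 0x55, sizeof(b));
        a.tag = 'x';  a.value = 42;
        b.tag = 'x';  b.value = 42;

        /* Field-wise the values are "the same"; byte-wise they are not. */
        printf("fields equal: %d, memcmp equal: %d\n",
               a.tag == b.tag && a.value == b.value,
               memcmp(&a, &b, sizeof(a)) == 0);
        return 0;
    }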
on win32, because the stat() function in the runtime cannot
be trusted to always update the st_size field.
Per report and research by Sergey Zubkovsky.
the columns it works with to be domains over the expected type, not just
exactly the expected type. In passing, fix ts_stat() the same way.
Per report from Markus Wollny.
currently support this because we must be able to build Vars referencing
join columns, and varattno is only 16 bits wide. Perhaps this should be
improved in future, but considering that it never came up before, I'm not
sure the problem is worth much effort. Per bug #4070 from Marcello
Ceschia.
The problem seems largely academic in 8.0 and 7.4, because they have
(different) O(N^2) performance issues with such wide joins, but
back-patch all the way anyway.
classed all as "dead"; also get it to count DEAD item pointers as dead rows,
instead of ignoring them as before. Also improve matters so that tuples
previously inserted or deleted by our own transaction are handled nicely:
the stats collector's live-tuple and dead-tuple counts will end up correct
after our transaction ends, regardless of whether we end in commit or abort.
While there's more work that could be done to improve the counting of in-doubt
tuples in both VACUUM and ANALYZE, this commit is enough to alleviate some
known bad behaviors in 8.3; and the other stuff that's been discussed seems
like research projects anyway.
Pavan Deolasee and Tom Lane
responsible for copying the query string into the new Portal. Such copying
is unnecessary in the common code path through exec_simple_query, and in
this case it can be enormously expensive because the string might contain
a large number of individual commands; we were copying the entire, long
string for each command, resulting in O(N^2) behavior for N commands.
(This is the cause of bug #4079.) A second problem with it is that
PortalDefineQuery really can't risk error, because if it elog's before
having set up the Portal, we will leak the plancache refcount that the
caller is trying to hand off to the portal. So go back to the design in
which the caller is responsible for making sure everything is copied into
the portal if necessary.
eval_const_expressions needs to be passed the PlannerInfo ("root") structure,
because in some cases we want it to substitute values for Param nodes.
(So "constant" is not so constant as all that ...) This mistake partially
disabled optimization of unnamed extended-Query statements in 8.3: in
particular the LIKE-to-indexscan optimization would never be applied if the
LIKE pattern was passed as a parameter, and constraint exclusion depending
on a parameter value didn't work either.
The places that did, eg,
(statbuf.st_mode & S_IFMT) == S_IFDIR
were correct, but there is no good reason not to use S_ISDIR() instead,
especially when that's what the other 90% of our code does. The places
that did, eg,
(statbuf.st_mode & S_IFDIR)
were flat out *wrong* and would fail in various platform-specific ways,
eg a symlink could be mistaken for a regular file on most Unixen.
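For reference, a small standalone example contrasting the tests (plain POSIX,
not PostgreSQL code): S_IFDIR is a value within the S_IFMT field rather than
an independent flag bit, which is why the bare AND test misfires.

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        /* A mode value as stat() might report it for a unix-domain socket. */
        mode_t sock_mode = S_IFSOCK | 0755;

        /* Correct: isolate the file-type field before comparing ... */
        printf("masked compare says dir: %d\n", (sock_mode & S_IFMT) == S_IFDIR);
        /* ... or, equivalently, use the POSIX macro. */
        printf("S_ISDIR says dir:        %d\n", S_ISDIR(sock_mode) != 0);

        /* Wrong: on typical Unixen a socket's mode value includes the
         * S_IFDIR bit, so this reports a directory where there is none. */
        printf("bare AND says dir:       %d\n", (sock_mode & S_IFDIR) != 0);

        return 0;
    }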
The actual impact of this is probably small, since the problem cases
seem to always involve symlinks or sockets, which are unlikely to be
found in the directories that PG code might be scanning. But it's
clearly trouble waiting to happen, so patch all the way back anyway.
(There seem to be no occurrences of the mistake in 7.4.)
the query result must be exactly one row (since we don't do this when there's
any GROUP BY). Therefore any ORDER BY or DISTINCT attached to the query is
useless and can be dropped. Aside from saving useless cycles, this protects
us against problems with matching the hacked-up tlist entries to sort clauses,
as seen in a bug report from Taiki Yamaguchi. We might need to work harder
if we ever try to optimize grouped queries with this approach, but this
solution will do for now.