The old code creates three separate arrays when only one is needed,
using two different shmem allocation functions for no obvious reason.
It also splits the initialization of AuxiliaryProcs
between the top and bottom of the function, to no evident purpose.
Review by Tom Lane.
ReadRecord's habit of using both direct references to tmpRecPtr and
references to *RecPtr (which is pointing at tmpRecPtr) triggers an
optimization bug in gcc 4.6.0, which apparently has forgotten about
aliasing rules. Avoid the compiler bug, and make the code more readable
to boot, by getting rid of the direct references. Improve the comments
while at it.
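For illustration, a minimal sketch of the pattern involved (types and
logic simplified; not the actual xlog.c code):

    #include <assert.h>

    typedef unsigned long XLogRecPtr;   /* stand-in for the real type */

    static XLogRecPtr tmpRecPtr;

    static void
    ReadRecordSketch(XLogRecPtr *RecPtr)
    {
        if (RecPtr == NULL)
            RecPtr = &tmpRecPtr;    /* *RecPtr now aliases tmpRecPtr */

        *RecPtr = 42;               /* write through the pointer ... */

        /*
         * ... then read the variable directly.  Both names refer to the
         * same storage (and same-type accesses may always alias), but
         * the buggy optimizer could use a stale value of tmpRecPtr here.
         * Referring only to *RecPtr throughout sidesteps the bug.
         */
        assert(tmpRecPtr == *RecPtr);
    }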
Back-patch to all supported versions, in case they get built with 4.6.0.
Tom Lane, with some cosmetic suggestions from Alex Hunsaker
Even if a flag is modified only by the backend owning the transaction, it's
not safe to modify it without a lock. Another backend might be setting or
clearing a different flag in the flags field concurrently, and that
operation might be lost because setting or clearing a bit in a word is not
atomic.
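A hedged illustration of the lost update (hypothetical flag word, not the
actual PGPROC layout):

    #include <stdint.h>

    #define FLAG_A  0x01
    #define FLAG_B  0x02

    /* imagine this word lives in shared memory */
    static volatile uint32_t sharedFlags;

    void
    backend_one(void)
    {
        sharedFlags |= FLAG_A;      /* load, set bit, store */
    }

    void
    backend_two(void)
    {
        sharedFlags &= ~FLAG_B;     /* load, clear bit, store */
    }

    /*
     * If both backends load the old value before either stores, one of
     * the two modifications is silently lost, even though they touch
     * different bits; hence a lock is needed around the update.
     */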
Make the did-write flag a simple backend-private boolean variable, because it
was only set or tested in the owning backend (except when committing a
prepared transaction, but it's not worthwhile to optimize for the case of a
read-only prepared transaction). This also eliminates the need to add
locking where that flag is set.
Also, set the did-write flag when doing DDL operations like DROP TABLE or
TRUNCATE -- that was missed earlier.
"Blind writes" are a mechanism to push buffers down to disk when
evicting them; since they may belong to different databases than the one
a backend is connected to, the backend does not necessarily have a
relation to link them to, and thus no way to blow them away. We were
keeping those files open indefinitely, which would cause a problem if
the underlying table was deleted, because the operating system would not
be able to reclaim the disk space used by those files.
To fix, have bufmgr mark such files as transient to smgr; the lower
layer is allowed to close the file descriptor when the current
transaction ends. We must be careful that any other access to the file
removes the transient marking, to prevent unnecessarily expensive system
calls when evicting buffers belonging to our own database (whose files
we're likely to need again soon).
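A hedged sketch of the idea (hypothetical structures and names, not the
actual fd.c API):

    #include <stdbool.h>

    typedef struct
    {
        int  fd;            /* kernel file descriptor */
        bool transient;     /* close at end of transaction? */
    } VfdSketch;

    static void
    MarkTransient(VfdSketch *vfd)
    {
        /* a blind write tags the file as transient */
        vfd->transient = true;
    }

    static void
    NoteNormalAccess(VfdSketch *vfd)
    {
        /*
         * Any ordinary access clears the marking, so files of our own
         * database stay open and we avoid re-opening them when the next
         * eviction comes along.
         */
        vfd->transient = false;
    }

    static void
    AtEOXact_CloseTransientSketch(VfdSketch *vfds, int nvfds)
    {
        for (int i = 0; i < nvfds; i++)
            if (vfds[i].transient)
            {
                /* close(vfds[i].fd) and release the slot here */
                vfds[i].transient = false;
            }
    }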
This commit fixes a bug in the previous one, which neglected to cleanly
handle the LRU ring that fd.c uses to manage open files, and caused an
unacceptable failure just before beta2 and was thus reverted.
"Blind writes" are a mechanism to push buffers down to disk when
evicting them; since they may belong to different databases than the one
a backend is connected to, the backend does not necessarily have a
relation to link them to, and thus no way to blow them away. We were
keeping those files open indefinitely, which would cause a problem if
the underlying table was deleted, because the operating system would not
be able to reclaim the disk space used by those files.
To fix, have bufmgr mark such files as transient to smgr; the lower
layer is allowed to close the file descriptor when the current
transaction ends. We must be careful to have any other access of the
file to remove the transient markings, to prevent unnecessary expensive
system calls when evicting buffers belonging to our own database (which
files we're likely to require again soon.)
We must not pass SimpleLruTruncate() a page number that's "in the future",
because it will issue a warning and refuse to truncate anything. Instead,
we leave behind the latest segment. If the SLRU is not needed before XID
wraparound, the segment will appear as new again, and will not be cleaned
up until it gets old enough again. That's a bit unpleasant, but better
than not cleaning up anything.
Also, fix the broken calculation that checks and warns if the span of the
OldSerXid SLRU is getting too large to fit in the 64k SLRU pages we have
available; it was not XID-wraparound aware.
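A wraparound-aware comparison must treat XIDs as points on a modulo-2^32
circle, in the style of TransactionIdPrecedes(); a hedged sketch of the
technique (simplified, ignoring permanent XIDs):

    #include <stdint.h>
    #include <stdbool.h>

    typedef uint32_t TransactionId;

    static bool
    XidPrecedesSketch(TransactionId a, TransactionId b)
    {
        /*
         * The sign of the modular difference says which XID is older;
         * a plain "a < b" gives wrong answers once the counter wraps.
         */
        return (int32_t) (a - b) < 0;
    }

The span calculation can then use the same modular arithmetic rather than
plain subtraction.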
Kevin Grittner and me.
The documentation of the columns collection_type_identifier and
dtd_identifier was wrong. This effectively reverts commits
8e1ccad519 and
57352df66d and updates the name
array_type_identifier (the name in SQL:1999) to
collection_type_identifier.
closes bug #5926
This is an ugly hack to get around the fact that significant parts of the
core backend assume they don't need to worry about passing collation to
equality and hashing functions. That's true for the core string datatypes,
but citext should ideally have equality behavior that depends on the
specified collation's LC_CTYPE. However, there's no chance of fixing the
core before 9.2, so we'll have to live with this compromise arrangement for
now. Per bug #6053 from Regina Obe.
The code changes in this commit should be reverted in full once the core
code is up to speed, but be careful about reverting the docs changes:
I fixed a number of obsolete statements while at it.
Since start/stop/restart/reload/status is a kind of standard command
set, it seems odd to insert the special-purpose "promote" in between
the closely related "restart" and "reload". So put it after "status"
in code and documentation.
Put the documentation of the -U option in some sensible place.
Rewrite the synopsis sentence in help and documentation to make it
less of a growing mouthful.
This use case was broken in commit 529cb267a6
of 2010-10-21, in which I commented "For the moment, we just forbid such
matching. We might later wish to insert an automatic downcast to the
underlying array type, but such a change should also change matching of
domains to ANYELEMENT for consistency". We still lack consensus about what
to do with ANYELEMENT; but not matching ANYARRAY is a clear loss of
functionality compared to prior releases, so let's go ahead and make that
happen. Per complaint from Regina Obe and extensive subsequent discussion.
Truncating or dropping a table is treated like deletion of all its tuples,
and we check for conflicts accordingly. If a table is clustered or rewritten by
ALTER TABLE, all predicate locks on the heap are promoted to relation-level
locks, because the tuple or page ids of any existing tuples will change and
won't be valid after rewriting the table. Arguably ALTER TABLE should be
treated like a mass-UPDATE of every row, but if you e.g. change the datatype
of a column, you could also argue that it's just a change to the physical
layout, not a logical change. Reindexing promotes all locks on the index to
a relation-level lock on the heap.
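A simplified sketch of the promotion step (hypothetical structures, not
the real predicate lock table):

    typedef enum { TARGET_TUPLE, TARGET_PAGE, TARGET_RELATION } Granularity;

    typedef struct
    {
        unsigned    relid;
        Granularity granularity;
    } PredLockSketch;

    static void
    PromoteLocksToRelation(PredLockSketch *locks, int nlocks, unsigned relid)
    {
        for (int i = 0; i < nlocks; i++)
            if (locks[i].relid == relid)
            {
                /*
                 * Tuple and page ids will be invalid after the rewrite,
                 * so coarsen every lock on this heap to relation level.
                 */
                locks[i].granularity = TARGET_RELATION;
            }
    }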
Kevin Grittner, with a lot of cosmetic changes by me.
This has never been supported, but we previously let md.c issue the
complaint for us at whatever point we tried to examine the backing file.
Now we print a nicer error message.
Per bug #6041, reported by Emanuel, and extensive discussion with Tom
Lane over where to put the check.
Since the original implementation of CTEs only allowed them in SELECT
queries, the rule rewriter did not expect to find any CTEs in statements
being rewritten by ON INSERT/UPDATE/DELETE rules. We had dealt with this
to some extent but the code was still several bricks shy of a load, as
illustrated in bug #6051 from Jehan-Guillaume de Rorthais.
In particular, we have to be able to copy CTEs from the original query's
cteList into that of a rule action, in case the rule action references the
CTE (which it pretty much always will). This also implies we were doing
things in the wrong order in RewriteQuery: we have to recursively rewrite
the CTE queries before expanding the main query, so that we have the
rewritten queries available to copy.
There are unpleasant limitations yet to resolve here, but at least we now
throw understandable FEATURE_NOT_SUPPORTED errors for them instead of just
failing with bizarre implementation-dependent errors. In particular, we
can't handle propagating the same CTE into multiple post-rewrite queries
(because then the CTE would be evaluated multiple times), and we can't cope
with conflicts between CTE names in the original query and in the rule
actions.
This avoids an Assert failure when we try to use ordinary index fetches
while checking for exclusion conflicts. Per report from Noah Misch.
No need for back-patch because the Assert wasn't there before 9.1.
We were trying to make that strictly an internal implementation detail,
but it turns out that it's exposed anyway when dumping a view defined
like

    CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;

This comes out as

    CREATE VIEW ... ORDER BY "*VALUES*".column1;

which fails to parse when reloading the dump.
Hacking ruleutils.c to suppress the column qualification looks like it'd
be a risky business, so instead promote the RTE alias to full-fledged
usability.
Per bug #6049 from Dylan Adams. Back-patch to all supported branches.
This case was missed when NOT VALID constraints were first introduced in
commit 722bf7017b by Simon Riggs on
2011-02-08. Among other things, it causes pg_dump to omit the NOT VALID
flag when dumping such constraints, which may cause them to fail to
load afterwards if the table contains values violating the constraint.
Per report from Thom Brown.
The existence of a btree opclass accepting composite types caused us to
assume that every composite type is sortable. This isn't true of course;
we need to check if the column types are all sortable. There was logic
for this for the case of array comparison (ie, check that the element
type is sortable), but we missed the point for rowtypes. Per Teodor's
report of an ANALYZE failure for an unsortable composite type.
Rather than just add some more ad-hoc logic for this, I moved knowledge of
the issue into typcache.c. The typcache will now hand out array_eq,
record_cmp, and friends as usable operators only if the array or composite
type will work with those functions.
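A hedged sketch of the rule (simplified; the real typcache consults the
btree opclass machinery for each column type):

    #include <stdbool.h>

    typedef struct
    {
        int   natts;
        bool *att_sortable;     /* per column: btree compare proc exists? */
    } RowTypeSketch;

    static bool
    composite_is_sortable(const RowTypeSketch *typ)
    {
        for (int i = 0; i < typ->natts; i++)
            if (!typ->att_sortable[i])
                return false;   /* one unsortable column poisons the row */
        return true;
    }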
Unfortunately we don't have enough info to do this for anonymous RECORD
types; in that case, just assume it will work, and take the runtime failure
as before if it doesn't.
This patch might be a candidate for back-patching at some point, but
given the lack of complaints from the field, I'd rather just test it in
HEAD for now.
Note: most of the places touched in this patch will need further work
when we get around to supporting hashing of record types.
This is the original DocBook SGML limit, but apparently most
installations have changed it or ignore it, which is why few people
have run into this problem.
pointed out by Brendan Jurd