Instead of a separate CRC on each backup block, include backup blocks
in their parent WAL record's CRC; this is important to ensure that the
backup block really goes with the WAL record, i.e., that there was not
a page tear right at the start of the backup block. Implement a simple
form
of compression of backup blocks: drop any run of zeroes starting at
pd_lower, so as not to store the unused 'hole' that commonly exists in
PG heap and index pages. Tweak PageRepairFragmentation and related
routines to ensure they keep the unused space zeroed, so that the above
compression method remains effective. All per recent discussions.
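
To make the compression concrete, here is a rough sketch of the
hole-skipping copy (a minimal sketch with invented names such as
PageHeaderSketch and compress_backup_block; not the actual xlog.c
code):

    #include <stdint.h>
    #include <string.h>

    #define BLCKSZ 8192

    typedef struct
    {
        /* LSN, flags, etc. omitted for brevity */
        uint16_t    pd_lower;   /* offset to start of free space */
        uint16_t    pd_upper;   /* offset to end of free space */
    } PageHeaderSketch;

    /*
     * Copy 'page' into 'dest', dropping the run of zero bytes that
     * starts at pd_lower.  The hole offset/length would be stored in
     * the backup block header so recovery can re-zero the hole, and
     * the CRC is computed over the compressed image as part of the
     * parent WAL record.
     */
    static size_t
    compress_backup_block(const char *page, char *dest,
                          uint16_t *hole_offset, uint16_t *hole_length)
    {
        const PageHeaderSketch *hdr = (const PageHeaderSketch *) page;
        uint16_t    lower = hdr->pd_lower;
        uint16_t    hole_end = lower;

        *hole_offset = 0;
        *hole_length = 0;

        if (lower >= sizeof(PageHeaderSketch) && lower < BLCKSZ)
        {
            /* extend the hole across the run of zeroes at pd_lower */
            while (hole_end < BLCKSZ && page[hole_end] == 0)
                hole_end++;
            if (hole_end > lower)
            {
                *hole_offset = lower;
                *hole_length = (uint16_t) (hole_end - lower);
            }
        }

        if (*hole_length > 0)
        {
            memcpy(dest, page, *hole_offset);
            memcpy(dest + *hole_offset,
                   page + *hole_offset + *hole_length,
                   BLCKSZ - (*hole_offset + *hole_length));
        }
        else
            memcpy(dest, page, BLCKSZ);

        return BLCKSZ - *hole_length;
    }

This also shows why PageRepairFragmentation has to keep the freed
space zeroed: any leftover garbage after pd_lower cuts the zero run
short and defeats the compression.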
and RelationNameGetTupleDesc() as deprecated; remove uses of the
latter in the contrib library. Along the way, clean up crosstab()
code and documentation a little.
< * Prevent child tables from altering constraints like CHECK that were
< inherited from the parent table
470a469,471
>
> o Prevent child tables from altering constraints like CHECK that were
> inherited from the parent table
conventions of only allowing octal, like \045. Remove support for
\decimal, \0octal, and \0xhex, which matched strtol()'s conventions
but didn't make sense with backslashes.
These now return the same character:
test=> \set x '\54'
test=> \echo :x
,
test=> \set x '\054'
test=> \echo :x
,
THIS IS A BACKWARD COMPATIBILITY CHANGE.
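
For illustration, the new escape rule amounts to something like this
(a minimal sketch, not psql's actual lexer code):

    #include <stddef.h>

    /*
     * If *s points at one to three octal digits, consume them, store
     * the decoded character in *out, and return a pointer past the
     * digits.  Returns NULL if there is no octal digit at *s.
     */
    static const char *
    parse_octal_escape(const char *s, char *out)
    {
        int         val = 0;
        int         ndigits = 0;

        while (ndigits < 3 && *s >= '0' && *s <= '7')
        {
            val = val * 8 + (*s - '0');
            s++;
            ndigits++;
        }
        if (ndigits == 0)
            return NULL;        /* not an octal escape */
        *out = (char) val;
        return s;
    }

Since octal 54 is decimal 44, both \54 and \054 decode to the comma
shown in the example above.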
key, compare the new and old row versions. If the foreign key column has
not changed, we needn't enqueue the trigger, since the update cannot
violate the foreign key. This optimization was previously applied in the
RI trigger function, but it is more efficient to avoid firing the trigger
altogether. Per recent discussion on pgsql-hackers.
Also add a regression test for some unintuitive foreign key behavior, and
refactor some code that deals with the OIDs of the various RI trigger
functions.
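
In outline, the skipped-enqueue test looks something like this
self-contained sketch (the real code compares the old and new
HeapTuples using the RI key information; all names here are
illustrative):

    #include <stdbool.h>
    #include <string.h>

    /* Simplified row representation, for illustration only. */
    typedef struct
    {
        const char **values;    /* column values as strings */
        bool       *nulls;      /* per-column null flags */
    } RowSketch;

    /*
     * Return true if any of the nkeys foreign-key columns differs
     * between the old and new row versions.  If this returns false,
     * the UPDATE cannot violate the foreign key and the RI trigger
     * need not be enqueued at all.
     */
    static bool
    fk_columns_changed(const RowSketch *oldrow, const RowSketch *newrow,
                       const int *fk_attnums, int nkeys)
    {
        for (int i = 0; i < nkeys; i++)
        {
            int         att = fk_attnums[i];

            if (oldrow->nulls[att] != newrow->nulls[att])
                return true;
            if (!oldrow->nulls[att] &&
                strcmp(oldrow->values[att], newrow->values[att]) != 0)
                return true;
        }
        return false;
    }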
keys, rather than a single trigger for both events. This should not change
functionality, but it is more consistent: previously, there were trigger
functions for both "check_insert" and "check_update", but the former was
used for both events.
Bump catalog version number (not strictly necessary, but best to be
cautious).
would be evaluated only once anyway (i.e., it's just a SELECT with no
FROM or an INSERT ... VALUES). The planner can't do it any faster than
the executor, so there is no point in making an extra copy of the
expression tree.
pg_class_aclmask(). We only need to do this when we have to check
pg_shadow.usecatupd, and that's not relevant unless the target table
is a system catalog. So we can usually avoid one syscache lookup.
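
A hedged sketch of the short-circuit (all names here are invented;
the real test lives in pg_class_aclmask(), and which rights get
masked is simplified):

    #include <stdbool.h>

    typedef unsigned int OidSketch;
    typedef unsigned int AclModeSketch;

    extern bool is_system_catalog(OidSketch relid);     /* assumed helper */
    extern bool user_has_usecatupd(OidSketch userid);   /* syscache probe */

    /*
     * Strip modification rights on system catalogs when the user
     * lacks pg_shadow.usecatupd.  For ordinary tables we return
     * immediately, saving the syscache lookup.
     */
    static AclModeSketch
    mask_catalog_update(AclModeSketch mask, OidSketch relid,
                        OidSketch userid, AclModeSketch update_bits)
    {
        if (!is_system_catalog(relid))
            return mask;        /* common case: no usecatupd check */
        if (!user_has_usecatupd(userid))
            mask &= ~update_bits;
        return mask;
    }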
are now reported via elog, eliminating the need to test the result code
at most call sites. Make it possible for the caller to distinguish a
freshly acquired lock from one already held in the current transaction.
Use that capability to avoid redundant AcceptInvalidationMessages() calls
in LockRelation().
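
The shape of the resulting interface, as a sketch with invented names
(not the actual lmgr code):

    typedef enum
    {
        LOCK_OK_NEW,            /* freshly acquired in this transaction */
        LOCK_OK_ALREADY_HELD    /* we already held this lock */
    } LockResultSketch;

    extern LockResultSketch AcquireLockSketch(unsigned int relid, int mode);
    extern void AcceptInvalidationMessagesSketch(void);

    /*
     * Errors inside AcquireLockSketch are reported via elog, so there
     * is no failure code to test here.  Only a freshly acquired lock
     * requires processing pending invalidation messages; if we held
     * the lock already, we did that work the first time around.
     */
    static void
    LockRelationSketch(unsigned int relid, int mode)
    {
        if (AcquireLockSketch(relid, mode) == LOCK_OK_NEW)
            AcceptInvalidationMessagesSketch();
    }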
status of the most recently queried userid. Since the common pattern is
many successive queries about the same user (i.e., the current user),
this can save a lot of syscache probes.
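
The cache is essentially one remembered verdict (a sketch; the real
code would also reset the cache when pg_shadow changes):

    #include <stdbool.h>

    typedef unsigned int OidSketch;

    extern bool fetch_usesuper(OidSketch userid);   /* the syscache probe */

    static OidSketch cached_userid = 0;     /* 0 = cache empty */
    static bool      cached_is_super = false;

    /*
     * Answer "is this user a superuser?", probing the syscache only
     * when the userid differs from the one asked about last time.
     */
    static bool
    superuser_arg_sketch(OidSketch userid)
    {
        if (userid != cached_userid)
        {
            cached_is_super = fetch_usesuper(userid);
            cached_userid = userid;
        }
        return cached_is_super;
    }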
were pretty expensive and I believe the case they were put in to
defend against can no longer arise, now that we have dependency checks
to prevent deletion of a type entry that is still referenced. Certainly
the example given in the CVS log entry can't happen anymore.
Since this was the only use of typeidIsValid(), remove the routine too.
to columns of an RTE that was a function returning RECORD with a column
definition list. Apparently no one has tried to use non-default typmod
with a function returning RECORD before.
spotted by Qingqing Zhou. The HASH_ENTER action now automatically
fails with elog(ERROR) on out-of-memory --- which incidentally lets
us eliminate duplicate error checks in quite a bunch of places. If
you really need the old return-NULL-on-out-of-memory behavior, you
can ask for HASH_ENTER_NULL. But there is now an Assert in that path
checking that you aren't hoping to get that behavior in a palloc-based
hash table.
Along the way, remove the old HASH_FIND_SAVE/HASH_REMOVE_SAVED actions,
which were not being used anywhere anymore, and were surely too ugly
and unsafe to want to see revived again.
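
At call sites the change looks roughly like this (a sketch against
the dynahash hash_search() interface; MyEntry is a placeholder entry
type):

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct
    {
        Oid         key;
        int         count;
    } MyEntry;                  /* placeholder entry type */

    static MyEntry *
    enter_key(HTAB *htab, Oid key)
    {
        bool        found;

        /*
         * HASH_ENTER now fails with elog(ERROR) on out-of-memory,
         * so the old "if (entry == NULL) elog(...)" check at every
         * call site can simply be deleted.
         */
        return (MyEntry *) hash_search(htab, &key, HASH_ENTER, &found);
    }

    static MyEntry *
    enter_key_soft(HTAB *htab, Oid key)
    {
        bool        found;

        /*
         * HASH_ENTER_NULL preserves the old return-NULL-on-OOM
         * behavior; an Assert rejects this mode for palloc-based
         * tables, where palloc would have errored out anyway.
         */
        return (MyEntry *) hash_search(htab, &key, HASH_ENTER_NULL, &found);
    }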
hash table. This is a pretty unlikely scenario, since the table
should be tiny, but we can't guarantee continued correct operation
if it does occur. Spotted by Qingqing Zhou.
from a RECORD Const node, because that's what it may be faced with
after constant-folding of a function returning RECORD. Per example
from Michael Fuhr.
<
< * Add XML output to pg_dump and COPY
<
< We already allow XML to be stored in the database, and XPath queries
< can be used on that data using /contrib/xml2. It also supports XSLT
< transformations.
routines in the index's relcache entry, instead of doing a fresh fmgr_info
on every index access. We were already doing this for the index's opclass
support functions; not sure why we didn't think to do it for the AM
functions too. This supersedes the former method of caching (only)
amgettuple in indexscan scan descriptors; it's an improvement because the
function lookup can be amortized across multiple statements instead of
being repeated for each statement. Even though lookup for builtin
functions is pretty cheap, this seems to drop a percent or two off some
simple benchmarks.
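
The caching pattern, sketched with invented names (the real change
keeps the FmgrInfo structs in the index's relcache entry):

    #include "postgres.h"
    #include "fmgr.h"

    /* Invented stand-in for the per-index relcache entry. */
    typedef struct
    {
        Oid         amgettuple_oid;     /* pg_am.amgettuple proc OID */
        bool        am_fmgr_valid;
        FmgrInfo    amgettuple_info;    /* cached lookup result */
    } IndexRelSketch;

    /*
     * Return the cached FmgrInfo for the index's amgettuple function,
     * doing the fmgr_info() lookup only on first use.  Later index
     * accesses -- across statements -- reuse the cached copy.
     */
    static FmgrInfo *
    index_getprocinfo_sketch(IndexRelSketch *rel)
    {
        if (!rel->am_fmgr_valid)
        {
            fmgr_info(rel->amgettuple_oid, &rel->amgettuple_info);
            rel->am_fmgr_valid = true;
        }
        return &rel->amgettuple_info;
    }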
> * Consider sorting hash buckets so entries can be found using a binary
> search, rather than a linear scan
> * In hash indexes, consider storing the hash value with, or instead
> of, the key itself
> * Add the features of packages
> o Make private objects accessible only to objects in the same schema
> o Allow current_schema.objname to access current schema objects
> o Add session variables
> o Allow nested schemas