#include the version of history.h that is in the same directory as the
readline.h we are using. This avoids problems in some scenarios where both
readline and editline are installed. Report and patch by Zdenek Kotala.
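For illustration, the intended pairing looks like this sketch, written with the usual configure HAVE_* header macros (the actual fix was made in the configure probe itself):

    /*
     * Sketch: take history.h from the same library whose readline.h we
     * found, so a machine with both GNU readline and editline installed
     * never mixes headers from the two.
     */
    #if defined(HAVE_READLINE_READLINE_H)
    #include <readline/readline.h>
    #if defined(HAVE_READLINE_HISTORY_H)
    #include <readline/history.h>
    #endif
    #elif defined(HAVE_EDITLINE_READLINE_H)
    #include <editline/readline.h>
    #if defined(HAVE_EDITLINE_HISTORY_H)
    #include <editline/history.h>
    #endif
    #endif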
renders useless one of the few test methodologies we have for WAL replay,
which is to intentionally crash the system just after completing the
regression tests and see if it recovers to the expected database state.
The reason is that DROP TABLESPACE forces a checkpoint, so there's essentially
no WAL available for replay after the tests complete.
"all tuples visible" flag in heap page headers. The flag update *must*
be applied before calling XLogInsert, but heap_update and the tuple
moving routines in VACUUM FULL were ignoring this rule. A crash and
replay could therefore leave the flag incorrectly set, causing rows
to appear visible in seqscans when they should not be. This might explain
recent reports of data corruption from Jeff Ross and others.
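To illustrate the rule, the ordering inside the critical section needs to look roughly like this (a sketch, not the patched code itself):

    /*
     * Sketch: the page change must precede XLogInsert, so that any
     * full-page image taken for this record already reflects it.
     */
    START_CRIT_SECTION();
    PageClearAllVisible(page);      /* the rule being fixed: do this first */
    MarkBufferDirty(buffer);
    recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_UPDATE, rdata);
    PageSetLSN(page, recptr);
    END_CRIT_SECTION();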
In passing, do a bit of editorialization on comments in visibilitymap.c.
or previously truncated in the current (sub)transaction. This is safe since
if the (sub)transaction later rolls back, we'd just discard the rel's current
physical file anyway. This avoids unreasonable growth in the number of
transient files when a relation is repeatedly truncated. Per a performance
gripe from Todd Cook a couple of weeks ago.
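The reuse test amounts to something like this sketch (rd_createSubid and rd_newRelfilenodeSubid are the existing RelationData fields; the exact placement of the check may differ):

    /*
     * Sketch: if the rel was created, or already given a new relfilenode,
     * in the current (sub)transaction, just truncate its current file;
     * a rollback would discard that file anyway.
     */
    if (rel->rd_createSubid != InvalidSubTransactionId ||
        rel->rd_newRelfilenodeSubid != InvalidSubTransactionId)
    {
        /* reuse the existing file: truncate it in place */
    }
    else
    {
        /* assign a new relfilenode so the old contents survive a rollback */
    }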
values before they get passed to the index access method. This avoids
repeated detoastings that will otherwise ensue as the comparison value
is examined by various index support functions. We have seen a couple of
reports of cases where repeated detoastings result in an order-of-magnitude
slowdown, so it seems worth adding a bit of extra logic to prevent this.
I had previously proposed trying to avoid duplicate detoastings in general,
but this fix takes care of what seems the most important case in practice
with very little effort or risk.
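The extra logic is essentially a one-time detoast while the scan keys are built, along these lines (a sketch; the actual patch site and variable names may differ):

    /*
     * Sketch: detoast a varlena comparison value once while building the
     * scan key, so the index support functions see a plain value instead
     * of re-detoasting it on every comparison.
     */
    if (!attbyval && attlen == -1 &&
        VARATT_IS_EXTENDED(DatumGetPointer(scanvalue)))
        scanvalue = PointerGetDatum(PG_DETOAST_DATUM(scanvalue));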
Back-patch to 8.4 so that the PostGIS folk won't have to wait a year to
have this fix in a production release. (The issue exists further back,
of course, but the code's diverged enough to make backpatching further a
higher-risk action. Also it appears that the possible gains may be limited
in prior releases because of different handling of lossy operators.)
about it doesn't simplify the grammar at all, and it does invite confusion
among those who only read the SELECT syntax summary and not the full details.
Per gripe from Jaime Casanova.
physical conversion when there are dropped columns in the same places in
the input and output tupdescs. This avoids possible performance loss from
the recent patch to improve dropped-column handling, in some cases where
the old code would have worked.
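The compatibility test is roughly this sketch (illustrative; the real check must also consider attribute counts and typmods):

    /*
     * Sketch: tupdescs are physically compatible if corresponding
     * attributes either are both dropped or have matching types.
     */
    bool        same = true;
    int         i;

    for (i = 0; i < n; i++)
    {
        Form_pg_attribute inatt = indesc->attrs[i];
        Form_pg_attribute outatt = outdesc->attrs[i];

        if (inatt->attisdropped != outatt->attisdropped)
        {
            same = false;       /* dropped columns must line up */
            break;
        }
        if (inatt->attisdropped)
            continue;           /* both dropped: physically compatible */
        if (inatt->atttypid != outatt->atttypid)
        {
            same = false;       /* live columns must match exactly */
            break;
        }
    }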
truncate_identifier won't do anything if the passed-in strlen is already
less than NAMEDATALEN, which it always would be given the strlcpy usage.
This has been broken since the arrays-of-composite-types code went in.
Arguably truncate_identifier is suffering from excessive optimization
and should always process the string, but for the moment I'll take the
more localized patch.
Per bug #4987.
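The no-op is visible in a two-line sketch, where longname is any overlength identifier:

    /*
     * strlcpy already clipped the name to fit, so strlen(buf) is always
     * less than NAMEDATALEN and truncate_identifier returns without doing
     * anything (including skipping the truncation warning).
     */
    char        buf[NAMEDATALEN];

    strlcpy(buf, longname, sizeof(buf));            /* clips silently */
    truncate_identifier(buf, strlen(buf), false);   /* therefore a no-op */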
This test is clearly not being used anymore, since it's been broken for
long periods of time without anyone noticing. Per discussion, it's not
worth keeping in our source tree.
for standalone backends.
Although we probably ought to just remove this long-obsolete test case from
our code, it seems worthwhile to document the issue and fix in CVS first.
Jeff Janes
Add some checks on how various data types are converted into and out of Python.
This is extracted from Caleb Welton's patch for improved bytea support,
but much expanded.
When examining what Python type to convert a PostgreSQL type to on input,
look at the base type of the input type; otherwise all domains end up
defaulting to string.
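A sketch of the intended lookup, assuming the usual lsyscache helper (the call actually used in the patch may differ):

    #include "utils/lsyscache.h"

    /*
     * Sketch: resolve domains to their base type before selecting an
     * input conversion, so a domain over int4 is converted like int4
     * rather than falling through to the generic string case.
     */
    element_type = getBaseType(element_type);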
I mistakenly removed it last month, thinking it was no longer needed ---
but it is still needed for dealing with joininfo lists. Fortunately this
bit of brain fade hadn't made it into any released versions yet.
does match some unique index on the referenced table, but that index is
only deferrably unique. We were doing this nicely for the
default-to-primary-key case, but were being lazy for the other case.
Dean Rasheed
To make this work in the base case, pg_database now has a nailed-in-cache
relation descriptor that is initialized using hardwired knowledge in
relcache.c. This means pg_database is added to the set of relations that
need to have a Schema_pg_xxx macro maintained in pg_attribute.h. When this
path is taken, we'll have to do a seqscan of pg_database to find the row
we need.
In the normal case, we are able to do an indexscan to find the database's row
by name. This is made possible by storing a global relcache init file that
describes only the shared catalogs and their indexes (and therefore is usable
by all backends in any database). A new backend loads this cache file,
finds its database OID after an indexscan on pg_database, and then loads
the local relcache init file for that database.
This change should effectively eliminate the number of databases as a factor
in backend startup time, even with large numbers of databases. However,
the real reason for doing it is as a first step towards getting rid of
the flat files altogether. There are still several other sub-projects
to be tackled before that can happen.
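In outline, backend startup under this scheme proceeds as below; the function names are placeholders, not the actual relcache entry points:

    /*
     * Illustrative outline only.
     */
    load_shared_relcache_init_file();   /* shared catalogs and their
                                         * indexes; fall back to the
                                         * hardwired Schema_pg_xxx
                                         * descriptors if the file is
                                         * missing or stale */
    MyDatabaseId = lookup_database_oid(dbname);
                                        /* indexscan on pg_database, or a
                                         * seqscan in the fallback path */
    load_local_relcache_init_file(MyDatabaseId);
                                        /* per-database init file */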
to access a Relation entry it had just closed. I happened to be testing with
CLOBBER_CACHE_ALWAYS, which made this a guaranteed core dump (at least on
machines where sprintf %s isn't forgiving of a NULL pointer). It's probably
quite unlikely that it would fail in the field, but a bug is a bug. Fix by
moving the relation_close call down past the logging action.
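The bug pattern, in miniature (the message text here is made up):

    /* Buggy order (sketch): the name is fetched from a closed Relation,
     * which CLOBBER_CACHE_ALWAYS turns into a dangling pointer. */
    relation_close(rel, AccessShareLock);
    elog(DEBUG2, "processed \"%s\"", RelationGetRelationName(rel));

    /* Fixed order: log first, then close. */
    elog(DEBUG2, "processed \"%s\"", RelationGetRelationName(rel));
    relation_close(rel, AccessShareLock);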
of the previous monolithic setup-create-run sequence, which was apparently
inherited from an earlier test infrastructure and makes working with the
tests and adding new ones awkward.
Documentation files in HTML and man formats are now prepared for
distribution using the distprep make target, like everything else. They
are placed in doc/src/sgml/html and manX and installed from there by
make install, if present. The business with the tarballs in the tarball
is gone.
two new lists, rather than repeatedly rescanning the main TOC list.
This avoids a potential O(N^2) slowdown, although you'd need a *lot*
of tables to make that really significant; and it might simplify future
improvements in the scheduling algorithm by making the set of ready
items more easily inspectable. The original thought that it would
in itself result in a more efficient job dispatch order doesn't seem
to have been borne out in testing, but it seems worth doing anyway.
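In outline, the dispatcher now works from an explicit ready list; all names in this sketch are illustrative:

    /*
     * Sketch: pending items still have unmet dependencies; ready items
     * can be dispatched at once.  Finishing an item moves any newly
     * unblocked entries from pending to ready, so the main TOC list is
     * never rescanned.
     */
    while (!ready_list_is_empty(&ready_list) || workers_are_busy())
    {
        TocEntry   *te = pop_ready_item(&ready_list);

        if (te != NULL)
            dispatch_to_worker(te);
        else
        {
            TocEntry   *done = wait_for_worker();

            move_unblocked_items(&pending_list, &ready_list, done);
        }
    }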
Test coverage support now covers the entire source tree, including
contrib, instead of just src/backend. In a related but independent
development, the commands make coverage and make coverage-html can be run
in any directory.
This turned out to be much easier than feared. Besides a few ad hoc fixes
to pass the make target down the tree, the main change was to make all
affected makefiles list their directories in the SUBDIRS variable, replacing
variants like DIRS and WANTED_DIRS. An MSVC build fix was attempted as well.
when we reach the post-COPY "pump it dry" error recovery code that was added
2006-11-24. Per a report from Neil Best, there is at least one code path
in which this occurs, leading to an infinite loop in code that's supposed
to be making it more robust, not less so. A reasonable response seems to be
to call PQputCopyEnd() again, so let's try that.
Back-patch to all versions that contain the cleanup loop.
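The recovery step amounts to roughly this, inside the existing result-pumping loop (a sketch using only documented libpq calls):

    /*
     * Sketch: if the server still has us in COPY_IN state while we're
     * pumping the connection dry, end the COPY again rather than
     * looping forever; a non-NULL message makes the COPY fail cleanly.
     */
    if (PQresultStatus(res) == PGRES_COPY_IN)
        (void) PQputCopyEnd(conn, "COPY terminated by error recovery");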
if a smart shutdown is already in progress. Back-patch to 8.3; this was
broken in the patch that introduced "dead-end backends".
Per report by Itagaki Takahiro, patch by Fujii Masao.
by supporting conversions in places that used to demand exact rowtype match.
Since this issue is certain to come up elsewhere (in fact, already has,
in ExecEvalConvertRowtype), factor out the support code into new core
functions for tuple conversion. I chose to put these in a new source
file since heaptuple.c is already overly long.
Heavily revised version of a patch by Pavel Stehule.
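Callers can then use the new support roughly like this (a sketch; treat the names as provisional):

    #include "access/tupconvert.h"

    /*
     * Sketch: build a conversion map once (NULL means no conversion is
     * needed), then apply it per tuple.
     */
    TupleConversionMap *map;

    map = convert_tuples_by_position(indesc, outdesc,
                        "returned row type does not match expected row type");
    if (map != NULL)
        tuple = do_convert_tuple(tuple, map);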
fsync() fails, say "file" rather than "relation" when printing the filename.
This makes messages that display block numbers a bit confusing: for example,
in the message 'could not read block 150000 of file "base/1234/5678.1"',
150000 is the block number counted from the beginning of the relation,
i.e. from segment 0, not the 150000th block within that segment. Per
discussion, users aren't usually interested in the exact location within
the file, so we can live with that.
To ease constructing error messages, add FilePathName(File) function to
return the pathname of a virtual fd.
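A typical message site can then be written like this sketch:

    /* Sketch: report the errant file by pathname, via the new
     * FilePathName() accessor for virtual fds. */
    if (FileSync(vfd) < 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not fsync file \"%s\": %m",
                        FilePathName(vfd))));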
This switches the man page building process to use the DocBook XSL stylesheet
toolchain. The previous targets for Docbook2X are removed. configure has been
updated to look for the new tools. The Documentation appendix contains the
new build instructions. There are also a few isolated tweaks in the
documentation to improve places that came out strangely in the man pages.
The previous implementation got it right in most cases but failed in one:
if you pg_dump into an archive with standard_conforming_strings enabled, then
pg_restore to a script file (not directly to a database), the script will set
standard_conforming_strings = on but then emit large object data as
nonstandardly-escaped strings.
At the moment the code is made to emit hex-format bytea strings when dumping
to a script file. We might want to change to old-style escaping for backwards
compatibility, but that would be slower and bulkier. If we do, it's just a
matter of reimplementing appendByteaLiteral().
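Hex output is cheap to produce; a sketch of what appendByteaLiteral() can look like:

    /*
     * Sketch: emit a bytea value in hex format, e.g. '\x01ab'.  With
     * standard_conforming_strings off, the backslash must be doubled.
     */
    static void
    appendByteaLiteral(PQExpBuffer buf, const unsigned char *str,
                       size_t length, bool std_strings)
    {
        static const char hextbl[] = "0123456789abcdef";

        appendPQExpBufferStr(buf, std_strings ? "'\\x" : "'\\\\x");
        while (length-- > 0)
        {
            unsigned char c = *str++;

            appendPQExpBufferChar(buf, hextbl[c >> 4]);
            appendPQExpBufferChar(buf, hextbl[c & 0xF]);
        }
        appendPQExpBufferChar(buf, '\'');
    }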
This has been broken for a long time, but given the lack of field complaints
I'm not going to worry about back-patching.