Strings are converted to UTF8 on the way into Perl and to the
database encoding on the way back. This avoids a number of
observed anomalies, and ensures that Perl has a consistent view
of the world.
Some minor code cleanups are also accomplished.
Alex Hunsaker, reviewed by Andy Colson.
Remove the claim that ALTER TABLE .. SET DATA TYPE is the fastest way of
rewriting a table, since it no longer is.
Noah Misch and Robert Haas, based on a suggestion from Tom Lane.
src/pl/plpgsql/src/plerrcodes.h, src/include/utils/errcodes.h, and a
big chunk of errcodes.sgml are now automatically generated from a single
file, src/backend/utils/errcodes.txt.
Jan Urbański, reviewed by Tom Lane.
Add the current xlog insert location to the response of
IDENTIFY_SYSTEM, and add result sets containing the start
and stop locations of backups to BASE_BACKUP responses.
Prior to 9.0, restartpoints never created, deleted, or recycled WAL
files, but now they can. This code makes log_checkpoints treat
checkpoints and restartpoints symmetrically. It also updates
the documentation of the parameter to mention restartpoints.
Fujii Masao. Docs by me, as suggested by Itagaki Takahiro.
This allows the language-specific try/catch construct to catch and
handle exceptions arising from SPI calls, matching the behavior of
other PLs.
As an additional bonus you no longer get all the ugly "unrecognized
error in PLy_spi_execute_query" errors.
Jan Urbański, reviewed by Steve Singer
Unlike Btree-based LIKE optimization, this works for non-left-anchored
search patterns. The effectiveness of the search depends on how many
trigrams can be extracted from the pattern. (The worst case, with no
trigrams, degrades to a full-table scan, so this isn't a panacea. But
it can be very useful.)
Alexander Korotkov, reviewed by Jan Urbanski
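A minimal sketch of using the trigram-based LIKE support described above
(table, column, and index names are hypothetical; the contrib module must
be installed):
    CREATE INDEX words_trgm_idx ON words USING gin (word gin_trgm_ops);
    -- A non-left-anchored pattern can now use the trigram index:
    SELECT * FROM words WHERE word LIKE '%ostgre%';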
Previously reported as ERRCODE_ADMIN_SHUTDOWN, this case is now
reported as ERRCODE_T_R_DATABASE_DROPPED. No message text change.
Unlikely to happen on most servers, so this is a low-impact change
that allows session poolers to handle this situation correctly.
Tatsuo Ishii, edits by me, review by Robert Haas
Document how to build 64-bit Windows binaries using the MinGW64 tool set.
Remove the recommendation against using MinGW as a build platform.
Be more specific about when Cygwin is useful and when it's not, in
particular note its usefulness for running psql, and
add a note about building on Cygwin in non-C locales.
Per recent discussions.
Operation", merged from upgrade sections in "Installation from Source
Code" and "Backup and Restore". This now gives a single place for all
upgrade information.
With this patch, pg_basebackup doesn't write a backup_label file in the
data directory, so it doesn't interfere with a pg_start/stop_backup() based
backup anymore. backup_label is still included in the backup, but it is
injected directly into the tar stream.
Heikki Linnakangas, reviewed by Fujii Masao and Magnus Hagander.
When included, this makes the base backup a complete working
"clone" of the initial database, ready to have a postmaster
started against it without the need to set up any log archiving
or similar.
Magnus Hagander, reviewed by Fujii Masao and Heikki Linnakangas
Although this improves the style, an ulterior motive is to keep the two
table links from breaking across lines in PDF output, per complaint from
Josh Kupershmidt.
The link to the CREATE CONVERSION manual page was split across a page
boundary in the PDF output, leading to a "\pdfendlink ended up in different
nesting level than \pdfstartlink" error while building PDFs.
It wouldn't be worth changing text that's undergoing active editing to
avoid this, since other editing might result in moving the link away from
the page end anyway. But this paragraph has been static for a long time,
so we might as well fix it to prevent it from becoming an issue in the future.
There isn't any need to track this state on a table-wide basis, and trying
to do so introduces undesirable semantic fuzziness. Move the flag to
pg_index, where it clearly describes just a single index and can be
immutable after index creation.
This feature allows a unique or pkey constraint to be created using an
already-existing unique index. While the constraint isn't very
functionally different from the bare index, it's nice to be able to do that
for documentation purposes. The main advantage over just issuing a plain
ALTER TABLE ADD UNIQUE/PRIMARY KEY is that the index can be created with
CREATE INDEX CONCURRENTLY, so that there is not a long interval where the
table is locked against updates.
On the way, refactor some of the code in DefineIndex() and index_create()
so that we don't have to pass through those functions in order to create
the index constraint's catalog entries. Also, in parse_utilcmd.c, pass
around the ParseState pointer in struct CreateStmtContext to save on
notation, and add error location pointers to some error reports that didn't
have one before.
Gurjeet Singh, reviewed by Steve Singer and Tom Lane
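A sketch of the intended usage of the constraint-from-index feature, with
hypothetical table and index names (build the index without blocking writes,
then attach it as a constraint):
    CREATE UNIQUE INDEX CONCURRENTLY orders_id_uidx ON orders (id);
    ALTER TABLE orders
        ADD CONSTRAINT orders_id_key UNIQUE USING INDEX orders_id_uidx;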
This is still pretty rough - among other things, the documentation
needs work, and the messages need a visit from the style police -
but this gets the basic framework in place.
KaiGai Kohei
This tool makes it possible to do the pg_start_backup/
copy files/pg_stop_backup step in a single command.
There are still some steps to be done before this is a
complete backup solution, such as the ability to stream
the required WAL logs, but it's still usable, and
could do with some buildfarm coverage.
In passing, make the checkpoint request optionally
fast instead of hardcoding it.
Magnus Hagander, reviewed by Fujii Masao and Dimitri Fontaine
If wal_buffers is initially set to -1 (which is now the default), it's
replaced by 1/32nd of shared_buffers, with a minimum of 8 (the old default)
and a maximum of the XLOG segment size. The allowed range for manual
settings is still from 4 up to whatever will fit in shared memory.
Greg Smith, with implementation correction by me.
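A worked example of the sizing rule above, assuming the stock 8kB buffer pages
and 16MB XLOG segments (the shared_buffers values are purely illustrative):
    SHOW wal_buffers;
    -- With the -1 default the server derives the setting at startup, e.g.:
    -- shared_buffers = 32MB  ->  wal_buffers = 1MB   (1/32nd)
    -- shared_buffers = 2GB   ->  wal_buffers = 16MB  (capped at one XLOG segment)
    -- shared_buffers = 1MB   ->  wal_buffers = 64kB  (floor of 8 buffers of 8kB)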
Actually rename the program, rather than just claiming we did. Hook it
into the build system. Get rid of useless dependency on libpq. Clean up
#include list and messy whitespace.
This mainly makes it easier to parse the BASE_BACKUP command
with its options, avoids having to deal manually
with quoted identifiers in the label (previously broken),
and makes it easier to add new commands and options in
the future.
In passing, refactor the case statement in the walsender
to put each command in its own function.
Fix broken test for pre-existing postmaster, caused by wrong code for
appending lines to the lockfile; don't write a failed listen_address
setting into the lockfile; don't arbitrarily change the location of the
data directory in the lockfile compared to previous releases; provide more
consistent and useful definitions of the socket path and listen_address
entries; avoid assuming that pg_ctl has the same DEFAULT_PGSOCKET_DIR as
the postmaster; assorted code style improvements.
Instead, run them in the encoding that the locale selects, which is
more representative of real use.
Also document how locale and encoding for regression test runs can be
selected.
Add BASE_BACKUP command to walsender, allowing it to stream a
base backup to the client (in tar format). The syntax is still
far from ideal, that will be fixed in the switch to use a proper
grammar for walsender.
No client included yet, will come as a separate commit.
Magnus Hagander and Heikki Linnakangas
The array containment operators now behave per mathematical expectation
for empty arrays (ie, an empty array is contained in anything).
Both these operators and the query_int operators now work as expected in
GiST and GIN index searches, rather than having corner cases where the
index searches gave different answers.
Also, fix unexpected failures where the operators would claim that an array
contained nulls, when in fact there was no longer any null present (similar
to bug #5784). The restriction to not have nulls is still there, as
removing it would take a lot of added code complexity and probably slow
things down significantly.
Also, remove the arbitrary restriction to 1-D arrays; unlike the other
restriction, this was buying us nothing performance-wise.
Assorted cosmetic improvements and marginal performance improvements, too.
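A quick illustration of the new empty-array containment behavior (results
shown as comments):
    SELECT '{}'::int[] <@ ARRAY[1,2,3];  -- true: an empty array is contained in anything
    SELECT ARRAY[1,2,3] @> '{}'::int[];  -- true, for the same reason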
The only use we have had for amindexnulls is in determining whether an
index is safe to cluster on; but since the addition of the amclusterable
flag, that usage is pretty redundant.
In passing, clean up assorted sloppiness from the last patch that touched
pg_am.h: Natts_pg_am was wrong, and ambuildempty was not documented.
Per my recent proposal(s). Null key datums can now be returned by
extractValue and extractQuery functions, and will be stored in the index.
Also, placeholder entries are made for indexable items that are NULL or
contain no keys according to extractValue. This means that the index is
now always complete, having at least one entry for every indexed heap TID,
and so we can get rid of the prohibition on full-index scans. A full-index
scan is implemented much the same way as partial-match scans were already:
we build a bitmap representing all the TIDs found in the index, and then
drive the results off that.
Also, introduce a concept of a "search mode" that can be requested by
extractQuery when the operator requires matching to empty items (this is
just as cheap as matching to a single key) or requires a full index scan
(which is not so cheap, but it sure beats failing or giving wrong answers).
The behavior remains backward compatible for opclasses that don't return
any null keys or request a non-default search mode.
Using these features, we can now make the GIN index opclass for anyarray
behave in a way that matches the actual anyarray operators for &&, <@, @>,
and = ... which it failed to do before in assorted corner cases.
This commit fixes the core GIN code and ginarrayprocs.c, updates the
documentation, and adds some simple regression test cases for the new
behaviors using the array operators. The tsearch and contrib GIN opclass
support functions still need to be looked over and probably fixed.
Another thing I intend to fix separately is that this is pretty inefficient
for cases where more than one scan condition needs a full-index search:
we'll run duplicate GinScanEntrys, each one of which builds a large bitmap.
There is some existing logic to merge duplicate GinScanEntrys but it needs
refactoring to make it work for entries belonging to different scan keys.
Note that most of gin.h has been split out into a new file gin_private.h,
so that gin.h doesn't export anything that's not supposed to be used by GIN
opclasses or the rest of the backend. I did quite a bit of other code
beautification work as well, mostly fixing comments and choosing more
appropriate names for things.
This can be overridden by using NOREPLICATION on the CREATE ROLE
statement, but by default they will have it, making it backwards
compatible and "less surprising" (given that superusers normally
override all checks).
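For example (role name hypothetical), a superuser can still be created
without the replication privilege:
    CREATE ROLE dba SUPERUSER NOREPLICATION;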
Add new function pg_sequence_parameters that returns a sequence's start,
minimum, maximum, increment, and cycle values, and use that in the view.
(bug #5662; design suggestion by Tom Lane)
Also slightly adjust the view's column order and permissions after review of
SQL standard.
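A minimal sketch of querying the new function directly (sequence name
hypothetical):
    CREATE SEQUENCE order_id_seq;
    SELECT * FROM pg_sequence_parameters('order_id_seq'::regclass);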
Foreign tables are a core component of SQL/MED. This commit does
not provide a working SQL/MED infrastructure, because foreign tables
cannot yet be queried. Support for foreign table scans will need to
be added in a future patch. However, this patch creates the necessary
system catalog structure, syntax support, and support for ancillary
operations such as COMMENT and SECURITY LABEL.
Shigeru Hanada, heavily revised by Robert Haas
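A sketch of the DDL this enables (wrapper, server, and table names are made
up; at this stage the foreign table still cannot be queried):
    CREATE FOREIGN DATA WRAPPER dummy_fdw;
    CREATE SERVER dummy_srv FOREIGN DATA WRAPPER dummy_fdw;
    CREATE FOREIGN TABLE films (title text, release_year int) SERVER dummy_srv;
    COMMENT ON FOREIGN TABLE films IS 'remote film list (not yet scannable)';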
The contents of an unlogged table are not WAL-logged; thus, they are not
available on standby servers and are truncated whenever the database
system enters recovery. Indexes on unlogged tables are also unlogged.
Unlogged GiST indexes are not currently supported.
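For example (names hypothetical):
    CREATE UNLOGGED TABLE session_cache (id uuid PRIMARY KEY, payload text);
    -- Contents are skipped in WAL, truncated after a crash, and not visible
    -- on standby servers.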
This privilege is required to do Streaming Replication, instead of
superuser, making it possible to set up a SR slave that doesn't
have write permissions on the master.
Superuser privileges do NOT override this check, so in order to
use the default superuser account for replication it must be
explicitly granted the REPLICATION privilege. This is a
backwards-incompatible change, made in the interest of higher default security.
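A hedged sketch of creating a dedicated replication role (role name and
password are placeholders; a matching pg_hba.conf entry for replication
connections is still required):
    CREATE ROLE standby_repl LOGIN REPLICATION PASSWORD 'secret';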
This is how it was documented originally, but several years ago somebody
decided that DEFAULT isn't a type of constraint. Well, the grammar thinks
it is. The documentation was wrong in two ways: it alleged that DEFAULT
had to appear before any other kind of constraint, and it alleged that you
can't prefix a DEFAULT clause with a "CONSTRAINT name" clause, when in fact
you can. (The latter behavior probably isn't SQL-standard, but our grammar
has always allowed it.)
This patch responds to Fujii Masao's observation that the ALTER TABLE
documentation mistakenly implied that you couldn't include DEFAULT in
ALTER TABLE ADD COLUMN; though this isn't the way he proposed fixing it.
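For instance, both of the following column definitions are accepted by the
grammar (names hypothetical):
    CREATE TABLE items (
        qty   integer NOT NULL DEFAULT 1,                 -- DEFAULT after another constraint
        price numeric CONSTRAINT price_default DEFAULT 0  -- DEFAULT prefixed with CONSTRAINT name
    );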
My previous commit, 85cff3ce7f on
2010-12-25, failed to update errcodes.sgml or plerrcodes.h. This patch
corrects that oversight, per a gripe from Tom Lane, and also corrects
a typographical error.
cleanup stage to finish incomplete inserts or splits anymore. There were two
reasons for the cleanup step:
1. When a new tuple was inserted to a leaf page, the downlink in the parent
needed to be updated to contain (ie. to be consistent with) the new key.
Updating the parent in turn might require recursively updating the parent of
the parent. We now handle that by updating the parent while traversing down
the tree, so that when we insert the leaf tuple, all the parents are already
consistent with the new key, and the tree is consistent at every step.
2. When a page is split, we need to insert the downlink for the new right
page(s), and update the downlink for the original page to not include keys
that moved to the right page(s). We now handle that by setting a new flag,
F_FOLLOW_RIGHT, on the non-rightmost pages in the split. When that flag is
set, scans always follow the rightlink, regardless of the NSN mechanism used
to detect concurrent page splits. That way the tree is consistent right after
split, even though the downlink is still missing. This is very similar to the
way B-tree splits are handled. When the downlink is inserted in the parent,
the flag is cleared. To keep the insertion algorithm simple, when an
insertion sees an incomplete split, indicated by the F_FOLLOW_RIGHT flag, it
finishes the split before doing anything else.
These changes allow removing the whole "invalid tuple" mechanism, but I
retained the scan code to still follow invalid tuples correctly. While we
don't create any such tuples anymore, we want to handle them gracefully in
case you pg_upgrade a GiST index that has them. If we encounter any on an
insert, though, we just throw an error saying that you need to REINDEX.
The issue that got me into doing this is that if you did a checkpoint while
an insert or split was in progress, and the checkpoint finishes quickly so
that there is no WAL record related to the insert between RedoRecPtr and the
checkpoint record, recovery from that checkpoint would not know to finish
the incomplete insert. IOW, we have the same issue we solved with the
rm_safe_restartpoint mechanism during normal operation too. It's highly
unlikely to happen in practice, and this fix is far too large to backpatch,
so we're just going to live with it in previous versions, but this refactoring
fixes it going forward.
With this patch, you don't get the annoying
'index "FOO" needs VACUUM or REINDEX to finish crash recovery' notices
anymore if you crash at an unfortunate moment.
This function is like the PQserverVersion() function except
it returns the version of libpq, making it possible for a client
program or driver to determine which version of libpq is in
use at runtime, and not just at link time.
Suggested by Harald Armin Massa and several others.
Add support for collecting "minidump" style crash dumps on
Windows, by setting up an exception handling filter. Crash
dumps will be generated in PGDATA/crashdumps if the directory
is created (the existence of the directory is used as an on/off
switch for the generation of the dumps).
Craig Ringer and Magnus Hagander
Recent versions of the Linux system header files cause xlogdefs.h to
believe that open_datasync should be the default sync method, whereas
formerly fdatasync was the default on Linux. open_datasync is a bad
choice, first because it doesn't actually outperform fdatasync (in fact
the reverse), and second because we try to use O_DIRECT with it, causing
failures on certain filesystems (e.g., ext4 with data=journal option).
This part of the patch is largely per a proposal from Marti Raudsepp.
More extensive changes are likely to follow in HEAD, but this is as much
change as we want to back-patch.
Also clean up confusing code and incorrect documentation surrounding the
fsync_writethrough option. Those changes shouldn't result in any actual
behavioral change, but I chose to back-patch them anyway to keep the
branches looking similar in this area.
In 9.0 and HEAD, also do some copy-editing on the WAL Reliability
documentation section.
Back-patch to all supported branches, since any of them might get used
on modern Linux versions.
First, avoid scanning the whole ProcArray once we know there
are at least commit_siblings active; second, skip the check
altogether if commit_siblings = 0.
Greg Smith
This is a heavily revised version of builtin_knngist_core-0.9. The
ordering operators are no longer mixed in with actual quals, which would
have confused not only humans but significant parts of the planner.
Instead, ordering operators are carried separately throughout planning and
execution.
Since the API for ambeginscan and amrescan functions had to be changed
anyway, this commit takes the opportunity to rationalize that a bit.
RelationGetIndexScan no longer forces a premature index_rescan call;
instead, callers of index_beginscan must call index_rescan too. Aside from
making the AM-side initialization logic a bit less peculiar, this has the
advantage that we do not make a useless extra am_rescan call when there are
runtime key values. AMs formerly could not assume that the key values
passed to amrescan were actually valid; now they can.
Teodor Sigaev and Tom Lane
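Once an operator class supplies a suitable ordering operator, a
nearest-neighbor query can look like the sketch below (the point <-> distance
operator and the table are illustrative; opclass support arrives in follow-on
commits):
    CREATE INDEX places_loc_idx ON places USING gist (loc);
    SELECT name FROM places ORDER BY loc <-> point '(101.5, 45.2)' LIMIT 10;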
Basically, we want to distinguish all cases where the connection was
not made from those where it was. A convenient proxy for this is to
see if we got a message with a SQLSTATE code back from the postmaster.
This presumes that the postmaster will always send us a SQLSTATE in
a failure message, which is true for 7.4 and later postmasters in
every case except fork failure. (We could possibly complicate the
postmaster code to do something about that, but it seems not worth
the trouble, especially since pg_ctl's response for that case should
be to keep waiting anyway.)
If we did get a SQLSTATE from the postmaster, there are basically only
two cases, as per last week's discussion: ERRCODE_CANNOT_CONNECT_NOW
and everything else. Any other error code implies that the postmaster
is in principle willing to accept connections, it just didn't like or
couldn't handle this particular request. We want to make a special
case for ERRCODE_CANNOT_CONNECT_NOW so that "pg_ctl start -w" knows
it should keep waiting.
In passing, pick names for the enum constants that are a tad less
likely to present collision hazards in future.
This adds support for changing the schema of a conversion, operator,
operator class, operator family, text search configuration, text search
dictionary, text search parser, or text search template.
Dimitri Fontaine, with assorted corrections and other kibitzing.
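For example (object and schema names hypothetical):
    ALTER TEXT SEARCH DICTIONARY english_ispell SET SCHEMA archived;
    ALTER OPERATOR FAMILY my_int_ops USING btree SET SCHEMA archived;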
status, including a status where the server is running but refuses a
postgres connection.
Have pg_ctl use this new function. This fixes the case where pg_ctl
reports that the server is not running (cannot connect) but in fact it
is running.
supplied, also print the IP address. This allows IPv4 and IPv6 failures
to be distinguished. Also useful when a hostname resolves to multiple
IP addresses.
Also, remove use of inet_ntoa() and use our own inet_net_ntop() in all
places, including in libpq, because it is thread-safe.
This commit adds columns amoppurpose and amopsortfamily to pg_amop, and
column amcanorderbyop to pg_am. For the moment all the entries in
amcanorderbyop are "false", since the underlying support isn't there yet.
Also, extend the CREATE OPERATOR CLASS/ALTER OPERATOR FAMILY commands with
[ FOR SEARCH | FOR ORDER BY sort_operator_family ] clauses to allow the new
columns of pg_amop to be populated, and create pg_dump support for dumping
that information.
I also added some documentation, although it's perhaps a bit premature
given that the feature doesn't do anything useful yet.
Teodor Sigaev, Robert Haas, Tom Lane
Also, move index entries into the tables, closer to the function description,
for easier editing in the future. Resort some tables to be more alphabetical.
Remove the entries for count, max, min, and sum in the tutorial area, because
that was felt to be confusing.
Thom Brown
Currently, three conversion format specifiers are supported: %s for a
string, %L for an SQL literal, and %I for an SQL identifier. The latter
two are deliberately designed not to overlap with what sprintf() already
supports, in case we want to add more of sprintf()'s functionality here
later.
Patch by Pavel Stehule, heavily revised by me. Reviewed by Jeff Janes
and, in earlier versions, by Itagaki Takahiro and Tom Lane.
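A small example of the three specifiers (identifiers and the literal are
arbitrary):
    SELECT format('CREATE TABLE %I (%I text DEFAULT %L)',
                  'user data', 'name', 'n/a');
    -- => CREATE TABLE "user data" (name text DEFAULT 'n/a')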
In the previous coding, we simply issued ALTER SEQUENCE RESTART commands,
which do not roll back on error. This meant that an error between
truncating and committing left the sequences out of sync with the table
contents, with potentially bad consequences as were noted in a Warning on
the TRUNCATE man page.
To fix, create a new storage file (relfilenode) for a sequence that is to
be reset due to RESTART IDENTITY. If the transaction aborts, we'll
automatically revert to the old storage file. This acts just like a
rewriting ALTER TABLE operation. A penalty is that we have to take
exclusive lock on the sequence, but since we've already got exclusive lock
on its owning table, that seems unlikely to be much of a problem.
The interaction of this with usual nontransactional behaviors of sequence
operations is a bit weird, but it's hard to see what would be completely
consistent. Our choice is to discard cached-but-unissued sequence values
both when the RESTART is executed, and at rollback if any; but to not touch
the currval() state either time.
In passing, move the sequence reset operations to happen before not after
any AFTER TRUNCATE triggers are fired. The previous ordering was not
logically sensible, but was forced by the need to minimize inconsistency
if the triggers caused an error. Transactional rollback is a much better
solution to that.
Patch by Steve Singer, rather heavily adjusted by me.
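The transactional behavior can be sketched as follows (table name
hypothetical):
    BEGIN;
    TRUNCATE orders RESTART IDENTITY;  -- the sequence gets a new relfilenode
    ROLLBACK;                          -- table contents and sequence state both revert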
This new field counts the number of times that a backend which writes a
buffer out to the OS must also fsync() it. This happens when the
bgwriter fsync request queue is full, and is generally detrimental to
performance, so it's good to know when it's happening. Along the way,
log a new message at level DEBUG1 whenever we fail to hand off an fsync,
so that the problem can also be seen in examination of log files
(if the logging level is cranked up high enough).
Greg Smith, with minor tweaks by me.
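The new counter can be inspected alongside the existing bgwriter statistics,
for example:
    SELECT buffers_backend, buffers_backend_fsync FROM pg_stat_bgwriter;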
Replace for loops in makefiles with proper dependencies. Parallel
make can now span across directories. Also, make -k and make -q work
properly.
GNU make 3.80 or newer is now required.
Explicitly document that the -o options of pg_ctl init mode are meant
for initdb, not postgres (Euler Taveira de Oliveira). Assorted other
copy-editing (Tom).
PG 8.4 added a built-in feature for casting pretty much any data type to
string types (text, varchar, etc). We allowed this to work in any of the
historically-allowed syntaxes: CAST(x AS text), x::text, text(x), or
x.text. However, multiple complaints have shown that it's too easy to
invoke such casts unintentionally in the latter two styles, particularly
field selection. To cure the problem with the narrowest possible change
of behavior, disallow use of I/O conversion casts from composite types to
string types via functional/attribute syntax. The new functionality is
still available via cast syntax.
In passing, document the equivalence of functional and attribute syntax
in a more visible place.
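An illustration of what changes, using a hypothetical table t and a whole-row
reference to it:
    SELECT t::text FROM t;          -- still allowed (cast syntax)
    SELECT CAST(t AS text) FROM t;  -- still allowed
    SELECT text(t) FROM t;          -- now rejected (functional syntax)
    SELECT t.text FROM t;           -- now rejected (attribute syntax)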
\dn without "S" now hides all pg_XXX schemas as well as information_schema.
Thus, in a bare database you'll only see "public". ("public" is considered
a user schema, not a system schema, mainly because it's droppable.)
Per discussion back in late September.
WAL-logged. Make the notice about the lack of WAL-logging more visible by
making it a <caution>. Also remove the false statement from the hot standby
caveats section that hash indexes are not used during hot standby.
After much expenditure of effort, we've got this to the point where the
performance penalty is pretty minimal in typical cases.
Andrew Dunstan, reviewed by Brendan Jurd, Dean Rasheed, and Tom Lane
This patch eliminates various bizarre behaviors caused by sloppy thinking
about the difference between a domain type and its underlying array type.
In particular, the operation of updating one element of such an array
has to be considered as yielding a value of the underlying array type,
*not* a value of the domain, because there's no assurance that the
domain's CHECK constraints are still satisfied. If we're intending to
store the result back into a domain column, we have to re-cast to the
domain type so that constraints are re-checked.
For similar reasons, such a domain can't be blindly matched to an ANYARRAY
polymorphic parameter, because the polymorphic function is likely to apply
array-ish operations that could invalidate the domain constraints. For the
moment, we just forbid such matching. We might later wish to insert an
automatic downcast to the underlying array type, but such a change should
also change matching of domains to ANYELEMENT for consistency.
To ensure that all such logic is rechecked, this patch removes the original
hack of setting a domain's pg_type.typelem field to match its base type;
the typelem will always be zero instead. In those places where it's really
okay to look through the domain type with no other logic changes, use the
newly added get_base_element_type function in place of get_element_type.
catversion bumped due to change in pg_type contents.
Per bug #5717 from Richard Huxton and subsequent discussion.
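A hedged sketch of the re-check behavior (domain and table names made up):
    CREATE DOMAIN nonzero_ints AS int[] CHECK (0 <> ALL (VALUE));
    CREATE TABLE t (a nonzero_ints);
    INSERT INTO t VALUES ('{1,2,3}');
    -- The element assignment yields a plain int[], which is re-cast to the
    -- domain on storage, so the CHECK constraint is enforced again:
    UPDATE t SET a[2] = 0;  -- fails the domain's CHECK constraint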
This is not the hoped-for facility of using INSERT/UPDATE/DELETE inside
a WITH, but rather the other way around. It seems useful in its own
right anyway.
Note: catversion bumped because, although the contents of stored rules
might look compatible, there's actually a subtle semantic change.
A single Query containing a WITH and INSERT...VALUES now represents
writing the WITH before the INSERT, not before the VALUES. While it's
not clear that that matters to anyone, it seems like a good idea to
have it cited in the git history for catversion.h.
Original patch by Marko Tiikkaja, with updating and cleanup by
Hitoshi Harada.
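A small example of the now-allowed form, with invented table names (note this
is a WITH attached to an INSERT, not a data-modifying statement inside WITH):
    WITH stale AS (
        SELECT id, payload FROM events
        WHERE created < now() - interval '30 days'
    )
    INSERT INTO events_archive
    SELECT id, payload FROM stale;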
I also rearranged the order of the sections to match the logical order
of processing steps: the distinct-elimination implied by SELECT DISTINCT
happens before, not after, any UNION/INTERSECT/EXCEPT combination.
Per a suggestion from Hitoshi Harada.
This is a parsed DocBook DTD for the PSGML Emacs mode, but it hasn't
been updated since we switched to DocBook 4.2 about seven years ago.
Also, PSGML has deprecated this method of DTD parsing.
The GRANT reference page failed to mention that the USAGE privilege
allows modifying associated user mappings, although this was already
documented on the CREATE/ALTER/DROP USER MAPPING pages.