Named and default arguments in window function calls didn't work because
the planner failed to do the necessary preprocessing of a WindowFunc's
argument list. Add the few dozen lines
of code needed to handle that.
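For illustration, a minimal sketch (wrapper and table names are hypothetical;
only a superuser can tie a SQL-level definition to a built-in C window
function like this):

    CREATE FUNCTION my_nth(v anyelement, n int DEFAULT 1)
      RETURNS anyelement WINDOW LANGUAGE internal AS 'window_nth_value';

    SELECT my_nth(x) OVER (ORDER BY x) FROM tbl;          -- default argument
    SELECT my_nth(x, n := 2) OVER (ORDER BY x) FROM tbl;  -- named argument

Previously such calls were likely to crash or misbehave; both now work.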
Although this sounds like a feature addition, it's really a bug fix because
the default-argument case was likely to crash previously, due to lack of
checking of the number of supplied arguments in the built-in window
functions. It's not a security issue because there's no way for a
non-superuser to create a window function definition with defaults that
refers to a built-in C function, but nonetheless people might be annoyed
that it crashes rather than producing a useful error message. So
back-patch as far as the patch applies easily, which turns out to be 9.2.
I'll put a band-aid in earlier versions as a separate patch.
(Note that these features still don't work for aggregates, and fixing that
case will be harder since we represent aggregate arg lists as target lists
not bare expression lists. There's no crash risk though because CREATE
AGGREGATE doesn't accept defaults, and we reject named-argument notation
when parsing an aggregate call.)
For rather inscrutable reasons, SQL:2008 disallows copying-and-modifying a
window definition that has any explicit framing clause. The error message
we gave for this only made sense if the referencing window definition
itself contains an explicit framing clause, which it might well not.
Moreover, in the context of an OVER clause it's not exactly obvious that
"OVER (windowname)" implies copy-and-modify while "OVER windowname" does
not. This has led to multiple complaints, e.g. bug #5199 from Iliya
Krapchatov. Change to a hopefully more intelligible error message, and
in the case where we have just "OVER (windowname)", add a HINT suggesting
that omitting the parentheses will fix it. Also improve the related
documentation. Back-patch to all supported branches.
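For example (hypothetical table and window names), given a window with an
explicit frame clause:

    SELECT sum(salary) OVER w,     -- plain reference: allowed
           sum(salary) OVER (w)    -- copy-and-modify: rejected, now with a
                                   -- HINT to omit the parentheses
    FROM empsalary
    WINDOW w AS (ORDER BY salary ROWS UNBOUNDED PRECEDING);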
The SGML documentation, as well as code comments, failed to note that an FDW's
validator will be applied to foreign-table options for foreign tables using
the FDW.
Etsuro Fujita
The previous coding attempted to activate all the GUC settings specified
in SET clauses, so that the function validator could operate in the GUC
environment expected by the function body. However, this is problematic
when restoring a dump, since the SET clauses might refer to database
objects that don't exist yet. We already have the parameter
check_function_bodies that's meant to prevent forward references in
function definitions from breaking dumps, so let's change CREATE FUNCTION
to not install the SET values if check_function_bodies is off.
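A minimal sketch of the restore hazard this addresses (schema name
hypothetical):

    SET check_function_bodies = off;

    CREATE FUNCTION report() RETURNS int
      LANGUAGE sql
      SET search_path = reporting   -- schema created later in the dump
      AS 'SELECT 1';

With check_function_bodies off, the SET value is recorded for use at call
time but no longer applied during CREATE, so the restore succeeds.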
Authors of function validators were already advised not to make any
"context sensitive" checks when check_function_bodies is off, if indeed
they're checking anything at all in that mode. But extend the
documentation to point out the GUC issue in particular.
(Note that we still check the SET clauses to some extent; the behavior
with !check_function_bodies is now approximately equivalent to what ALTER
DATABASE/ROLE have been doing for a while with context-dependent GUCs.)
This problem can be demonstrated in all active branches, so back-patch
all the way.
It seems like a good idea to update these examples since some fairly
basic planner behaviors have changed in 9.3; notably that the startup cost
for an indexscan plan node is no longer invariably estimated at 0.00.
Currently we don't need to update the pg_tablespace catalog
after redefining the symbolic links to the tablespaces,
because the pg_tablespace.spclocation column was removed in
PostgreSQL 9.2.
Back-patch to 9.2, where pg_tablespace.spclocation was removed.
Ian Barwick, with minor change by me.
plpgsql often just remembers SPI-result tuple tables in local variables,
and has no mechanism for freeing them if an ereport(ERROR) causes an escape
out of the execution function whose local variable it is. In the original
coding, that wasn't a problem because the tuple table would be cleaned up
when the function's SPI context went away during transaction abort.
However, once plpgsql grew the ability to trap exceptions, repeated
trapping of errors within a function could result in significant
intra-function-call memory leakage, as illustrated in bug #8279 from
Chad Wagner.
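The leak pattern looked roughly like this (a hedged sketch of the bug #8279
scenario, not Chad Wagner's exact test case):

    CREATE FUNCTION trap_many_errors() RETURNS void
    LANGUAGE plpgsql AS $$
    BEGIN
      FOR i IN 1..1000000 LOOP
        BEGIN
          PERFORM 1 / 0;   -- error is trapped below; the SPI tuple table
                           -- created for this query used to be leaked
        EXCEPTION WHEN division_by_zero THEN
          NULL;            -- swallow the error and keep going
        END;
      END LOOP;
    END $$;

Memory usage grew for the duration of the call; with tuple tables freed at
subtransaction abort, it stays flat.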
We could fix this locally in plpgsql with a bunch of PG_TRY/PG_CATCH
coding, but that would be tedious, probably slow, and prone to bugs of
omission; moreover it would do nothing for similar risks elsewhere.
What seems like a better plan is to make SPI itself responsible for
freeing tuple tables at subtransaction abort. This patch attacks the
problem that way, keeping a list of live tuple tables within each SPI
function context. Currently, such freeing is automatic for tuple tables
made within the failed subtransaction. We might later add a SPI call to
mark a tuple table as not to be freed this way, allowing callers to opt
out; but until someone exhibits a clear use-case for such behavior, it
doesn't seem worth bothering.
A very useful side-effect of this change is that SPI_freetuptable() can
now defend itself against bad calls, such as duplicate free requests;
this should make things more robust in many places. (In particular,
this reduces the risks involved if a third-party extension contains
now-redundant SPI_freetuptable() calls in error cleanup code.)
Even though the leakage problem is of long standing, it seems imprudent
to back-patch this into stable branches, since it does represent an API
semantics change for SPI users. We'll patch this in 9.3, but live with
the leakage in older branches.
Commit 7f7485a0cd made these changes
in master; per discussion, backport the API changes (but not the
functional changes), so that people don't get used to the 9.3 API
only to see it get broken in the next release. There are already
some people coding to the original 9.3 API, and this will cause
minor breakage, but there will be even more if we wait until next
year to roll out these changes.
Per discussion on pgsql-hackers, these aren't really needed. Interim
versions of the background worker patch had the worker starting with
signals already unblocked, which would have made this necessary.
But the final version does not, so we don't really need it; and it
doesn't work well with the new facility for starting dynamic background
workers, so just rip it out.
Also per discussion on pgsql-hackers, back-patch this change to 9.3.
It's best to get the API break out of the way before we do an
official release of this facility, to avoid more pain for extension
authors later.
The documentation for ALTER VIEW had a minor copy-and-paste error in
defining the parameters. Noticed when reviewing the WITH CHECK OPTION
patch.
Backpatch to 9.2 where this was first introduced.
This makes superuser-issued REFRESH MATERIALIZED VIEW safe regardless of
the object's provenance. REINDEX is an earlier example of this pattern.
As a downside, functions called from materialized views must tolerate
running in a security-restricted operation. CREATE MATERIALIZED VIEW
need not change user ID. Nonetheless, avoid creation of materialized
views that will invariably fail REFRESH by making it, too, start a
security-restricted operation.
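For example (hypothetical names), a function that creates a temporary table
cannot run in a security-restricted operation, so this now fails at CREATE
just as it would at REFRESH:

    CREATE FUNCTION build_scratch() RETURNS int
    LANGUAGE plpgsql AS $$
    BEGIN
      CREATE TEMP TABLE scratch (x int);  -- disallowed while security-restricted
      RETURN 1;
    END $$;

    CREATE MATERIALIZED VIEW mv AS SELECT build_scratch();  -- error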
Back-patch to 9.3 so materialized views have this from the beginning.
Reviewed by Kevin Grittner.
Previously, pg_upgrade docs recommended using .pgpass if using MD5
authentication to avoid being prompted for a password. It turns out pg_ctl
never prompts for a password, so MD5 requires .pgpass --- document that.
Also recommend 'peer' authentication.
Backpatch back to 9.1.
Specifically, permit attaching them to the error in RAISE and retrieving
them from a caught error in GET STACKED DIAGNOSTICS. RAISE enforces
nothing about the content of the fields; for its purposes, they are just
additional string fields. Consequently, clarify in the protocol and
libpq documentation that the usual relationships between error fields,
like a schema name appearing wherever a table name appears, are not
universal. This freedom has other applications; consider an FDW
propagating an error from an RDBMS having no schema support.
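A minimal sketch of both directions (all names hypothetical):

    DO $$
    DECLARE
      tbl text;
      con text;
    BEGIN
      BEGIN
        RAISE EXCEPTION 'row rejected'
          USING TABLE = 'users', CONSTRAINT = 'users_email_key';
      EXCEPTION WHEN others THEN
        GET STACKED DIAGNOSTICS tbl = TABLE_NAME,
                                con = CONSTRAINT_NAME;
        RAISE NOTICE 'error involved table %, constraint %', tbl, con;
      END;
    END $$;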
Back-patch to 9.3, where core support for the error fields was
introduced. This prevents the confusion of having a release where libpq
exposes the fields and PL/pgSQL does not.
Pavel Stehule, lexical revisions by Noah Misch.
Make it easier for readers of the floating-point docs to find out about possibly
truncated values.
Per complaint from Tom Duffey in message
F0E0F874-C86F-48D1-AA2A-0C5365BF5118@trillitech.com
Author: Albe Laurenz
Reviewed by: Abhijit Menon-Sen
On Unix, you can embed double-quotes in single-quotes, and vice versa.
However, on Windows, you can only escape double-quotes in double-quotes,
so use that in the pg_dump -t/table example.
Backpatch to 9.3.
Report from Mike Toews
Adjust the wording in the first para of "Sequence Manipulation Functions"
so that neither of the link phrases in it break across line boundaries,
in either A4- or US-page-size PDF output. This fixes a reported build
failure for the 9.3beta2 A4 PDF docs, and future-proofs this particular
para against causing similar problems in future. (Perhaps somebody will
fix this issue in the SGML/TeX documentation tool chain someday, but I'm
not holding my breath.)
Back-patch to all supported branches, since the same problem could rise up
to bite us in future updates if anyone changes anything earlier than this
in func.sgml.
If there is no <date> element, the publication date for the EPUB
manifest is taken from the copyright year. But something like
"1996-2013" is not a legal date specification. So the EPUB output
currently fails epubcheck.
Put in a separate <date> element with the current year. Put it in
legal.sgml, because copyright.pl already reminds us to update that file
manually, so it hopefully won't be missed.
The documentation for ALTER TABLE .. VALIDATE CONSTRAINT previously gave
incorrect details about lock levels and therefore incomplete reasons to
use the option.
Initial bug report and fix from Marko Tiikkaja
Reworded by me to include comments by Kevin Grittner
Extend the FDW API (which we already changed for 9.3) so that an FDW can
report whether specific foreign tables are insertable/updatable/deletable.
The default assumption continues to be that they're updatable if the
relevant executor callback function is supplied by the FDW, but finer
granularity is now possible. As a test case, add an "updatable" option to
contrib/postgres_fdw.
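With contrib/postgres_fdw the option can be set at either level (server and
table names hypothetical):

    ALTER SERVER remote_srv OPTIONS (ADD updatable 'false');       -- default
    ALTER FOREIGN TABLE remote_tab OPTIONS (ADD updatable 'true'); -- override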
This patch also fixes the information_schema views, which previously did
not think that foreign tables were ever updatable, and fixes
view_is_auto_updatable() so that a view on a foreign table can be
auto-updatable.
initdb forced due to changes in information_schema views and the functions
they rely on. This is a bit unfortunate to do post-beta1, but if we don't
change this now then we'll have another API break for FDWs when we do
change it.
Dean Rasheed, somewhat editorialized on by Tom Lane
Per discussion on -hackers. We treat Unicode escapes when unescaping
them similarly to the way we treat them in PostgreSQL string literals.
Escapes in the ASCII range are always accepted, no matter what the
database encoding. Escapes for higher code points are only processed in
UTF8 databases, and attempts to process them in other databases will
result in an error. \u0000 is never unescaped, since it would result in
an impermissible null byte.
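A quick illustration of the resulting behavior:

    SELECT '["\u0041"]'::json ->> 0;  -- 'A'; ASCII escapes work everywhere
    SELECT '["\u00e9"]'::json ->> 0;  -- 'é' in a UTF8 database, error elsewhere
    SELECT '["\u0000"]'::json ->> 0;  -- error: would produce a null byte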
Per discussion, this restriction isn't needed for any real security reason,
and it seems to confuse people more often than it helps them. It could
also result in some database states being unrestorable. So just drop it.
Back-patch to 9.0, where ALTER DEFAULT PRIVILEGES was introduced.
In 9.2, Unicode escape sequences are not analysed at all other than
to make sure that they are in the form \uXXXX. But in 9.3 many of the
new operators and functions try to turn JSON text values into text in
the server encoding, and this includes de-escaping Unicode escape
sequences. This processing had not taken into account the possibility
that the text might contain a surrogate pair designating a character
outside the BMP. That is now handled correctly.
This also enforces correct use of surrogate pairs, something that is not
done by the type's input routines. This fact is noted in the docs.
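For instance, a character outside the BMP such as U+1D11E must be written
as a surrogate pair in JSON:

    SELECT '["\ud834\udd1e"]'::json ->> 0;  -- the single character U+1D11E
    SELECT '["\ud834"]'::json ->> 0;        -- error: orphaned surrogate half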
Although the DTD technically allows this, the resulting HTML is invalid
because it puts block elements inside inline elements. DocBook 5.0 also
doesn't allow it anymore, so it's fair to assume that this was never
really intended to work. Replace <synopsis> with <literal>, which is
the markup used elsewhere in the documentation in similar cases.
The documentation for ALTER TYPE .. RENAME claimed to support a
RESTRICT/CASCADE option at the 'type' level, which wasn't implemented
and doesn't make a whole lot of sense to begin with. What is supported,
and previously undocumented, is
ALTER TYPE .. RENAME ATTRIBUTE .. RESTRICT/CASCADE.
I've updated the documentation and back-patched this to 9.1 where it was
first introduced.
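That is (type and attribute names hypothetical):

    ALTER TYPE complex RENAME ATTRIBUTE re TO real_part CASCADE;

CASCADE propagates the rename to typed tables of the type; RESTRICT, the
default, errors out if any exist.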
The 9.2 patch that added argument name support in SQL-language functions
missed updating a parenthetical comment about that in the CREATE FUNCTION
reference page. Noted by Erwin Brandstetter.
If a standby server has a cascading standby server connected to it, it's
possible that WAL has already been sent up to the next WAL page boundary,
splitting a WAL record in the middle, when the first standby server is
promoted. Don't throw an assertion failure or error in walsender if that
happens.
Also, fix a variant of the same bug in pg_receivexlog: if it had already
received WAL on the previous timeline up to a segment boundary, and the
upstream standby server was then promoted so that the timeline switch record
fell on the previous segment, pg_receivexlog would miss the segment containing
the timeline switch. To fix that, have walsender send the position of the
timeline switch at end-of-streaming, in addition to the next timeline's ID.
It was previously assumed that the switch happened exactly where the
streaming stopped.
Note: this is an incompatible change in the streaming protocol. You might
get an error if you try to stream over timeline switches, if the client is
running 9.3beta1 and the server is more recent. It should be fine after a
reconnect, however.
Reported by Fujii Masao.
What we have implemented is a radix tree (or a radix trie or a patricia
trie), but the docs and code comments incorrectly called it a "suffix tree".
Alexander Korotkov
It is a surprisingly common mistake to leave the backup_label file out of a
base backup. Say more explicitly that it must be included.
Jeff Janes, with minor rewording by me.
Previously this state was represented by whether the view's disk file had
zero or nonzero size, which is problematic for numerous reasons, since it
breaks a fundamental assumption about heap storage. This was done to
allow unlogged matviews to revert to unpopulated status after a crash
despite our lack of any ability to update catalog entries post-crash.
However, this poses enough risk of future problems that it seems better to
not support unlogged matviews until we can find another way. Accordingly,
revert that choice as well as a number of existing kluges forced by it
in favor of creating a pg_class.relispopulated flag column.
The initial implementation of this feature was really unsupportable,
because it relied on the physical size of an on-disk file to carry the
relation's populated/unpopulated state, which is at least a modularity
violation and could have serious long-term consequences. We could say that
an unlogged matview goes to empty on crash, but not everybody likes that
definition, so let's just remove the feature for 9.3. We can add it back
when we have a less klugy implementation.
I left the grammar and tab-completion support for CREATE UNLOGGED
MATERIALIZED VIEW in place, since it's harmless and allows delivering a
more specific error message about the unsupported feature.
I'm committing this separately to ease identification of what should be
reverted when/if we are able to re-enable the feature.
Restore 4-byte designation for docs. Fix 9.3 doc query to properly pad
to four digits.
Backpatch to all active branches.
Per suggestions from Ian Lawrence Barwick
Previously, libpq and the backend had opposite ideas about whether
it was necessary for the client to send a CopyDone message after
receiving an ErrorResponse, making it impossible to cleanly exit
COPY BOTH mode. Fix libpq so that it works correctly, adopting the
backend's notion that an ErrorResponse kills the copy in both
directions.
Adjust receivelog.c to avoid a degradation in the quality of the
resulting error messages. libpqwalreceiver.c is already doing
the right thing, so no adjustment needed there.
Add an explicit statement to the documentation explaining how
this part of the protocol is supposed to work, in the hopes of
avoiding future confusion in this area.
Since the consequences of all this confusion are very limited,
especially in the back-branches where no client ever attempts
to exit COPY BOTH mode without closing the connection entirely,
no back-patch.
This changes the behavior of the start and stop actions to exit
successfully if the server was already started or stopped.
This changes the default behavior of the start action: Before, if the
server was already running, it would print a message and succeed. Now,
that situation will result in an error. When running in idempotent
mode, no message is printed and pg_ctl exits successfully.
It was considered to just make the idempotent behavior the default and
only option, but pg_upgrade needs the old behavior.
This wasn't addressed in the original patch, but it doesn't take very
much additional code to cover the case, so let's get it done.
Since pg_trgm 1.1 hasn't been released yet, I just changed the definition
of what's in it, rather than inventing a 1.2.
This works by extracting trigrams from the given regular expression,
in generally the same spirit as the previously-existing support for
LIKE searches, though of course the details are far more complicated.
Currently, only GIN indexes are supported. We might be able to make
it work with GiST indexes later.
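Usage mirrors the existing LIKE support (table and column names
hypothetical):

    CREATE EXTENSION pg_trgm;
    CREATE INDEX docs_body_trgm ON docs USING gin (body gin_trgm_ops);

    SELECT * FROM docs WHERE body ~ '(foo|bar)baz';  -- can now use the index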
The implementation includes adding API functions to backend/regex/
to provide a view of the search NFA created from a regular expression.
These functions are meant to be generic enough to be supportable in
a standalone version of the regex library, should that ever happen.
Alexander Korotkov, reviewed by Heikki Linnakangas and Tom Lane
This will hopefully be easier to use than pg_config for users who are
already used to the pkg-config interface. It also works better for
multi-arch installations.
reviewed by Tom Lane
The JSON parser is converted into a recursive descent parser, and
exposed for use by other modules such as extensions. The API provides
hooks for all the significant parser events, such as the beginning and end
of objects and arrays, and providing functions to handle these hooks
allows for fairly simple construction of a wide variety of JSON
processing functions. A set of new basic processing functions and
operators is also added, which use this API, including operations to
extract array elements, object fields, get the length of arrays and the
set of keys of a field, deconstruct an object into a set of key/value
pairs, and create records from JSON objects and arrays of objects.
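A few of the new operations, as a quick sketch:

    SELECT '{"a": [1,2,3]}'::json -> 'a' ->> 2;  -- '3'
    SELECT json_array_length('[1,2,3]');         -- 3
    SELECT * FROM json_each('{"a":1,"b":2}');    -- one row per key/value pair
    SELECT json_object_keys('{"a":1,"b":2}');    -- 'a', 'b'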
Catalog version bumped.
Andrew Dunstan, with some documentation assistance from Merlin Moncure.
I changed this in commit fd15dba543, but
missed the fact that the SGML documentation of the function specified
exactly what it did. Well, one of the two places where it's specified
documented that --- probably I looked at the other place and thought
nothing needed to be done. Sync the two places where encode() and
decode() are described.
The main change here is to call security_compute_create_name_raw()
rather than security_compute_create_raw(). This ups the minimum
requirement for libselinux from 2.0.99 to 2.1.10, but it looks
like most distributions will have picked that up before 9.3 is out.
KaiGai Kohei
This event takes place just before ddl_command_end, and is fired if and
only if at least one object has been dropped by the command. (For
instance, DROP TABLE IF EXISTS of a table that does not in fact exist
will not lead to such a trigger firing.) Commands that drop multiple
objects (such as DROP SCHEMA or DROP OWNED BY) will cause a single event
to fire. Some firings might be surprising, such as
ALTER TABLE DROP COLUMN.
The trigger is fired after the drop has taken place, because that has
been deemed the safest design, to avoid exposing possibly-inconsistent
internal state (system catalogs as well as current transaction) to the
user function code. This means that careful tracking of object
identification is required during the object removal phase.
Like other currently existing events, there is support for tag
filtering.
To support the new event, add a new pg_event_trigger_dropped_objects()
set-returning function, which returns a set of rows comprising the
objects affected by the command. This is to be used within the user
function code, and is mostly modelled after the recently introduced
pg_identify_object() function.
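A minimal sketch of the intended usage (trigger and function names
hypothetical):

    CREATE FUNCTION log_drops() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
      obj record;
    BEGIN
      FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() LOOP
        RAISE NOTICE 'dropped %: %', obj.object_type, obj.object_identity;
      END LOOP;
    END $$;

    CREATE EVENT TRIGGER report_drops ON sql_drop
      EXECUTE PROCEDURE log_drops();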
Catalog version bumped due to the new function.
Dimitri Fontaine and Álvaro Herrera
Review by Robert Haas, Tom Lane
A new 'starttli' field was added to the response of BASE_BACKUP command.
Make pg_basebackup tolerate the case that it's missing, so that it still
works with older servers.
Add an explicit check for the server version, so that you get a nicer error
message if you try to use it with a pre-9.1 server.
The streaming protocol message format changed in 9.3, so -X stream still won't
work with pre-9.3 servers. I added a version check to ReceiveXLogStream()
earlier; write it slightly differently here, so that in 9.4 it will still
work with a 9.3 server. (In 9.4, the error message needs to be adjusted to
"9.3 or above", though.) Also, if the version check fails, don't retry.
New infrastructure is added which creates a set number of workers
(threads on Windows, forked processes on Unix). Jobs are then
handed out to these workers by the master process as needed.
pg_restore is adjusted to use this new infrastructure in place of the
old setup which created a new worker for each step on the fly. Parallel
dumps acquire a snapshot clone, if available, in order to stay consistent.
The parallel option is selected by the -j / --jobs command line
parameter of pg_dump.
Joachim Wieland, lightly editorialized by Andrew Dunstan.
Doing that results in a broken index entry in PDF output. We had only
a few like that, which is probably why nobody noticed before.
Standardize on putting the <term> first.
Josh Kupershmidt
One of the use-cases for postgres_fdw is extracting data from older PG
servers, so cross-version compatibility is important. Document what we
can do here, and further annotate some of the coding choices that create
compatibility constraints. In passing, remove one unnecessary
incompatibility with old servers, namely assuming that we didn't need to
quote the timezone name 'UTC'.
Commit 13fe298ca0 changed this GUC to be
PGC_SUSET, but neglected to update the documentation to match.
While at it, edit and rearrange the text a little for clarity.
Checksums are set immediately prior to flush out of shared buffers
and checked when pages are read in again. Hint bit setting will
require a full-page write when the block is dirtied, which causes various
infrastructure changes. Extensive comments, docs and README.
A WARNING message is thrown if the checksum fails on a non-all-zeroes page;
an ERROR is also thrown, but it can be disabled with
ignore_checksum_failure = on.
The feature is enabled by an initdb option, since the transition from
checksums off to on is long and complex and has not yet been implemented.
The default is not to use checksums.
The checksum used is the WAL CRC-32 truncated to 16 bits.
Simon Riggs, Jeff Davis, Greg Smith
Wide input and assistance from many community members. Thank you.
Introduce pg_identify_object(oid,oid,int4), which is similar in spirit
to pg_describe_object but instead produces a row of machine-readable
information to uniquely identify the given object, without resorting to
OIDs or other internal representation. This is intended to be used in
the event trigger implementation, to report objects being operated on;
but it has usefulness of its own.
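For example, for an ordinary table (names hypothetical):

    SELECT * FROM pg_identify_object('pg_class'::regclass,
                                     'public.accounts'::regclass, 0);
    --  type  | schema |   name   |    identity
    -- -------+--------+----------+-----------------
    --  table | public | accounts | public.accounts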
Catalog version bumped because of the new function.
Add a section to the Reliability documentation about what is and is not
protected for various file types.
Further edits welcome.
Designed to allow 1-2 line change when/if checksums are committed.
Inspired by docs written by Jeff Davis, though completely different from his
patch.
The docs showed that early-January dates can be considered part of the
previous year for week-counting purposes, but failed to say explicitly
that late-December dates can also be considered part of the next year.
Fix that, and add a cross-reference to the "isoyear" field. Per bug
#7967 from Pawel Kobylak.
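Both directions, concretely:

    SELECT to_char(date '2012-12-31', 'IYYY-IW');  -- '2013-01': late December
                                                   -- counts in the next ISO year
    SELECT to_char(date '2016-01-01', 'IYYY-IW');  -- '2015-53': early January
                                                   -- counts in the previous one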
Remove use of PageSetTLI() from all page manipulation functions
and adjust README to indicate change in the way we make changes
to pages. Repurpose those bytes into the pd_checksum field and
explain how that works in comments about page header.
Refactoring ahead of actual feature patch which would make use
of the checksum field, arriving later.
Jeff Davis, with comments and doc changes by Simon Riggs
Direction suggested by Robert Haas; many others providing
review comments.
This GUC allows limiting the time spent waiting to acquire any one
heavyweight lock.
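For example (table name hypothetical):

    SET lock_timeout = '2s';
    -- a statement that waits longer than 2 seconds for a heavyweight lock
    -- now fails with "canceling statement due to lock timeout":
    LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE;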
In support of this, improve the recently-added timeout infrastructure
to permit efficiently enabling or disabling multiple timeouts at once.
That reduces the performance hit from turning on lock_timeout, though
it's still not zero.
Zoltán Böszörményi, reviewed by Tom Lane,
Stephen Frost, and Hari Babu
Clarify the docs explaining what commit_delay does, and add a
recommendation about a useful value for it, namely half of the single-page
fsync time reported by pg_test_fsync. This is informed by testing of
the new-in-9.3 implementation of commit_delay; in prior versions it
was far harder to arrive at a useful setting.
In passing, do some wordsmithing and markup-fixing in the same general
area.
Also, change pg_test_fsync's default time-per-test from 2 seconds to 5.
The old value was about the minimum at which the results could be taken
seriously at all, and so seems a tad optimistic as a default.
Peter Geoghegan, reviewed by Noah Misch; some additional editing by me
There's still some discussion about exactly how postgres_fdw ought to
handle this case, but there seems no debate that we want to allow defaults
to be used for inserts into foreign tables. So remove the core-code
restrictions that prevented it.
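So something like this now works (server and table names hypothetical):

    CREATE FOREIGN TABLE measurements (
        id     int,
        tstamp timestamptz DEFAULT now()   -- defaults are now accepted
    ) SERVER remote_srv;

    INSERT INTO measurements (id) VALUES (1);  -- tstamp is filled in locally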
While at it, get rid of the special grammar productions for CREATE FOREIGN
TABLE, and instead add explicit FEATURE_NOT_SUPPORTED error checks for the
disallowed cases. This makes the grammar a shade smaller, and more
importantly results in much more intelligible error messages for
unsupported cases. It's also one less thing to fix if we ever start
supporting constraints on foreign tables.
Apparently I lost some of the edits I had done on this page for commit
0ac5ad5134.
Per note from Etsuro Fujita, although I didn't use his patch.
Make some of the wording in the affected section a bit more complete,
too.
This adds the following:
json_agg(anyrecord) -> json
to_json(any) -> json
hstore_to_json(hstore) -> json (also used as a cast)
hstore_to_json_loose(hstore) -> json
The last provides heuristic treatment of numbers and booleans.
Also, in json generation, if any non-builtin type has a cast to json,
that function is used instead of the type's output function.
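Quick examples of the new functions:

    SELECT json_agg(t) FROM (VALUES (1, 'a'), (2, 'b')) AS t(id, label);
    -- [{"id":1,"label":"a"}, {"id":2,"label":"b"}]  (approximately)

    SELECT to_json('tricky "text"'::text);
    -- "tricky \"text\""

    SELECT hstore_to_json_loose('a=>1, b=>t'::hstore);
    -- {"a": 1, "b": true}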
Andrew Dunstan, reviewed by Steve Singer.
Catalog version bumped.
This patch adds the core-system infrastructure needed to support updates
on foreign tables, and extends contrib/postgres_fdw to allow updates
against remote Postgres servers. There's still a great deal of room for
improvement in optimization of remote updates, but at least there's basic
functionality there now.
KaiGai Kohei, reviewed by Alexander Korotkov and Laurenz Albe, and rather
heavily revised by Tom Lane.
A materialized view has a rule, just like a view, and a heap and
other physical properties, like a table. The rule is only used to
populate the table; references in queries refer to the
materialized data.
This is a minimal implementation, but should still be useful in
many cases. Currently data is only populated "on demand" by the
CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements.
It is expected that future releases will add incremental updates
with various timings, and that a more refined concept of defining
what is "fresh" data will be developed. At some point it may even
be possible to have queries use a materialized view in place of
references to underlying tables, but that requires the other
above-mentioned features to be working first.
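Basic usage (names hypothetical):

    CREATE MATERIALIZED VIEW order_totals AS
      SELECT customer_id, sum(amount) AS total
      FROM orders
      GROUP BY customer_id;

    -- later: re-run the stored query and replace the materialized data
    REFRESH MATERIALIZED VIEW order_totals;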
Much of the documentation work by Robert Haas.
Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja
Security review by KaiGai Kohei, with a decision on how best to
implement sepgsql still pending.
This includes backend "COPY TO/FROM PROGRAM '...'" syntax, and corresponding
psql \copy syntax. As with reading/writing files, the backend version is
superuser-only, and in the psql version the program is run by the client.
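For instance (paths and commands hypothetical):

    COPY logs FROM PROGRAM 'gunzip -c /var/log/app.csv.gz' WITH (FORMAT csv);
    COPY (SELECT * FROM logs) TO PROGRAM 'gzip > /tmp/logs.csv.gz'
      WITH (FORMAT csv);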
In passing, the psql \copy STDIN/STDOUT syntax is subtly changed: if
stdin/stdout is quoted, it is now interpreted as a filename. For example,
"\copy foo from 'stdin'" now reads from a file called 'stdin', not from
standard input. Before this, there was no way to specify a filename called
stdin, stdout, pstdin or pstdout.
This creates a new function in pgport, wait_result_to_str(), which can
be used to convert the exit status of a process, as returned by wait(3),
to a human-readable string.
Etsuro Fujita, reviewed by Amit Kapila.
As with pg_basebackup and pg_receivexlog, it's a bit strange to call the
option -d/--dbname, when in fact you cannot pass a database name in it.
Original patch by Amit Kapila, heavily modified by me.
You could already pass a database name just by passing it as the last
option, without -d. This is an alias for that, like the -d/--dbname option
in psql and many other client applications. For consistency.
Without this, there's no way to pass arbitrary libpq connection parameters
to these applications. It's a bit strange that the option is called
-d/--dbname, when in fact you can *not* pass a database name in it, but it's
consistent with other client applications where a connection string is also
passed using -d.
Original patch by Amit Kapila, heavily modified by me.
This program relies on rm_desc backend routines and the xlogreader
infrastructure to emit human-readable rendering of WAL records.
Author: Andres Freund, with many reworks by Álvaro
Reviewed (in a much earlier version) by Peter Eisentraut
There's still a lot of room for improvement, but it basically works,
and we need this to be present before we can do anything much with the
writable-foreign-tables patch. So let's commit it and get on with testing.
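The basic setup looks like this (connection details hypothetical):

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER pgsrv FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (host 'remote.example.com', dbname 'appdb');

    CREATE USER MAPPING FOR CURRENT_USER SERVER pgsrv
      OPTIONS (user 'app', password 'secret');

    CREATE FOREIGN TABLE remote_items (id int, name text)
      SERVER pgsrv OPTIONS (table_name 'items');

    SELECT count(*) FROM remote_items;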
Shigeru Hanada, reviewed by KaiGai Kohei and Tom Lane
This generalizes the existing ALTER ROLE ... SET and ALTER DATABASE
... SET functionality to allow creating settings that apply to all users
in all databases.
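That is, a setting can now be attached at the all-roles, all-databases
level:

    ALTER ROLE ALL SET log_statement = 'mod';

More specific per-role or per-database settings still take precedence.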
reviewed by Pavel Stehule
The reason this wasn't supported before was that GiST indexes need an
increasing sequence to detect concurrent page-splits. In a regular WAL-
logged GiST index, the LSN of the page-split record is used for that
purpose, and in a temporary index, we can get away with a backend-local
counter. Neither of those methods works for an unlogged relation.
To provide such an increasing sequence of numbers, create a "fake LSN"
counter that is saved and restored across shutdowns. On recovery, unlogged
relations are blown away, so the counter doesn't need to survive that
either.
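So this is now allowed:

    CREATE UNLOGGED TABLE points (p point);
    CREATE INDEX points_p_idx ON points USING gist (p);  -- previously an error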
Jeevan Chalke, based on discussions with Robert Haas, Tom Lane and me.
Improve description of the vacuum_freeze_table_age bug (it's much more
serious than we realized at the time the fix was committed), and correct
attribution of pg_upgrade -O/-o fix (Marti Raudsepp contributed that,
but Bruce forgot to credit him in the commit log).
No need to back-patch right now, it'll happen when the next set of
release notes are prepared.
Instead of hardcoding a specific link, give a general link to the
download section of the web site. This gives the user more download
options and the sysadmins more flexibility. Also, the previously
presented link didn't work for devel versions.
The enum_recv() function was misdeclared to take cstring when it should
take internal.
This at least allows crashing the server, and in principle an attacker
might be able to use the function to examine the contents of server memory.
The correct fix is to adjust the system catalog contents (and fix the
regression tests that should have caught this but failed to). However,
asking users to correct the catalog contents in existing installations
is a pain, so as a band-aid fix for the back branches, install a check
in enum_recv() to make it throw error if called with a cstring argument.
We will later revert this in HEAD in favor of correcting the catalogs.
Our thanks to Sumit Soni (via Secunia SVCRP) for reporting this issue.
Security: CVE-2013-0255
This patch changes pg_get_viewdef() and allied functions so that
PRETTY_INDENT processing is always enabled. Per discussion, only the
PRETTY_PAREN processing (that is, stripping of "unnecessary" parentheses)
poses any real forward-compatibility risk, so we may as well make dump
output look as nice as we safely can.
Also, set the default wrap length to zero (i.e., wrap after each SELECT
or FROM list item), since there's no very principled argument for the
former default of 80-column wrapping, and most people seem to agree this
way looks better.
Marko Tiikkaja, reviewed by Jeevan Chalke, further hacking by Tom Lane
In the previous coding, psql's state variable saying that output should
go to a file was only reset after successful completion of a query
returning tuples. Thus for example,
regression=# select 1/0
regression-# \g somefile
ERROR: division by zero
regression=# select 1/2;
regression=#
... huh, I wonder where that output went. Even more oddly, the state
was not reset even if it's the file that's causing the failure:
regression=# select 1/2 \g /foo
/foo: Permission denied
regression=# select 1/2;
/foo: Permission denied
regression=# select 1/2;
/foo: Permission denied
This seems to me not to satisfy the principle of least surprise.
\g is certainly not documented in a way that suggests its effects are
at all persistent.
To fix, adjust the code so that the flag is reset at exit from SendQuery
no matter what happened.
Noted while reviewing the \gset patch, which had comparable issues.
Arguably this is a bug fix, but I'll refrain from back-patching for now.
This is specified in the SQL standard. The CREATE RECURSIVE VIEW
specification is transformed into a normal CREATE VIEW statement with a
WITH RECURSIVE clause.
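For example (view name hypothetical):

    CREATE RECURSIVE VIEW nums_1_100 (n) AS
      VALUES (1)
      UNION ALL
      SELECT n + 1 FROM nums_1_100 WHERE n < 100;

    -- is transformed into roughly:
    -- CREATE VIEW nums_1_100 AS
    --   WITH RECURSIVE nums_1_100 (n) AS (VALUES (1) UNION ALL ...)
    --   SELECT n FROM nums_1_100;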
reviewed by Abhijit Menon-Sen and Stephen Frost
The new option specifies length of aggregation interval (in
seconds). May be used only together with -l. With this option, the log
contains per-interval summary (number of transactions, min/max latency
and two additional fields useful for variance estimation).
Patch contributed by Tomas Vondra, reviewed by Pavel Stehule. Slight
change by Tatsuo Ishii, suggested by Robert Haas, to emit an error
message indicating that the option is not currently supported on
Windows.
This patch addresses the problem that applications currently have to
extract object names from possibly-localized textual error messages,
if they want to know for example which index caused a UNIQUE_VIOLATION
failure. It adds new error message fields to the wire protocol, which
can carry the name of a table, table column, data type, or constraint
associated with the error. (Since the protocol spec has always instructed
clients to ignore unrecognized field types, this should not create any
compatibility problem.)
Support for providing these new fields has been added to just a limited set
of error reports (mainly, those in the "integrity constraint violation"
SQLSTATE class), but we will doubtless add them to more calls in future.
Pavel Stehule, reviewed and extensively revised by Peter Geoghegan, with
additional hacking by Tom Lane.
Beyond a scale of 21474, the number of accounts exceeds the range of int4.
Change the
initialization code to use bigint for account id columns when scale is large
enough, and switch to using int64s for the variables in pgbench code. The
threshold where we switch to bigints is set at 20000, because that's easier
to remember and document than 21474, and ensures that there is some headroom
when int4s are used.
Greg Smith, with various changes by Euler Taveira de Oliveira, Gurjeet
Singh and Satoshi Nagayasu.
Give away ownership of shared objects (databases, tablespaces) along
with local objects, per original code intention. Try to make the
documentation clearer, too.
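That is (role names hypothetical), REASSIGN OWNED now also transfers
databases and tablespaces:

    REASSIGN OWNED BY old_owner TO new_owner;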
Per discussion about DROP OWNED's brokenness, in bug #7748.
This is not backpatched because it'd require some refactoring of the
ALTER/SET OWNER code for databases and tablespaces.