dblink_build_sql_insert() and related functions. Now the column numbers
are treated as logical, not physical, column numbers. This will provide saner
behavior in the presence of dropped columns; furthermore, if we ever get
around to allowing rearrangement of logical column ordering, the original
definition would become nearly untenable from a usability standpoint.
Per recent discussion of dblink's handling of dropped columns.
Not back-patched for fear of breaking existing applications.
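As a hypothetical illustration (not part of the commit), the attribute
numbers passed to dblink_build_sql_insert() now count only non-dropped
columns; the table name and key values below are invented:

    -- assumes a local table foo whose first two visible columns form its primary key
    SELECT dblink_build_sql_insert('foo', '1 2', 2, '{"1", "a"}', '{"1", "b"}');

Even if a dropped column physically precedes them, '1 2' still refers to
the first two columns the user actually sees.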
columns correctly. In passing, get rid of some dead logic in the
underlying get_sql_insert() etc functions --- there is no caller that
will pass null value-arrays to them.
Per bug report from Robert Voinea.
dblink_build_sql_insert() and related functions. In particular, be sure to
reject references to dropped and out-of-range column numbers. The numbers
are still interpreted as physical column numbers, though, for backward
compatibility.
This patch replaces Joe's patch of 2010-02-03, which handled only some aspects
of the problem.
lock the target relation just once per SQL function call. The original coding
obtained and released the lock several times per call. Aside from saving a
not-insignificant number of cycles, this eliminates possible race conditions
if someone tries to modify the relation's schema concurrently. Also
centralize locking and permission-checking logic.
Problem noted while investigating a trouble report from Robert Voinea --- his
problem is still to be fixed, though.
hardcoding a 'template0' check, per suggestion from Alvaro.
This might fix a problem where someone has allowed 'template0'
connections, but it is a cleaner approach even if it doesn't fix the
bug.
* There is no chmod() on Windows.
* Must always use the 3-parameter version of open()
* There is no dynloader.h - but it also appears unnecessary on all platforms
* Don't include shlobj.h because it causes compile errors, and from what I can
see it's not actually used. This may need to be added back for mingw
and/or cygwin in the worst case.
cmp parameter for pg_scandir(). The code failed to support this anyway
for Sun/Windows, so pretending we could accept a parameter other than
NULL was just asking for trouble.
rather than returning NULL for some-but-not-all failures as they used to.
Remove now-redundant tests for NULL from call sites.
We had to do something about this because many call sites were failing to
check for NULL; and changing it like this seems a lot more useful and
mistake-proof than adding checks to the call sites without them.
unsatisfiable query, such as indexcol && empty_array. It should return -1
to tell GIN no scan is required; but a silly typo disabled the logic for that,
resulting in an unnecessary "GIN indexes do not support whole-index scans" error.
Per bug report from Jeff Trout.
Back-patch to 8.3 where the logic was introduced.
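For example (hypothetical table and index), a query like this is
unsatisfiable, since nothing overlaps an empty array; it could formerly
trigger the error above when the GIN index was chosen, and now simply
returns no rows:

    -- assumes a GIN index on the integer[] column "tags"
    SELECT * FROM items WHERE tags && '{}'::integer[];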
writes. The first worker still uses "pgbench_log.<pid>" for the name, but
additional workers use "pgbench_log.<pid>.<serial-number>" instead.
Reported by Greg Smith.
in versions >= 8.3). The core code is more robust and efficient than what
was there before, and this also reduces risks involved in swapping different
libxml error handler settings.
Before 8.3, there is still some risk of problems if add-on modules such as
Perl invoke libxml without setting their own error handler. Given the lack
of reports I'm not sure there's a risk in practice, so I didn't take the
step of actually duplicating the core code into older contrib/xml2 branches.
Instead I just tweaked the existing code to ensure it didn't leave a dangling
pointer to short-lived memory when throwing an error.
This involves modifying the module to have a stable ABI, that is, the
xslt_process() function still exists even without libxslt. It throws a
runtime error if called, but doesn't prevent executing the CREATE FUNCTION
call. This is a good thing anyway to simplify cross-version upgrades.
These are unnecessary and probably dangerous. I don't see any immediate
risk situations in the core XML support or contrib/xml2 itself, but there
could be issues with external uses of libxml2, and in any case it's an
accident waiting to happen.
Get rid of the code that attempted to funnel libxml2's memory allocations
into palloc. We already knew from experience with the core xml datatype
that trying to do this is simply not reliable. Unlike the core code, I
did not bother adding a lot of PG_TRY/PG_CATCH logic to try to ensure that
everything is cleaned up on error exit. Hence, we might leak some memory
if one of these functions fails partway through. Given the deprecated
status of this contrib module and the fact that errors partway through
the functions shouldn't be too common, it doesn't seem worth worrying about.
Also fix a separate bug in xpath_table: it did the wrong things
if given a result tuple descriptor with fewer than 2 columns. While
such a case isn't very useful in practice, we shouldn't fail or stomp
memory when it occurs.
Add some simple regression tests based on all the reported crash cases
that I have on hand.
This should be back-patched, but let's see if the buildfarm likes it first.
The main motivation for changing this is bug #4921, in which it's pointed out
that it's no longer safe to apply ltree operations to the result of
ARRAY(SELECT ...) if the sub-select might return no rows. Before 8.3,
the ARRAY() construct would return NULL, which might or might not be helpful
but at least it wouldn't result in an error. Now it returns an empty array
which results in a failure for no good reason, since the ltree operations
are all perfectly capable of dealing with zero-element arrays.
As far as I can find, these ltree functions are the only places where zero
array dimensionality is rejected unnecessarily.
Back-patch to 8.3 to prevent behavioral regression of queries that worked
in older releases.
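A sketch of the affected pattern (table and column names invented):
when the sub-select returns no rows, the operator now just returns false
instead of erroring out on the zero-element array:

    -- "paths" is assumed to have an ltree column named "path"
    SELECT ARRAY(SELECT path FROM paths WHERE false) @> 'Top.Science'::ltree;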
The purpose of this change is to eliminate the need for every caller
of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists,
GetSysCacheOid, and SearchSysCacheList to know the maximum number
of allowable keys for a syscache entry (currently 4). This will
make it far easier to increase the maximum number of keys in a
future release should we choose to do so, and it makes the code
shorter, too.
Design and review by Tom Lane.
being called as aggregates, and to get the aggregate transition state memory
context if needed. Use it instead of poking directly into AggState and
WindowAggState in places that shouldn't know so much.
We should have done this in 8.4, probably, but better late than never.
Revised version of a patch by Hitoshi Harada.
of shared or nailed system catalogs. This has two key benefits:
* The new CLUSTER-based VACUUM FULL can be applied safely to all catalogs.
* We no longer have to use an unsafe reindex-in-place approach for reindexing
shared catalogs.
CLUSTER on nailed catalogs now works too, although I left it disabled on
shared catalogs because the resulting pg_index.indisclustered update would
only be visible in one database.
Since reindexing shared system catalogs is now fully transactional and
crash-safe, the former special cases in REINDEX behavior have been removed;
shared catalogs are treated the same as non-shared.
This commit does not do anything about the recently-discussed problem of
deadlocks between VACUUM FULL/CLUSTER on a system catalog and other
concurrent queries; will address that in a separate patch. As a stopgap,
parallel_schedule has been tweaked to run vacuum.sql by itself, to avoid
such failures during the regression tests.
exceed the total number of non-dropped source table fields for
dblink_build_sql_*(). Addresses bug report from Rushabh Lathia.
Backpatch all the way to the 7.3 branch.
(SFRM_Materialize mode) to return tuples. Since we don't return from the
dblink function in tuplestore mode, release the PGresult with a PG_CATCH
block on error. Also rearrange to share the same code to materialize the
tuplestore. Patch by Takahiro Itagaki.
This uses the same infrastructure as EXPLAIN BUFFERS to support
{shared|local}_blks_{hit|read|written} and temp_blks_{read|written}
columns in the pg_stat_statements view. The dumped file format is
also updated.
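For instance, a query along these lines (column choice arbitrary) can
now rank statements by buffer I/O:

    SELECT query, calls, shared_blks_hit, shared_blks_read, temp_blks_written
      FROM pg_stat_statements
     ORDER BY shared_blks_read DESC
     LIMIT 5;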
Thanks to Robert Haas for the review.
we're not going to support that anymore.
I did keep the 64-bit-CRC-with-32-bit-arithmetic code, since it has a
performance excuse to live. It's a bit moot since that's all ifdef'd
out, of course.
Variable names must consist only of letters, digits, and underscores.
Previously we allowed variables with invalid names to be set, but they
could not be referenced in queries.
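A minimal custom-script sketch (names invented); a name such as row-id
is now rejected up front instead of being accepted but unusable as :row-id:

    \set row_id 42
    SELECT abalance FROM pgbench_accounts WHERE aid = :row_id;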
Thanks to Robert Haas for the review.
PL/pgSQL function within an exception handler. Make sure we use the right
resource owner when we create the tuplestore to hold returned tuples.
Simplify tuplestore API so that the caller doesn't need to be in the right
memory context when calling tuplestore_put* functions. tuplestore.c
automatically switches to the memory context used when the tuplestore was
created. Tuplesort was already modified like this earlier. This patch also
removes the now-useless MemoryContextSwitchTo calls from callers.
Report by Aleksei on pgsql-bugs on Dec 22 2009. Backpatch to 8.1, like
the previous patch that broke this.
The \shell command runs an external shell command. \setshell does the
same and assigns the command's output to a variable.
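A small custom-script sketch (file name and command invented; \setshell
expects the command to print an integer):

    \setshell account_id cat /tmp/current_account_id
    SELECT abalance FROM pgbench_accounts WHERE aid = :account_id;
    \shell echo benchmark step done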
Original patch by Michael Paquier with some editorialization by Itagaki,
and reviewed by Greg Smith.
This patch also removes buffer-usage statistics from the track_counts
output, since this (or the global server statistics) is deemed to be a better
interface to this information.
Itagaki Takahiro, reviewed by Euler Taveira de Oliveira.
Without these functions, code outside explain.c can't actually use
ExplainPrintPlan, because the ExplainState won't be initialized properly.
The user-visible result of this was a crash when using auto_explain with
the JSON output format.
Report by Euler Taveira de Oliveira. Analysis by Tom Lane. Patch by me.
processes of a pgbench run, when we are using -j > 1 and are emulating
threads via fork(). Otherwise the children all inherit the same random
sequence state and produce the same random-number sequence.
In the threaded case the different threads will share one RNG state, so
they will produce different subsets of one sequence, which is maybe more
correlated than a purist would like but will not be "the same". So we
leave that case alone.
First noticed by Takahiro Itagaki; this is also part of the explanation
for the pgbench misbehavior recently reported by Jaime Casanova.
in the formerly-always-blank columns just to the left and right of the data.
Different marking is used for a line break caused by a newline in the data
than for a straight wraparound. A newline break is signaled by a "+" in the
right margin column in ASCII mode, or a carriage return arrow in UNICODE mode.
Wraparound is signaled by a dot in the right margin as well as the following
left margin in ASCII mode, or an ellipsis symbol in the same places in UNICODE
mode. "\pset linestyle old-ascii" is added to make the previous behavior
available if anyone really wants it.
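For instance, in a psql session (the literal is just for illustration),
a newline embedded in a value is now flagged in the margin, and the old
behavior can be restored as shown:

    \pset linestyle unicode
    SELECT E'first line\nsecond line' AS note;
    \pset linestyle old-ascii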
In passing, this commit also cleans up a few regression test files that
had unintended spacing differences from the current actual output.
Roger Leigh, reviewed by Gabrielle Roth and other members of PDXPUG.
strength of database passwords, and create a sample implementation of
such a hook as a new contrib module "passwordcheck".
Laurenz Albe, reviewed by Takahiro Itagaki
Windows doesn't do signal processing like other platforms do. It never
really worked, but recent changes to the signal handling made it crash.
This fixes bug #4961. Patch by Fujii Masao.
Remove the 64K limit on the lengths of keys and values within an hstore.
(This changes the on-disk format, but the old format can still be read.)
Add support for btree/hash opclasses for hstore --- this is not so much
for actual indexing purposes as to allow use of GROUP BY, DISTINCT, etc.
Add various other new functions and operators.
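With the new btree opclass, queries like the following (hypothetical
table) become possible, since sorting and grouping of hstore values now
work:

    SELECT h, count(*) FROM hstore_test GROUP BY h ORDER BY count(*) DESC;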
Andrew Gierth
dblink generates orphaned connections when called with a connection string,
fail_on_error = true, and an ERROR occurs. Discovery and patch by
Tatsuhito Kasahara. Introduced in 8.4.
are used to populate the tables with -i, but when running the actual benchmark
the values were separately hard-coded in the query metacommands. This patch
makes the metacommands obtain their values from the relevant #defines.
Patch provided by Jeff Janes.
source directory even for out-of-tree builds. They are now also built in
the build tree. This should be more convenient for certain developers'
workflows, and shouldn't really break anything else.
script.
To do this, have pg_ctl pass down its parent shell's PID in an environment
variable PG_GRANDPARENT_PID, and teach CreateLockFile() to disregard that PID
as a false match if it finds it in postmaster.pid. This allows us to cope
with one level of postgres-owned shell process even with pg_ctl in the way,
so it's just as safe as starting the postmaster directly. You still have to
be careful about how you write the initscript though.
Adjust the comments in contrib/start-scripts/ to not deprecate use of
pg_ctl. Also, fix the ROTATELOGS option in the OSX script, which was
indulging in exactly the sort of unsafe coding that renders this fix
pointless :-(. A pipe inside the "sudo" will probably result in more
than one postgres-owned process hanging around.
Test coverage support now covers the entire source tree, including
contrib, instead of just src/backend. In a related but independent
development, the commands make coverage and make coverage-html can be run
in any directory.
This turned out to be much easier than feared. Besides a few ad hoc fixes
to pass the make target down the tree, this changes all affected makefiles to
list their directories in the SUBDIRS variable, instead of variants like
DIRS and WANTED_DIRS. An MSVC build fix was attempted as well.
Adds the ability to retrieve async notifications using dblink,
via the addition of the function dblink_get_notify(). Original patch
by Marcus Kempe, suggestions by Tom Lane and Alvaro Herrera, patch
review and adjustments by Joe Conway.
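A usage sketch (connection string and channel name are placeholders):

    SELECT dblink_connect('conn1', 'dbname=mydb');
    SELECT dblink_exec('conn1', 'LISTEN my_channel');
    SELECT dblink_exec('conn1', 'NOTIFY my_channel');
    SELECT notify_name, be_pid, extra FROM dblink_get_notify('conn1');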
reviewed by Greg Smith and Josh Williams.
Following is the proposal from ITAGAKI Takahiro:
Pgbench is a famous tool to measure postgres performance, but nowadays
it does not work well because it cannot use multiple CPUs. On the other
hand, postgres server can use CPUs very well, so the bottle-neck of
workload is *in pgbench*.
Multi-threading would be a solution. The attached patch adds -j
(number of jobs) option to pgbench. If the value N is greater than 1,
pgbench runs with N threads. Connections are divided equally among
them (e.g. -c64 -j4 => 4 threads with 16 connections each). It can
run on POSIX platforms with pthread and on Windows with win32 threads.
Here are results of multi-threaded pgbench runs on Fedora 11 with an Intel
Core i7 (8 logical cores = 4 physical cores * HT). -j8 (8 threads) was
the best, with tps about 4.5 times that of the traditional single-threaded -j1.
$ pgbench -i -s10
$ pgbench -n -S -c64 -j1 => tps = 11600.158593
$ pgbench -n -S -c64 -j2 => tps = 17947.100954
$ pgbench -n -S -c64 -j4 => tps = 26571.124001
$ pgbench -n -S -c64 -j8 => tps = 52725.470403
$ pgbench -n -S -c64 -j16 => tps = 38976.675319
$ pgbench -n -S -c64 -j32 => tps = 28998.499601
$ pgbench -n -S -c64 -j64 => tps = 26701.877815
Is it acceptable to use pthread in contrib module?
If ok, I will add the patch to the next commitfest.
values being complained of.
In passing, also remove the arbitrary length limitation in the similar
error detail message for foreign key violations.
Itagaki Takahiro
The original syntax made it difficult to add options without making them
into reserved words. This change parenthesizes the options to avoid that
problem, and makes provision for an explicit (and perhaps non-Boolean)
value for each option. The original syntax is still supported, but only
for the two original options ANALYZE and VERBOSE.
As a test case, add a COSTS option that can suppress the planner cost
estimates. This may be useful for including EXPLAIN output in the regression
tests, which are otherwise unable to cope with cross-platform variations in
cost estimates.
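For example (queries chosen arbitrarily), the new syntax looks like
this, while the old EXPLAIN ANALYZE VERBOSE spelling continues to work:

    EXPLAIN (ANALYZE, VERBOSE) SELECT 1;
    EXPLAIN (COSTS OFF) SELECT * FROM pg_class WHERE relname = 'pg_type';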
Robert Haas
last pair of parameter name/value strings, even when there are MAXPARAMS
of them. Aboriginal bug in contrib/xml2, noted while studying bug #4912
(though I'm not sure whether there's something else involved in that
report).
This might be thought a security issue, since it's a potential backend
crash; but considering that untrustworthy users shouldn't be allowed
to get their hands on xslt_process() anyway, it's probably not worth
getting excited about.
file to be a symlink. We tried to fix this issue with an earlier server-side
patch, but it didn't fix the whole issue.
The same bug is present in older releases as well, but the 8.4 train is
about to leave the station, and I'm not sure we have consensus on whether
we can remove the -l option in the back branches or need to attempt a
server-side fix to make symlinking safe.
Patch by Simon Riggs, per discussion on bug identified by Fujii Masao.
create an ABI break between 8.3 and 8.4. It is still just a wrapper around
the built-in current_query() function, but at a different implementation
level. Per my proposal.
Note: this change doesn't break 8.4beta installations, since their
SQL-language definition of the function still works fine.
The original implementation of the 3-argument form of get_raw_page() risked
core dumps if the 8.3 SQL function definition was mistakenly used with the
8.4 module, which is entirely likely after a dump-and-reload upgrade. To
protect 8.4 beta testers against upgrade problems, add a check on PG_NARGS.
In passing, fix missed additions to the uninstall script, and polish the
docs a trifle.
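For reference, a call of the 3-argument form under the 8.4 module
(relation and block number arbitrary; 'main' names the main relation fork):

    SELECT * FROM page_header(get_raw_page('pg_class', 'main', 0));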
the <@ and @> operators. These are not in fact equivalent to the built-in
anyarray operators of the same names, because they have different behavior for
empty arrays, namely they don't think empty arrays are contained in anything.
That is mathematically wrong, no doubt, but until we can persuade GIN indexes
to implement the mathematical definition we should probably not change this.
Another reason for not changing it now is that we can't yet ensure the
opclasses will be updated correctly in a dump-and-reload upgrade. Per
recent discussions.
while we're at it per request by Tom Lane. Specifically, don't try to
perform dblink_send_query() via dblink_record_internal() -- it was
inappropriate and ugly.
is run at the end of archive recovery, providing a chance to do external
cleanup. Modify pg_standby so that it no longer removes the trigger file;
that is now to be done using recovery_end_command.
Provide a "smart" failover mode in pg_standby, where we don't fail over
immediately, but only after recovering all unapplied WAL from the archive.
That gives you zero data loss assuming all WAL was archived before
failover, which is what most users of pg_standby actually want.
recovery_end_command by Simon Riggs, pg_standby changes by Fujii Masao and
myself.
pgbench_history, and pgbench_tellers, rather than just accounts, branches,
history, and tellers. This is to prevent accidental conflicts with real
application tables, as has been reported to happen at least once. Also
remove the automatic "SET search_path = public" that it did at startup,
as this seems to restrict testing flexibility without actually buying much.
Per proposal by Joshua Drake and ensuing discussion.
Joshua Drake and Tom Lane
than having some whitespace discrepancy. Although whitespace is supposed
to be ignored in our regression tests, for some reason buildfarm member
spoonbill doesn't like it.
any negative or positive number, not just -1 or 1. Fix comment on
varstr_cmp and citext test case accordingly.
As pointed out by Zdenek Kotala, and buildfarm member gothic moth.
don't cause confusion with the built-in anyarray versions of those operators.
Adjust the module's index opclasses to support the built-in operators in place
of the private ones.
The private implementations are still available under their historical
names @ and ~, so no functionality is lost. Some quick testing suggests
that they offer no real benefit over the core operators, however.
Per a complaint from Rusty Conover.
temporary tables of other sessions; that is unsafe because of the way our
buffer management works. Per report from Stuart Bishop.
This is redundant with the bufmgr.c checks in HEAD, but not at all redundant
in the back branches.
method to pass extra data to the consistent() and comparePartial() methods.
This is the core infrastructure needed to support the soon-to-appear
contrib/btree_gin module. The APIs are still upward compatible with the
definitions used in 8.3 and before, although *not* with the previous 8.4devel
function definitions.
catversion bump for changes in pg_proc entries (although these are just
cosmetic, since GIN doesn't actually look at the function signature before
calling it...)
Teodor Sigaev and Oleg Bartunov
not global variables of anonymous enum types. This didn't actually hurt
much because most linkers will just merge the duplicated definitions ...
but some will complain. Per bug #4731 from Ceriel Jacobs.
Backpatch to 8.1 --- the declarations don't exist before that.
reinstalling the default signal handler doesn't work as it is on Windows.
Presumably core dumps on SIGQUIT are not a problem on Windows, so rather
than figure out what header files or other changes are required to make it
work, just don't bother.
postmaster uses for immediate shutdown. Trap SIGUSR1 as the preferred
signal for that.
Per report by Fujii Masao and subsequent discussion on -hackers.
not include postgres.h nor anything else it doesn't directly need. Add
#includes to calling files as needed to compensate. Per my proposal of
yesterday.
This should be noted as a source code change in the 8.4 release notes,
since it's likely to require changes in add-on modules.
and change auto_explain's custom GUC variables to be named auto_explain.xxx
not just explain.xxx. Per discussion in connection with the
pg_stat_statements patch, it seems like a good idea to have the convention
that custom variable classes are named the same as their defining module.
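Under the new convention, loading and configuring the module from SQL
looks like this (settings chosen arbitrarily):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;
    SET auto_explain.log_analyze = true;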
Committing separately since this should happen regardless of what happens
with pg_stat_statements itself.
value-per-call mode. This should be more efficient in normal usage,
but the real problem with the prior coding was that it returned with
a SPI call still active. That could cause problems if execution was
interleaved with anything else that might use SPI.
the basic representational details (typlen, typalign, typbyval, typstorage)
to be copied from an existing type rather than listed explicitly in the
CREATE TYPE command. The immediate reason for this is to provide a simple
solution for add-on modules that want to define types represented as int8,
float4, or float8: as of 8.4 the appropriate PASSEDBYVALUE setting is
platform-specific and so it's hard for a SQL script to know what to do.
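A sketch of what this enables (type and I/O function names are
hypothetical): a module can now write LIKE = int8 instead of spelling
out the platform-dependent representation details:

    CREATE TYPE mymodule_int8 (
        INPUT  = mymodule_int8_in,
        OUTPUT = mymodule_int8_out,
        LIKE   = int8    -- copies typlen, typbyval, typalign, typstorage from int8
    );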
This patch fixes the contrib/isn breakage reported by Rushabh Lathia.
it was using too soon. In a situation where pg_do_encoding_conversion is
a no-op, this led to garbage data being returned.
In HEAD, also modify the code that's ensuring null termination to make it
a tad more obvious what's happening.
citext-to-and-from-xml tests, since those caused variation between
installations with or without libxml without really proving much. Instead
repurpose citext_1.out as the expected results in glibc en_US (and probably
other) locales.
and heap_deformtuple in favor of the newer functions heap_form_tuple et al
(which do the same things but use bool control flags instead of arbitrary
char values). Eliminate the former duplicate coding of these functions,
reducing the deprecated functions to mere wrappers around the newer ones.
We can't get rid of them entirely because add-on modules probably still
contain many instances of the old coding style.
Kris Jurka