Currently, three conversion format specifiers are supported: %s for a
string, %L for an SQL literal, and %I for an SQL identifier. The latter
two are deliberately designed not to overlap with what sprintf() already
supports, in case we want to add more of sprintf()'s functionality here
later.
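For illustration, a minimal sketch of how the three specifiers might map onto
the existing quoting helpers (this is not the actual format() code; it assumes
a StringInfoData buffer named buf and a C-string argument named value):

    switch (spec)
    {
        case 's':               /* plain string substitution */
            appendStringInfoString(&buf, value);
            break;
        case 'I':               /* quote as an SQL identifier */
            appendStringInfoString(&buf, quote_identifier(value));
            break;
        case 'L':               /* quote as an SQL literal */
            appendStringInfoString(&buf, quote_literal_cstr(value));
            break;
    }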
Patch by Pavel Stehule, heavily revised by me. Reviewed by Jeff Janes
and, in earlier versions, by Itagaki Takahiro and Tom Lane.
We no longer need the terminating zero entry in opfamily[], so get rid of
it. Also replace assorted ad-hoc looping logic with simple for and foreach
constructs. This code is now noticeably more readable than it was an hour
ago; credit to Robert for seeing that it could be simplified.
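For reference, foreach iterates over a PostgreSQL List; a minimal illustration
with made-up variable names:

    ListCell   *lc;

    foreach(lc, opfamilies)
    {
        Oid         opfamily = lfirst_oid(lc);

        /* ... handle one operator family ... */
    }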
Avoid depending on LL notation, which is likely to not work in pre-C99
compilers; don't pointlessly use INT32_MIN/INT64_MIN in code that has
the numerical value hard-wired into it anyway; remove some gratuitous
style inconsistencies between pg_ltoa and pg_lltoa; fix int2 test case
so it actually tests int2.
This eliminates the need for inefficient implementations of this
functionality in both contrib/dblink and contrib/tablefunc, so remove
them. The upcoming patch implementing an in-core format() function
will also require this functionality.
In passing, add some regression tests.
Use INT_MIN rather than INT32_MIN as we do elsewhere in the code, and
try to work around nonexistence of INT64_MIN if necessary. Adjust the
new regression tests to something hopefully saner, per observation by
Tom Lane.
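A fallback of roughly this shape avoids both the LL suffix and a missing
INT64_MIN, using PostgreSQL's INT64CONST macro to spell 64-bit constants
portably (a sketch only, not necessarily the exact committed form):

    #ifndef INT64_MAX
    #define INT64_MAX	INT64CONST(0x7FFFFFFFFFFFFFFF)
    #endif
    #ifndef INT64_MIN
    #define INT64_MIN	(-INT64CONST(0x7FFFFFFFFFFFFFFF) - 1)
    #endif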
When using default autovacuum_vac_cost_limit, autovac_balance_cost relied
on VacuumCostLimit to contain the correct global value ... but after the
first time through in a particular worker process, it didn't, because we'd
trashed it in previous iterations. Depending on the state of other autovac
workers, this could result in a steady reduction of the effective
cost_limit setting as a particular worker processed more and more tables,
causing it to go slower and slower. Spotted by Simon Poole (bug #5759).
Fix by saving and restoring the GUC variables in the loop in do_autovacuum.
In passing, improve a few comments.
Back-patch to 8.3 ... the cost rebalancing code has been buggy since it was
put in.
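A minimal sketch of the save-and-restore pattern (the loop body is simplified
and the per-table helper name is invented; the real code is in do_autovacuum):

    int         stdVacuumCostDelay = VacuumCostDelay;
    int         stdVacuumCostLimit = VacuumCostLimit;
    ListCell   *cell;

    foreach(cell, table_oids)
    {
        /* reinstate the true globals trashed by the previous iteration */
        VacuumCostDelay = stdVacuumCostDelay;
        VacuumCostLimit = stdVacuumCostLimit;

        autovacuum_one_table(lfirst_oid(cell));     /* hypothetical helper */
    }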
A hand-coded implementation turns out to be much faster than calling
printf(). In passing, add a few more regression tests.
Andres Freund, with assorted, mostly cosmetic changes.
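For the curious, a simplified sketch of what such a hand-rolled conversion
looks like (the real pg_lltoa differs in detail):

    static void
    lltoa_sketch(long long value, char *a)
    {
        char               *start = a;
        char               *end;
        unsigned long long  uvalue;

        /* take the magnitude; unsigned negation is safe even for the most
         * negative value */
        if (value < 0)
        {
            *a++ = '-';
            start = a;
            uvalue = 0 - (unsigned long long) value;
        }
        else
            uvalue = (unsigned long long) value;

        /* emit digits in reverse order */
        do
        {
            *a++ = '0' + (char) (uvalue % 10);
            uvalue /= 10;
        } while (uvalue != 0);
        *a = '\0';

        /* reverse the digits into their final order */
        for (end = a - 1; start < end; start++, end--)
        {
            char        tmp = *start;

            *start = *end;
            *end = tmp;
        }
    }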
As per the ancient comment for set_rel_width, it really wasn't much good
for relations that aren't plain tables: it would never find any stats and
would always fall back on datatype-based estimates, which are often pretty
silly. Fix that by copying up width estimates from the subquery planning
process.
At some point we might want to do this for CTEs too, but that would be a
significantly more invasive patch because the sub-PlannerInfo is no longer
accessible by the time it's needed. I refrained from doing anything about
that, partly for fear of breaking the unmerged CTE-related patches.
In passing, also generate less bogus width estimates for whole-row Vars.
Per a gripe from Jon Nelson.
Given a column reference foo.bar, where there is a composite plpgsql
variable foo but it doesn't contain a column bar, the pre-9.0 coding would
immediately throw a "record foo has no field bar" error. In 9.0 the parser
hook instead falls through to let the core parser see if it can resolve the
reference. If not, you get a complaint about "missing FROM-clause entry
for table foo", which while in some sense correct isn't terribly helpful.
Complicate things a bit so that we can throw the old error message if
neither the core parser nor the hook is able to resolve the column
reference, while not changing the behavior in any other case.
Per bug #5757 from Andrey Galkin.
If we have Limit->Result->Sort, the Result might be projecting a tlist
that contains a set-returning function. If so, it's possible for the
SRF to sometimes return zero rows, which means we could need to fetch
more than N rows from the Sort in order to satisfy LIMIT N.
So top-N sorting cannot be used in this scenario.
Fix things so that top-N sorting can be used in child Sort nodes of a
MergeAppend node, when there is a LIMIT and no intervening joins or
grouping. Actually doing this on the executor side isn't too bad,
but it's a bit messier to get the planner to cost it properly.
Per gripe from Robert Haas.
In passing, fix an oversight in the original top-N-sorting patch:
query_planner should not assume that a LIMIT can be used to make an
explicit sort cheaper when there will be grouping or aggregation in
between. Possibly this should be back-patched, but I'm not sure the
mistake is serious enough to be a real problem in practice.
In the previous coding, we simply issued ALTER SEQUENCE RESTART commands,
which do not roll back on error. This meant that an error between
truncating and committing left the sequences out of sync with the table
contents, with potentially bad consequences, as noted in a Warning on
the TRUNCATE man page.
To fix, create a new storage file (relfilenode) for a sequence that is to
be reset due to RESTART IDENTITY. If the transaction aborts, we'll
automatically revert to the old storage file. This acts just like a
rewriting ALTER TABLE operation. A penalty is that we have to take
exclusive lock on the sequence, but since we've already got exclusive lock
on its owning table, that seems unlikely to be much of a problem.
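A rough sketch of the mechanism (variable names illustrative; the real code
also rewrites the sequence tuple into the new file):

    Relation    seq_rel = heap_open(seq_relid, AccessExclusiveLock);

    /* give the sequence a fresh storage file; an abort reverts to the old one */
    RelationSetNewRelfilenode(seq_rel, InvalidTransactionId);
    /* ... write the restarted sequence state into the new relfilenode ... */

    heap_close(seq_rel, NoLock);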
The interaction of this with the usual nontransactional behaviors of sequence
operations is a bit weird, but it's hard to see what would be completely
consistent. Our choice is to discard cached-but-unissued sequence values
both when the RESTART is executed, and at rollback if any; but to not touch
the currval() state either time.
In passing, move the sequence reset operations to happen before, not after,
any AFTER TRUNCATE triggers are fired. The previous ordering was not
logically sensible, but was forced by the need to minimize inconsistency
if the triggers caused an error. Transactional rollback is a much better
solution to that.
Patch by Steve Singer, rather heavily adjusted by me.
Add some additional dependencies to constrain the build order to prevent
parallel make from failing. In the case of src/Makefile, this is likely to be
too complicated to be worth maintaining, so just add .NOTPARALLEL to get the
old for-loop-like behavior.
More fine-tuning might be necessary for some platforms or configurations.
The handle to the shared memory segment containing startup
parameters was sent as 32-bit even on 64-bit systems. Since
HANDLEs appear to be allocated sequentially this shouldn't
be a problem until we reach 2^32 open handles in the postmaster,
but a 64-bit value should be sent across as 64-bit, without zeroing
out the top 32 bits.
Noted by Tom Lane.
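Illustrative only (variable names made up): the handle needs to be formatted,
and later parsed, as a full 64-bit value rather than being pushed through a
32-bit integer:

    char        paramHandleStr[32];

    snprintf(paramHandleStr, sizeof(paramHandleStr), "%llu",
             (unsigned long long) (UINT_PTR) paramHandle);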
temporary indexes are not WAL-logged. We used a constant LSN for temporary
indexes, on the assumption that we don't need to worry about concurrent page
splits in temporary indexes because they're only visible to the current
session. But that assumption is wrong: it's possible to insert rows and
split pages in the same session while a scan is in progress. For example,
by opening a cursor and fetching some rows, and INSERTing new rows before
fetching some more.
Fix by generating fake increasing LSNs, used in place of real LSNs in
temporary GiST indexes.
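The idea, in sketch form (a plain 64-bit counter with an invented name; the
real implementation differs in detail):

    static uint64 temp_index_lsn_counter = 0;

    static uint64
    get_fake_lsn_for_temp_index(void)
    {
        /* no locking needed: a temporary index is only used by its own backend */
        return ++temp_index_lsn_counter;
    }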
We must stay in the function's SPI context until done calling the iterator
that returns the set result. Otherwise, any attempt to invoke SPI features
in the python code called by the iterator will malfunction. Diagnosis and
patch by Jan Urbanski, per bug report from Jean-Baptiste Quenot.
Back-patch to 8.2; there was no support for SRFs in previous versions of
plpython.
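The ordering requirement, sketched with hypothetical helper names:

    SPI_connect();

    PyObject   *iterator = call_python_srf(fcinfo);      /* may hand back an iterator */

    while (python_iterator_has_next(iterator))
        emit_result_row(python_iterator_next(iterator)); /* Python code may call SPI here */

    SPI_finish();               /* only after the iterator has been fully consumed */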
This new field counts the number of times that a backend which writes a
buffer out to the OS must also fsync() it. This happens when the
bgwriter fsync request queue is full, and is generally detrimental to
performance, so it's good to know when it's happening. Along the way,
log a new message at level DEBUG1 whenever we fail to hand off an fsync,
so that the problem can also be seen in examination of log files
(if the logging level is cranked up high enough).
Greg Smith, with minor tweaks by me.
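In sketch form (helper names are illustrative, not the actual PostgreSQL ones):

    if (!forward_fsync_request(seg))    /* bgwriter's request queue full? */
    {
        ereport(DEBUG1,
                (errmsg("could not forward fsync request because request queue is full")));

        fsync_segment_locally(seg);     /* backend performs its own fsync() */
        count_backend_fsync();          /* feeds the new buffers_backend_fsync counter */
    }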
Similar conflicts were already avoided for related record types.
Massive over-caution resulted in a usability bug. A clear theoretical
basis for doing this has now been confirmed by me.
Removal was requested by Heikki (twice); the over-caution was mine.
We must not return any "okay to proceed" result code without having checked
for too many children, else we might fail later on when trying to add the
new child to one of the per-child state arrays. It's not clear whether
this oversight explains Stefan Kaltenbrunner's recent report, but it could
certainly produce a similar symptom.
Back-patch to 8.4; the logic was not broken before that.
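The required ordering, sketched (simplified; the child-count helper here is
invented):

    static CAC_state
    canAcceptConnections_sketch(void)
    {
        /* must come before any "okay to proceed" result is returned */
        if (CurrentChildCount() >= MaxLivePostmasterChildren())
            return CAC_TOOMANY;

        if (pmState == PM_RUN)
            return CAC_OK;

        return CAC_STARTUP;             /* other states elided for brevity */
    }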
3.80 breaks if the expansion of $(eval) is long enough to require expansion
of its internal variable_buffer. For the purposes of $(recurse) that means
it'll work so long as no single evaluation of _create_recursive_target
produces more than 195 bytes. We can manage that by looping over
subdirectories outside the call instead of complicating the generated rule.
This coding is simpler and more readable anyway.
Or at least, this works for me. We'll see if the buildfarm likes it.
This is needed to support debug_print_parse, per report from Jon Nelson.
Cursory testing via the regression tests suggests we aren't missing
anything else.
Having this in src/include/port.h makes no sense, now that copydir.c lives
in src/backend/storage rather than src/port. Along the way, remove an
obsolete comment from contrib/pg_upgrade that makes reference to the old
location.
Once we have found a non-null constant argument, there is no need to
examine additional arguments of the COALESCE. The previous coding got it
right only if the constant was in the first argument position; otherwise
it tried to simplify following arguments too, leading to unexpected
behavior like this:
regression=# select coalesce(f1, 42, 1/0) from int4_tbl;
ERROR: division by zero
It's a minor corner case, but a bug is a bug, so back-patch all the way.
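A sketch of the corrected argument-folding loop (adapted loosely from the
planner's constant-folding code):

    List       *newargs = NIL;
    ListCell   *arg;

    foreach(arg, coalesceexpr->args)
    {
        Node       *e = eval_const_expressions_mutator(lfirst(arg), context);

        if (IsA(e, Const))
        {
            if (((Const *) e)->constisnull)
                continue;               /* drop constant NULL, keep scanning */
            newargs = lappend(newargs, e);
            break;                      /* non-null constant: later args are dead */
        }
        newargs = lappend(newargs, e);
    }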
Replace for loops in makefiles with proper dependencies. Parallel
make can now span across directories. Also, make -k and make -q work
properly.
GNU make 3.80 or newer is now required.
belonging to a user at DROP OWNED BY. Foreign data wrappers and servers
don't do anything useful yet, which is why no-one has noticed, but since we
have them, seems prudent to fix this. Per report from Chetan Suttraway.
Backpatch to 9.0; 8.4 has the same problem, but this patch didn't apply
there so I'm not going to bother.
location read from backup label file can be found: wasShutdown was set
incorrectly when a backup label file was found.
Jeff Davis, with a little tweaking by me.
This code was just plain wrong: what you got was not a line through the
given point but a line almost indistinguishable from the Y-axis, although
not truly vertical. The only caller that tries to use this function with
m == DBL_MAX is dist_ps_internal for the case where the lseg is horizontal;
it would end up producing the distance from the given point to the place
where the lseg's line crosses the Y-axis. That function is used by other
operators too, so there are several operators that could compute wrong
distances from a line segment to something else. Per bug #5745 from
jindiax.
Back-patch to all supported branches.
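The shape of the fix, as a sketch of the point-and-slope line constructor
(simplified, and the exact committed code may differ; field names follow the
geometric types' A*x + B*y + C = 0 representation): when the slope is
DBL_MAX, build a genuinely vertical line of the form "x = C" instead of one
with a huge finite slope.

    if (m == DBL_MAX)
    {
        /* vertical line through pt: -1 * x + 0 * y + pt->x = 0 */
        result->A = -1;
        result->B = 0;
        result->C = pt->x;
    }
    else
    {
        /* ordinary case: m * x - y + (pt->y - m * pt->x) = 0 */
        result->A = m;
        result->B = -1.0;
        result->C = pt->y - m * pt->x;
    }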