This prevents the word "waiting" from briefly disappearing from the ps
status line when ResolveRecoveryConflictWithVirtualXIDs begins a new
iteration of the outer loop.
Along the way, remove some useless pgstat_report_waiting() calls;
the startup process doesn't appear in pg_stat_activity.
Fujii Masao
These comments were not updated when we added the EXEC_BACKEND
mechanism for Windows, even though it rendered them inaccurate.
Also unify two unnecessarily-separate #ifdef __alpha code blocks.
We don't actually need optreset, because we can easily fix the code to
ensure that it's cleanly restartable after having completed a scan over the
argv array, which is the only case we need to restart in. Getting rid of
it avoids a class of interactions with the system libraries and allows
reversion of my change of yesterday in postmaster.c and postgres.c.
Back-patch to 8.4. Before that the getopt code was a bit different anyway.
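A minimal standalone sketch of the idea (not the actual src/port/getopt.c
change): once a scan over argv has completed, resetting optind is enough to
let the next scan start cleanly, with no optreset needed. Portability
quirks of system getopt implementations are exactly the interactions the
change avoids.

    #include <stdio.h>
    #include <unistd.h>

    /*
     * Illustrative only: scan the same argv twice, restarting getopt() by
     * resetting optind after a completed scan instead of using optreset.
     */
    static void
    scan_options(int argc, char *const argv[])
    {
        int         c;

        while ((c = getopt(argc, argv, "ab:")) != -1)
        {
            if (c == 'b')
                printf("option b = %s\n", optarg);
            else
                printf("option %c\n", c);
        }
        optind = 1;             /* restart cleanly for the next full scan */
    }

    int
    main(void)
    {
        char       *fake_argv[] = {"prog", "-a", "-b", "value", NULL};

        scan_options(4, fake_argv);
        scan_options(4, fake_argv); /* second scan sees the same options */
        return 0;
    }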
libpq connection, per report from Magnus. This happens only on GIT
master and only on Win32 because that is the platform where "" maps to
an IP address (localhost).
The mingw people don't appear to care about compatibility with non-GNU
versions of getopt, so force use of our own copy of getopt on Windows.
Also, ensure that we make use of optreset when using our own copy.
Per report from Andrew Dunstan. Back-patch to all versions supported
on Windows.
This patch replaces Guttman's generalized split method with a simple
sort-by-center-points algorithm. Since the data is only one-dimensional
we don't really need the slow and none-too-stable Guttman method.
This is in part a bug fix, since seg has the same size_alpha versus
size_beta typo that was recently fixed in contrib/cube. It seems
prudent to apply this rather aggressive fix only in HEAD, though.
Back branches will just get the typo fix.
Alexander Korotkov, reviewed by Yeb Havinga
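The general idea, sketched outside the GiST picksplit API (the real patch
works on GISTENTRY vectors and seg's own data type): sort the
one-dimensional ranges by their center points and split at the median.

    #include <stdio.h>
    #include <stdlib.h>

    /* A one-dimensional range, standing in for a seg value. */
    typedef struct Range
    {
        double      lower;
        double      upper;
    } Range;

    static int
    cmp_by_center(const void *a, const void *b)
    {
        double      ca = (((const Range *) a)->lower + ((const Range *) a)->upper) / 2.0;
        double      cb = (((const Range *) b)->lower + ((const Range *) b)->upper) / 2.0;

        if (ca < cb)
            return -1;
        if (ca > cb)
            return 1;
        return 0;
    }

    /* Split entries into left and right halves by center point. */
    static void
    split_by_center(Range *entries, int n, int *nleft)
    {
        qsort(entries, n, sizeof(Range), cmp_by_center);
        *nleft = n / 2;         /* first half goes left, rest goes right */
    }

    int
    main(void)
    {
        Range       v[] = {{5, 9}, {0, 2}, {3, 4}, {8, 12}};
        int         nleft;

        split_by_center(v, 4, &nleft);
        printf("left group gets %d of 4 entries\n", nleft);
        return 0;
    }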
Allow INSERT INTO, UPDATE, and DELETE FROM to be completed with
either the name of a table (as before) or the name of a view with
an appropriate INSTEAD OF rule.
Along the way, allow CREATE TRIGGER to be completed with INSTEAD OF,
as well as BEFORE and AFTER.
David Fetter, reviewed by Itagaki Takahiro
This commit replaces pg_class.relistemp with pg_class.relpersistence;
and also modifies the RangeVar node type to carry relpersistence rather
than istemp. It also removes rd_istemp from RelationData and
instead performs the correct computation based on relpersistence.
For clarity, we add three new macros: RelationNeedsWAL(),
RelationUsesLocalBuffers(), and RelationUsesTempNamespace(), so that we
can clarify the purpose of each check that previously depended on
rd_istemp.
This is intended as infrastructure for the upcoming unlogged tables
patch, as well as for future possible work on global temporary tables.
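A minimal sketch of how such checks look when phrased in terms of a
relpersistence code rather than a boolean istemp flag. The constant values
and macro bodies below are illustrative assumptions, not the committed
definitions.

    #include <stdio.h>

    /* Assumed single-character persistence codes for this sketch. */
    #define RELPERSISTENCE_PERMANENT    'p'
    #define RELPERSISTENCE_TEMP         't'

    typedef struct DemoRelation
    {
        char        relpersistence;     /* replaces the old boolean istemp */
    } DemoRelation;

    /* Purpose-specific checks, instead of testing rd_istemp directly. */
    #define RelationNeedsWAL(rel) \
        ((rel)->relpersistence == RELPERSISTENCE_PERMANENT)
    #define RelationUsesLocalBuffers(rel) \
        ((rel)->relpersistence == RELPERSISTENCE_TEMP)
    #define RelationUsesTempNamespace(rel) \
        ((rel)->relpersistence == RELPERSISTENCE_TEMP)

    int
    main(void)
    {
        DemoRelation rel = {RELPERSISTENCE_TEMP};

        printf("needs WAL: %d, local buffers: %d\n",
               RelationNeedsWAL(&rel), RelationUsesLocalBuffers(&rel));
        return 0;
    }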
We were failing to zero out some pg_stat_database counters that have
been added since the initial pgstats coding. This is a bug, but we're not
back-patching the fix, since changing this behavior in a minor release
seems a cure worse than the disease.
Report and patch by Tomas Vondra.
1. Don't reimplement S_ISDIR() and S_ISREG() badly.
2. Don't reimplement access() badly.
This code appears to have been copied from ancient versions of the
corresponding backend routines, and not patched to incorporate subsequent
fixes (see my commits of 2008-03-31 and 2010-01-14 respectively).
It might be a good idea to change it to just *call* those routines,
but for now I'll just transpose these fixes over.
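For reference, the standard calls that should be used directly rather than
reimplemented; this is only a self-contained illustration, not the initdb
code itself.

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /*
     * Illustrative only: classify a path with stat()/S_ISDIR()/S_ISREG()
     * and test permissions with access(), instead of open-coding either.
     */
    static int
    classify_path(const char *path)
    {
        struct stat st;

        if (stat(path, &st) != 0)
            return -1;              /* doesn't exist or isn't reachable */

        if (S_ISDIR(st.st_mode))
            printf("%s is a directory\n", path);
        else if (S_ISREG(st.st_mode))
            printf("%s is a regular file\n", path);

        /* access() does the permission check (against the real uid/gid) */
        if (access(path, R_OK | W_OK) == 0)
            printf("%s is readable and writable\n", path);
        return 0;
    }

    int
    main(void)
    {
        return classify_path(".") == 0 ? 0 : 1;
    }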
mkdir_p and check_data_dir will be useful in CREATE TABLESPACE, since we
have agreed that that command should handle subdirectory creation just like
initdb creates the PGDATA directory. Push them into src/port/ so that they
are available to both initdb and the backend. Rename to pg_mkdir_p and
pg_check_dir, just to be on the safe side. Add FreeBSD's copyright notice
to pgmkdirp.c, since that's where the code came from originally (this
really should have been in initdb.c). Very marginal code/comment cleanup.
Purely cosmetic patch to make our coding standards more consistent ---
we were using symbolic permission constants in some places and octal in
others. This patch fixes all C-coded uses of mkdir, chmod, and umask.
There might be some other calls I missed. Inconsistency noted while
researching the tablespace directory permissions issue.
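For illustration, the two equivalent spellings the cleanup is about; the
point is simply to pick one form and use it consistently.

    #include <sys/stat.h>
    #include <sys/types.h>

    int
    main(void)
    {
        /* symbolic: owner read/write/execute only */
        mkdir("demo_dir", S_IRWXU);

        /* equivalent octal spelling of the same mode: */
        /* mkdir("demo_dir", 0700); */

        /* likewise for chmod and umask */
        chmod("demo_dir", S_IRUSR | S_IWUSR | S_IXUSR);
        umask(S_IRWXG | S_IRWXO);
        return 0;
    }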
The original coding in tuplestore_trim() was only meant to work efficiently
in cases where each trim call deleted most of the tuples in the store.
Which, in fact, was the pattern of the original usage with a Material node
supporting mark/restore operations underneath a MergeJoin. However,
WindowAgg now uses tuplestores and it has considerably less friendly
trimming behavior. In particular it can attempt to trim one tuple at a
time off a large tuplestore. tuplestore_trim() had O(N^2) runtime in this
situation because of repeatedly shifting its tuple pointer array. Fix by
avoiding shifting the array until a reasonably large number of tuples have
been deleted. This can waste some pointer space, but we do still reclaim
the tuples themselves, so the percentage wastage should be pretty small.
Per Jie Li's report of slow percent_rank() evaluation. cume_dist() and
ntile() would certainly be affected as well, along with any other window
function that has a moving frame start and requires reading substantially
ahead of the current row.
Back-patch to 8.4, where window functions were introduced. There's no
need to tweak it before that.
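The general technique, in a standalone sketch (stand-in types, not the
tuplestore code): free trimmed tuples immediately but defer compacting the
pointer array until a sizable fraction of it is dead, so repeated
one-tuple trims stay O(N) overall instead of O(N^2).

    #include <stdlib.h>
    #include <string.h>

    typedef struct DemoStore
    {
        void      **tuples;         /* pointer array */
        int         nremoved;       /* leading slots already trimmed */
        int         ntuples;        /* slots in use, including removed */
    } DemoStore;

    static void
    store_trim(DemoStore *ts, int ntrim)
    {
        /* reclaim the tuples themselves right away */
        for (int i = ts->nremoved; i < ts->nremoved + ntrim; i++)
        {
            free(ts->tuples[i]);
            ts->tuples[i] = NULL;
        }
        ts->nremoved += ntrim;

        /* but only shift the pointer array once many slots are dead */
        if (ts->nremoved > 128 && ts->nremoved >= ts->ntuples / 2)
        {
            memmove(ts->tuples, ts->tuples + ts->nremoved,
                    (ts->ntuples - ts->nremoved) * sizeof(void *));
            ts->ntuples -= ts->nremoved;
            ts->nremoved = 0;
        }
    }

    int
    main(void)
    {
        enum { N = 1000 };
        DemoStore   ts = {malloc(N * sizeof(void *)), 0, N};

        for (int i = 0; i < N; i++)
            ts.tuples[i] = malloc(16);
        for (int i = 0; i < N; i++)     /* trim one tuple at a time */
            store_trim(&ts, 1);
        free(ts.tuples);
        return 0;
    }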
With hundreds of thousands of TOC entries, the repeated searches in
reduce_dependencies() become the dominant cost. Get rid of that searching
by constructing reverse-dependency lists, which we can do in O(N) time
during the fix_dependencies() preprocessing. I chose to store the reverse
dependencies as DumpId arrays for consistency with the forward-dependency
representation, and keep the previously-transient tocsByDumpId[] array
around to locate actual TOC entry structs quickly from dump IDs.
While this fixes the slow case reported by Vlad Arkhipov, there is still
a potential for O(N^2) behavior with sufficiently many tables:
fix_dependencies itself, as well as mark_create_done and
inhibit_data_for_failed_table, are doing repeated searches to deal with
table-to-table-data dependencies. Possibly this work could be extended
to deal with that, although the latter two functions are also used in
non-parallel restore where we currently don't run fix_dependencies.
Another TODO is that we fail to parallelize restore of multiple blobs
at all. This appears to require changes in the archive format to fix.
Back-patch to 9.0 where the problem was reported. 8.4 has potential issues
as well; but since it doesn't create a separate TOC entry for each blob,
it's at much less risk of having enough TOC entries to cause real problems.
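In outline (a standalone sketch, with dump IDs simplified to 0-based array
indexes; the real code keeps the tocsByDumpId[] array to map IDs to TOC
entries): two linear passes suffice to build the reverse-dependency
arrays.

    #include <stdlib.h>

    typedef int DumpId;

    typedef struct DemoEntry
    {
        DumpId     *deps;           /* forward deps: IDs this entry needs */
        int         ndeps;
        DumpId     *revdeps;        /* filled in: IDs that need this entry */
        int         nrevdeps;
    } DemoEntry;

    void
    build_reverse_deps(DemoEntry *entries, int nentries)
    {
        /* first pass: count how many reverse deps each entry will get */
        for (int i = 0; i < nentries; i++)
            entries[i].nrevdeps = 0;
        for (int i = 0; i < nentries; i++)
            for (int j = 0; j < entries[i].ndeps; j++)
                entries[entries[i].deps[j]].nrevdeps++;

        /* allocate, then reuse nrevdeps as a fill cursor */
        for (int i = 0; i < nentries; i++)
        {
            entries[i].revdeps = malloc(entries[i].nrevdeps * sizeof(DumpId));
            entries[i].nrevdeps = 0;
        }

        /* second pass: record i as a reverse dep of each of its deps */
        for (int i = 0; i < nentries; i++)
            for (int j = 0; j < entries[i].ndeps; j++)
            {
                DemoEntry  *t = &entries[entries[i].deps[j]];

                t->revdeps[t->nrevdeps++] = i;
            }
    }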
Hot Standby conflicts only with tuples that were visible at
some point. So, when generating the conflict transaction ids, ignore
tuples from aborted transactions and tuples that were updated or deleted
within their own inserting transaction.
Following detailed analysis and test case by Noah Misch.
The original report covered btree delete records; Heikki Linnakangas
correctly observed that this applies to other cases also.
Fix covers all sources of cleanup records via common code.
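Roughly, the filtering rule looks like the sketch below; the helper name
DidCommit() and the DeadTuple fields are hypothetical stand-ins for the
real heap tuple header and clog machinery.

    #include <stdbool.h>

    typedef unsigned int TransactionId;

    typedef struct DeadTuple
    {
        TransactionId xmin;         /* inserting transaction */
        TransactionId xmax;         /* deleting/updating transaction, or 0 */
    } DeadTuple;

    extern bool DidCommit(TransactionId xid);   /* assumed clog-style lookup */

    /*
     * Only tuples that were ever visible can conflict with a standby query:
     * skip tuples whose inserter aborted, and tuples deleted or updated
     * within their own inserting transaction.
     */
    bool
    tuple_can_conflict(const DeadTuple *tup)
    {
        if (!DidCommit(tup->xmin))
            return false;           /* inserter aborted: never visible */
        if (tup->xmax == tup->xmin)
            return false;           /* created and deleted by the same xact */
        return true;
    }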
Recent versions of the Linux system header files cause xlogdefs.h to
believe that open_datasync should be the default sync method, whereas
formerly fdatasync was the default on Linux. open_datasync is a bad
choice, first because it doesn't actually outperform fdatasync (in fact
the reverse), and second because we try to use O_DIRECT with it, causing
failures on certain filesystems (e.g., ext4 with data=journal option).
This part of the patch is largely per a proposal from Marti Raudsepp.
More extensive changes are likely to follow in HEAD, but this is as much
change as we want to back-patch.
Also clean up confusing code and incorrect documentation surrounding the
fsync_writethrough option. Those changes shouldn't result in any actual
behavioral change, but I chose to back-patch them anyway to keep the
branches looking similar in this area.
In 9.0 and HEAD, also do some copy-editing on the WAL Reliability
documentation section.
Back-patch to all supported branches, since any of them might get used
on modern Linux versions.
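For orientation, the two sync methods being compared boil down to the
following patterns (a standalone sketch, not the server's WAL code; the
O_DIRECT flag that gets added on top of open_datasync in some
configurations is omitted here):

    #include <fcntl.h>
    #include <unistd.h>

    /* open_datasync style: every write is synchronous via O_DSYNC */
    static int
    write_odsync(const char *path, const char *buf, int len)
    {
        int         fd = open(path, O_WRONLY | O_CREAT | O_DSYNC, 0600);

        if (fd < 0)
            return -1;
        (void) write(fd, buf, len);
        return close(fd);
    }

    /* fdatasync style: plain write, then an explicit data sync */
    static int
    write_fdatasync(const char *path, const char *buf, int len)
    {
        int         fd = open(path, O_WRONLY | O_CREAT, 0600);

        if (fd < 0)
            return -1;
        (void) write(fd, buf, len);
        (void) fdatasync(fd);
        return close(fd);
    }

    int
    main(void)
    {
        write_odsync("demo_odsync.tmp", "x", 1);
        write_fdatasync("demo_fdatasync.tmp", "x", 1);
        return 0;
    }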
First, avoid scanning the whole ProcArray once we know there
are at least commit_siblings active; second, skip the check
altogether if commit_siblings = 0.
Greg Smith
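In outline (stand-in types, not the ProcArray code), the two shortcuts
look like this:

    #include <stdbool.h>

    typedef struct DemoProc
    {
        bool        is_active;
    } DemoProc;

    /*
     * "Are at least min_siblings other transactions active?"  Skip the scan
     * entirely for min_siblings = 0, and stop as soon as enough are found
     * rather than walking the whole array.
     */
    bool
    enough_active_siblings(const DemoProc *procs, int nprocs, int min_siblings)
    {
        int         found = 0;

        if (min_siblings <= 0)
            return true;
        for (int i = 0; i < nprocs; i++)
        {
            if (procs[i].is_active && ++found >= min_siblings)
                return true;        /* early exit */
        }
        return false;
    }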
If there's an old transaction running in the master, and a lot of
transactions have started and finished since, and a WAL record is written
in the gap between creating the running-xacts snapshot and WAL-logging it,
recovery will fail
with "too many KnownAssignedXids" error. This bug was reported by
Joachim Wieland on Nov 19th.
In the same scenario, when fewer transactions have started so that all the
xids fit in KnownAssignedXids despite the first bug, a more serious bug
arises. We incorrectly initialize the clog code with the oldest still running
transaction, and when we see the WAL record belonging to a transaction with
an XID larger than one that committed already before the checkpoint we're
recovering from, we zero the clog page containing the already committed
transaction, leading to data loss.
In hindsight, trying to track xids in the known-assigned-xids array before
seeing the running-xacts record was too complicated. To fix that, hold
XidGenLock while the running-xacts snapshot is taken and WAL-logged. That
ensures that no transaction can begin or end in that gap, so that in recovery
we know that the snapshot contains all transactions running at that point in
WAL.
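In outline, with hypothetical function names standing in for the real
locking and WAL-logging routines, the fix amounts to holding the
xid-assignment lock across both steps:

    typedef struct RunningXactsSnapshot RunningXactsSnapshot;

    /* hypothetical stand-ins for XidGenLock and the snapshot/WAL routines */
    extern void lock_xid_assignment(void);
    extern void unlock_xid_assignment(void);
    extern RunningXactsSnapshot *collect_running_xacts(void);
    extern void wal_log_running_xacts(RunningXactsSnapshot *snap);

    void
    log_standby_snapshot(void)
    {
        lock_xid_assignment();
        /* no transaction can begin or end between these two steps */
        wal_log_running_xacts(collect_running_xacts());
        unlock_xid_assignment();
    }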
There are some code paths, such as SPI_execute(), where we invoke
copyObject() on raw parse trees before doing parse analysis on them. Since
the bison grammar is capable of building heavily nested parsetrees while
itself using only minimal stack depth, this means that copyObject() can be
the front-line function that hits stack overflow before anything else does.
Accordingly, it had better have a check_stack_depth() call. I did a bit of
performance testing and found that this slows down copyObject() by only a
few percent, so the hit ought to be negligible in the context of complete
processing of a query.
Per off-list report from Toshihide Katayama. Back-patch to all supported
branches.
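The guard itself is the usual stack-depth check; a self-contained sketch
of the technique (not the backend's check_stack_depth() implementation or
limits) looks like this:

    #include <stdio.h>
    #include <stdlib.h>

    static char *stack_base;
    static long max_stack_depth = 2 * 1024 * 1024;  /* demo limit: 2MB */

    /* Error out if the stack has grown too far from the recorded base. */
    static void
    check_stack_depth_demo(void)
    {
        char        here;
        long        used = stack_base - &here;

        if (used < 0)
            used = -used;       /* stack may grow in either direction */
        if (used > max_stack_depth)
        {
            fprintf(stderr, "stack depth limit exceeded\n");
            exit(1);
        }
    }

    /* Stand-in for a deeply recursive tree copy such as copyObject(). */
    static void
    copy_node_recursively(int depth)
    {
        check_stack_depth_demo();   /* front-line function gets the check */
        if (depth > 0)
            copy_node_recursively(depth - 1);
    }

    int
    main(void)
    {
        char        base;

        stack_base = &base;
        copy_node_recursively(100000);
        return 0;
    }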
This doesn't involve any user-visible change in behavior, but will be
useful when the COPY routines are exposed to allow their use by Foreign
Data Wrapper routines, which will be able to use these routines to read
irregular CSV files, for example.
Avoid eating quite so much memory for large inheritance trees, by
reclaiming the space used by temporary copies of the original parsetree and
range table, as well as the workspace needed during planning. The cost is
needing to copy the finished plan trees out of the child memory context.
Although this looks like it ought to slow things down, my testing shows
it actually is faster, apparently because fewer interactions with malloc()
are needed and/or we can do the work within a more readily cacheable amount
of memory. That result might be platform-dependent, but I'll take it.
Per a gripe from John Papandriopoulos, in which it was pointed out that the
memory consumption actually grew as O(N^2) for sufficiently many child
tables, since we were creating N copies of the N-element range table.
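The memory-management pattern, sketched with hypothetical arena functions
rather than the real MemoryContext API: plan each child in a scratch
context, copy only the finished plan into the caller's context, and reset
the scratch context between children.

    typedef struct Arena Arena;     /* hypothetical allocation arena */
    typedef struct Plan Plan;

    extern Arena *arena_create(void);
    extern void arena_reset(Arena *arena);
    extern void arena_destroy(Arena *arena);
    extern Plan *plan_one_child(Arena *scratch, int child_relid);
    extern Plan *copy_plan_into(Arena *dest, const Plan *plan);

    void
    plan_all_children(Arena *parent_ctx, const int *children, int nchildren,
                      Plan **results)
    {
        Arena      *scratch = arena_create();

        for (int i = 0; i < nchildren; i++)
        {
            Plan       *tmp = plan_one_child(scratch, children[i]);

            /* keep only the finished plan; per-child garbage is discarded */
            results[i] = copy_plan_into(parent_ctx, tmp);
            arena_reset(scratch);
        }
        arena_destroy(scratch);
    }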
1. Complain, rather than silently doing nothing, if an "invalid" tuple
is found on a leaf page. Per off-list discussion with Heikki.
2. Fix oversight in code that removes a GISTSearchItem from the search
queue: we have to reset lastHeap if this was the last heap item in the
parent GISTSearchTreeItem. Otherwise subsequent additions will do the
wrong thing. This was probably masked in early testing because in typical
cases the parent item would now be completely empty and would be deleted on
next call. You'd need a queued non-leaf page at exactly the same distance
as a heap tuple to expose the bug.