Post-commit review identified a number of places where addition was
used instead of multiplication or memory wasn't zeroed where it should
have been. This commit also fixes one case where a structure member
was mis-initialized, and moves another memory allocation closer to
the place where the allocated storage is used for clarity.
Andres Freund
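For illustration, a minimal sketch of the two bug patterns described above, using hypothetical code rather than the actual call sites:

#include <stdlib.h>
#include <string.h>

/* Sizing an allocation with addition where multiplication is needed, and
 * forgetting to zero memory whose unused bytes are later read or written
 * out verbatim (names here are illustrative only): */
static char *
alloc_elems(size_t nelems, size_t elemsize)
{
    char   *buf;

    /* WRONG: buf = malloc(nelems + elemsize); -- far too small */
    buf = malloc(nelems * elemsize);
    if (buf != NULL)
        memset(buf, 0, nelems * elemsize);  /* zero it, padding included */
    return buf;
}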
This code really needs to be refactored so that there aren't so many copies
that can diverge. Not to mention that this whole approach is probably
wrong. But for the moment I'll just stick my finger in the dike.
Per report from Michael Paquier.
Fix JSONB_MAX_ELEMS and JSONB_MAX_PAIRS macros to use CB_MASK in the
calculation. JENTRY_POSMASK happens to have the same value at the moment,
but that's just coincidental.
Refactor jsonb iterator functions, for readability.
Get rid of the JENTRY_ISFIRST flag. Whenever we handle JEntrys, we have
access to the whole array and have enough context information to know
which entry is the first. This frees up one bit in the JEntry header for
future use. While we're at it, shuffle the JEntry bits so that boolean
true and false go together, for aesthetic reasons.
Bump catalog version as this changes the on-disk format slightly.
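A hedged illustration of the resulting JEntry layout; the macro values below are illustrative stand-ins, not necessarily the real jsonb.h definitions:

#include <stdint.h>

typedef uint32_t JEntry;

/* low bits: the entry's offset or length within the array */
#define JENTRY_POSMASK       0x0FFFFFFF
/* high bits: the value's type; with ISFIRST gone, the topmost bit is
 * unused and reserved for future needs */
#define JENTRY_ISSTRING      0x00000000
#define JENTRY_ISNUMERIC     0x10000000
/* boolean false and true are now adjacent codes */
#define JENTRY_ISBOOL_FALSE  0x20000000
#define JENTRY_ISBOOL_TRUE   0x30000000
#define JENTRY_ISNULL        0x40000000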
Change the key representation so that values that would exceed 127 bytes
are hashed into short strings, and so that the original JSON datatype of
each value is recorded in the index. The hashing rule eliminates the major
objection to having this opclass be the default for jsonb, namely that it
could fail for plausible input data (due to GIN's restrictions on maximum
key length). Preserving datatype information doesn't really buy us much
right now, but it requires no extra space compared to the previous way,
and it might be useful later.
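A hedged sketch of the key scheme; the names, flag bit, and hash function here are illustrative, with only the 127-byte threshold taken from the change itself:

#include <stdint.h>
#include <string.h>

#define KEY_DATA_LIMIT 127

typedef struct
{
    uint8_t     flag;           /* original JSON datatype, plus "hashed" bit */
    uint8_t     len;
    uint8_t     data[KEY_DATA_LIMIT];
} GinJsonKey;

static uint32_t
toy_hash(const uint8_t *p, size_t n)
{
    uint32_t    h = 2166136261u;    /* FNV-1a, purely for illustration */

    while (n-- > 0)
        h = (h ^ *p++) * 16777619u;
    return h;
}

static GinJsonKey
make_key(uint8_t typeflag, const uint8_t *val, size_t len)
{
    GinJsonKey  k;

    memset(&k, 0, sizeof(k));
    k.flag = typeflag;
    if (len <= KEY_DATA_LIMIT)
    {
        memcpy(k.data, val, len);
        k.len = (uint8_t) len;
    }
    else
    {
        uint32_t    h = toy_hash(val, len);

        memcpy(k.data, &h, sizeof(h));  /* short, fixed-length stand-in */
        k.len = sizeof(h);
        k.flag |= 0x80;     /* mark as hashed, so it can't collide with exact values */
    }
    return k;
}

Every key is thus bounded in length no matter what the input looks like, which is what removes the failure mode for plausible input data.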
Also, change the consistency-checking functions to request recheck for
exists (jsonb ? text) and related operators. The original analysis that
this is an exactly checkable query was incorrect, since the index does
not preserve information about whether a key appears at top level in
the indexed JSON object. Add a test case demonstrating the problem.
Make some other, mostly cosmetic improvements to the code in jsonb_gin.c
as well.
catversion bump due to on-disk data format change in jsonb_ops indexes.
Move the functions around to group related functions together. Remove
binequal argument from lengthCompareJsonbStringValue, moving that
responsibility to lengthCompareJsonbPair. Fix typo in comment.
Ensure that ecpg preprocessor output files are rebuilt when re-testing
after a change in the ecpg preprocessor itself, or a change in any of
several include files that get copied verbatim into the output files.
The lack of these dependencies was what created problems for Kevin Grittner
after the recent pgindent run. There's no way for --enable-depend to
discover these dependencies automatically, so we've gotta put them into
the Makefiles by hand.
While at it, reduce the amount of duplication in the ecpg invocations.
Back in 8.3, we installed permissions checks in these functions (see
commits 8bc225e799 and cc26599b72). But we forgot to document that
anywhere in the user-facing docs; it did get mentioned in the 8.3 release
notes, but nobody's looking at that any more. Per gripe from Suya Huang.
Per discussion, the old value of 128MB is ridiculously small on modern
machines; in fact, it's not even any larger than the default value of
shared_buffers, which it certainly should be. Increase to 4GB, which
is unlikely to be any worse than the old default for anyone, and should
be noticeably better for most. Eventually we might have an autotuning
scheme for this setting, but the recent attempt crashed and burned,
so for now just do this.
This reverts commit ee1e5662d8, as well as
a remarkably large number of followup commits, which were mostly concerned
with the fact that the implementation didn't work terribly well. It still
doesn't: we probably need some rather basic work in the GUC infrastructure
if we want to fully support GUCs whose default varies depending on the
value of another GUC. Meanwhile, it also emerged that there wasn't really
consensus in favor of the definition the patch tried to implement (ie,
effective_cache_size should default to 4 times shared_buffers). So whack
it all back to where it was. In a followup commit, I'll do what was
recently agreed to, which is to simply change the default to a higher
value.
Commit 4318daecc9 broke it. The change in
sub-second precision at extreme dates is normal. The inconsistent
truncation vs. rounding is essentially a bug, albeit a longstanding one.
Back-patch to 8.4, like the causative commit.
Previous commit was confused about the case we're handling: actually,
what the patch is dealing with is platforms that have optreset, *and*
have <getopt.h>, but the latter fails to declare the former. Because
we use a linking probe to set HAVE_INT_OPTRESET, we need to be sure we
have a declaration even if <getopt.h> doesn't think it exists.
Reportedly, some versions of mingw are like that, and it seems plausible
in general that older platforms might be that way. However, we'd
determined experimentally that just doing "extern int" conflicts with
the way Cygwin declares these variables, so explicitly exclude Cygwin.
Michael Paquier, tweaked by me to hopefully not break Cygwin
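A hedged sketch of the declaration logic described above; the configure symbol is from the commit text, and the exact header arrangement is illustrative:

#include <getopt.h>     /* may exist and yet fail to declare optreset */

/* The link probe proved the variable exists, so declare it ourselves --
 * except on Cygwin, whose own declaration conflicts with a bare extern. */
#if defined(HAVE_INT_OPTRESET) && !defined(__CYGWIN__)
extern int  optreset;
#endif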
To-be-deleted list pages contain no useful information, as they are being
deleted, but we must still protect the writes from being torn by a crash
after a partial write. To do that, re-initialize the pages on WAL replay.
Jeff Janes caught this with a test program to test partial writes.
Backpatch to all supported versions.
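A self-contained sketch of the principle, not the actual b-tree replay code: since a to-be-deleted page carries no useful data, replay must not trust its possibly-torn contents.

#include <string.h>

#define BLCKSZ 8192

static void
replay_delete_page(char *page)
{
    /* never read what a partial write may have left behind */
    memset(page, 0, BLCKSZ);
    /* ...then lay down a fresh page header and the deleted-page markers... */
}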
If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last
incomplete message in each bufferload we'd usually double the buffer size,
due to supposing that we didn't have enough room in the buffer to finish
collecting that message. After filling the newly-enlarged buffer, the
cycle repeats, eventually resulting in an out-of-memory situation (which
would be reported misleadingly as "lost synchronization with server").
Of course, we should not enlarge the buffer unless we still need room
after discarding already-processed messages.
This bug dates back quite a long time: pqParseInput3 has had the behavior
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1
in 2008. Probably the reason it's not been isolated before is that in
common environments the recv() loop would always be faster than the server
(if on the same machine) or faster than the network (if not); or at least
it wouldn't be slower consistently enough to let the buffer ramp up to a
problematic size. The reported cases involve Windows, which perhaps has
different timing behavior than other platforms.
Per bug #7914 from Shin-ichi Morita, though this is different from his
proposed solution. Back-patch to all supported branches.
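A self-contained sketch of the corrected policy; the struct is a simplification, not libpq's actual pqReadData() machinery:

#include <stdlib.h>
#include <string.h>

typedef struct
{
    char   *buf;
    size_t  size;   /* allocated size; assumed nonzero */
    size_t  start;  /* offset of first unprocessed byte */
    size_t  end;    /* offset of first free byte */
} InBuffer;

/* Make room for "need" more bytes.  The essence of the fix: reclaim the
 * space held by already-processed messages first, and enlarge the buffer
 * only if that still isn't enough.  Growing unconditionally is what let a
 * consistently fast sender ratchet the buffer up until out-of-memory. */
static int
ensure_room(InBuffer *in, size_t need)
{
    size_t  newsize;
    char   *newbuf;

    if (in->size - in->end >= need)
        return 0;

    /* first, slide unprocessed data down over the consumed part */
    if (in->start > 0)
    {
        memmove(in->buf, in->buf + in->start, in->end - in->start);
        in->end -= in->start;
        in->start = 0;
        if (in->size - in->end >= need)
            return 0;   /* compaction sufficed; no growth */
    }

    /* only now double the buffer */
    newsize = in->size;
    while (newsize - in->end < need)
        newsize *= 2;
    newbuf = realloc(in->buf, newsize);
    if (newbuf == NULL)
        return -1;
    in->buf = newbuf;
    in->size = newsize;
    return 0;
}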
Just as we would start bgworkers immediately after an initial startup
of the server, we should restart them immediately when reinitializing.
Petr Jelinek and Robert Haas
The main target of this cleanup is the convertJsonb() function, but I also
touched a lot of other things that I spotted in the process.
The new convertToJsonb() function uses an output buffer that's resized on
demand, so the code to estimate the size of a JsonbValue is removed.
The on-disk format was not changed, even though I refactored the structs
used to handle it. The term "superheader" is replaced with "container".
The jsonb_exists_any and jsonb_exists_all functions no longer sort the input
array. That was a premature optimization, the idea being that if there are
duplicates in the input array, you only need to check them once. Also,
sorting the array saves some effort in the binary search used to find a key
within an object. But there were drawbacks too: the sorting and
deduplicating obviously isn't free, and in the typical case there are no
duplicates to remove, and the gain in the binary search was minimal. Remove
all that, which makes the code simpler too.
This includes a bug fix: the total length of the elements in a jsonb array
or object mustn't exceed 2^28. That is now checked.
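A hedged sketch of both mechanisms, with illustrative names: an append buffer that resizes on demand, so no size estimate is needed up front, together with the new total-length check:

#include <stdlib.h>
#include <string.h>

#define JSONB_MAX_TOTAL ((size_t) 1 << 28)

typedef struct
{
    char   *data;
    size_t  len;
    size_t  cap;
} OutBuf;

static int
outbuf_append(OutBuf *out, const void *src, size_t n)
{
    if (out->len + n > JSONB_MAX_TOTAL)
        return -1;      /* total length would exceed 2^28 */
    if (out->len + n > out->cap)
    {
        size_t  newcap = out->cap ? out->cap : 64;
        char   *newdata;

        while (newcap < out->len + n)
            newcap *= 2;
        newdata = realloc(out->data, newcap);
        if (newdata == NULL)
            return -1;
        out->data = newdata;
        out->cap = newcap;
    }
    memcpy(out->data + out->len, src, n);
    out->len += n;
    return 0;
}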
Since the postmaster won't perform a crash-and-restart sequence
for background workers which don't request shared memory access,
we'd better make sure that they can't corrupt shared memory.
Patch by me, review by Tom Lane.
ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least). The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid. Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.
Per report from Leif Jensen. It's been broken for a long time, so
back-patch to all active branches.
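A hedged sketch of the shape of the fix, assuming the caller has the query's QueryDesc at hand; this shows the general pattern rather than the exact patch:

    /* ReScan may run user-defined code, so a snapshot must be active */
    PushActiveSnapshot(queryDesc->snapshot);
    ExecutorRewind(queryDesc);
    PopActiveSnapshot();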
The motivation for a crash and restart cycle when a backend dies is
that it might have corrupted shared memory on the way down; and we
can't recover reliably except by reinitializing everything. But that
doesn't apply to processes that don't touch shared memory. Currently,
there's nothing to prevent a background worker that doesn't request
shared memory access from touching shared memory anyway, but that's a
separate bug.
Previous to this commit, the coding in postmaster.c was inconsistent:
an exit status other than 0 or 1 didn't provoke a crash-and-restart,
but failure to release the postmaster child slot did. This change
makes those cases consistent.
Commit 4318daecc9 introduced a test that
couldn't be made consistent between integer and floating-point
timestamps.
It was designed to test the longest possible interval output length,
so removing four zeros from the number of hours, as this patch does,
is not ideal. But the test still has some utility for its original
purpose, and there aren't a lot of other good options.
Noah Misch suggested a different approach where we test that the
output either matches what we expect from integer timestamps or what
we expect from floating-point timestamps. That seemed to obscure an
otherwise simple test, however.
Reviewed by Tom Lane and Noah Misch.
The coding in JsonbHashScalarValue might have accidentally failed to fail
given current representational choices, but the key word there would be
"accidental". Insert the appropriate datatype conversion macro. And
use the right conversion macro for hash_numeric's result, too.
In passing make the code a bit cleaner and less repetitive by factoring
out the xor step from the switch.
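A hedged sketch of the factored-out mixing step; the switch over scalar types, where each case computes tmp using the proper datatype conversion macro for its hash function's result, is omitted:

#include <stdint.h>

static void
mix_hash(uint32_t *hash, uint32_t tmp)
{
    *hash = (*hash << 1) | (*hash >> 31);   /* rotate left one bit */
    *hash ^= tmp;                           /* single xor step after the switch */
}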
Index-only scans avoid taking a lock on the VM buffer, which would
cause a lot of contention. To be correct, that requires some intricate
assumptions that weren't completely documented in the previous
comment.
Reviewed by Robert Haas.
The main problem is that DocBook SGML allows indexterm elements just
about everywhere, but DocBook XML is stricter. For example, this common
pattern
<varlistentry>
<indexterm>...</indexterm>
<term>...</term>
...
</varlistentry>
needs to be changed to something like
<varlistentry>
<term>...<indexterm>...</indexterm></term>
...
</varlistentry>
See also bb4eefe7bf.
There is currently nothing in the build system that enforces that things
stay valid, because that requires additional tools and will receive
separate consideration.
The previous coding would potentially cause attaching to segment A to
fail if segment B was at the same time in the process of going away.
Andres Freund, with a comment tweak by me
Commit d298b50a3b by Heikki Linnakangas
requested that the version check message be updated at the next release,
suggesting that the appropriate text would be "9.3 or later". The logic
used for the check indicates that the correct text for 9.4 is "9.3 or
9.4", since the check as coded would fail for later releases.
When an array of char * was used as the target of a FETCH statement
returning more than one row, ecpg tried to store all the results in the
first element. Instead it should write each row into the array of char
pointers at the right offset, use the address rather than the value of the
C variable while reading the array, and treat such a variable as char **
rather than char * for pointer arithmetic.
Patch by Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
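A hedged illustration of the pointer arithmetic at issue; this is hypothetical code, not actual ECPG output:

/* With an array of char pointers as the FETCH target, each returned row
 * must land in its own element: take the address of the array and step
 * through it as char **.  Stepping a char * instead, or writing every row
 * through element 0, piles all results into the first slot. */
static void
store_rows(char *targets[], char *rowdata[], int nrows)
{
    char  **slot = &targets[0];     /* the variable's address, typed char ** */
    int     i;

    for (i = 0; i < nrows; i++)
        slot[i] = rowdata[i];       /* offset advances by sizeof(char *) */
}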