This change allows us to eliminate the previous limit on stored query
length, and it makes the shared-memory hash table very much smaller,
allowing more statements to be tracked. (The default value of
pg_stat_statements.max is therefore increased from 1000 to 5000.)
In typical scenarios, the hash table can be large enough to hold all the
statements commonly issued by an application, so that there is little
"churn" in the set of tracked statements, and thus little need to do I/O
to the file.
To further reduce the need for I/O to the query-texts file, add a way
to retrieve all the columns of the pg_stat_statements view except for
the query text column. This is probably not of much interest for human
use but it could be exploited by programs, which will prefer using the
queryid anyway.
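For illustration, a monitoring program could call the underlying
set-returning function directly and skip the texts entirely; the libpq
sketch below assumes the 1.2 interface in which that function takes a
showtext boolean (the connection string and column list are just examples):

    /*
     * Illustrative libpq sketch, not part of the patch: poll the statistics
     * without fetching query texts, assuming pg_stat_statements 1.2's
     * pg_stat_statements(showtext boolean) function form.
     */
    #include <stdio.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");
        PGresult   *res;
        int         i;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* showtext => false: no query text, hence no read of the text file */
        res = PQexec(conn,
                     "SELECT queryid, calls, total_time "
                     "FROM pg_stat_statements(false)");
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
        {
            for (i = 0; i < PQntuples(res); i++)
                printf("queryid=%s calls=%s total_time=%s\n",
                       PQgetvalue(res, i, 0),
                       PQgetvalue(res, i, 1),
                       PQgetvalue(res, i, 2));
        }

        PQclear(res);
        PQfinish(conn);
        return 0;
    }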
Ordinarily, we'd need to bump the extension version number for the latter
change. But since we already advanced pg_stat_statements' version number
from 1.1 to 1.2 in the 9.4 development cycle, it seems all right to just
redefine what 1.2 means.
Peter Geoghegan, reviewed by Pavel Stehule
This makes it possible to store lwlocks as part of some other data
structure in the main shared memory segment, or in a dynamic shared
memory segment. There is still a main LWLock array and this patch does
not move anything out of it, but it provides necessary infrastructure
for doing that in the future.
This change is likely to increase the size of LWLockPadded on some
platforms, especially 32-bit platforms where it was previously only
16 bytes.
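As a rough illustration of what this enables (a sketch only; the struct, the
names, and the tranche bookkeeping are assumptions about how an extension
might use the new API, not code from this patch), a lock can now live inside
an extension's own shared structure:

    #include "postgres.h"
    #include "storage/lwlock.h"
    #include "storage/shmem.h"

    /* Hypothetical extension state kept in the main shared memory segment. */
    typedef struct MySharedState
    {
        LWLock      lock;       /* embedded here, not in the main LWLock array */
        int64       counter;    /* data protected by the lock */
    } MySharedState;

    static LWLockTranche my_tranche;

    static void
    my_shmem_startup(void)
    {
        bool        found;
        MySharedState *state;

        state = ShmemInitStruct("my_shared_state", sizeof(MySharedState), &found);
        if (!found)
        {
            int         tranche_id = LWLockNewTrancheId();

            /* register a tranche so the lock can be identified by name */
            my_tranche.name = "my_shared_state";
            my_tranche.array_base = &state->lock;
            my_tranche.array_stride = sizeof(LWLock);
            LWLockRegisterTranche(tranche_id, &my_tranche);

            LWLockInitialize(&state->lock, tranche_id);
            state->counter = 0;
        }
    }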
Patch by me. Review by Andres Freund and KaiGai Kohei.
Fix integer overflow issue noted by Magnus Hagander, as well as a bunch
of other infelicities in commit ee1e5662d8
and its unreasonably large number of followups.
Move allocation to after we check the remote server version, to avoid
a possible, very minor, memory leak. This also makes us more consistent
throughout, as most places in pg_dump already do it this way (due, in
part, to previous fixes like this).
Spotted by the Coverity scanner.
Consistently check the dup2() call results throughout syslogger.c.
It's pretty unlikely that they'll error out, but if they do,
ereport(FATAL) instead of blissfully continuing on.
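The pattern is simply this (a sketch of the idea, not the literal
syslogger.c code; the helper name is invented):

    #include "postgres.h"

    #include <unistd.h>

    /* Fail hard if dup2() cannot redirect the descriptor, rather than
     * continuing with a broken stderr. */
    static void
    redirect_stderr_to(int fd)
    {
        if (dup2(fd, STDERR_FILENO) < 0)
            ereport(FATAL,
                    (errcode_for_file_access(),
                     errmsg("could not redirect stderr: %m")));
    }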
Spotted by the Coverity scanner.
From the Department of Nitpicking, be consistent with other escaping
and use 'E' instead of 'e' to escape the string in the example docs
for GET DIAGNOSTICS stack = PG_CONTEXT.
Noticed by Department Chief Magnus Hagander.
Mention that CREATE TABLE LIKE INCLUDING DEFAULTS creates a link between
the original and new tables when a default expression calls a
database-modifying function, such as nextval().
This allows ending recovery as soon as a consistent state has been reached.
Without this, there was no easy way to e.g. restore an online backup without
replaying any extra WAL after the backup ended.
MauMau and me.
Per report from Jeffrey Walton, libpq has been accepting only exactly
TLSv1, and nothing newer. Along the lines of the backend code, libpq will
now support new TLS versions as OpenSSL adds them.
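The general technique is roughly the following (a sketch only, not the
actual libpq code; exactly which legacy protocols get masked off is a
separate policy decision):

    #include <openssl/ssl.h>

    /* Use the version-flexible method so future TLS versions are accepted
     * automatically, then explicitly exclude the old protocols we reject. */
    static SSL_CTX *
    make_flexible_ctx(void)
    {
        SSL_CTX    *ctx = SSL_CTX_new(SSLv23_method());

        if (ctx != NULL)
            SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
        return ctx;
    }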
Marko Kreen, reviewed by Wim Lewis.
During parallel pg_dump, a worker process closing the connection caused
a minor memory leak (particularly minor as we are likely about to exit
anyway). Free the memory in this case before returning NULL to indicate
that the connection is closed.
Spotted by the Coverity scanner.
Back patch to 9.3 where this was introduced.
The maxoff field is not used in the new, compressed page format. Let's
reset it when converting an old-format page to the new format. The code
won't care either way, but this makes it possible to use the field for
something else in the future.
When vacuuming a GIN data leaf page, any compressed posting lists that are
not modified are copied back to the buffer from a later location in the same
buffer rather than from a palloc'd copy. In other words, they are just moved
downwards within the same buffer. Because the source and destination
addresses can overlap, we must use memmove rather than memcpy.
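A contrived illustration of the distinction (names made up; this is not the
GIN code itself):

    #include <string.h>

    /* Slide a chunk downwards within the same page.  The source and
     * destination ranges can overlap, so memmove is required; memcpy on
     * overlapping ranges is undefined behavior. */
    static void
    slide_down(char *page, size_t from_off, size_t to_off, size_t len)
    {
        memmove(page + to_off, page + from_off, len);
    }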
Report and fix by Alexander Korotkov.
Add the ability to specify the objects to move by their owner (as recorded
in relowner), and change ALL to mean all objects. This makes the command
always operate against a well-defined set of objects, rather than deciding
which objects to move based on the role of the user running the command.
Per discussion with Simon and Tom.
Since C99, it's been standard for printf and friends to accept a "z" size
modifier, meaning "whatever size size_t has". Up to now we've generally
dealt with printing size_t values by explicitly casting them to unsigned
long and using the "l" modifier; but this is really the wrong thing on
platforms where pointers are wider than longs (such as Win64). So let's
start using "z" instead. To ensure we can do that on all platforms, teach
src/port/snprintf.c to understand "z", and add a configure test to force
use of that implementation when the platform's version doesn't handle "z".
Having done that, modify a bunch of places that were using the
unsigned-long hack to use "z" instead. This patch doesn't pretend to have
gotten everyplace that could benefit, but it catches many of them. I made
an effort in particular to ensure that all uses of the same error message
text were updated together, so as not to increase the number of
translatable strings.
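For example (illustrative only, not a particular call site from this patch):

    #include "postgres.h"

    static void
    report_size(Size nbytes)
    {
        /* old style: cast to unsigned long, which can truncate where
         * size_t is wider than long */
        elog(LOG, "allocated %lu bytes", (unsigned long) nbytes);

        /* new style: src/port/snprintf.c guarantees "z" works everywhere */
        elog(LOG, "allocated %zu bytes", nbytes);
    }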
It's possible that this change will result in format-string warnings from
pre-C99 compilers. We might have to reconsider if there are any popular
compilers that will warn about this; but let's start by seeing what the
buildfarm thinks.
Andres Freund, with a little additional work by me
The SPARC machines in the buildfarm are crashing because of misaligned
access to posting lists stored in entry tuples.
I accidentally removed a critical SHORTALIGN() from ginFormTuple, as part
of the packed posting lists patch. Perhaps I thought it was unnecessary,
because the index_form_tuple() call above the SHORTALIGN already aligned
the size, missing the fact that the null-category byte makes it misaligned
again (I think the SHORTALIGN is indeed unnecessary if there's no null-
category byte, but let's just play it safe...).
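In outline, the requirement is this (a simplified sketch, not the literal
ginFormTuple code; the helper name is invented):

    #include "postgres.h"

    /* Adding the one-byte null-category marker can make an already-aligned
     * size odd again, so the result must be re-rounded with SHORTALIGN
     * before anything needing 2-byte alignment is placed at that offset. */
    static Size
    size_with_null_category(Size aligned_size)
    {
        return SHORTALIGN(aligned_size + 1);
    }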
Some cases were still reporting errors and aborting, instead of emitting a
NOTICE that the object was being skipped. That made it more difficult to
handle pg_dump --clean cleanly, so change those cases to skip missing
objects properly.
Per bug #7873 reported by Dave Rolsky; apparently this affects a large
number of users.
Authors: Pavel Stehule and Dean Rasheed. Some tweaks by Álvaro Herrera
There was a bug in psql's meta-command \conninfo. When an
IP address was specified in hostaddr and psql used it to create
a connection (i.e., psql -d "hostaddr=xxx"), \conninfo could not
display that address. This is because \conninfo got the connection
information only from PQhost(), which cannot return the hostaddr.
This patch adds PQhostaddr(), and changes \conninfo so that it
can display not only the host name that PQhost() returns but also
the IP address which PQhostaddr() returns.
The bug has existed since 9.1, where \conninfo was introduced.
But it's too late to add a new libpq function to already-released versions,
so no backpatch.
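A minimal usage sketch (the connection string is illustrative):

    #include <stdio.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("hostaddr=127.0.0.1 dbname=postgres");

        if (PQstatus(conn) == CONNECTION_OK)
            printf("host=%s hostaddr=%s\n",
                   PQhost(conn) ? PQhost(conn) : "",
                   PQhostaddr(conn) ? PQhostaddr(conn) : "");
        PQfinish(conn);
        return 0;
    }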
On platforms that don't support Unix-domain sockets, when
neither host nor hostaddr is specified, the default host
'localhost' is used to connect to the server and PQhost() should
return that, but it didn't. This patch fixes PQhost() so that
it returns the default host in that case.
This patch also fixes PQhost() so that it doesn't return the
Unix-domain socket directory path on platforms that don't
support Unix-domain sockets.
Back-patch to all supported versions.
GIN posting lists are now encoded using varbyte encoding, which allows them
to fit in much less space than the straight ItemPointer array format used
before. The new encoding is used both for the lists stored in-line in entry
tree items and for posting tree leaf pages.
To maintain backwards-compatibility and keep pg_upgrade working, the code
can still read old-style pages and tuples. Posting tree leaf pages in the
new format are flagged with the GIN_COMPRESSED flag, to distinguish old- and
new-format pages. Likewise, entry tree tuples in the new format have the
GIN_ITUP_COMPRESSED flag set in a bit that was previously unused.
This patch bumps GIN_CURRENT_VERSION from 1 to 2. New indexes created with
version 9.4 will therefore have version number 2 in the metapage, while old
pg_upgraded indexes will have version 1. The code treats them the same, but
it might come in handy in the future, if we want to drop support for the
uncompressed format.
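For reference, the underlying varbyte (base-128) idea looks like this; it is
the general technique rather than the exact encoder in the patch, which works
on item-pointer deltas:

    #include <stdint.h>

    /* Encode an unsigned value as a sequence of 7-bit groups, low-order
     * first, with the high bit of each byte acting as a continuation flag. */
    static int
    encode_varbyte(uint64_t val, unsigned char *buf)
    {
        int         len = 0;

        while (val >= 0x80)
        {
            buf[len++] = (unsigned char) ((val & 0x7F) | 0x80);
            val >>= 7;
        }
        buf[len++] = (unsigned char) val;   /* final byte: continuation bit clear */
        return len;
    }

Because deltas between consecutive item pointers are usually small, most of
them encode in just a byte or two, which is where the space saving comes from.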
Alexander Korotkov and me. Reviewed by Tomas Vondra and Amit Langote.
A while back, 2c92edad48 allowed
type_func_name_keywords to be used in more places, including role
identifiers. Unfortunately, that commit missed cases where
name_list was used for lists of roles, e.g. for DROP ROLE. This
resulted in the unfortunate situation that you could CREATE a role
with a type_func_name_keywords-allowed identifier, but not DROP it
directly (ALTER could be used to rename it to something which
could then be DROP'd).
This extends the use of type_func_name_keywords to places where role
lists can be used.
Back-patch to 9.0, as 2c92edad48 was.
All these constructs generate parse trees consisting of a Const and
a run-time type coercion (perhaps a FuncExpr or a CoerceViaIO). Modify
the raw parse output so that we end up with the original token's location
attached to the type coercion node while the Const has location -1;
before, it was the other way around. This makes no difference in terms
of what exprLocation() will say about the parse tree as a whole, so it
should not have any user-visible impact. The point of changing it is that
we do not want contrib/pg_stat_statements to treat these constructs as
replaceable constants. It will do the right thing if the Const has
location -1 rather than a valid location.
This is a pretty ugly hack, but then this code is ugly already; we should
someday replace this translation with special-purpose parse node(s) that
would allow ruleutils.c to reconstruct the original query text.
(See also commit 5d3fcc4c2e, which also
hacked location assignment rules for the benefit of pg_stat_statements.)
Back-patch to 9.2 where pg_stat_statements grew the ability to recognize
replaceable constants.
Kyotaro Horiguchi
Unlike our other array functions, this considers the total number of
elements across all dimensions, and returns 0 rather than NULL when the
array has no elements. But since the behaviors it deviates from are
almost universally disliked, hopefully that's OK.
Marko Tiikkaja, reviewed by Dean Rasheed and Pavel Stehule
Commit a5bca4ef03 accidentally changed
the semantics when the "skipping missing configuration file" message is
emitted, because it forced OK to true instead of leaving the value
untouched.
Spotted by Tom Lane.
Commit 138184adc5 plugged some but not
all of the leaks from commit 2a0c81a12c.
This tightens things up some more.
Amit Kapila, per an observation by Tom Lane
When there are consecutive spaces (or other non-format-code characters) in
the format, we should advance over exactly that many characters of input.
The previous coding mistakenly did a "skip whitespace" action between such
characters, possibly allowing more input to be skipped than the user
intended. We only need to skip whitespace just before an actual field.
This is really a bug fix, but given the minimal number of field complaints
and the risk of breaking applications coded to expect the old behavior,
let's not back-patch it.
Jeevan Chalke
Previously, the presence of a nextval() call prevented the
use of batch-mode COPY. This patch introduces a
special case just for nextval(). In the future
we will introduce a general solution for
labelling volatile functions as safe for this use.
In the MSVC build system we've never separated krb5 from gss,
and always built them both. Since the removal of native krb5
support, this parameter only controls GSSAPI, so rename it
accordingly.
krb5 has been deprecated since 8.3, and the recommended way to do
Kerberos authentication is using the GSSAPI authentication method
(which is still fully supported).
libpq retains the ability to identify krb5 authentication, but only
gives an error message about it being unsupported. Since all authentication
is initiated from the backend, there is no need to keep it at all
in the backend.