By converting to forward slashes at configure time, we avoid having to
repeat the logic everywhere it is needed, such as in the transform
modules for PL/Python.
For building PL/Perl, PL/Python, and PL/Tcl, we need a shared library of
libperl, libpython, and libtcl, respectively. Previously, this was
checked in the makefiles, skipping the PL build with a warning if no
shared library was available. Now this is checked in configure, with an
error if no shared library is available.
The previous situation arose because in the olden days, the configure
options --with-perl, --with-python, and --with-tcl controlled whether
frontend interfaces for those languages would be built. The procedural
languages were added later, and shared libraries were often not
available in the beginning. So it was decided to skip the builds of the
procedural languages in those cases. The frontend interfaces have since
been removed from the tree, and shared libraries are now available most
of the time, so that setup makes much less sense now.
Also, the new setup allows contrib modules and pgxs users to rely on the
respective PLs being available based on configure flags.
Eliminate the separate 'len' variable from the loops, and also use the
4-byte instruction. This shaves off a few more cycles. Even though this
routine that uses the special SSE 4.2 instructions is much faster than a
generic routine, it's still a hot spot, so let's make it as fast as
possible.
Change the configure test to not test _mm_crc32_u64. That variant is only
available in the 64-bit x86-64 architecture, not in 32-bit x86. Modify
pg_comp_crc32c_sse42 so that it only uses _mm_crc32_u64 on x86-64. With
these changes, the SSE accelerated CRC-32C implementation can also be used
on 32-bit x86 systems.
This also fixes the 32-bit MSVC build.
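To illustrate both changes, here is a minimal sketch of such a CRC loop
(hypothetical code, not the actual pg_comp_crc32c_sse42): the pointers
are advanced directly instead of maintaining a separate 'len', the
8-byte crc32q step is compiled only on x86-64, and the 4-byte
instruction does the bulk of the work on 32-bit x86:

    #include <nmmintrin.h>      /* SSE 4.2 intrinsics; build with -msse4.2 */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static uint32_t
    crc32c_sse42(uint32_t crc, const void *data, size_t len)
    {
        const unsigned char *p = data;
        const unsigned char *pend = p + len;

    #ifdef __x86_64__
        /* 8 bytes at a time with crc32q; only available on x86-64 */
        while (p + 8 <= pend)
        {
            uint64_t chunk;

            memcpy(&chunk, p, 8);
            crc = (uint32_t) _mm_crc32_u64(crc, chunk);
            p += 8;
        }
    #endif
        /* 4 bytes at a time; this works on 32-bit x86 too */
        while (p + 4 <= pend)
        {
            uint32_t chunk;

            memcpy(&chunk, p, 4);
            crc = _mm_crc32_u32(crc, chunk);
            p += 4;
        }
        /* remaining bytes one at a time */
        while (p < pend)
            crc = _mm_crc32_u8(crc, *p++);

        return crc;
    }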
Modern x86 and x86-64 processors with SSE 4.2 support have special
instructions, crc32b and crc32q, for calculating CRC-32C. They greatly
speed up CRC calculation.
Whether the instructions can be used or not depends on the compiler and the
target architecture. If generation of SSE 4.2 instructions is allowed for
the target (-msse4.2 flag on gcc and clang), use them. If they are not
allowed by default, but the compiler supports the -msse4.2 flag to enable
them, compile just the CRC-32C function with -msse4.2 flag, and check at
runtime whether the processor we're running on supports it. If it doesn't,
fall back to the slicing-by-8 algorithm. (With the common defaults on
current operating systems, the runtime-check variant is what you get in
practice.)
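A minimal sketch of that runtime check, assuming gcc or clang (the real
configure and dispatch machinery has more moving parts):

    #include <cpuid.h>          /* gcc/clang: __get_cpuid(), bit_SSE4_2 */

    /* Return true if the CPU we are running on has SSE 4.2. */
    static int
    sse42_available(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 0;
        return (ecx & bit_SSE4_2) != 0;
    }

The result would be consulted once, e.g. to set a function pointer to
either the SSE 4.2 routine or the slicing-by-8 fallback.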
Abhijit Menon-Sen, heavily modified by me, reviewed by Andres Freund.
We will, for the foreseeable future, not expose 128-bit datatypes to
SQL. But being able to use 128-bit math will allow us, in a later patch,
to use 128-bit accumulators for some aggregates, leading to noticeable
speedups over using numeric.
So far we only detect a gcc/clang extension that supports 128-bit math,
but no 128-bit literals and no *printf support. We might want to expand
this to other compilers in the future, if there are any that provide
similar support.
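As a sketch of the kind of use this enables (hypothetical code;
HAVE__INT128 stands in for whatever symbol configure defines):

    #include <stdint.h>

    #ifdef HAVE__INT128         /* assumed configure-defined symbol */
    typedef __int128 int128;

    /* A 128-bit accumulator can replace numeric in a sum over int8:
     * adding 64-bit values cannot realistically overflow 128 bits. */
    static int128
    sum_int64(const int64_t *vals, int nvals)
    {
        int128      sum = 0;

        for (int i = 0; i < nvals; i++)
            sum += vals[i];
        return sum;
    }
    #endif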
Discussion: 544BB5F1.50709@proxel.se
Author: Andreas Karlsson, with significant editorializing by me
Reviewed-By: Peter Geoghegan, Oskari Saarenmaa
This speeds up WAL generation and replay. The new algorithm is
significantly faster with large inputs, like full-page images or when
inserting wide rows. It is slower with tiny inputs, i.e. less than 10 bytes
or so, but the speedup with longer inputs more than makes up for that. Even
small WAL records have at least a 24-byte header at the front.
The output is identical to the current byte-at-a-time computation, so this
does not affect compatibility. The new algorithm is only used for the
CRC-32C variant, not the legacy version used in tsquery or the
"traditional" CRC-32 used in hstore and ltree. Those are not as performance
critical, and are usually only applied over small inputs, so it seems
better to not carry around the extra lookup tables to speed up those rare
cases.
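For reference, the byte-at-a-time computation that slicing-by-8
generalizes looks roughly like this (a sketch, not the code in the
tree); slicing-by-8 uses eight such 256-entry tables to consume eight
input bytes per iteration:

    #include <stddef.h>
    #include <stdint.h>

    /* CRC-32C: reflected Castagnoli polynomial 0x82F63B78 */
    static uint32_t crc32c_table[256];

    static void
    crc32c_init(void)
    {
        for (uint32_t i = 0; i < 256; i++)
        {
            uint32_t    c = i;

            for (int k = 0; k < 8; k++)
                c = (c & 1) ? (c >> 1) ^ 0x82F63B78 : (c >> 1);
            crc32c_table[i] = c;
        }
    }

    static uint32_t
    crc32c(const void *data, size_t len)
    {
        const unsigned char *p = data;
        uint32_t    crc = 0xFFFFFFFF;   /* initial value */

        while (len-- > 0)
            crc = crc32c_table[(crc ^ *p++) & 0xFF] ^ (crc >> 8);
        return crc ^ 0xFFFFFFFF;        /* final inversion */
    }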
Abhijit Menon-Sen
We had code that supposed that some platforms might offer a nonstandard
version of getpwuid_r() with only four arguments. However, the 5-argument
definition has been standardized at least since the Single Unix Spec v2,
which is our normal reference for what's portable across all Unix-oid
platforms. (What's more, this wasn't the only pre-standardization version
of getpwuid_r(); my old HPUX 10.20 box has still another signature.)
So let's just get rid of the now-useless configure step.
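For reference, the five-argument form standardized since SUSv2:

    #include <pwd.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct passwd pwbuf;
        struct passwd *pw;      /* set to NULL if no entry is found */
        char        buf[1024];  /* scratch space for string fields */

        if (getpwuid_r(getuid(), &pwbuf, buf, sizeof(buf), &pw) == 0 &&
            pw != NULL)
            printf("user name: %s\n", pw->pw_name);
        return 0;
    }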
Several upcoming performance/scalability improvements require atomic
operations. This new API avoids the need to splatter compiler- and
architecture-dependent code over all the locations employing atomic
ops.
For several of the potential usages it'd be problematic to maintain
both an atomics-based implementation and one using spinlocks or
similar. In all likelihood, one of the implementations would not get
tested regularly under concurrency. To avoid that scenario, the new API
provides an automatic fallback of atomic operations to spinlocks. All
properties of atomic operations are maintained. This fallback -
obviously - isn't as fast as just using atomic ops, but it's not bad
either. For one of the future users, the atomics-on-top-of-spinlocks
implementation was actually slightly faster than the old purely
spinlock-using implementation. That's important because it reduces the
fear of regressing older platforms when improving the scalability for
new ones.
The API, loosely modeled after the C11 atomics support, currently
provides 'atomic flags' and 32-bit unsigned integers. If the platform
efficiently supports atomic 64-bit unsigned integers, those are also
provided.
To implement atomics support for a platform/architecture/compiler,
only 32-bit compare-and-exchange needs to be implemented. If available
and more efficient, native support for flags, 32-bit atomic addition,
and the corresponding 64-bit operations may also be provided.
Additional useful atomic operations are implemented generically on top
of these.
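By way of illustration, a usage sketch in the spirit of the new API
(the exact function spellings here are assumptions, not a reference):

    #include "port/atomics.h"   /* assumed header name */

    /* a shared counter that can be bumped without holding a lock */
    static pg_atomic_uint32 counter;

    void
    counter_init(void)
    {
        pg_atomic_init_u32(&counter, 0);
    }

    uint32
    counter_bump(void)
    {
        /*
         * If the platform lacks a native atomic add, this falls back to
         * the generic implementation built on 32-bit compare-and-exchange,
         * or ultimately to the spinlock-based emulation.
         */
        return pg_atomic_fetch_add_u32(&counter, 1);
    }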
The implementations for various versions of gcc, msvc and sun studio
have been tested. Additional existing stub implementations for
* Intel icc
* HP-UX aCC
* IBM xlc
are included but have never been tested. These will likely require
fixes based on buildfarm and user feedback.
As atomic operations also require barriers for some operations, the
existing barrier support has been moved into the atomics code.
Author: Andres Freund with contributions from Oskari Saarenmaa
Reviewed-By: Amit Kapila, Robert Haas, Heikki Linnakangas and Álvaro Herrera
Discussion: CA+TgmoYBW+ux5-8Ja=Mcyuy8=VXAnVRHp3Kess6Pn3DMXAPAEA@mail.gmail.com,
20131015123303.GH5300@awork2.anarazel.de,
20131028205522.GI20248@awork2.anarazel.de
The PGAC_FUNC_SNPRINTF_SIZE_T_SUPPORT test was broken by
ce486056ec. Among other things, that commit made the UINT64_FORMAT
macro be defined in c.h, instead of directly by configure.
This led to the replacement printf being used on all platforms for a
while, which seemed to work, because this was only used due to
different profiles ;)
Fix by relying on INT64_MODIFIER instead.
We have had INT64_FORMAT and UINT64_FORMAT for a long time, but that's not
good enough if you want something more exotic, like "%20lld".
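A hypothetical illustration: because INT64_MODIFIER expands to just the
bare length modifier, it composes with widths and flags through
string-literal concatenation:

    #include <stdio.h>

    #ifndef INT64_MODIFIER
    #define INT64_MODIFIER "ll" /* stand-in; configure defines the real one */
    #endif

    void
    print_padded(long long v)
    {
        /* the literals concatenate to "%20lld" (or "%20ld", etc.) */
        printf("%20" INT64_MODIFIER "d\n", v);
    }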
Abhijit Menon-Sen, per Andres Freund's suggestion.
Almost ten years ago, commit e48322a6d6 broke
the logic in ACX_PTHREAD by looping through all the possible flags rather
than stopping with the first one that would work. This meant that
$acx_pthread_ok was no longer meaningful after the loop; it would usually
be "no", whether or not we'd found working thread flags. The reason nobody
noticed is that Postgres doesn't actually use any of the symbols set up
by the code after the loop. Rather than complicate things some more to
make it work as designed, let's just remove all that dead code, and thereby
save a few cycles in each configure run.
Bison >=3.0 issues warnings about
%name-prefix="base_yy"
instead of the now preferred
%name-prefix "base_yy"
but the latter doesn't work with Bison 2.3 or less. So for now we
silence the deprecation warnings.
As of Xcode 5.0, Apple isn't including the Python framework as part of the
SDK-level files, which means that linking to it might fail depending on
whether Xcode thinks you've selected a specific SDK version. According to
their Tech Note 2328, they've basically deprecated the framework method of
linking to libpython and are telling people to link to the shared library
normally. (I'm pretty sure this is in direct contradiction to the advice
they were giving a few years ago, but whatever.) Testing says that this
approach works fine at least as far back as OS X 10.4.11, so let's just
rip out the framework special case entirely. We do still need a special
case to decide that OS X provides a shared library at all, unfortunately
(I wonder why the distutils check doesn't work ...). But this is still
less of a special case than before, so it's fine.
Back-patch to all supported branches, since we'll doubtless be hearing
about this more as more people update to recent Xcode.
Usually the search would find plain "tclsh" without any trouble,
but some installations might only have the version-numbered flavor
of that program.
No compatibility problems have been reported with 8.6, so we might
as well back-patch this to all active branches.
Christoph Berg
This test used to just define an unused static inline function and check
whether that causes a warning. Newer clang versions, however, warn about
unused static inline functions when they are defined inside a .c file,
but not when defined in an included header, which is the case we care
about.
Change the test to cope.
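A sketch of the distinction the test must capture (hypothetical file
name):

    /* quiet_inline.h: newer clang stays quiet about this function even
     * if the including .c file never uses it. Had the same definition
     * been written directly in the .c file and left unused, newer clang
     * would emit -Wunused-function, so the configure probe has to place
     * the function in an #included header to exercise the case that
     * matters. */
    static inline int
    unused_fn(void)
    {
        return 0;
    }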
Andres Freund
Since C99, it's been standard for printf and friends to accept a "z" size
modifier, meaning "whatever size size_t has". Up to now we've generally
dealt with printing size_t values by explicitly casting them to unsigned
long and using the "l" modifier; but this is really the wrong thing on
platforms where pointers are wider than longs (such as Win64). So let's
start using "z" instead. To ensure we can do that on all platforms, teach
src/port/snprintf.c to understand "z", and add a configure test to force
use of that implementation when the platform's version doesn't handle "z".
Having done that, modify a bunch of places that were using the
unsigned-long hack to use "z" instead. This patch doesn't pretend to have
gotten everyplace that could benefit, but it catches many of them. I made
an effort in particular to ensure that all uses of the same error message
text were updated together, so as not to increase the number of
translatable strings.
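A hypothetical before-and-after at one such call site:

    /* before: cast size_t to unsigned long and print with "l" */
    elog(ERROR, "row is too big: size %lu", (unsigned long) size);

    /* after: print the size_t directly with the C99 "z" modifier */
    elog(ERROR, "row is too big: size %zu", size);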
It's possible that this change will result in format-string warnings from
pre-C99 compilers. We might have to reconsider if there are any popular
compilers that will warn about this; but let's start by seeing what the
buildfarm thinks.
Andres Freund, with a little additional work by me
make maintainer-check was obscure and rarely called in practice, and
many breakages were missed. Fold everything that make maintainer-check
used to do into the normal build. Specifically:
- Call duplicate_oids when genbki.pl is called.
- Check for tabs in SGML files when the documentation is built.
- Run msgfmt with the -c option during the regular build. Add an
additional configure check to see whether we are using the GNU
version. (make maintainer-check probably used to fail with non-GNU
msgfmt.)
Keep maintainer-check around as a phony target for the time being, in
case anyone is calling it. But it won't do anything anymore.
This is just neatnik-ism, since all the tests in the code are #ifdefs,
but we shouldn't specify symbols as "Define to 1 ..." and then not
actually define them that way.
In commit 71450d7fd6, we added code to inform
suitably-intelligent compilers that ereport() doesn't return if the elevel
is ERROR or higher. This patch extends that to elog(), and also fixes a
double-evaluation hazard that the previous commit created in ereport(),
as well as reducing the emitted code size.
The elog() improvement requires the compiler to support __VA_ARGS__, which
should be available in just about anything nowadays since it's required by
C99. But our minimum language baseline is still C89, so add a configure
test for that.
The previous commit assumed that ereport's elevel could be evaluated twice,
which isn't terribly safe --- there are already counterexamples in xlog.c.
On compilers that have __builtin_constant_p, we can use that to protect the
second test, since there's no possible optimization gain if the compiler
doesn't know the value of elevel. Otherwise, use a local variable inside
the macros to prevent double evaluation. The local-variable solution is
inferior because (a) it leads to useless code being emitted when elevel
isn't constant, and (b) it increases the optimization level needed for the
compiler to recognize that subsequent code is unreachable. But it seems
better than not teaching non-gcc compilers about unreachability at all.
Lastly, if the compiler has __builtin_unreachable(), we can use that
instead of abort(), resulting in a noticeable code savings since no
function call is actually emitted. However, it seems wise to do this only
in non-assert builds. In an assert build, continue to use abort(), so that
the behavior will be predictable and debuggable if the "impossible"
happens.
These changes involve making the ereport and elog macros emit do-while
statement blocks not just expressions, which forces small changes in
a few call sites.
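A simplified sketch of the resulting macro shape (the real definition
carries more detail):

    #ifdef HAVE__BUILTIN_CONSTANT_P
    #define elog(elevel, ...) \
        do { \
            elog_start(__FILE__, __LINE__, PG_FUNCNAME_MACRO); \
            elog_finish(elevel, __VA_ARGS__); \
            /* claim unreachability only for compile-time-constant elevel */ \
            if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \
                pg_unreachable(); \
        } while (0)
    #else
    #define elog(elevel, ...) \
        do { \
            int     elevel_; \
            elog_start(__FILE__, __LINE__, PG_FUNCNAME_MACRO); \
            elevel_ = (elevel); \
            elog_finish(elevel_, __VA_ARGS__); \
            if (elevel_ >= ERROR) \
                pg_unreachable(); \
        } while (0)
    #endif

Here pg_unreachable() would expand to __builtin_unreachable() in
non-assert builds when available, and to abort() otherwise.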
Andres Freund, Tom Lane, Heikki Linnakangas
This means we can now construct a configure test for the library
presence. Previously these parameters were only figured out at
build time in plperl's GNUmakefile.
Pre-Lion versions of Apple's linker don't allow space between -F and its
argument. (Snow Leopard is nice enough to tell you that in so many words,
but older versions just fail with very obscure link errors, as seen on
buildfarm member locust for instance.) Oversight in commit
fc8745070a.
The PL/Python build on OS X was previously hardcoded to use the system
installation of Python, ignoring whatever was specified to configure.
Except that it would use the header files from configure, which could
lead to mismatches. It was not possible to build against a custom
Python installation.
Now, we check in configure how the specified Python installation was
built and use that, supporting framework and non-framework builds.
The former name was too likely to conflict with symbols from external
headers; and, as seen in recent buildfarm failures in member spoonbill,
it has now happened at least in plpython.
Currently, the macros only work with fairly recent gcc versions, but there
is room to expand them to other compilers that have comparable features.
Heavily revised and autoconfiscated version of a patch by Andres Freund.
Python can be built to have two separate include directories: one for
platform-independent files and one for platform-specific files. So
far, this has apparently never mattered for a PL/Python build. But
with the new multi-arch Python packages in Debian and Ubuntu, this is
becoming the standard configuration on these platforms, so we must
check these directories separately to be able to build there.
Also add a bit of reporting in configure to be able to see better what
is going on with this.
There was a hack put into install-sh to call strip with the correct
options on Mac OS X. But that never worked, because configure
disabled stripping on that platform altogether. So remove that dead
code, and while we're at it, update install-sh to the latest upstream
source (from Automake).
Instead, set up the right strip options in programs.m4, so this now
actually works the way it was originally intended.
The BSD-ish members of the buildfarm all seem to think removing this
was a bad idea. It looks to me like it resulted in omitting the system
header inclusion necessary to detect the fields of struct tm correctly.
ENABLE_DTRACE unused as of a7b7b07af3
HAVE_ERR_SET_MARK unused as of 4ed4b6c54e
HAVE_FCVT unused as of 4553e1d80f
HAVE_STRUCT_SOCKADDR_UN unused as of b4cea00a1f
HAVE_SYSCONF unused as of f83356c7f5
TM_IN_SYS_TIME never used, obsolescent per Autoconf documentation
PGAC_PATH_COLLATEINDEX supposed that it could use AC_PATH_PROGS to search
for collateindex.pl, but that macro will only accept files that are marked
executable, and at least some DocBook installations don't mark the script
executable (a case the docs Makefile was already prepared for). Accept the
script if it's present and readable in $DOCBOOKSTYLE/bin, and otherwise
search the PATH as before.
Having fixed that up, we don't need the fallback case that was in the docs
Makefile, and instead can throw an understandable error if configure didn't
find the script. Per recent trouble report from John Lumby.
Original patch by Lars Kanis, reviewed by Nishiyama Tomoaki and tweaked some by me.
This compiler, or at least the latest version of it, is currently broken, and
only passes the regression tests if built with -O0.
Because of ABI tagging, the library version number might no longer be
exactly the Python version number, so do extra lookups. This affects
installations without a shared library, such as ActiveState's
installer.
Also update the way to detect the location of the 'config' directory,
which can also be versioned.
Ashesh Vashi
This is reported to be necessary on some versions of that OS. In service
of this, cause PGAC_PROG_CC_CFLAGS_OPT to reject switches that result in
compiler warnings, since on yet other versions of that OS, the switch does
nothing except provoke a warning.
Report and patch by Ibrar Ahmed, further tweaking by me.
When testing the stderr produced by various thread-support flags, also
run a compilation in addition to a link, because clang warns on
certain flags when compiling but not when linking.
This adds collation support for columns and domains, a COLLATE clause
to override it per expression, and B-tree index support.
Peter Eisentraut
reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch
This can be used to build 64 bit Windows binaries, not only on 64 bit
Windows but on supported cross-compiling hosts including 32 bit Windows,
Cygwin, Darwin and Linux.