per Andrew Dunstan. Also, don't override the user's value of PGHOST
in the 'make installcheck' case. I think the latter was an ill-considered
workaround for the Windows code back when libpq didn't properly default
to localhost on Unix-socket-less platforms.
MemoryContextAllocZero back to MemoryContextAlloc, same as it was in 7.4.
The zeroing is unnecessary since all the meaningful fields are filled in
just below. I had made it do that out of neatnik-ism, but some testing
with an example provided by Pavel Stehule showed that the zeroing was
accounting for about 5% of the runtime in a compute-intensive plpgsql
function. That seems a bit too high a price for neatnik-ism...
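For illustration, a minimal sketch of the pattern described above (the struct
and function names are invented here, not taken from plpgsql):

    #include "postgres.h"
    #include "utils/palloc.h"

    typedef struct ExprValue
    {
        Datum       value;
        bool        isnull;
        Oid         typoid;
    } ExprValue;

    static ExprValue *
    make_expr_value(Datum value, bool isnull, Oid typoid)
    {
        /*
         * MemoryContextAllocZero() would memset() the new chunk to zeroes;
         * plain MemoryContextAlloc() skips that, which is safe because every
         * field is assigned immediately below.
         */
        ExprValue  *ev = (ExprValue *) MemoryContextAlloc(CurrentMemoryContext,
                                                          sizeof(ExprValue));

        ev->value = value;
        ev->isnull = isnull;
        ev->typoid = typoid;
        return ev;
    }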
got it wrong when the JOIN was in an outer query level. Per example from
Laurie Burrow. Also fix same issue in markTargetListOrigin. I think the
latter is only a latent bug since we currently don't apply markTargetListOrigin
except at the outer level ... but should do it right anyway.
CASE 'a' WHEN 'a' THEN 1 ELSE 2 END. This worked in 7.4 and before
but had been broken due to premature freezing of the type of the test
expression. Per gripe from Gábor Szűcs.
few 'listen_addresses' as possible --- on most systems, none at all,
just the Unix socket. This avoids spurious check failures due to bogus
DNS setups, and is probably a good idea from a security standpoint anyway.
Per trouble report from Jean-Gérard Pailloncy.
so that we can get the size of a shared inval message back down to what it
was in 7.4 (and simplify the logic too). Phase 2 of fixing the
'SMgrRelation hashtable corrupted' problem.
is the minimum required fix. I want to look next at taking advantage of
it by simplifying the message semantics in the shared inval message queue,
but that part can be held over for 8.1 if it turns out too ugly.
releases, a nonzero 'c' argument meant that the input string could be
terminated by either that character or \0. Recent refactoring broke
that, causing the thing to scan for 'c' only. This went undetected
because no part of the main code actually passes nonzero 'c'. However
it broke tsearch2 and possibly other user-written code that assumed
the old definition. Per report from Tom Hebbron.
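The restored contract, sketched with an invented helper (this is not the
function that was broken, just the idea):

    /* Scan until either the terminator 'c' or the end of the string. */
    static const char *
    scan_until(const char *s, char c)
    {
        while (*s != '\0' && *s != c)
            s++;
        return s;               /* points at 'c' or at the trailing '\0' */
    }

With a nonzero 'c' this stops at whichever comes first; the refactored
version looked only for 'c', so input not containing that character was no
longer handled the old way.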
request packet, use pqReadData(). This has the same effect since
conn->ssl isn't set yet and we aren't expecting more than one byte.
The advantage is that we will correctly detect loss-of-connection
instead of going into an infinite loop. Per report from Hannu Krosing.
discussion on pgsql-hackers-win32 list. Documentation still needs to
be tweaked --- I'm not sure how to refer to the APPDATA folder in
user documentation.
consistent. On Unix we now always consult getpwuid(); $HOME isn't used
at all. On Windows the code currently consults $USERPROFILE, or $HOME
if that's not defined, but I expect this will change as soon as the win32
hackers come to a consensus. Nothing done yet about changing the file
names used underneath $USERPROFILE.
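A rough sketch of the policy as described (helper name invented; not the
actual libpq/psql code):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    #ifndef WIN32
    #include <pwd.h>
    #include <unistd.h>
    #endif

    static bool
    get_home_dir(char *buf, size_t bufsize)
    {
    #ifndef WIN32
        struct passwd *pw = getpwuid(getuid());    /* $HOME is not consulted */

        if (pw == NULL)
            return false;
        strncpy(buf, pw->pw_dir, bufsize - 1);
    #else
        const char *dir = getenv("USERPROFILE");   /* subject to change */

        if (dir == NULL)
            dir = getenv("HOME");
        if (dir == NULL)
            return false;
        strncpy(buf, dir, bufsize - 1);
    #endif
        buf[bufsize - 1] = '\0';
        return true;
    }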
subroutine that can hide platform dependencies. The WIN32 path is still
a stub, but I await a fix from one of the win32 hackers.
Also clean up unnecessary #ifdef WIN32 ugliness in a couple of places.
share lock on a buffer being written out before releasing BufMgrLock in
the BufferAlloc code path; if we do it later we might block on someone
who's re-pinned the buffer. I believe this is only an issue for BufferAlloc
and not the other places that call FlushBuffer. BufferSync must continue
to do it the old way since it may well be trying to write buffers that
other backends have pinned; but it should not be holding any conflicting
locks. FlushRelationBuffers is okay since it's got exclusive lock at the
relation level.
Also performed an initial run through of upgrading our Copyright date to
extend to 2005 ... first run here was very simple ... change everything
where: grep 1996-2004 && the word 'Copyright' ... scanned through the
generated list with 'less' first, and after, to make sure that I only
picked up the right entries ...
to shared memory as soon as possible, ie, right after read_backend_variables.
The effective difference from the original code is that this happens
before instead of after read_nondefault_variables(), which loads GUC
information and is apparently capable of expanding the backend's memory
allocation more than you'd think it should. This should fix the
failure-to-attach-to-shared-memory reports we've been seeing on Windows.
Also clean up a few bits of unnecessarily grotty EXEC_BACKEND code.
that is, files are sought in the same directory as the referencing file.
Also allow absolute paths in @file constructs. Improve documentation
to actually say what is allowed in an included file.
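A minimal sketch of the lookup rule for @file references (function name and
buffer handling invented for illustration):

    #include <stdio.h>
    #include <string.h>

    static void
    resolve_at_file(const char *referencing_file, const char *inc_name,
                    char *result, size_t resultsize)
    {
        if (inc_name[0] == '/')
        {
            /* absolute path: use it as given */
            snprintf(result, resultsize, "%s", inc_name);
        }
        else
        {
            /* relative path: resolve against the referencing file's directory */
            const char *slash = strrchr(referencing_file, '/');

            if (slash != NULL)
                snprintf(result, resultsize, "%.*s/%s",
                         (int) (slash - referencing_file), referencing_file,
                         inc_name);
            else
                snprintf(result, resultsize, "%s", inc_name);
        }
    }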
executable file isn't itself a symlink. We still need to run the
algorithm so that any directory symlinks in the path to the
executable are replaced by a true path. Noticed this on seeing
pg_config give me a completely wrong answer for --pkglibdir when
I called it through a symlink to the installation bindir.
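The gist of the fix, sketched with realpath() rather than the actual
path-chasing code (function name invented):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdlib.h>

    static bool
    canonicalize_exec_path(const char *progpath, char *result /* PATH_MAX bytes */)
    {
        /*
         * It is tempting to return progpath unchanged when lstat() says it is
         * not a symlink, but that misses symlinked directories earlier in the
         * path.  Resolving every component avoids the wrong --pkglibdir-style
         * answer described above.
         */
        return realpath(progpath, result) != NULL;
    }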
the remainder of the current clog page during system startup. While
this was a good idea, it turns out the code fails if nextXid is
exactly at a page boundary, because we won't have created the "current"
clog page yet in that case. Since the page will be correctly zeroed
when we execute the first transaction on it, the solution is just to
do nothing when exactly at a page boundary. Per trouble report from
Dave Hartwig.
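The boundary case, sketched with made-up constants (the real code works in
terms of clog pages and transaction IDs):

    #include <string.h>

    #define ENTRIES_PER_PAGE 8192       /* illustrative value only */

    static void
    zero_current_page_tail(unsigned int next_xid, char *page)
    {
        unsigned int idx = next_xid % ENTRIES_PER_PAGE;

        if (idx != 0)
            memset(page + idx, 0, ENTRIES_PER_PAGE - idx);
        /*
         * idx == 0: exactly at a page boundary; the page doesn't exist yet
         * and will be zeroed when the first transaction on it is created.
         */
    }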
its presence. This amounts to desupporting Kerberos 5 releases 1.0.*,
which is small loss, and simplifies use of our Kerberos code on platforms
with Red-Hat-style include file layouts. Per gripe from John Gray and
followup discussion.
advancing ActiveSnapshot when we are inside a volatile function.
Per example from Gaetano Mendola. Add a regression test to catch
similar problems in future.
after an unknown or failed psql backslash command, and also while
discarding "extra" arguments of a putatively valid backslash command.
In the case of an unknown/failed command, make sure we discard the
whole rest of the line, rather than trying to resume at the next
backslash. Per discussion with Thomer Gil.
several reports of users being confused when they attempt to use ELSEIF
and run into trouble due to PL/PgSQL's lax parser. The parser will be
improved for 8.1, but we can fix most of the problem by allowing ELSEIF
for now.
silently ignored, allowing one to write bizarre things like
DECLARE x setof int;
in plpgsql. This has misled at least one novice into thinking that
plpgsql variables could be sets ...
thought there couldn't be any, but the folly of this was exposed by an
example from andrew@supernews.com 5-Dec-2004. The patch applies the
identical logic already used for table constraints and defaults to ON
SELECT rules, so I have reasonable confidence in it even though it might
look like complicated logic.
be emitted too soon. The previous code got this right in the case where
the CHECK was emitted as a separate ALTER TABLE command, but not in the
case where the CHECK is emitted right in CREATE TABLE. Per report from
Slawomir Sudnik.
Note: this code is pretty ugly; it'd perhaps be better to treat comments
as independently sortable dump objects. That'd be much too invasive a
change for RC time though.
had to do in DECLARE CURSOR. AFAICS these are all the places affected.
PREPARE case per example from Michael Fuhr, EXPLAIN case located by
grepping for planner calls ...
(rd_att) field of a nailed-in-cache relcache entry. This fixes the bug
reported by Alvaro 8-Dec-2004; I believe it probably also explains
Grant Finnemore's report of 10-Sep-2004.
In an unrelated change in the same file, put back 7.4's response to
failure to rename() the relcache init file, ie, unlink the useless
temp file. I did not put back the warning message, since there might
actually be some reason not to have that.
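The restored behaviour amounts to this (file handling sketched, names
invented):

    #include <stdio.h>
    #include <unistd.h>

    static void
    install_init_file(const char *tempname, const char *finalname)
    {
        if (rename(tempname, finalname) < 0)
            unlink(tempname);   /* rename failed; the temp file is useless */
    }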
of an inheritance child table is binary-compatible with the rowtype of
its parent, invent an expression node type that does the conversion
correctly. Fixes the new bug exhibited by Kris Shannon as well as a
lot of old bugs that would only show up when using multiple inheritance
or after altering the parent table.
better make sure the sort order is totally specified; else we get burnt
by platform-specific behavior of qsort() with equal keys. Per buildfarm
results.
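Why an under-specified sort order is platform-dependent, in qsort() terms
(generic example, not PostgreSQL code):

    #include <stdlib.h>

    typedef struct Row
    {
        int         key1;
        int         key2;
    } Row;

    static int
    row_cmp(const void *a, const void *b)
    {
        const Row  *ra = (const Row *) a;
        const Row  *rb = (const Row *) b;

        if (ra->key1 != rb->key1)
            return (ra->key1 < rb->key1) ? -1 : 1;

        /*
         * qsort() is not stable, so without this tiebreaker rows with equal
         * key1 come out in whatever order the platform's qsort leaves them.
         */
        if (ra->key2 != rb->key2)
            return (ra->key2 < rb->key2) ? -1 : 1;
        return 0;
    }

    /* usage: qsort(rows, nrows, sizeof(Row), row_cmp); */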
is null-terminated. I think this is not a real bug because the parser
would always have truncated the identifier to NAMEDATALEN-1 already,
but let's be safe. Per report from Klocwork.
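The defensive pattern is simply this (call site invented; NAMEDATALEN is 64
by default):

    #include <string.h>

    #define NAMEDATALEN 64

    static void
    copy_name(char *dst /* NAMEDATALEN bytes */, const char *src)
    {
        strncpy(dst, src, NAMEDATALEN - 1);
        dst[NAMEDATALEN - 1] = '\0';    /* guarantee termination even if the
                                         * parser didn't truncate the input */
    }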
> throughout to the spellings suggested by your book.
Great.
A follow-up patch for current CVS HEAD is attached, and available at
http://troels.arvin.dk/db/pgsql/conformance/pgsql-sql-conformance-followup.patch
The patch
- includes a core feature ID that had been left out by mistake (C011)
- updates the sql_feature_packages.txt table to reflect changes in SQL:2003
  which were not covered properly in my last patch
Troels Arvin
> seconds to 10 seconds. The original number was plucked from thin air
> some months ago, and I'd like to review that now based upon further
> thought, observation and experience.
>
> This change has little or no effect on performance, since the interval
> is there mainly to avoid repeated respawn attempts if archiver fails at
> startup. Archiver start-up time is very quick, so there is little danger
> of exceeding 10 seconds.
>
> On a busy system, if the archiver does die, then many files can build up
> in the 60 seconds before respawning. That xlog file backlog could take
> some time to clear. This then leaves a larger than normal window of data
> loss for a possibly long period.
>
> It's a minor change only, with no other effect on function.
Simon Riggs
the "ps" argument list on Unix - meaning that there is no way to
identify for example the stats processors or the bgwriter.
This patch adds this functionality, in a bit of a crufty way. It creates
a kernel Event object with the name of what would be in the title. This
can be viewed using for example Process Explorer.
It's been very handy for me both during debugging and in normal use. I
haven't figured out a better way, but perhaps someone has one that's less
crufty? If not, here is at least a working patch :-)
Magnus Hagander
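The trick boils down to something like this (the event-name format and the
function name are invented, not necessarily what the patch uses):

    #include <windows.h>
    #include <stdio.h>

    static HANDLE ident_event = NULL;

    static void
    set_process_identifier(const char *activity)
    {
        char        name[256];

        snprintf(name, sizeof(name), "pgident: %s", activity);

        if (ident_event != NULL)
            CloseHandle(ident_event);   /* drop the previous identifier */

        /*
         * The Event is never signalled or waited on; it exists only so that
         * tools such as Process Explorer can show the name next to the PID.
         */
        ident_event = CreateEventA(NULL, TRUE, FALSE, name);
    }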
reasons I outlined in pghackers a few days ago.
Also, undo someone's overly optimistic decision to reduce tuple state
checks from if (...) elog() to Asserts. If I trusted this code more,
I might think it was a good idea to disable these checks in production
installations. But I don't.
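The distinction being defended, with a hypothetical check shown in both
forms side by side:

    #include "postgres.h"
    #include "storage/itemid.h"

    static void
    check_item_used(ItemId itemid)
    {
        /* Alternative 1: compiled away unless built with --enable-cassert */
        Assert(ItemIdIsUsed(itemid));

        /* Alternative 2: survives into production builds */
        if (!ItemIdIsUsed(itemid))
            elog(ERROR, "heap item is not used");
    }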
escapes --- they aren't simply quoted characters. Problem noted by
Antti Salmela. Also fix problem with incorrect handling of multibyte
characters when followed by a quantifier.
In particular, there was a mathematical tie between the two possible
nestloop-with-materialized-inner-scan plans for a join (ie, we computed
the same cost with either input on the inside), resulting in a roundoff
error driven choice, if the relations were both small enough to fit in
sort_mem. Add a small cost factor to ensure we prefer materializing the
smaller input. This changes several regression test plans, but with any
luck we will now have more stability across platforms.
a relation's number of blocks, rather than the possibly-obsolete value
in pg_class.relpages. Scale the value in pg_class.reltuples correspondingly
to arrive at a hopefully more accurate number of rows. When pg_class
contains 0/0, estimate a tuple width from the column datatypes and divide
that into current file size to estimate number of rows. This improved
methodology allows us to jettison the ancient hacks that put bogus default
values into pg_class when a table is first created. Also, per a suggestion
from Simon, make VACUUM (but not VACUUM FULL or ANALYZE) adjust the value
it puts into pg_class.reltuples to try to represent the mean tuple density
instead of the minimal density that actually prevails just after VACUUM.
These changes alter the plans selected for certain regression tests, so
update the expected files accordingly. (I removed join_1.out because
it's not clear if it still applies; we can add back any variant versions
as they are shown to be needed.)
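The estimation rule, reduced to a sketch (function and parameters invented;
the real code works from the relcache and pg_class):

    static double
    estimate_rel_tuples(double curpages, double relpages, double reltuples,
                        int tuple_width, int block_size)
    {
        double      density;

        if (relpages > 0)
            density = reltuples / relpages;     /* tuples per page as of the
                                                 * last VACUUM/ANALYZE */
        else
            density = (double) block_size / tuple_width;    /* rough guess for
                                                             * a new table */

        return density * curpages;      /* scale to current physical size */
    }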
useful than just 'failed' when there's a problem. Per gripe from
Chris Albertson.
In an unrelated change, use VACUUM FULL; VACUUM FREEZE; rather than
a single VACUUM FULL FREEZE command, to respond to my worries of a
couple days ago about the reliability of doing this in one go.
/*
* Some compilers will throw a warning knowing this test can never be
* true because off_t can't exceed the compared maximum.
*/
if (th->fileLen > MAX_TAR_MEMBER_FILELEN)
die_horribly(AH, modulename, "archive member too large for tar format\n");
prevents problems when the DECLARE is in a portal and is executed
repeatedly, as is possible in v3 protocol. Per analysis by Oliver
Jowett, though I didn't use his patch exactly.
error conditions during regexp compile, but not during regexp execution;
any sort of "can't happen" errors would be treated as no-match instead
of being reported as they should be. Noticed while trying to duplicate
a reported Tcl bug.
to be processed by GUC before InitPostgres, because any required lookup
of the encoding conversion function has to be done during InitializeClientEncoding.
So, I broke this last week by moving GUC processing to after InitPostgres :-(.
What we can do as a compromise is process non-SUSET variables during
command line scanning (the same as before), and postpone the processing
of only SUSET variables. None of the SUSET variables need to be set
before InitPostgres.
data returned from Perl. Consolidate multiple bits of code to convert
a Perl hash to a tuple, and drive the conversion off the keys present
in the hash rather than the tuple column names, so we detect error if
the hash contains keys it shouldn't. (This means keys not in the hash
will silently default to NULL, which seems ok to me.) Fix a bunch of
reference-count leaks too.
fill factor has been exceeded. We usually run with ffactor == 1, but
the way the test was coded, it wouldn't split a bucket until the actual
fill factor reached 2.0, because of use of integer division. Change
from > to >= so that it will split more aggressively when the table
starts to get full.
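The integer-division trap, in miniature (field names simplified):

    #include <stdbool.h>

    static bool
    needs_split(long nentries, long nbuckets, long ffactor)
    {
        /*
         * old: nentries / nbuckets >  ffactor  -- with ffactor == 1 this is
         *      false until the table is twice as full as intended
         * new: nentries / nbuckets >= ffactor  -- splits as soon as the
         *      target density is reached
         */
        return nentries / nbuckets >= ffactor;
    }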
few cycles during transaction exit. A typical session probably
wouldn't have as many as half a dozen portals open at once, so the
original value of 64 seems far larger than needed.
subtransactions quite right either: the ReleaseCurrentSubTransaction
call should occur inside the PG_TRY, so that the proper path is taken
if an error occurs during subtransaction commit. This assumes that
AbortSubTransaction can cope with the state left behind if
CommitSubTransaction fails partway through, but we were already
requiring that.
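The corrected ordering, sketched (wrapper name invented; restoring the memory
context and resource owner in the error path is omitted here):

    #include "postgres.h"
    #include "access/xact.h"

    static void
    run_in_subxact(void (*callback)(void))
    {
        BeginInternalSubTransaction(NULL);
        PG_TRY();
        {
            callback();
            /* commit inside PG_TRY, so an error during commit is caught too */
            ReleaseCurrentSubTransaction();
        }
        PG_CATCH();
        {
            /* relies on abort coping with a commit that failed partway */
            RollbackAndReleaseCurrentSubTransaction();
            PG_RE_THROW();
        }
        PG_END_TRY();
    }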
operations are now run as subtransactions, so that errors in them
can be reported as ordinary Perl or Tcl errors and caught by the
normal error handling convention of those languages. Also do some
minor code cleanup in pltcl.c: extract a large chunk of duplicated
code in pltcl_SPI_execute and pltcl_SPI_execute_plan into a shared
subroutine.
no need for it to be nearly as big as the global hash table, and since
it's not in shared memory it can grow if it does need to be bigger.
By reducing the size, we speed up hash_seq_search(), which saves a
significant fraction of subtransaction entry/exit overhead.