no need for it to be nearly as big as the global hash table, and since
it's not in shared memory it can grow if it does need to be bigger.
By reducing the size, we speed up hash_seq_search(), which saves a
significant fraction of subtransaction entry/exit overhead.
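For illustration only, a minimal dynahash sketch of the pattern described above, assuming a reasonably recent hsearch API; the names LocalXidEntry, local_xid_hash, and the initial size of 64 are made up here. The point is that hash_seq_search() cost scales with the table's bucket count, and a non-shared table can simply be grown on demand, so starting small is safe:

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct LocalXidEntry
    {
        TransactionId xid;          /* hash key */
        int           nchildren;    /* illustrative payload */
    } LocalXidEntry;

    static HTAB *local_xid_hash = NULL;

    static void
    create_local_xid_hash(void)
    {
        HASHCTL     ctl;

        MemSet(&ctl, 0, sizeof(ctl));
        ctl.keysize = sizeof(TransactionId);
        ctl.entrysize = sizeof(LocalXidEntry);

        /* start small; dynahash enlarges a non-shared table as needed */
        local_xid_hash = hash_create("Local xid hash", 64, &ctl,
                                     HASH_ELEM | HASH_BLOBS);
    }

    static void
    scan_local_xid_hash(void)
    {
        HASH_SEQ_STATUS status;
        LocalXidEntry *entry;

        /* the scan visits every bucket, so fewer buckets means less work */
        hash_seq_init(&status, local_xid_hash);
        while ((entry = (LocalXidEntry *) hash_seq_search(&status)) != NULL)
        {
            /* ... examine entry ... */
        }
    }
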
rather than longjmp'ing clear out of Perl and thereby leaving Perl in
a broken state. Also some minor prettification of error messages.
Still need to do something with spi_exec_query() error handling.
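A hedged sketch of the control flow alluded to above (the wrapper name call_backend_safely is invented for illustration): trap the backend error with PG_TRY/PG_CATCH while Perl frames are still on the C stack, then re-raise it as a Perl exception with croak(), so the error unwinds back through Perl rather than longjmp'ing past it.

    #include "postgres.h"

    #include "EXTERN.h"
    #include "perl.h"
    #include "XSUB.h"

    static void
    call_backend_safely(void (*func) (void))
    {
        MemoryContext oldcontext = CurrentMemoryContext;

        PG_TRY();
        {
            func();
        }
        PG_CATCH();
        {
            ErrorData  *edata;

            /* capture the error, clear it, and hand it to Perl instead */
            MemoryContextSwitchTo(oldcontext);
            edata = CopyErrorData();
            FlushErrorState();
            croak("%s", edata->message);
        }
        PG_END_TRY();
    }
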
to the original List; per report from Sebastian Böck. I think this is
the last such bug --- I examined every lcons() call in the backend and
the rest seem OK --- but it's nervous-making that we're still finding
'em so many months after the List rewrite went in.
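The bug class in question, sketched with a made-up function name: under the current List implementation, lcons() modifies and returns the very list it is handed, so a caller that must leave the original list untouched has to list_copy() it first.

    #include "postgres.h"
    #include "nodes/pg_list.h"

    static List *
    prepend_without_clobbering(void *datum, List *original)
    {
        /* lcons(datum, original) would alter 'original' in place */
        return lcons(datum, list_copy(original));
    }
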
collector until the transaction commits. Per recent discussion, this
should avoid confusing autovacuum when an updating transaction runs for
a long time.
for the languages even when not installed in a standard directory.
pltcl may need this treatment as well, but we don't have the right path
conveniently available, so I'll leave it alone as long as there aren't
actual reports of trouble.
in terms of macro 'rpathdir', as I proposed a few weeks ago. In itself
this commit shouldn't change the behavior at all, but it opens the door
to using special rpaths for the PL shared libraries, as seems to be
needed for plperl in particular.
may expand the Perl stack; therefore we must SPAGAIN to reload the local
stack pointer after calling it. Also a couple other marginal readability
improvements.
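The convention being referred to, roughly as the perlcall documentation presents it (call_one_arg is a made-up wrapper): any call that can grow the Perl stack, such as call_pv(), may reallocate it, so the cached local pointer SP must be refreshed with SPAGAIN before popping results.

    #include "EXTERN.h"
    #include "perl.h"
    #include "XSUB.h"

    static int
    call_one_arg(const char *name, int arg)
    {
        dSP;
        int         count;
        int         result;

        ENTER;
        SAVETMPS;

        PUSHMARK(SP);
        XPUSHs(sv_2mortal(newSViv(arg)));
        PUTBACK;

        count = call_pv(name, G_SCALAR);

        SPAGAIN;                /* the stack may have moved; reload SP */

        if (count != 1)
            croak("expected a single return value");

        result = POPi;

        PUTBACK;
        FREETMPS;
        LEAVE;

        return result;
    }
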
this is to avoid scenarios where incoming backends find no live copies
of a database's row because the only live copy is in an as-yet-unwritten
shared buffer, which they can't see. Also, use FlushRelationBuffers()
for forcing out pg_database, instead of the much more expensive BufferSync().
There's no need to write out pages belonging to other relations.
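A minimal sketch of the cheaper call, assuming the present-day spellings table_open()/table_close() and the one-argument FlushRelationBuffers() (the function name flush_pg_database_only is invented for illustration):

    #include "postgres.h"
    #include "access/table.h"
    #include "catalog/pg_database.h"
    #include "storage/bufmgr.h"
    #include "storage/lockdefs.h"

    static void
    flush_pg_database_only(void)
    {
        Relation    rel = table_open(DatabaseRelationId, AccessShareLock);

        /* writes only this relation's dirty buffers; BufferSync() would
         * flush every dirty buffer in the shared pool */
        FlushRelationBuffers(rel);

        table_close(rel, AccessShareLock);
    }
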
some of the bugs exposed thereby. The remaining 'might be used uninitialized'
warnings look like live bugs, but I am not familiar enough with Perl/C hacking
to tell how to fix them.
Rather than using ReadBuffer() to increment the reference count on an
already-pinned buffer, we should use IncrBufferRefCount() as it is
faster and does not require acquiring the BufMgrLock.
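Sketched below with a made-up caller name; the one precondition is that the buffer is already pinned by this backend.

    #include "postgres.h"
    #include "storage/bufmgr.h"

    static void
    work_with_extra_pin(Buffer buf)
    {
        /* 'buf' must already be pinned; no buffer lookup is needed */
        IncrBufferRefCount(buf);

        /* ... use the buffer under the extra pin ... */

        /* the extra pin is dropped the same way a ReadBuffer() pin is */
        ReleaseBuffer(buf);
    }
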
even uglier than it was already :-(. Also, on Windows only, use temporary
shared memory segments instead of ordinary files to pass over critical
variable values from postmaster to child processes. Magnus Hagander
more than 65K columns, or when the created table has more than 65K columns
due to adding inherited columns from parent relations. Fix a similar
crash when processing SELECT queries with more than 65K target list
entries. In all three cases we would eventually detect the error and
elog, but the check was being made too late.
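The sort of earlier check implied here might look like the following sketch (check_targetlist_length is illustrative, not the actual fix); MaxTupleAttributeNumber and ERRCODE_TOO_MANY_COLUMNS are the backend's existing limit and error code:

    #include "postgres.h"
    #include "access/htup_details.h"    /* MaxTupleAttributeNumber */
    #include "nodes/pg_list.h"

    static void
    check_targetlist_length(List *tlist)
    {
        /* attribute numbers are only 16 bits wide, so reject absurd
         * lengths up front rather than letting the counter wrap later */
        if (list_length(tlist) > MaxTupleAttributeNumber)
            ereport(ERROR,
                    (errcode(ERRCODE_TOO_MANY_COLUMNS),
                     errmsg("target lists can have at most %d entries",
                            MaxTupleAttributeNumber)));
    }
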
We don't really want to start a new SPI connection, just keep using the old
one; otherwise we have memory management problems as illustrated by
John Kennedy's bug report of today. This requires a bit of a hack to
ensure the SPI stack state is properly restored, but then again what we
were doing before was a hack too, strictly speaking. Add a regression
test to cover this case.