handling many-way scans: instead of re-evaluating all prior indexscan
quals to see if a tuple has been fetched more than once, use a hash table
indexed by tuple CTID. But fall back to the old way if the hash table
grows to exceed SortMem.
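A minimal standalone sketch of that duplicate-detection idea (not the actual
executor code; all names here are illustrative): remember each returned
tuple's CTID in a chained hash table, and raise a flag once a SortMem-style
byte budget is exceeded so the caller can fall back to re-evaluating the
prior indexscan quals.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { uint32_t block; uint16_t offset; } Ctid;

    typedef struct CtidEntry { Ctid key; struct CtidEntry *next; } CtidEntry;

    typedef struct
    {
        CtidEntry **buckets;
        size_t      nbuckets;
        size_t      mem_used;   /* bytes consumed so far */
        size_t      mem_limit;  /* analogue of SortMem, in bytes */
        bool        overflowed; /* true => caller must fall back */
    } CtidTable;

    static CtidTable *
    ctid_table_create(size_t nbuckets, size_t mem_limit)
    {
        CtidTable *t = calloc(1, sizeof(CtidTable));

        t->buckets = calloc(nbuckets, sizeof(CtidEntry *));
        t->nbuckets = nbuckets;
        t->mem_limit = mem_limit;
        return t;
    }

    static size_t
    ctid_hash(Ctid c, size_t nbuckets)
    {
        return (c.block * 2654435761u + c.offset) % nbuckets;
    }

    /* Returns true if the CTID was already seen; otherwise records it. */
    static bool
    ctid_seen(CtidTable *t, Ctid c)
    {
        size_t h = ctid_hash(c, t->nbuckets);

        for (CtidEntry *e = t->buckets[h]; e != NULL; e = e->next)
            if (e->key.block == c.block && e->key.offset == c.offset)
                return true;

        CtidEntry *e = malloc(sizeof(CtidEntry));
        e->key = c;
        e->next = t->buckets[h];
        t->buckets[h] = e;

        t->mem_used += sizeof(CtidEntry);
        if (t->mem_used > t->mem_limit)
            t->overflowed = true;   /* time to switch to re-checking quals */
        return false;
    }

Once overflowed is set, the caller would discard the table and revert to the
old re-evaluation strategy, keeping memory use bounded at the cost of extra
qual checks.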
as well as the hash function (formerly the comparison function was hardwired
as memcmp()). This makes it possible to eliminate the special-purpose
hashtable management code in execGrouping.c in favor of using dynahash to
manage tuple hashtables, which is a win because dynahash knows how to expand
a hashtable when the original size estimate was too small, whereas the
special-purpose code was too stupid to do that. (See recent gripe from
Stephan Szabo about poor performance when hash table size estimate is way
off.) Free side benefit: when using string_hash, the default comparison
function is now strncmp() instead of memcmp(). This should eliminate some
part of the overhead associated with larger NAMEDATALEN values.
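For illustration, a hedged sketch of supplying both functions through the
dynahash API (hash_create(), HASHCTL, HASH_FUNCTION, and HASH_COMPARE are the
real interfaces; the NameKey/NameEntry layout and the function names below
are invented). The key holds a pointer, so a raw memcmp() of the key bytes
would compare pointer values rather than the strings they point to, which is
exactly the situation a caller-specified comparison function addresses.

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct NameKey
    {
        const char *name;       /* pointer, so memcmp() on the key is wrong */
    } NameKey;

    typedef struct NameEntry
    {
        NameKey key;            /* dynahash expects the key at the start */
        long    count;
    } NameEntry;

    static uint32
    namekey_hash(const void *key, Size keysize)
    {
        const char *s = ((const NameKey *) key)->name;
        uint32      h = 5381;

        while (*s)
            h = (h << 5) + h + (unsigned char) *s++;
        return h;
    }

    static int
    namekey_match(const void *key1, const void *key2, Size keysize)
    {
        /* like memcmp(), zero means "equal" */
        return strcmp(((const NameKey *) key1)->name,
                      ((const NameKey *) key2)->name);
    }

    static HTAB *
    create_name_table(void)
    {
        HASHCTL ctl;

        memset(&ctl, 0, sizeof(ctl));
        ctl.keysize = sizeof(NameKey);
        ctl.entrysize = sizeof(NameEntry);
        ctl.hash = namekey_hash;    /* caller-supplied hash function */
        ctl.match = namekey_match;  /* caller-supplied comparison function */

        return hash_create("name table", 256, &ctl,
                           HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
    }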
the trigger is attached to in the hashkey. This ensures that we will
create separate compiled trees for each table the trigger is used with,
avoiding possible datatype-mismatch problems if the tables have different
rowtypes. This is essentially the same bug recently identified in plpython
--- though plpgsql doesn't seem as prone to crash when the rowtype changes
underneath it. But failing robustly is no substitute for just working.
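Roughly, the compiled-function cache key becomes something like the struct
below (illustrative only; the real plpgsql hashkey lives in plpgsql's private
headers and carries more fields, such as the argument types):

    #include "postgres.h"

    typedef struct TriggerFuncHashKey
    {
        Oid funcOid;     /* pg_proc OID of the plpgsql function */
        Oid trigrelOid;  /* OID of the table the trigger is attached to;
                          * InvalidOid when not called as a trigger */
    } TriggerFuncHashKey;

Keying on the relation OID as well as the function OID means the same trigger
function attached to two tables compiles into two independent trees, each
matching its own table's rowtype.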
be anything yielding an array of the proper kind, not only sub-ARRAY[]
constructs; do subscript checking at runtime, not at parse time. Also,
adjust array_cat to make array || array comply with the SQL99 spec.
Joe Conway
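The runtime subscript check amounts to verifying, after the element
expressions have been evaluated, that every sub-array has the same shape
before the pieces are assembled into a higher-dimensional result. A
simplified standalone sketch (not the backend's actual ArrayExpr/array_cat
code):

    #include <stdbool.h>

    #define MAXDIM 6    /* PostgreSQL's array dimensionality limit */

    typedef struct
    {
        int ndim;
        int dims[MAXDIM];
    } ArrayShape;

    /* Returns true if all n evaluated sub-arrays can be stacked together. */
    static bool
    subarrays_compatible(const ArrayShape *elems, int n)
    {
        for (int i = 1; i < n; i++)
        {
            if (elems[i].ndim != elems[0].ndim)
                return false;
            for (int d = 0; d < elems[i].ndim; d++)
                if (elems[i].dims[d] != elems[0].dims[d])
                    return false;
        }
        /* the combined array gains one more dimension */
        return n > 0 && elems[0].ndim < MAXDIM;
    }

The parser cannot do this check itself, because the element expressions may
be arbitrary array-yielding expressions whose dimensions are only known at
execution time.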
datatype by array_eq and array_cmp; use this to solve problems with memory
leaks in array indexing support. The parser's equality_oper and ordering_oper
routines also use the cache. Change the operator search algorithms to look
for appropriate btree or hash index opclasses, instead of assuming operators
named '<' or '=' have the right semantics. (ORDER BY ASC/DESC now also looks
at opclasses, instead of assuming '<' and '>' are the right things.) Add
several more index opclasses so that there is no regression in functionality
for base datatypes. initdb forced due to catalog additions.
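As an illustration of consulting such a cache, a sketch against the
typcache-style interface (lookup_type_cache() and TypeCacheEntry, as declared
in utils/typcache.h in current PostgreSQL sources); the wrapper name
compare_array_elements() is invented:

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/typcache.h"

    static int32
    compare_array_elements(Oid element_type, Datum a, Datum b)
    {
        TypeCacheEntry *typentry;

        /* Cached per-type lookup: cheap after the first call for a type */
        typentry = lookup_type_cache(element_type, TYPECACHE_CMP_PROC_FINFO);
        if (!OidIsValid(typentry->cmp_proc_finfo.fn_oid))
            elog(ERROR, "no btree comparison function for type %u",
                 element_type);

        /*
         * Call the btree comparison support function located through the
         * type's default opclass, rather than trusting that an operator
         * spelled "<" or "=" has the right semantics.
         */
        return DatumGetInt32(FunctionCall2(&typentry->cmp_proc_finfo, a, b));
    }

The FmgrInfo kept in the cache entry also avoids repeated function-lookup
overhead inside array_eq/array_cmp's element loop.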
spelling mistake in the PREPARE ref page (2) Makes the
English more consistent in the ref pages for some of the
client apps (3) Adds a link to the libpq docs in the
vacuumdb ref page.
Neil Conway
have cursors that might outlive their creating transactions. A
patch is attached that fixes this (suggestions on better wording
are welcome).
Neil Conway
"syslog" option.)
By the way: The "virtual_host" parameter is a bad name for that
particular option, I think. "Virtual host" suggests that PostgreSQL will
behave differently according to which IP address it is contacted on (like
Apache's virtual host support, which makes the web server serve different
sites according to different criteria). A better name for the option
would be "tcpip_listen_addr" or something like that.
Troels Arvin
and test them, in addition to testing the underlying LargeObject API methods.
Modified Files:
jdbc/build.xml jdbc/org/postgresql/test/jdbc2/BlobTest.java