hold true for operators in a btree operator family. This is mostly to
clarify my own thinking about what the planner can assume for optimization
purposes. (blowing dust off an old abstract-algebra textbook...)
(or other types of pg_class entry): the function pgstat_vacuum_tabstat,
invoked during VACUUM startup, had runtime proportional to the number of
stats table entries times the number of pg_class rows; in other words,
O(N^2) if the stats collector's information is reasonably complete.
Replace list searching with a hash table to bring it back to O(N)
behavior. Per report from kim at myemma.com.
Back-patch as far as 8.1; 8.0 and before use different coding here.
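As a minimal standalone sketch of that fix (hypothetical names, not the actual
pgstat code): collect the surviving OIDs into a hash set in one pass, then
probe the set once per stats entry, so the total work is proportional to the
sum of the two counts rather than their product.

/*
 * Illustrative only: a tiny chained hash set keyed by OID.  A caller
 * zero-initializes it (OidHashSet set = {0};), adds every live pg_class
 * OID, and then drops any stats entry whose OID is not a member.
 */
#include <stdbool.h>
#include <stdlib.h>

#define NBUCKETS 8192

typedef struct OidEntry
{
    unsigned int oid;
    struct OidEntry *next;
} OidEntry;

typedef struct OidHashSet
{
    OidEntry *buckets[NBUCKETS];
} OidHashSet;

/* O(1) average-case insert */
static void
oidset_add(OidHashSet *set, unsigned int oid)
{
    unsigned int h = oid % NBUCKETS;
    OidEntry *e = malloc(sizeof(OidEntry));

    e->oid = oid;
    e->next = set->buckets[h];
    set->buckets[h] = e;
}

/* O(1) average-case membership test, replacing a linear list scan */
static bool
oidset_member(const OidHashSet *set, unsigned int oid)
{
    for (const OidEntry *e = set->buckets[oid % NBUCKETS]; e != NULL; e = e->next)
    {
        if (e->oid == oid)
            return true;
    }
    return false;
}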
Made this option mark the .c files, so the environment variable is no longer needed.
Created a MinGW-specific file with the special error message.
Do not print the port number into the log file when running regression tests.
which comparison operators to use for plan nodes involving tuple comparison
(Agg, Group, Unique, SetOp). Formerly the executor looked up the default
equality operator for the datatype, which was really pretty shaky, since it's
possible that the data being fed to the node is sorted according to some
nondefault operator class that could have an incompatible idea of equality.
The planner knows what it has sorted by and therefore can provide the right
equality operator to use. Also, this change moves a couple of catalog lookups
out of the executor and into the planner, which should help startup time for
pre-planned queries by some small amount. Modify the planner to remove some
other cavalier assumptions about always being able to use the default
operators. Also add "nulls first/last" info to the Plan node for a mergejoin
--- neither the executor nor the planner can cope yet, but at least the API is
in place.
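A hedged sketch of the shape of that API (type and field names invented for
illustration, not the actual PostgreSQL node definitions): the planner stores
the operator OIDs it resolved in the plan node, and the executor simply uses
whatever it is handed.

/*
 * Hypothetical, simplified version of what an Agg/Group/Unique/SetOp-style
 * plan node carries: the planner picks equality operators that agree with
 * the sort ordering actually feeding the node and records their OIDs here,
 * so the executor no longer looks up a "default" equality operator for the
 * column datatype at startup.
 */
typedef unsigned int Oid;
typedef short AttrNumber;

typedef struct GroupingPlanSketch
{
    int         numCols;      /* number of columns to compare */
    AttrNumber *colIdx;       /* their positions in the input tuples */
    Oid        *eqOperators;  /* per-column equality operator OIDs chosen
                               * at plan time */
} GroupingPlanSketch;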
1) gendef works from inside Visual Studio - use a tempfile instead of
redirection, because for some reason you can't redirect dumpbin's output
from inside the IDE (patch from Joachim Wieland)
2) gendef must process only *.obj files, or you get weird errors in some
build scenarios when it tries to process a logfile
Magnus Hagander
the same output level that was used when building a single project
before, and is really needed to get reasonable information about what
happens (non-verbose just says "starting build of foo" and "done
building foo", more or less).
Magnus Hagander
management. The paper clearly describes many of the ideas embodied in
our current hashing code, but as far as I could find out there is not
a direct code heritage. (Mike Olsen recalls discussion of this paper
at Postgres meetings but believes it "informed the Postgres implementation
probably just at the design level". Margo herself says she wasn't
involved with Postgres' hash code.) Credit where credit is due 'n all
that, even if fifteen years after the fact.
< * Allow the creation of indexes with mixed ascending/descending
> * -Allow the creation of indexes with mixed ascending/descending
<
< This is possible now by creating an operator class with reversed sort
< operators. One complexity is that NULLs would then appear at the start
< of the result set, and this might affect certain sort types, like
< merge join.
<
per-column options for btree indexes. The planner's support for this is still
pretty rudimentary; it does not yet know how to plan mergejoins with
nondefault ordering options. The documentation is pretty rudimentary, too.
I'll work on improving that stuff later.
Note incompatible change from prior behavior: ORDER BY ... USING will now be
rejected if the operator is not a less-than or greater-than member of some
btree opclass. This prevents less-than-sane behavior if an operator that
doesn't actually define a proper sort ordering is selected.
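A hedged C sketch of the per-column semantics (names invented for
illustration, not the actual catalog or btree code): each index column gets
its own DESC and NULLS FIRST/LAST flags, and the column's comparison result
is adjusted accordingly.

/*
 * Illustrative per-column ordering options and how they modify a base
 * datatype comparison (cmp follows the usual <0 / 0 / >0 convention).
 */
#include <stdbool.h>
#include <stdint.h>

#define COLOPT_DESC        0x0001    /* sort this column descending */
#define COLOPT_NULLS_FIRST 0x0002    /* nulls sort before non-null values */

typedef struct IndexColumnOrdering
{
    uint16_t options;        /* bitmask of COLOPT_* flags */
} IndexColumnOrdering;

static int
apply_column_ordering(int cmp, bool a_is_null, bool b_is_null,
                      IndexColumnOrdering col)
{
    bool nulls_first = (col.options & COLOPT_NULLS_FIRST) != 0;

    /* NULL placement is resolved per column, set by the NULLS flag alone */
    if (a_is_null || b_is_null)
    {
        if (a_is_null && b_is_null)
            return 0;
        if (a_is_null)
            return nulls_first ? -1 : 1;
        return nulls_first ? 1 : -1;
    }

    /* DESC simply inverts the datatype's own ordering */
    return (col.options & COLOPT_DESC) ? -cmp : cmp;
}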
when collapsing of JOIN trees is stopped by join_collapse_limit. For instance,
a list of 11 LEFT JOINs with limit 8 now produces something like
((1 2 3 4 5 6 7 8) 9 10 11 12)
instead of
(((1 2 3 4 5 6 7 8) (9)) 10 11 12)
The latter structure is really only required for a FULL JOIN.
Noted while studying an example from Shane Ambler.
hash joins with the estimated-larger relation on the inside. There are
several cases where doing that makes perfect sense, and in cases where it
doesn't, the regular cost computation really ought to be able to figure that
out. Make some marginal tweaks in said computation to try to get results
approximating reality a bit better. Per an example from Shane Ambler.
Also, fix an oversight in the original patch to add seq_page_cost: the costs
of spilling a hash join to disk should be scaled by seq_page_cost.
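A rough sketch of that costing point (a simplified, hypothetical function, not
the real cost_hashjoin() arithmetic): once more than one batch is needed, both
inputs get written to temp files and read back, and that I/O is sequential, so
it should be charged at seq_page_cost rather than left unscaled.

/*
 * Hypothetical spill-cost estimate: each input is written out once and read
 * back once when the hash join cannot be done in a single in-memory batch.
 * seq_page_cost is the planner's per-page cost for sequential I/O.
 */
static double
hash_spill_io_cost(double seq_page_cost,
                   double inner_pages, double outer_pages,
                   int num_batches)
{
    if (num_batches <= 1)
        return 0.0;            /* whole hash table fits in work_mem */

    /* one write pass plus one read pass over each input */
    return seq_page_cost * 2.0 * (inner_pages + outer_pages);
}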