We used to have externs for getopt() and its API variables scattered
all over the place. Now that we find we're going to need to tweak the
variable declarations for Cygwin, it seems like a good idea to have
just one place to tweak.
In this commit, the variables are declared "#ifndef HAVE_GETOPT_H".
That may or may not work everywhere, but we'll soon find out.
Andres Freund
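For illustration, a minimal sketch of what a single centralized header could look like (the guard name, fallback prototype, and layout here are assumptions, not the committed file):

    /* Hypothetical centralized getopt header; the real file's name and
     * layout may differ. */
    #ifndef PG_GETOPT_H
    #define PG_GETOPT_H

    #ifdef HAVE_GETOPT_H
    #include <getopt.h>
    #else
    /* Declare the standard getopt API on platforms lacking <getopt.h>. */
    extern char *optarg;
    extern int  optind;
    extern int  opterr;
    extern int  optopt;

    extern int  getopt(int argc, char *const argv[], const char *optstring);
    #endif

    #endif   /* PG_GETOPT_H */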
This prevents the client from gobbling up too much memory when the
number of large objects to be removed is very large.
Andrew Dunstan, reviewed by Josh Kupershmidt
A materialized view has a rule, just like a view, and a heap and
other physical properties, like a table. The rule is only used to
populate the table; references in queries refer to the
materialized data.
This is a minimal implementation, but should still be useful in
many cases. Currently data is only populated "on demand" by the
CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements.
It is expected that future releases will add incremental updates
with various timings, and that a more refined concept of defining
what is "fresh" data will be developed. At some point it may even
be possible to have queries use a materialized view in place of
references to underlying tables, but that requires the
above-mentioned features to be working first.
Much of the documentation work by Robert Haas.
Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja
Security review by KaiGai Kohei, with a decision on how best to
implement sepgsql still pending.
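As a hedged usage sketch (the table, view, and empty connection string below are invented, and error handling is minimal), on-demand population from a libpq client looks roughly like this:

    /* Hedged usage sketch: object names are invented.  Shows that data is
     * materialized at CREATE time and refreshed only on demand. */
    #include <stdio.h>
    #include "libpq-fe.h"

    static void
    run(PGconn *conn, const char *sql)
    {
        PGresult *res = PQexec(conn, sql);

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "%s: %s", sql, PQerrorMessage(conn));
        PQclear(res);
    }

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("");

        /* The SELECT runs once, here, and its result is stored. */
        run(conn, "CREATE MATERIALIZED VIEW order_totals AS "
                  "SELECT customer_id, sum(amount) AS total "
                  "FROM orders GROUP BY customer_id");

        /* Later queries read the stored data; to pick up new rows in
         * "orders", the view must be refreshed explicitly. */
        run(conn, "REFRESH MATERIALIZED VIEW order_totals");

        PQfinish(conn);
        return 0;
    }

The REFRESH re-executes the stored query and replaces the view's contents, which is what "populated on demand" means above.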
Before, some places didn't document the short options (-? and -V),
some documented both the short and long forms, some documented nothing,
and the options were listed in various orders. Now this is hopefully
more consistent and complete.
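For reference, a hedged sketch of the conventional handling of those short forms (the program name and version string are placeholders):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(int argc, char *argv[])
    {
        if (argc > 1)
        {
            /* -? is the short form of --help ... */
            if (strcmp(argv[1], "--help") == 0 || strcmp(argv[1], "-?") == 0)
            {
                printf("Usage: someprog [OPTION]...\n");
                exit(0);
            }
            /* ... and -V is the short form of --version. */
            if (strcmp(argv[1], "--version") == 0 || strcmp(argv[1], "-V") == 0)
            {
                puts("someprog (PostgreSQL) X.Y");
                exit(0);
            }
        }
        return 0;
    }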
Instead of just stopping after removing an arbitrary subset of orphaned
large objects, commit and start a new transaction after each batch of
-l objects.
This is just as effective as the original patch at limiting the number of
locks used, and it doesn't require doing the OID collection process
repeatedly to get everything. Since the option no longer changes the
fundamental behavior of vacuumlo, and it avoids a known server-side
limitation, enable it by default (with a default limit of 1000 LOs per
transaction).
In passing, be more careful about properly quoting the names of tables
and fields, and do some other cosmetic cleanup.
Also, handle failure better: don't just blindly keep trying to delete
stuff after the transaction has already failed.
Tim Lewis, reviewed by Josh Kupershmidt, with further hacking by me.
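A hedged sketch of the batching idea (the helper name, the lo_oids array, and the surrounding structure are illustrative assumptions, not the actual vacuumlo source):

    #include <stdio.h>
    #include "libpq-fe.h"

    static int
    remove_los_in_batches(PGconn *conn, const Oid *lo_oids, int count,
                          int txn_limit)    /* the -l value; default 1000 */
    {
        int     in_txn = 0;
        int     i;

        PQclear(PQexec(conn, "BEGIN"));
        for (i = 0; i < count; i++)
        {
            if (lo_unlink(conn, lo_oids[i]) < 0)
            {
                /* Don't blindly keep deleting after a failure ... */
                fprintf(stderr, "lo_unlink of %u failed: %s",
                        lo_oids[i], PQerrorMessage(conn));
                PQclear(PQexec(conn, "ROLLBACK"));
                return -1;
            }
            if (++in_txn >= txn_limit)
            {
                /* ... and cap the locks held by any one transaction. */
                PQclear(PQexec(conn, "COMMIT"));
                PQclear(PQexec(conn, "BEGIN"));
                in_txn = 0;
            }
        }
        PQclear(PQexec(conn, "COMMIT"));
        return 0;
    }

Committing per batch caps the number of locks held at any one time while still making forward progress, at the cost of the cleanup no longer being all-or-nothing.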
Make the -W switch optional, as is the case for every other one of our
programs. I had already documented -W as being optional, so this brings
the code into line with the docs.
Also performed an initial run-through of updating our Copyright date to
extend to 2005. The first pass was very simple: change everything
matching both '1996-2004' and the word 'Copyright'. I scanned through
the generated list with 'less', both before and after making the
changes, to make sure that I only picked up the right entries.
Silence assorted compiler warnings:
- remove pointless "extern" keyword from some function definitions in
contrib/tsearch2
- use "NULL" not "0" as NULL pointer in contrib/tsearch,
contrib/tsearch2, contrib/pgbench, and contrib/vacuumlo
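A hedged illustration of both fixes (names invented, not the actual contrib diffs):

    #include <stddef.h>

    /* Before: extern void reset_state(void) { ... }
     * After: "extern" is pointless on a function definition, so drop it. */
    void
    reset_state(void)
    {
        /* Before: char *buf = 0;
         * After: spell the null pointer as NULL, not 0. */
        char *buf = NULL;

        (void) buf;     /* keep this sketch free of unused-variable noise */
    }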
The only remnant of this failed experiment is that the server will still
accept SET AUTOCOMMIT TO ON. Still TODO: provide some client-side autocommit
logic in libpq.
Create objects in public schema.
Make spacing/capitalization consistent.
Remove transaction block use for object creation.
Remove unneeded function GRANTs.
Convert most uses of sprintf() to snprintf() in contrib/. I didn't touch
the places where pointer arithmetic was being used, or other areas where
the fix wasn't
trivial. I would think that few, if any, of the usages of sprintf()
were actually exploitable, but it's probably better to be paranoid...
Neil Conway
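A hedged illustration of the kind of change involved (the function name, buffer, and format string are invented):

    #include <stdio.h>

    void
    format_item(char *buf, size_t buflen, const char *name)
    {
        /* Before: sprintf(buf, "item: %s", name);
         * A long "name" would overflow buf.  After: the output is
         * truncated to the buffer size instead of overflowing. */
        snprintf(buf, buflen, "item: %s", name);
    }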