code in prepqual.c had a small drawback: the flatten_andors code was
able to cope with deeply nested AND/OR structures (like 10000 ORs in
a row), whereas eval_const_expressions tends to recurse until it
overruns the stack. Revise eval_const_expressions so that it doesn't
choke on deeply nested ANDs or ORs.
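For illustration only, here is a minimal standalone C sketch (not the PostgreSQL code; all names are made up) of the flattening idea: walking an OR tree with an explicit stack instead of recursion, so 10000 nested ORs cost heap space rather than C-stack frames.

    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { LEAF_EXPR, AND_EXPR, OR_EXPR } NodeTag;

    typedef struct Node
    {
        NodeTag      tag;
        struct Node *left;
        struct Node *right;
    } Node;

    static Node *
    mknode(NodeTag tag, Node *l, Node *r)
    {
        Node *n = malloc(sizeof(Node));

        n->tag = tag;
        n->left = l;
        n->right = r;
        return n;
    }

    /*
     * Collect the non-OR operands of a (possibly deeply nested) OR tree
     * into out[], using an explicit stack so C-stack depth stays constant.
     * "max" must be at least the total number of nodes in the tree.
     */
    static size_t
    flatten_or(Node *root, Node **out, size_t max)
    {
        Node  **stack = malloc(max * sizeof(Node *));
        size_t  top = 0,
                nout = 0;

        stack[top++] = root;
        while (top > 0)
        {
            Node *n = stack[--top];

            if (n->tag == OR_EXPR)
            {
                /* push the children instead of recursing into them */
                stack[top++] = n->left;
                stack[top++] = n->right;
            }
            else
                out[nout++] = n;    /* leaf (or AND subtree): emit as-is */
        }
        free(stack);
        return nout;
    }

    int
    main(void)
    {
        /* a left-deep chain of 10000 ORs, like the case described above */
        size_t  nleaves = 10001;
        size_t  nnodes = 2 * nleaves - 1;
        Node   *tree = mknode(LEAF_EXPR, NULL, NULL);
        Node  **flat = malloc(nnodes * sizeof(Node *));
        size_t  i;

        for (i = 1; i < nleaves; i++)
            tree = mknode(OR_EXPR, tree, mknode(LEAF_EXPR, NULL, NULL));

        printf("flattened to %lu operands\n",
               (unsigned long) flatten_or(tree, flat, nnodes));
        return 0;
    }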
make some estimate of which available indexes to AND together, rather
than blindly taking 'em all. This could probably stand further
improvement, but it seems to do OK in simple tests.
but the code is basically working. Along the way, rewrite the entire
approach to processing OR index conditions, and make it work in join
cases for the first time ever. orindxpath.c is now basically obsolete,
but I left it in for the time being to allow easy comparison testing
against the old implementation.
< Currently indexes do not have enough tuple tuple visibility
< information to allow data to be pulled from the index without
< also accessing the heap. One way to allow this is to set a bit
< to index tuples to indicate if a tuple is currently visible to
< all transactions when the first valid heap lookup happens. This
< bit would have to be cleared when a heap tuple is expired.
> Currently indexes do not have enough tuple visibility information
> to allow data to be pulled from the index without also accessing
> the heap. One way to allow this is to set a bit to index tuples
> to indicate if a tuple is currently visible to all transactions
> when the first valid heap lookup happens. This bit would have to
> be cleared when a heap tuple is expired.
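As a hedged sketch of the idea in that TODO item (all names here are hypothetical, not backend code): an index tuple would carry a hint bit that is set once a heap fetch has shown the tuple visible to all transactions, allowing later scans to skip the heap until the bit is cleared at expiry.

    #include <stdbool.h>
    #include <stdint.h>

    #define IDX_ALL_VISIBLE 0x0001      /* hypothetical hint-bit flag */

    typedef struct IdxTuple
    {
        uint16_t    flags;
        /* ... index key data and the heap TID would follow ... */
    } IdxTuple;

    /* Called after the first heap fetch that checks visibility. */
    void
    idx_note_visibility(IdxTuple *it, bool visible_to_all)
    {
        if (visible_to_all)
            it->flags |= IDX_ALL_VISIBLE;
    }

    /* An index scan may skip the heap fetch when this returns true. */
    bool
    idx_skip_heap_fetch(const IdxTuple *it)
    {
        return (it->flags & IDX_ALL_VISIBLE) != 0;
    }

    /* Must be called whenever the underlying heap tuple is expired. */
    void
    idx_clear_visibility(IdxTuple *it)
    {
        it->flags &= ~IDX_ALL_VISIBLE;
    }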
logic operations during planning. Seems cleaner to create two new Path
node types, instead --- this avoids duplication of cost-estimation code.
Also, create an enable_bitmapscan GUC parameter to control use of bitmap
plans.
< Bitmap indexes index single columns that can be combined with other bitmap
< indexes to dynamically create a composite index to match a specific query.
< Each index is a bitmap, and the bitmaps are bitwise AND'ed or OR'ed to be
< combined. They can index by tid or can be lossy requiring a scan of the
< heap page to find matching rows, or perhaps use a mixed solution where
< tids are recorded for pages with only a few matches and per-page bitmaps
< are used for more dense pages. Another idea is to use a 32-bit bitmap
< for every page and set a bit based on the item number mod(32).
> This feature allows separate indexes to be ANDed or ORed together. This
> is particularly useful for data warehousing applications that need to
> query the database in many permutations. This feature scans an index
> and creates an in-memory bitmap, and allows that bitmap to be combined
> with other bitmaps created in a similar way. The bitmap can either
> index all TIDs, or be lossy, meaning it records just page numbers and
> every tuple on such a page has to be rechecked for validity in a
> separate pass.
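For illustration, here is a minimal C sketch of the exact-versus-lossy idea, with hypothetical names throughout (the real executor structures differ): each per-page entry either carries one bit per item number, or is lossy and remembers only the page, forcing a recheck of every tuple on it. A lossy input conservatively makes the AND/OR result lossy as well.

    #include <stdbool.h>
    #include <stdint.h>

    #define ITEMS_PER_PAGE 256          /* arbitrary for the sketch */

    typedef struct PageBits
    {
        uint32_t    page;                       /* heap page number */
        bool        lossy;                      /* true: recheck whole page */
        uint8_t     bits[ITEMS_PER_PAGE / 8];   /* per-item bits if exact */
    } PageBits;

    /* AND two entries for the same heap page. */
    void
    page_and(PageBits *dst, const PageBits *src)
    {
        int     i;

        if (dst->lossy || src->lossy)
        {
            /* conservatively fall back to page-level (lossy) matching */
            dst->lossy = true;
            return;
        }
        for (i = 0; i < ITEMS_PER_PAGE / 8; i++)
            dst->bits[i] &= src->bits[i];
    }

    /* OR two entries for the same heap page. */
    void
    page_or(PageBits *dst, const PageBits *src)
    {
        int     i;

        if (dst->lossy || src->lossy)
        {
            dst->lossy = true;
            return;
        }
        for (i = 0; i < ITEMS_PER_PAGE / 8; i++)
            dst->bits[i] |= src->bits[i];
    }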
scans, using in-memory tuple ID bitmaps as the intermediary. The planner
frontend (path creation and cost estimation) is not there yet, so none
of this code can be executed. I have tested it using some hacked planner
code that is far too ugly to see the light of day, however. Committing
now so that the bulk of the infrastructure changes go in before the tree
drifts under me.
>
> No, and I think it should be in the manual as an example.
>
> You will need to enter a loop that uses exception handling to detect
> unique_violation.
Pursuant to an IRC discussion to which Dennis Bjorklund and
Christopher Kings-Lynne made most of the contributions, please find
enclosed an example patch demonstrating an UPSERT-like capability.
David Fetter
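The patch itself is not reproduced here, but as a hedged illustration of the retry loop being described, here is a client-side analogue in C using libpq: try the UPDATE first, fall back to INSERT, and loop when a concurrent insert raises unique_violation (SQLSTATE 23505). The table db(a int primary key, b text) and the function name merge_db are assumptions for the example.

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    void
    merge_db(PGconn *conn, const char *key, const char *data)
    {
        const char *const vals[2] = {key, data};

        for (;;)
        {
            PGresult   *res;
            const char *sqlstate;

            /* First try to update an existing row. */
            res = PQexecParams(conn,
                               "UPDATE db SET b = $2 WHERE a = $1",
                               2, NULL, vals, NULL, NULL, 0);
            if (PQresultStatus(res) == PGRES_COMMAND_OK &&
                strcmp(PQcmdTuples(res), "0") != 0)
            {
                PQclear(res);
                return;             /* the row existed and was updated */
            }
            PQclear(res);

            /* No row yet: try to insert one. */
            res = PQexecParams(conn,
                               "INSERT INTO db (a, b) VALUES ($1, $2)",
                               2, NULL, vals, NULL, NULL, 0);
            if (PQresultStatus(res) == PGRES_COMMAND_OK)
            {
                PQclear(res);
                return;             /* our insert won the race */
            }

            /*
             * A concurrent insert of the same key raises unique_violation
             * (SQLSTATE 23505); loop and retry the UPDATE.  Give up on
             * any other error.
             */
            sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
            if (sqlstate == NULL || strcmp(sqlstate, "23505") != 0)
            {
                fprintf(stderr, "merge_db failed: %s", PQerrorMessage(conn));
                PQclear(res);
                return;
            }
            PQclear(res);
        }
    }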
command line. We find this useful because we frequently deal with
thousands of tables in an environment where neither the databases nor
the tables are updated frequently. This allows us to cut down on
the overhead of updating the list for every other primary loop of
pg_autovacuum.
I chose -i as the command-line argument and documented it briefly in
the README.
The patch was applied to the 7.4.7 version of pg_autovacuum in contrib.
Thomas F. O'Connell
* Changes the APIs to the timezone functions to take a pg_tz pointer as
an argument, representing the timezone to use for the selected
operation.
* Adds a global_timezone variable that represents the current timezone
in the backend, as set by SET TIMEZONE (or GUC, or environment, etc.).
* Implements a hash-table cache of loaded timezone tables, so we don't
have to read and parse the TZ file every time we change a timezone.
While not necessary now (we don't change timezones very often), I
believe this will be necessary (or at least good) when "multiple
timezones in the same query" is eventually implemented. And code-wise,
this was the time to do it. (A simplified sketch follows below.)
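As a hedged illustration of that cache (all names hypothetical; the real pg_tz code is more elaborate): look a zone up by name in a hash table and parse its TZ file only on a miss.

    #include <stdlib.h>
    #include <string.h>

    #define TZ_CACHE_BUCKETS 64

    typedef struct pg_tz_entry
    {
        char        name[64];   /* timezone name, e.g. "Europe/Stockholm" */
        void       *tzdata;     /* parsed zone rules (opaque here) */
        struct pg_tz_entry *next;
    } pg_tz_entry;

    static pg_tz_entry *tz_cache[TZ_CACHE_BUCKETS];

    /* Stand-in for the real TZ-file parser. */
    static void *
    load_tz_file(const char *name)
    {
        (void) name;
        return malloc(1);       /* pretend we read and parsed the file */
    }

    static unsigned
    tz_hash(const char *name)
    {
        unsigned    h = 0;

        while (*name)
            h = h * 31 + (unsigned char) *name++;
        return h % TZ_CACHE_BUCKETS;
    }

    /* Return cached zone data, loading and caching it on first use. */
    void *
    tz_lookup(const char *name)
    {
        unsigned    bucket = tz_hash(name);
        pg_tz_entry *e;

        for (e = tz_cache[bucket]; e != NULL; e = e->next)
            if (strcmp(e->name, name) == 0)
                return e->tzdata;   /* hit: no file access needed */

        e = malloc(sizeof(pg_tz_entry));
        strncpy(e->name, name, sizeof(e->name) - 1);
        e->name[sizeof(e->name) - 1] = '\0';
        e->tzdata = load_tz_file(name);     /* parse the TZ file once */
        e->next = tz_cache[bucket];
        tz_cache[bucket] = e;
        return e->tzdata;
    }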
There are no user-visible changes at this time. Implementing "multiple
zones in one query" support is a later step...
This also gets rid of some of the cruft needed to "back out a timezone
change", since we previously couldn't check a timezone unless it was
activated first.
Passes regression tests on Win32, Linux (Slackware 10), and Solaris x86.
Magnus Hagander
< failure.
> failure. This could be triggered by a user command or a timer.
< * Force archiving of partially-full WAL files when pg_stop_backup() is
< called or the server is stopped
> * Automatically force archiving of partially-filled WAL files when
> pg_stop_backup() is called or the server is stopped