additional argument specifying the kind of lock to acquire/release (or
'NoLock' to do no lock processing). Ensure that all relations are locked
with some appropriate lock level before being examined --- this guarantees
that relevant shared-inval messages have been processed and should prevent
problems caused by concurrent VACUUM. Fix several bugs having to do with
mismatched increment/decrement of relation ref count and mismatched
heap_open/close (which amounts to the same thing). A bogus ref count on
a relation doesn't matter much *unless* a SI Inval message happens to
arrive at the wrong time, which is probably why we got away with this
sloppiness for so long. Repair missing grab of AccessExclusiveLock in
DROP TABLE, ALTER/RENAME TABLE, etc., as noted by Hiroshi.
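As an illustration of the new calling convention, a minimal sketch (the
relation name and lock modes here are only examples):

    Relation    rel;

    /* open with a lock appropriate to what we intend to do */
    rel = heap_openr("pg_class", AccessShareLock);

    /*
     * ... examine the relation; any relevant shared-inval messages have
     * been processed by the time the lock is granted ...
     */

    /* release the lock now, or pass NoLock to keep it until commit */
    heap_close(rel, AccessShareLock);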
Recommend 'make clean all' after pulling this update; I modified the
Relation struct layout slightly.
Will post further discussion to pghackers list shortly.
See attached mail for more details.
-------------------------------------------------------------------
From: "Vadim Mikheev" <vadim@krs.ru>
To: "Hiroshi Inoue" <Inoue@tpf.co.jp>
References: <000201befa94$42fe04c0$2801007e@cadzone.tpf.co.jp>
Subject: Re: elog(ERROR) in vacuum
Date: Fri, 10 Sep 1999 10:27:10 +0900
Organization: OJSC Rostelecom (Krasnoyarsk)
Message-ID: <37D85E6E.5AFA126D@krs.ru>
Hiroshi Inoue wrote:
>
> Hello Vadim,
>
> I have a question about vacuum.
>
> VACUUM has a commit-like phase which calls TransactionIdCommit().
> But if elog(ERROR) occurred after that, the status of the transaction is
> changed from XID_COMMIT to XID_ABORT.
>
> Seems to me this causes an inconsistency.
> Shouldn't AbortTransaction() be changed not to call TransactionIdAbort()
> in the case of vacuum?
You're right!
As usual -:)
Vadim
Almost worked before, but forgot one place to check.
Reported by Tatsuo Ishii.
Still does not do the right thing if inserting into a non-string target
column. Should look for a type coercion later, but doesn't.
message under a kernel that only returns one packet per recv() call. This
didn't use to matter much, but it starts to get annoying with multi-megabyte
EXPLAIN VERBOSE responses...
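The generic fix for this class of problem is simply to loop until the full
message has arrived.  A minimal sketch (not the actual libpq code):

    #include <sys/types.h>
    #include <sys/socket.h>

    /*
     * Read exactly 'len' bytes from 'sock'.  recv() may return fewer bytes
     * than requested -- e.g. one packet's worth -- so keep calling it.
     * Returns 0 on success, -1 on error or EOF.
     */
    static int
    recv_exact(int sock, char *buf, size_t len)
    {
        size_t  got = 0;

        while (got < len)
        {
            ssize_t n = recv(sock, buf + got, len - got, 0);

            if (n <= 0)
                return -1;
            got += (size_t) n;
        }
        return 0;
    }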
conditions. There are some pretty bogus heuristics in prepqual.c that
try to decide whether to output CNF or DNF format; they will likely need to
be replaced. Right now the code is probably too willing to choose DNF form,
which might hurt performance in some cases that used to work OK.
But at least we have a foundation to build on.
in or_normalize, remove detection of duplicate subexpressions (since it's
highly unlikely to be worth the amount of time it takes), and introduce
a dnfify() entry point so that unintelligible backwards logic in UNION
processing can be eliminated. This is just an intermediate step ---
the next thing is to look at not forcing the qual into CNF form when it would
be better off in DNF form.
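To make the CNF-vs-DNF tradeoff concrete, consider a qual that arrives as an
OR of ANDs (A, B, C, D standing for arbitrary boolean conditions):

    (A AND B) OR (C AND D)
        DNF:  (A AND B) OR (C AND D)                       -- already DNF
        CNF:  (A OR C) AND (A OR D) AND (B OR C) AND (B OR D)

Every additional OR'd conjunct multiplies the number of CNF clauses, so
forcing such a qual into CNF form can blow up exponentially; the mirror-image
problem exists for an AND of ORs pushed into DNF form.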
This change seems necessary in conjunction with long queries, and it
cleans up some bogosity in connection with long EXPLAIN texts anyway.
Note that current libpq will accept an error message of any length (at least
until it runs out of memory); prior versions have a limit of 8K, but
will cleanly discard excess error text, so there shouldn't be any
big compatibility problems with old clients.
transaction abort --- before, it only worked if there was exactly one level
of allocation context stacked in the blank portal. Now it does the right
thing for any depth, including zero...
was rejecting negative attnums as bogus, which of course they are not.
Add code to get_attdisbursion to produce a useful value for the OID attribute,
since VACUUM does not store stats for system attributes.
Also, repair a bug that's been in eqjoinsel for a long time: it was taking
the max of the two columns' disbursions, whereas it should use the min.
space consumption in pull_args, and avoid doing the full CNF transform on
operands of operator clauses, where it's really not particularly helpful.
This answers the TODO item about large numbers of OR clauses, at least
partially. I was able to do a ten-thousand-OR-clause query with about
20MB of memory consumption ... it took an obscenely long time, but it worked...
corrects the flex myinput() routine so that it doesn't assume there is only
one bufferload of data. We still have the issue of getting rid of
YY_USES_REJECT so that the scanner can cope with tokens larger than its
initial buffer size.
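For illustration, the general shape of a multi-bufferload input routine
(a sketch only; the variable names are hypothetical, not the ones in scan.l):

    #include <string.h>

    static const char *scanbuf;     /* the full query text */
    static size_t      scanpos;     /* bytes already handed to flex */

    static int
    myinput(char *buf, int max)
    {
        size_t  remaining = strlen(scanbuf) - scanpos;
        size_t  n = remaining < (size_t) max ? remaining : (size_t) max;

        if (n > 0)
        {
            memcpy(buf, scanbuf + scanpos, n);
            scanpos += n;
        }
        return (int) n;             /* 0 tells flex we have hit EOF */
    }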
before comparison; if fields being joined are different widths then hashing
will yield the wrong answer. Also, remove the hashjoinable mark from all uses of
array_eq, because array structures may have padding bytes between elements
and the pad bytes are of uncertain content. This could be revisited if
array code is cleaned up.
Modify opr_sanity regress test to complain if array_eq operator is marked
hashjoinable.
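The width issue is easy to see with a toy byte-wise hash (purely
illustrative; this is not the backend's hash function):

    #include <stdint.h>
    #include <stddef.h>

    static unsigned
    bytehash(const void *p, size_t len)
    {
        const unsigned char *b = p;
        unsigned    h = 0;

        while (len-- > 0)
            h = h * 31 + *b++;
        return h;
    }

    int16_t narrow = 42;
    int32_t wide = 42;

    /*
     * Nothing guarantees bytehash(&narrow, sizeof narrow) equals
     * bytehash(&wide, sizeof wide), even though the values are equal --
     * hence the need to coerce both join keys to a common type before
     * hashing.
     */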
offended my aesthetic sensibility that there was so much unreadable code
doing so little. Rewritten code is about half the size, faster, and
(I hope) much more intelligible.
current transaction) are not flushed by a shared-cache-inval reset message.
SI reset actually works now, for probably the first time in a long time.
I was able to run initdb and regression tests with a 16-element SI message
array, with a lot of 'NOTICE: cache state reset' messages but no crashes.