special hack to ensure it would close its output file even after failure
due to elog(ERROR) partway through the copy. This is now unnecessary
because fd.c takes care of cleaning up open files at transaction abort;
worse, after fd.c closed the file, copy.c would try to do so *again* at
the start of the next COPY command. This would result in havoc in most
implementations of the stdio library.
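(As a standalone illustration, not the copy.c code itself: the toy program
below shows why a second close of the same stdio stream is dangerous; the
file name is made up.)

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("/tmp/copy_demo.out", "w");

    if (fp == NULL)
        return 1;
    fputs("some copied data\n", fp);
    fclose(fp);         /* first close, e.g. fd.c's cleanup at abort */

    /*
     * A second fclose() on the same FILE pointer is undefined behavior:
     * the structure may already be freed or recycled for another stream,
     * which is exactly the "havoc" described above.
     */
    /* fclose(fp); */   /* what copy.c effectively did on the next COPY */

    return 0;
}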
tlist and qual are NULL. It ought to handle these the same as the cases
where tlist contains only constant expressions, ie, be willing to generate
a Result-node plan. This did not use to matter, but it does now because
union_planner will flatten the tlist when aggregates are present. Thus,
'select count(1) from table' now causes query_planner to be given a null
tlist, and to duplicate 6.4's behavior we need it to give back a Result
plan rather than refusing the query. 6.4 was arguably doing the Wrong
Thing for this query, but I'm not going to open a semantics issue right
before 6.5 release ... can revisit that problem later.
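(A heavily simplified sketch of the intended rule, using made-up stand-in
types rather than the real planner structures: an empty tlist is treated
like a constant-only tlist and still yields a Result-style plan.)

#include <stdio.h>

/* Invented stand-ins for planner nodes, for illustration only. */
typedef struct ToyPlan { const char *node_type; } ToyPlan;

static ToyPlan result_node  = { "Result" };
static ToyPlan seqscan_node = { "SeqScan" };

/* An empty target list is handled like a constant-only one: hand back
 * a trivial Result-node plan instead of refusing the query. */
static ToyPlan *toy_query_planner(int tlist_len, int tlist_all_constants)
{
    if (tlist_len == 0 || tlist_all_constants)
        return &result_node;      /* nothing to scan: trivial Result plan */
    return &seqscan_node;         /* otherwise plan a real scan (simplified) */
}

int main(void)
{
    /* 'select count(1) from table' now hands the planner a null tlist */
    printf("%s\n", toy_query_planner(0, 0)->node_type);
    return 0;
}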
returned NULL, which it will do in some cases where an elog(ERROR) would
probably be more appropriate. For the moment, generate a not-very-
informative error message rather than proceeding to certain coredump.
Probably ought to think about making query_planner elog instead of
returning NULL, but this is at least a safe change for now.
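(Generic illustration only, with hypothetical names rather than the actual
union_planner code: the point is simply to test the planner's return value
and raise an error instead of dereferencing NULL.)

#include <stdio.h>
#include <stdlib.h>

typedef struct ToyPlan { int dummy; } ToyPlan;

static ToyPlan *toy_query_planner(void)
{
    return NULL;                /* pretend planning failed */
}

/* Crude stand-in for elog(ERROR, ...): report and abort the "query". */
static void toy_elog_error(const char *msg)
{
    fprintf(stderr, "ERROR: %s\n", msg);
    exit(1);
}

int main(void)
{
    ToyPlan *subplan = toy_query_planner();

    /*
     * Check before use: without this, a NULL result means a certain
     * coredump as soon as the "plan" is dereferenced.
     */
    if (subplan == NULL)
        toy_elog_error("query_planner failed to produce a plan");

    printf("plan ok: %d\n", subplan->dummy);
    return 0;
}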
pointer to palloc'd but uninitialized memory. This is not cool; anyone looking
at the returned 'tuple' would at best coredump and at worst behave in a
bizarre and irreproducible way. Fix it to return a predictable value,
namely a correctly-set-up palloc'd tuple containing zero attributes.
I believe this fix is both safe and critical.
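(Minimal sketch with a made-up struct and plain calloc instead of palloc:
the point is that the caller gets a predictable, fully initialized
zero-attribute result rather than garbage.)

#include <stdio.h>
#include <stdlib.h>

/* Toy tuple header, standing in for the real heap tuple layout. */
typedef struct ToyTuple
{
    size_t len;        /* total length of the tuple   */
    int    natts;      /* number of attributes        */
    /* attribute data would follow here */
} ToyTuple;

/* Return a correctly-set-up tuple with zero attributes instead of a
 * pointer to uninitialized memory. */
static ToyTuple *make_empty_tuple(void)
{
    ToyTuple *tup = calloc(1, sizeof(ToyTuple));    /* zeroed, not garbage */

    if (tup == NULL)
        abort();
    tup->len = sizeof(ToyTuple);
    tup->natts = 0;
    return tup;
}

int main(void)
{
    ToyTuple *tup = make_empty_tuple();

    printf("natts = %d, len = %zu\n", tup->natts, tup->len);
    free(tup);
    return 0;
}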
this one could be useful for people experiencing out-of-memory crashes while
executing queries which retrieve or use a very large number of tuples.
The problem happens when storage is allocated for function results used in
a large query, for example:
select upper(name) from big_table;
select big_table.array[1] from big_table;
select count(upper(name)) from big_table;
This patch is a dirty hack that fixes the out-of-memory problem for the most
common cases, like the above ones. It is not the final solution for the
problem but it can work for some people, so I'm posting it.
The patch should be safe because all changes are under #ifdef. Furthermore,
the feature can be enabled or disabled at runtime by the `free_tuple_memory'
option in the pg_options file. The option is disabled by default and must
be explicitly enabled at runtime to have any effect.
To enable the patch, add the following line to Makefile.custom:
CUSTOM_COPT += -DFREE_TUPLE_MEMORY
To enable the option at runtime, add the following line to pg_options:
free_tuple_memory=1
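(Sketch of the pattern only - FREE_TUPLE_MEMORY and free_tuple_memory are
the real names from the patch, but the surrounding function and variables
here are invented: the code is compiled in only under the #define, and even
then is a no-op unless the runtime option is set.)

#include <stdlib.h>

/* Invented stand-in for the pg_options flag read from the options file. */
static int free_tuple_memory = 0;

/* Invented helper: release a function-result value early instead of
 * keeping it until end of query. */
static void maybe_free_result(void *value)
{
#ifdef FREE_TUPLE_MEMORY
    if (free_tuple_memory)      /* runtime switch from pg_options */
    {
        free(value);
        return;
    }
#endif
    (void) value;               /* feature off: old behavior, value kept */
}

int main(void)
{
    free_tuple_memory = 1;      /* as if free_tuple_memory=1 were set */
    maybe_free_result(malloc(16));
    return 0;
}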
Massimo
/*
* Read above about cases when !ItemIdIsUsed(Citemid)
* (child item is removed)... Because we don't yet remove
* the useless part of an update-chain, it's possible to
* find a too-old parent row here. As in the case that
* caused this problem, we just stop shrinking here.
* I could try to find the real parent row, but won't,
* because a real solution will be implemented anyway,
* later, and we are too close to the 6.5 release.
* - vadim 06/11/99
*/
if (Ptp.t_data->t_xmax != tp.t_data->t_xmin)
...
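(Toy illustration of the check itself, with invented struct names: in an
update chain the parent's t_xmax must equal the child's t_xmin, since both
are the xid of the updating transaction; if they differ, the "parent" we
found is stale and shrinking stops.)

#include <stdio.h>

typedef unsigned int TransactionId;

/* Invented miniature tuple version: xmin created it, xmax updated it. */
typedef struct ToyVersion
{
    TransactionId t_xmin;
    TransactionId t_xmax;
} ToyVersion;

int main(void)
{
    ToyVersion parent = { 10, 15 };     /* supposedly updated by xid 15 */
    ToyVersion child  = { 20, 0 };      /* but was created by xid 20    */

    if (parent.t_xmax != child.t_xmin)
    {
        /* Not really this child's parent (too-old row): give up on
         * shrinking this chain instead of moving tuples around. */
        printf("stale update chain - stop shrinking\n");
    }
    return 0;
}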
1. Using 100 digits after the decimal point in the default 'make runtest'.
2. Using 1000 digits after the decimal point in a new target, 'make bigtest'.
At the end of 'make runtest', a hint about the new bigtest is
printed.
Jan
and possibly for other cases too:
DO NOT cache the status of a transaction in an unknown state
(i.e. one that is neither committed nor aborted).
Example:
T1 reads a row updated/inserted by still-running T2 and caches T2's status.
T2 commits.
Now T1 reads a row updated by T2 that already has HEAP_XMAX_COMMITTED
in t_infomask (so the cached T2 status is not refreshed).
Now T1's EvalPlanQual gets the updated row version without HEAP_XMIN_COMMITTED
-> TransactionIdDidCommit(t_xmin) and TransactionIdDidAbort(t_xmin)
return FALSE, so T1 decides that t_xmin is not committed and gets the
ERROR above.
It's too late to find a smarter way to handle such cases, so I just
changed xact status caching and got rid of TransactionIdFlushCache()
in the code (a simplified sketch of the new caching rule appears
after this list).
Changed: transam.c, xact.c, lmgr.c and transam.h - the last three
only because TransactionIdFlushCache() is removed.
2. heapam.c:
T1 marked a row for update. T2 waits for T1 to commit/abort.
T1 commits. T3 updates the row before T2 locks the row's page.
Now T2 sees that the new row's t_xmax is different from the xact
id (T1) it was waiting for. The old code did an Assert here; the
new code goes to HeapTupleSatisfiesUpdate. Obvious changes too.
3. Added Assert to vacuum.c
4. bufmgr.c: break
Assert(buf->r_locks == 0 && !buf->ri_lock)
into two Asserts.
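The sketch promised above for the xact status caching change - made-up
types and a single-entry cache, not the real transam.c code; the rule is
simply that only terminal states are ever cached:

#include <stdio.h>

typedef unsigned int TransactionId;

typedef enum ToyXidStatus
{
    TOY_XID_INPROGRESS,         /* unknown / still running: never cache */
    TOY_XID_COMMITTED,
    TOY_XID_ABORTED
} ToyXidStatus;

/* Single-entry status cache, much simplified. */
static TransactionId cachedXid = 0;
static ToyXidStatus cachedXidStatus = TOY_XID_INPROGRESS;

static void cache_xid_status(TransactionId xid, ToyXidStatus status)
{
    /*
     * DO NOT cache a status that is neither committed nor aborted:
     * it can still change, and a stale "in progress" entry is exactly
     * what produced the bogus ERROR in the scenario above.
     */
    if (status == TOY_XID_COMMITTED || status == TOY_XID_ABORTED)
    {
        cachedXid = xid;
        cachedXidStatus = status;
    }
}

int main(void)
{
    cache_xid_status(1234, TOY_XID_INPROGRESS);  /* ignored: not terminal */
    cache_xid_status(1234, TOY_XID_COMMITTED);   /* cached: terminal */
    printf("cached xid %u, status %d\n", cachedXid, (int) cachedXidStatus);
    return 0;
}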
after ExecEndNode. It must be done! Otherwise we'll run out of free
tuple slots very soon, even though the slots are freed by ExecEndNode
and are ready for reuse.
We didn't see this problem before because of
int nSlots = ExecCountSlotsNode(plan);
TupleTable tupleTable = ExecCreateTupleTable(nSlots + 10);
/* why add ten? - jolly */
code in InitPlan - i.e. 10 extra slots. A simple select uses
3 slots, so it was possible to re-use the evaluation plan up to
3 additional times without hitting
elog(NOTICE, "Plan requires more slots than are available");
elog(ERROR, "send mail to your local executor guru to fix this");
The changes are obvious and there shouldn't be any problems with
them. Though, I added Assert(epqstate->es_tupleTable->next == 0)
before EvalPlanQual():ExecInitNode, so we'll notice if something
is still wrong. Would it be better to change the Assert to
elog(ERROR)?
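(A toy model of the slot arithmetic only - made-up constants and loop, not
the executor's TupleTable code: with 3 slots per use and 10 spare slots
that are never given back, the pool runs dry after a few re-uses.)

#include <stdio.h>

#define SLOTS_PER_SELECT 3      /* a simple select uses 3 slots */
#define EXTRA_SLOTS      10     /* the "+ 10" added in InitPlan */

int main(void)
{
    int capacity = SLOTS_PER_SELECT + EXTRA_SLOTS;
    int next = 0;               /* next free slot; never reset because the
                                   table isn't dropped after ExecEndNode */

    for (int use = 1; ; use++)
    {
        if (next + SLOTS_PER_SELECT > capacity)
        {
            /* the executor would elog() here */
            printf("out of slots on use #%d (after %d re-uses)\n",
                   use, use - 2);
            break;
        }
        next += SLOTS_PER_SELECT;   /* slots taken but never returned */
    }
    return 0;
}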
1. check whether the program is being executed in $PGDATA/..; this is
necessary if the data tree is not in the standard place, as is the
case with the Debian distribution (because of Debian policy).
2. give a clearer error message if the dumped data structure fails to
be loaded.
Oliver Elphick
SHARED_LIB:
needs to be changed to:
SHARED_LIB:-lc
I think this was also needed on AIX 4.2. Comments please!
If nobody objects, I suggest making this change, since it cannot
break AIX 4.2 and is necessary on AIX 4.3.
Andreas
> (native win32, not cygnus).
> It does the following:
> Patches two win32.mak files to DEFINE HAVE_VSNPRINTF and
> HAVE_STRDUP. This is required to build at all.
> Bumps the version number on libpq.dll from 6.4 to 6.5.
> Required for install programs to work.
> Adds definitions for BLCKSZ and MAXIMUM_ALIGN to "win32.h" in
> the client-side libpq directory.
>
> All these files are only used when building on native win32,
> so it should be safe I think.
>
> Again, really sorry to throw this in so late, but I would
> hate to do the same thing as with 6.4 (which required 6.4.1
> to compile at all on Win32).
>
> Thanks,
>
> //Magnus
a non-leading % would be put into the >=/<= patterns. Also, repair
longstanding confusion about whether %% means a literal %. The SQL92
spec doesn't say any such thing, and textlike() knows that, but gram.y didn't.
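(Standalone sketch of the idea, not the gram.y/textlike() code: only the
literal prefix before the first wildcard may feed the >=/<= index bounds,
escaped characters stay literal, and '%%' is just two wildcards rather
than a literal '%'. The helper name is invented.)

#include <stdio.h>

/*
 * Invented helper: copy the literal prefix of a LIKE pattern that can
 * safely be turned into >=/<= index bounds.  Anything at or after the
 * first unescaped wildcard must not leak into the bounds.
 */
static int like_index_prefix(const char *pattern, char *prefix, size_t len)
{
    size_t n = 0;
    const char *p;

    for (p = pattern; *p != '\0' && n + 1 < len; p++)
    {
        if (*p == '%' || *p == '_')
            break;              /* wildcard: usable prefix ends here */
        if (*p == '\\' && p[1] != '\0')
            p++;                /* backslash-escaped char is literal */
        prefix[n++] = *p;
    }
    prefix[n] = '\0';
    return n > 0;               /* no prefix -> no index optimization */
}

int main(void)
{
    char buf[64];

    if (like_index_prefix("abc%def", buf, sizeof(buf)))
        printf("index bounds come from prefix \"%s\" only\n", buf);
    if (!like_index_prefix("%%abc", buf, sizeof(buf)))
        printf("leading wildcard: no index optimization\n");
    return 0;
}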
2. varsup.c:ReadNewTransactionId(): don't read nextXid from disk -
this func doesn't allocate next xid, so ShmemVariableCache->nextXid
may be used (but GetNewTransactionId() must be called first).
3. vacuum.c: change elog(ERROR, "Child item....") to elog(NOTICE) -
this is not an ERROR; proper handling is just not implemented yet.
4. s_lock.c: double S_MAX_BUSY.
5. shmem.c:GetSnapshotData(): have to call ReadNewTransactionId()
_after_ SpinAcquire(ShmemIndexLock).
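(Generic illustration of point 5, using a pthread mutex and invented names
instead of SpinAcquire/ShmemIndexLock: the boundary xid must be read only
after the lock protecting the snapshot is taken.)

#include <pthread.h>
#include <stdio.h>

/* Invented stand-ins for ShmemIndexLock and the shared next-xid value. */
static pthread_mutex_t snapshot_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int shared_next_xid = 100;

static unsigned int read_next_xid(void)
{
    /* like ReadNewTransactionId(): read only, never allocates an xid */
    return shared_next_xid;
}

static void get_snapshot_data(void)
{
    unsigned int xmax;

    pthread_mutex_lock(&snapshot_lock);
    /*
     * Read the boundary xid while holding the lock.  Reading it before
     * acquiring the lock would let another backend start a transaction
     * that this snapshot then mis-classifies.
     */
    xmax = read_next_xid();
    /* ... gather the rest of the snapshot under the same lock ... */
    pthread_mutex_unlock(&snapshot_lock);

    printf("snapshot xmax = %u\n", xmax);
}

int main(void)
{
    get_snapshot_data();
    return 0;
}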
I have updated my contrib code for version 6.5. In the attachment you will
find the directories array, datetime, miscutil, string, tools and userlocks
which replace the corresponding directories under contrib.
In contrib/tools you will find some development scripts which I use while
hacking the sources. I hope they will be useful for some other people.
I have also added a contrib/Makefile which tries to compile and install all
the contribs. Unfortunately many of them don't have a Makefile or don't
compile cleanly.
--
Massimo Dal Zotto