current state of development...namely, we are on 2.0
NOTE:
BTW, there is also a check in postmaster which won't let you use an older
version of the database; it does this by checking the version number. The
version number of a database is stored in data/PG_VERSION (a plain ASCII file).
- Andrew
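A minimal sketch of such a check, for illustration only (this is not the
actual postmaster code; check_data_version, the buffer sizes, and the path
handling are all assumptions):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: refuse to start if data/PG_VERSION does not
     * match the version this server was built for. */
    static int
    check_data_version(const char *datadir, const char *expected)
    {
        char    path[1024];
        char    found[64];
        FILE   *fp;

        snprintf(path, sizeof(path), "%s/PG_VERSION", datadir);
        fp = fopen(path, "r");
        if (fp == NULL)
            return -1;                      /* no version file at all */
        if (fgets(found, sizeof(found), fp) == NULL)
        {
            fclose(fp);
            return -1;
        }
        fclose(fp);
        found[strcspn(found, "\n")] = '\0';
        return strcmp(found, expected) == 0 ? 0 : -1;
    }
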
attributes as Tcl arrays. The previous code had problems with some characters
used as delimiters by Tcl. The new code should be more robust.
By: Massimo Dal Zotto <dz@cs.unitn.it>
Async notifications received while a backend is in the middle of a begin/end
transaction block are lost by libpq when the final end command is issued.
The bug is in the routine PQexec of libpq. The routine throws away any
message from the backend when a message of type 'C' is received. This
type of message is sent when the result of a portal query command with
no tuples is returned, and unfortunately this is the case for the end command.
As all async notifications are sent only when the transaction is finished,
any that are received in the middle of a transaction are lost in the
libpq library. I added some tracing code to PQexec and this is the output:
Submitted by: Massimo Dal Zotto <dz@cs.unitn.it>
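For context, a libpq client normally picks up notifications by calling
PQnotifies() after each command. The following minimal sketch uses
present-day libpq names (PQconnectdb, PQfreemem), which differ slightly from
the 1.x API, and a hypothetical table name:

    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=test");
        PGnotify   *note;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            exit(1);
        }

        PQclear(PQexec(conn, "LISTEN mytable"));

        /* ... some other backend runs: NOTIFY mytable ... */

        /* After each command, drain any notifications libpq has kept.
         * The bug described above meant that notifications arriving inside
         * a begin/end block were discarded before they ever got here. */
        PQclear(PQexec(conn, "SELECT 1"));
        while ((note = PQnotifies(conn)) != NULL)
        {
            printf("notification on \"%s\" from backend %d\n",
                   note->relname, note->be_pid);
            PQfreemem(note);        /* older libpq used plain free() */
        }

        PQfinish(conn);
        return 0;
    }
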
execute an SQL function containing a utility command (create, notify, ...).
The bug is partly in the planner, which returns a number of plans different
from the number of commands if there are utility commands in the query, and
partly in the function executor, which assumes that all commands are normal
query commands and causes a SIGSEGV when trying to execute commands without a plan.
Submitted by: Massimo Dal Zotto <dz@cs.unitn.it>
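To illustrate the kind of function involved, here is a hypothetical example
issued through libpq; the function name, body, and table are made up, and the
CREATE FUNCTION spelling is only a sketch of the syntax of that era:

    #include "libpq-fe.h"

    /* Hypothetical example: an SQL function whose body mixes a utility
     * command (NOTIFY) with a normal query.  The planner produces a plan
     * only for the SELECT, while the executor used to expect one plan per
     * command and crashed on the plan-less NOTIFY. */
    void
    create_example_function(PGconn *conn)
    {
        PQclear(PQexec(conn,
            "CREATE FUNCTION touch_and_count() RETURNS int4 AS "
            "'NOTIFY watchers; SELECT count(*) FROM mytable' "
            "LANGUAGE 'sql'"));
    }
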
Here's a patch for Version 2 only. It just adds an Assert to catch some
inconsistencies in the catalog classes.
--
Bryan Henderson Phone 408-227-6803
San Jose, California
The problem is that the function arguments are not considered as possible key
candidates for an index scan, so only a sequential scan is possible inside
the body of a function. I have therefore made some patches to the optimizer
so that indices are now also used by functions. I have also moved the plan
debug message from pg_eval to pg_plan so that it is also printed for plans
generated for function execution. I also had to add an index rescan to the
executor because it ignored the parameters set in the execution state; they
were flagged as runtime variables in ExecInitIndexScan but then never used
by the executor, so the scan was always done with any key=1. Very odd.
This means that an index rescan is now done twice for each function execution
which uses an index: the first time when the index scan is initialized and
the second when the actual function arguments are finally available for
execution. I don't know what the cost of a double index scan is, but I
suppose it is in any case less than the cost of a full sequential scan, at least
for large tables. This is my patch; you must also add -DINDEXSCAN_PATCH in
Makefile.global to enable the changes.
Submitted by: Massimo Dal Zotto <dz@cs.unitn.it>
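As an illustration (hypothetical names, issued through libpq), a function like
the following should now be able to use an index on mytable(id) for the qual
on its argument instead of always scanning the whole table, provided
-DINDEXSCAN_PATCH is defined:

    #include "libpq-fe.h"

    /* Hypothetical example: with the patch, the qual "id = $1" inside the
     * function body can be satisfied by an index scan on mytable(id) once
     * the actual argument value becomes available at execution time. */
    void
    create_lookup_function(PGconn *conn)
    {
        PQclear(PQexec(conn,
            "CREATE FUNCTION lookup_name(int4) RETURNS text AS "
            "'SELECT name FROM mytable WHERE id = $1' "
            "LANGUAGE 'sql'"));
    }
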
The comparison routines for the text and char data types give incorrect results
if the input data contains characters greater than 127. As these routines
perform the comparison using signed char variables, all character codes
greater than 127 are interpreted as less than 0. These codes are used to
encode the ISO 8859 character sets.
The other text-like data types seem to work as expected, as they use unsigned
chars in comparisons.
Submitted by: Massimo Dal Zotto <dz@cs.unitn.it>
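The effect is easy to reproduce in isolation. The following toy program (not
the actual backend comparison routine) shows the sign problem and the
unsigned-char fix on platforms where plain char is signed:

    #include <stdio.h>

    /* Toy comparison, not the real backend routine: with plain (signed)
     * char, 0xE9 (e-acute in ISO 8859-1) compares as a negative value and
     * sorts before 'a'; casting through unsigned char fixes this. */
    int
    main(void)
    {
        char    a = (char) 0xE9;    /* iso8859-1 e-acute */
        char    b = 'a';

        printf("signed:   %d\n", (int) a - (int) b);            /* negative */
        printf("unsigned: %d\n", (int) (unsigned char) a
                                 - (int) (unsigned char) b);    /* positive */
        return 0;
    }
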
tree, instead of having include files all over the place...
Immediate goal: a 'config.h' file so that we can make the #ifdef's
used throughout the code more of a rarity as far as porting
is concerned.
conditions are always met. The patch can be applied to any version
of Postgres95 from 1.02 to 1.05. After applying the patch, queries
using indices on bpchar and varchar fields should (hopefully ;-) )
always return the same tuple set regardless of whether
indices are used or not.
Submitted by: Gerhard Reithofer <tbr_laa@AON.AT>
In a catalog class that has a "name" type attribute, UPDATEing of an
instance of that class may destroy all of the attributes of that
instance that are stored at or after the "name" attribute.
This is caused by the alignment value of the "name" type being set to
"double" in Class pg_type, but "integer" in Class pg_attribute.
Postgres constructs a tuple using double alignment, but interprets it
using integer alignment.
The fix is to change the alignment to integer in pg_type.
Note that this corrects the problem for new Postgres systems. Existing
databases already contain the error, and it can't easily be repaired because
this very bug prevents updating the class that contains it.
--
Bryan Henderson Phone 408-227-6803
San Jose, California
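To see why a writer/reader disagreement about the alignment of one attribute
shifts everything stored after it, here is a toy offset calculation;
NAMEDATALEN and the attribute layout are made up for illustration:

    #include <stdio.h>

    #define NAMEDATALEN 16          /* illustrative value only */

    /* Advance an offset to the next multiple of an attribute's alignment,
     * the way tuple-forming code does. */
    static long
    att_align(long off, long alignment)
    {
        return ((off + alignment - 1) / alignment) * alignment;
    }

    int
    main(void)
    {
        /* layout: an int4 attribute, then a "name", then another int4 */
        long    writer = att_align(4, 8) + NAMEDATALEN;  /* name aligned as double */
        long    reader = att_align(4, 4) + NAMEDATALEN;  /* name aligned as int    */

        printf("trailing attribute written at offset %ld, read at offset %ld\n",
               att_align(writer, 4), att_align(reader, 4));
        return 0;
    }
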
It adds a WITH OIDS option to the copy command, which allows
dumping and loading of oids.
If a copy command tries to load in an oid that is greater than
the current system max oid, the system max oid is incremented. No
checking is done to see if other backends are running and have cached
oids.
pg_dump, as its first step when using the -o (oid) option, will
copy in a dummy row to set the system max oid value so that as rows are
loaded in, they are certain to be lower than the system max oid.
pg_dump now creates indexes at the end to speed loading.
Submitted by: Bruce Momjian <maillist@candle.pha.pa.us>
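For illustration (hypothetical table and file names, issued here through
libpq), oids can now travel with the data:

    #include "libpq-fe.h"

    /* Hypothetical example: dump a table together with its oids and load
     * it back, relying on the new WITH OIDS option of copy. */
    void
    dump_and_reload(PGconn *conn)
    {
        PQclear(PQexec(conn, "COPY mytable WITH OIDS TO '/tmp/mytable.copy'"));
        PQclear(PQexec(conn, "COPY mytable WITH OIDS FROM '/tmp/mytable.copy'"));
    }
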
This presumably corrects a problem of initdb failing on systems that have
an awk that is sensitive to this.
--
Bryan Henderson Phone 408-227-6803
San Jose, California