This updated patch passes the offending token to the error message more
efficiently (per your suggestion of using scanbuf). The new patch does the
same as before:
template1=# select * frum pg_class;
ERROR: parser: parse error at or near "frum" at character 10
It also implements Tom's suggestion:
template1=# select * from pg_class where\g
ERROR: parser: parse error at end of input
Gavin Sherry
This patch is an updated version of the lock listing patch. I've made
the following changes:
- write documentation
- wrap the SRF in a view called 'pg_locks': all user-level
access should be done through this view
- re-diff against latest CVS
One thing I chose not to do is adapt the SRF to use the anonymous
composite type code from Joe Conway. I'll probably do that eventually,
but I'm not really convinced it's a significantly cleaner way to
bootstrap SRF builtins than the method this patch uses (of course, it
has other uses...)
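For reference, a quick sanity check of the view from psql might look like
the following (illustrative only; the exact column set depends on the patch):
template1=# select relation, database, pid, mode, granted from pg_locks;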
Neil Conway
Every time I do PQconsumeInput (when the backend channel gets readable) I
check the return value (0 == error) and generate a notification manually
(the fixed string "connection_closed"), passing it to the Tcl event queue.
The only other thing I had to do was comment out the removal of all pending
events in PgStopNotifyEventSource whenever the connection was unexpectedly
closed (so the manually generated event will not be deleted).
A broken backend connection triggers a notify event to the client (fixed
notification string "connection_closed") so that proper action can be taken,
such as switching to another database server. Remember that this is event
driven: if you have applications that hold idle database connections most of
the time, you'll get immediate feedback about a dying server. Upon connecting
to the server, register a listener (pg_listen) for the notify event
"connection_closed", and whenever the backend crashes (which it does in very,
very rare cases) you get event-driven recovery (of course, the Tcl event loop
has to be processed). Issuing a notification "connection_closed" on a still
working database could also be used for switching to another db server (which
I've actually implemented right now).
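At the libpq level, the mechanism could look roughly like this (a minimal C
sketch, not the actual libpgtcl code; queue_notify_event is a hypothetical
stand-in for the Tcl event-queue call):
#include <stdio.h>
#include <libpq-fe.h>

/* Stand-in for handing an event name to the Tcl event queue. */
static void
queue_notify_event(const char *name)
{
    printf("notify event: %s\n", name);
}

/* Called whenever the backend socket becomes readable. */
static void
handle_readable(PGconn *conn)
{
    PGnotify *n;

    if (PQconsumeInput(conn) == 0)
    {
        /* 0 == error: the connection is gone, so synthesize the
         * fixed "connection_closed" notification manually. */
        queue_notify_event("connection_closed");
        return;
    }

    /* Normal case: forward any real NOTIFY events. */
    while ((n = PQnotifies(conn)) != NULL)
    {
        queue_notify_event(n->relname);
        PQfreemem(n);
    }
}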
Gerhard Hintermayer
Modify pg_dump to dump foreign key constraints as such, rather than as
sets of triggers. Also modify psql \d command to show foreign key
constraints as such and hide the triggers. pg_get_constraintdef()
function added to backend to support these. From Rod Taylor, code
review and some editorialization by Tom Lane.
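As an illustration (not from the original message), the new function can be
called directly against pg_constraint, e.g.:
template1=# select conname, pg_get_constraintdef(oid) from pg_constraint;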
<
> * Prevent mismatch of frontend/backend encodings from causing bytea
> data to be interpreted as encoded strings
512a514,515
> * Fix glibc's mktime() to handle pre-1970's dates
>
> * -Improve control over user privileges, including table creation
> * -Add PGPASSWORDFILE environment variable or ~/.pgpass to store
> o -Compile under jdk 1.4
> There's no longer a separate call to heap_storage_create in that routine
> --- the right place to make the test is now in the storage_create
> boolean parameter being passed to heap_create. A simple change, but
> it passeth patch's understanding ...
Thanks.
Attached is a patch against cvs tip as of 8:30 PM PST or so. Turned out
that even after fixing the failed hunks, there was a new spot in
bufmgr.c which needed to be fixed (related to temp relations;
RelationUpdateNumberOfBlocks). But thankfully the regression test code
caught it :-)
Joe Conway
The pg.py and syscat.py scripts were both modified. pg.py uses it to cache a
list of pks (which it seemingly does for every db connection) and various
attributes. syscat uses it to walk the list of system tables and query the
various attributes from these tables.
In both cases, it seemingly makes sense to apply what you've requested.
Greg Copeland
I noticed this bug had been reported previously but never saw a fix offered
up. Since I'm gearing up to use Postgres and Python soon, I figured I'd try
my hand at getting this sucker addressed. Apologies if this has already been
plugged; I looked in the archives and never saw a response.
At any rate, I must admit I don't think I fully understand the
implications of some of the changes I made, even though they appear to be
straightforward. We all know the devil is in the details. Anyone more
knowledgeable is requested to review my changes. :(
I also updated the advanced.py script in a somewhat nonsensical fashion
to make use of an int8 field in an effort to test this change. It seems
to run okay; however, this is by no means an exhaustive test, so it's
possible that a bumpy road may lie ahead for some. On the other hand,
overflows (hopefully) previously lurked in the long -> int conversion.
Greg Copeland