Typo in src/backend/libpq/be-secure.c?
Long Description
In src/backend/libpq/be-secure.c, secure_write() calls secure_read() on
SSL_ERROR_WANT_WRITE instead of calling secure_write() again. Perhaps
this is a typo?
Sergey N. Yatskevich (syatskevich@n21lab.gosniias.msk.ru)
page when it's read in, per pghackers discussion around 17-Feb. Add a
GUC variable zero_damaged_pages that causes the response to be a WARNING
followed by zeroing the page, rather than the normal ERROR; this is per
Hiroshi's suggestion that there needs to be a way to get at the data
in the rest of the table.
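For instance, a salvage session might look like this (the table name is
hypothetical):
SET zero_damaged_pages = on;   -- WARNING plus a zeroed page instead of ERROR
SELECT * FROM damaged_table;   -- read whatever the rest of the table holds
SET zero_damaged_pages = off;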
include output that varies depending on the Python build one is
running. Basically, the order of keys in a dictionary is
non-deterministic, and that part of the test fails for me regularly.
I rewrote the test to work around this problem, and include a patch
file with that change and the change to the expected output as well.
Mike Meyer
Example:
test=# \d test
Table "public.test"
Column | Type | Modifiers
--------+---------+-----------
a | integer | not null
Indexes:
"test_pkey" PRIMARY KEY btree (a)
Check Constraints:
"$2" CHECK (a > 1)
Foreign Key Constraints:
"$1" FOREIGN KEY (a) REFERENCES parent(b)
Rules:
myrule AS ON INSERT TO test DO INSTEAD NOTHING
Triggers:
"asdf asdf" AFTER INSERT OR DELETE ON test FOR EACH STATEMENT EXECUTE
PROCEDURE update_pg_pwd_and_pg_group(),
mytrigger AFTER INSERT OR DELETE ON test FOR EACH ROW EXECUTE PROCEDURE
update_pg_pwd_and_pg_group()
I have minimised the double-quoting of identifiers as much as I easily
could, and when I have time to work on it I will submit another patch that
uses a 'fmtId' function to determine the quoting exactly.
I think it's a significant improvement in legibility...
Obviously the table example above is slightly degenerate in that not many
tables in production have heaps of (non-constraint) triggers and rules.
Christopher Kings-Lynne
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code to the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers (see the sketch after this list).
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
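As a usage sketch, assuming the cross-transaction tuple store described
above is exposed as WITH HOLD on DECLARE CURSOR (the opening of this note
is cut off, so that spelling is an assumption) and using a hypothetical
table name:
BEGIN;
DECLARE c NO SCROLL CURSOR WITH HOLD FOR SELECT * FROM mytable;
COMMIT;           -- the cursor's tuple store must survive this boundary
FETCH 10 FROM c;  -- fetching still works outside the creating transaction
CLOSE c;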
Neil Conway
functions
* Document pg_conversion_is_visible() which was created in one of my
previous patches and didn't get documented for some reason
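For the record, a call might look like this (the conversion name is
hypothetical):
SELECT c.conname, pg_conversion_is_visible(c.oid)
  FROM pg_catalog.pg_conversion c
 WHERE c.conname = 'myconv';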
Christopher Kings-Lynne
The first cleans up a couple of minor errors and omissions
and adds tab completion support to more slash commands, e.g.
\dv.
The second is an attempt to add tab completion for schemas
and fully qualified relation names (e.g. public.mytable).
I think this covers the TODO-item:
"Allow psql to do table completion for SELECT * FROM schema_part and table
completion for SELECT * FROM schema_name."
This happens via union selects querying:
- relation_name in current search path;
- schema_name;
- schema.relation_name
matching the current input string. (A simplified version of the query
appears after the examples below.)
E.g.:
SELECT p[TAB]
will produce a list of all appropriate relation names in the current search
path which begin with 'p', and also all schema names which begin with 'p';
\d pub[TAB]
will produce any relation names in the current search path and also
any schema names beginning with 'pub';
\d public.[TAB]
will produce a list of all relations in the schema 'public';
\d public.my[TAB]
produces all relation names beginning with 'my' in schema 'public'.
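A simplified sketch of such a union select for the input 'p' (not the
exact query the patch uses; the pg_catalog exclusion noted below is
omitted):
SELECT c.relname
  FROM pg_catalog.pg_class c
 WHERE c.relname LIKE 'p%'
   AND pg_catalog.pg_table_is_visible(c.oid)
UNION
SELECT n.nspname
  FROM pg_catalog.pg_namespace n
 WHERE n.nspname LIKE 'p%'
UNION
SELECT n.nspname || '.' || c.relname
  FROM pg_catalog.pg_class c
  JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
 WHERE n.nspname || '.' || c.relname LIKE 'p%';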
It seems to work for me; comments and suggestions, particularly regarding
the coding and queries, are very welcome.
Note that table, index, view and sequence relations in the 'pg_catalog'
namespace are excluded even though they are in the current search path. I
found that not doing this produced annoying behaviour when expanding names
beginning with 'p'. People who work with system
tables a lot may not like this though; I can look for another solution
if necessary.
Ian Barwick
x[42] := whatever;
The facility is pretty primitive because it doesn't do array slicing and
it has the same semantics as array update in SQL (array must already
be non-null, etc). But it's a start.
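A minimal example of the new assignment form (function and variable names
are illustrative):
CREATE FUNCTION array_assign_demo() RETURNS integer[] AS '
DECLARE
    x integer[];
BEGIN
    x := ''{1,2,3}'';   -- the array must already be non-null
    x[2] := 42;         -- assign directly to one element
    RETURN x;
END;
' LANGUAGE plpgsql;
SELECT array_assign_demo();  -- returns {1,42,3}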
full privs), also updated the regression test for this case.
Modified Files:
jdbc/org/postgresql/jdbc1/AbstractJdbc1DatabaseMetaData.java
jdbc/org/postgresql/test/jdbc2/DatabaseMetaDataTest.java
NULL key pointer, indicating that the existing scan key should be reused.
This behavior isn't used yet but will be needed for my planned fix to
the keys_are_unique code.
them as arrays of the internal datatype. This requires treating the
stavalues columns as 'anyarray' rather than 'text[]', which is not 100%
kosher but seems to work fine for the purposes we need for pg_statistic.
Perhaps in the future 'anyarray' will be allowed more generally.
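As a quick illustration (requires appropriate privileges, and the output
depends on what has been analyzed), the anyarray columns can be inspected
directly:
SELECT a.attname, s.stavalues1
  FROM pg_statistic s
  JOIN pg_attribute a ON a.attrelid = s.starelid
                     AND a.attnum = s.staattnum
 LIMIT 5;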