(notify/SI-overrun interrupt) while it is in the process of doing proc_exit,
it is possible for Async_NotifyHandler() to try to start a transaction
when one is already running. This leads to Assert failures or worse. I think
it can only occur when frontend synchronization is lost
(ie, the elog(FATAL) in SocketBackend() fires), but that is a standard
occurrence after error during COPY. In any case, I have seen this
failure occur during regression tests, so it is definitely possible.
non-standard regression test, and adds standard installcheck regression test
support.
The test creates a second database (regression_slave) and drops it again, in
order to avoid the cheesiness of connecting back to the same database ;-)
Joe Conway
contrib/tablefunc. Specifically it replaces the use of VIEWs (for needed
composite type creation) with use of CREATE TYPE. It also performs GRANT
EXECUTE ON FUNCTION foo() TO PUBLIC for all of the created functions. There
was also a cosmetic change to two regression files.
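As a rough illustration of the new pattern (the type and function names here
are hypothetical, not the actual tablefunc definitions), a composite return
type is now declared with CREATE TYPE instead of a placeholder VIEW, and
EXECUTE is granted to PUBLIC:

CREATE TYPE my_crosstab_result AS (row_name text, category_1 text, category_2 text);

CREATE FUNCTION my_crosstab(text)
RETURNS SETOF my_crosstab_result
AS 'MODULE_PATHNAME', 'my_crosstab'
LANGUAGE C STABLE STRICT;

GRANT EXECUTE ON FUNCTION my_crosstab(text) TO PUBLIC;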
Joe Conway
and fixed a bug found by the regression test
Modified Files:
jdbc/org/postgresql/jdbc1/AbstractJdbc1Statement.java
jdbc/org/postgresql/test/jdbc2/Jdbc2TestSuite.java
Added Files:
jdbc/org/postgresql/test/jdbc2/ServerPreparedStmtTest.java
fmgr.h - accessing fcinfo directly is discouraged, but there is no
macro to get the number of arguments passed to the function. Checking
the number of arguments is often useful when you have a function which
can be called like:
func('arg');
func(null);
func();
all mapping to the same C function.
The macro has a function-like appearance to match the other PG_*
macros.
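As a sketch of that situation at the SQL level (all names hypothetical), the
overloads might be declared to point at the same C entry point:

CREATE FUNCTION func()     RETURNS text AS 'MODULE_PATHNAME', 'func' LANGUAGE C;
CREATE FUNCTION func(text) RETURNS text AS 'MODULE_PATHNAME', 'func' LANGUAGE C;

Inside the single C function, the only way to tell these call forms apart is
to check how many arguments were actually passed, which is what the new macro
exposes without poking at fcinfo directly.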
Lee Kindness.
attached to the same message with the Earth Distance patches.
Recent changes include changing the subscript in one place I missed
in the previous bugfix patch, and a couple of added regression tests, which
should help catch this mistake if it reappears.
I also put in a limit of 100 dimensions in cube_large and cube_in to
prevent making it easy to create very large cubes. Changing one define
in cubedata.h will raise the limit if someone needs more dimensions.
Bruno Wolff III
> arrays using largely-similar code. But while intarray fails its
> regression test, I find ltree still passes. So I'm confused about what
> that code is really doing and don't want to touch it.
Please, apply attached patch, it solves the problem.
Teodor Sigaev
>
>>::sigh:: Is it me or does it look like all
>>of pl/pgsql is schema un-aware (ie, all of the declarations). -sc
>
>
> Yeah. The group of routines parse_word, parse_dblword, etc that are
> called by the lexer certainly all need work. There are some
> definitional issues to think about, too --- plpgsql presently relies on
> the number of names to give it some idea of what to look for, and those
> rules are probably all toast now. Please come up with a sketch of what
> you think the behavior should be before you start hacking code.
Attached is a diff -c format proposal to fix this. I've also attached a short
test script. Seems to work OK and passes all regression tests.
Here's a breakdown of how I understand plpgsql's "Special word rules" -- I
think it illustrates the behavior reasonably well. New functions added by this
patch are plpgsql_parse_tripwordtype and plpgsql_parse_dblwordrowtype:
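As a rough sketch (the schema, table, and column names below are made up), the
kind of schema-qualified declarations these new entry points are presumably
meant to handle looks like:

CREATE FUNCTION lookup_first_id() RETURNS integer AS '
DECLARE
    -- three-part %TYPE reference, i.e. schema.table.column%TYPE
    v_id  myschema.mytable.id%TYPE;
    -- two-part %ROWTYPE reference, i.e. schema.table%ROWTYPE
    v_row myschema.mytable%ROWTYPE;
BEGIN
    SELECT INTO v_row * FROM myschema.mytable LIMIT 1;
    v_id := v_row.id;
    RETURN v_id;
END;
' LANGUAGE plpgsql;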
Joe Conway
> Hannu Krosing wrote:
>
>> It seems that my last mail on this did not get through to the list
>> ;(
>>
>> Please consider renaming the new builtin function
>> split(text,text,int)
>>
>> to something else, perhaps
>>
>> split_part(text,text,int)
>>
>> (like date_part)
>>
>> The reason for this request is that 3 most popular scripting
>> languages (perl, python, php) all have also a function with similar
>> signature, but returning an array instead of single element and the
>> (optional) third argument is limit (maximum number of splits to
>> perform)
>>
>> I think that it would be good to have similar function in (some
>> future release of) postgres, but if we now let in a function with
>> same name and arguments but returning a single string instead an
>> array of them, then we will need to invent a new and not so easy to
>> recognise name for the "real" split function.
>>
>
> This is a good point, and I'm not opposed to changing the name, but
> it is too bad your original email didn't get through before beta1 was
> rolled. The change would now require an initdb, which I know we were
> trying to avoid once beta started (although we could change it
> without *requiring* an initdb I suppose).
>
> I guess if we do end up needing an initdb for other reasons, we
> should make this change too. Any other opinions? Is split_part an
> acceptable name?
>
> Also, if we add a todo to produce a "real" split function that
> returns an array, similar to those languages, I'll take it for 7.4.
No one commented on the choice of name, so the attached patch changes
the name of split(text,text,int) to split_part(text,text,int) per
Hannu's recommendation above. This can be applied without an initdb if
current beta testers are advised to run:
update pg_proc set proname = 'split_part' where proname = 'split';
in case they want to use this function. Regression and doc fixes are
also included in the patch.
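For reference, the renamed function behaves like the old split(text,text,int):
the third argument is the 1-based field number. A quick example:

test=# SELECT split_part('usr~local~bin', '~', 2);
 split_part
------------
 local
(1 row)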
Joe Conway
> be a useful function for many users. However, I found the fact that
> if connectby_tree has the following data, connectby() tries to search the end
> of roots without knowing that the relations are infinite (-5-9-10-11-9-10-11-).
> I hope connectby() supports a check routine to find infinite relations.
>
>
> CREATE TABLE connectby_tree(keyid int, parent_keyid int);
> INSERT INTO connectby_tree VALUES(1,NULL);
> INSERT INTO connectby_tree VALUES(2,1);
> INSERT INTO connectby_tree VALUES(3,1);
> INSERT INTO connectby_tree VALUES(4,2);
> INSERT INTO connectby_tree VALUES(5,2);
> INSERT INTO connectby_tree VALUES(6,4);
> INSERT INTO connectby_tree VALUES(7,3);
> INSERT INTO connectby_tree VALUES(8,6);
> INSERT INTO connectby_tree VALUES(9,5);
>
> INSERT INTO connectby_tree VALUES(10,9);
> INSERT INTO connectby_tree VALUES(11,10);
> INSERT INTO connectby_tree VALUES(9,11); <-- infinite
>
The attached patch fixes the infinite recursion bug in
contrib/tablefunc/tablefunc.c:connectby found by Masaru Sugawara.
test=# SELECT * FROM connectby('connectby_tree', 'keyid',
'parent_keyid', '2', 4, '~') AS t(keyid int, parent_keyid int, level
int, branch text);
keyid | parent_keyid | level | branch
-------+--------------+-------+-------------
2 | | 0 | 2
4 | 2 | 1 | 2~4
6 | 4 | 2 | 2~4~6
8 | 6 | 3 | 2~4~6~8
5 | 2 | 1 | 2~5
9 | 5 | 2 | 2~5~9
10 | 9 | 3 | 2~5~9~10
11 | 10 | 4 | 2~5~9~10~11
(8 rows)
test=# SELECT * FROM connectby('connectby_tree', 'keyid',
'parent_keyid', '2', 5, '~') AS t(keyid int, parent_keyid int, level
int, branch text);
ERROR: infinite recursion detected
I implemented it by checking the branch string for repeated keys
(whether or not the branch is returned). The performance hit was pretty
minimal -- about 1% for a moderately complex test case (220000 record
table, 9 level tree with 3800 members).
Joe Conway
> where more than one schema is in use, because it doesn't trouble to
> schema-qualify table names.
Ok, the following patch should solve this concern. It also tries to
connect as few times as possible (the previous one would connect once
per table plus once per database; this one connects twice per
database).
Alvaro Herrera
for contrib/intarray.
The cause was that the library uses its own function, new_intArrayType,
to construct a new array, and that function did not set the elemtype
attribute of the new array struct.
Joe Conway
that are explicitly JOINed are not considered dependencies unless they
are actually used in the query: mere presence in the joinaliasvars
list of a JOIN RTE doesn't count as being used. The patch touches
a number of files because I needed to generalize the API of
query_tree_walker to support an additional flag bit, but the changes
are otherwise quite small.
- Properly drop tables in jdbc regression tests with cascade for 7.3
- problem with Statement.execute() and executeUpdate() not clearing binds
- problem with ResultSet not correctly handling default encoding
- changes to correctly support show transaction isolation level in 7.3
- changed DatabaseMetaDataTest to handle differences in FK names in 7.3
- better fix for dynamically checking server NAME data length
(With the fixes above the jdbc regression tests pass on jdbc2 and jdbc3
against both a 7.2 and 7.3 server)
Patches submitted by David Wall (d.wall@computer.org):
- problem with getBlob when largeobject oid is null
- improvements to BlobOutputStream
Patch submitted by Haris Peco (snpe@snpe.co.yu):
- problem with callable statement not supporting prepared statement methods
Modified Files:
jdbc/org/postgresql/Driver.java.in
jdbc/org/postgresql/jdbc1/AbstractJdbc1Connection.java
jdbc/org/postgresql/jdbc1/AbstractJdbc1DatabaseMetaData.java
jdbc/org/postgresql/jdbc1/AbstractJdbc1Statement.java
jdbc/org/postgresql/jdbc2/AbstractJdbc2ResultSet.java
jdbc/org/postgresql/jdbc2/AbstractJdbc2Statement.java
jdbc/org/postgresql/jdbc2/Jdbc2ResultSet.java
jdbc/org/postgresql/jdbc3/Jdbc3ResultSet.java
jdbc/org/postgresql/largeobject/BlobOutputStream.java
jdbc/org/postgresql/largeobject/LargeObject.java
jdbc/org/postgresql/test/TestUtil.java
jdbc/org/postgresql/test/jdbc2/DatabaseMetaDataTest.java
jdbc/org/postgresql/test/jdbc2/UpdateableResultTest.java
jdbc/org/postgresql/test/jdbc2/optional/BaseDataSourceTest.java
jdbc/org/postgresql/test/jdbc2/optional/ConnectionPoolTest.java
jdbc/org/postgresql/test/jdbc2/optional/SimpleDataSourceTest.java