NULL key pointer, indicating that the existing scan key should be reused.
This behavior isn't used yet but will be needed for my planned fix to
the keys_are_unique code.
them as arrays of the internal datatype. This requires treating the
stavalues columns as 'anyarray' rather than 'text[]', which is not 100%
kosher but seems to work fine for pg_statistic's purposes.
Perhaps in the future 'anyarray' will be allowed more generally.
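For illustration (not part of the patch; 'mytable' stands in for any
analyzed table), the arrays now come back in the column's own element type:
SELECT staattnum, stavalues1
FROM pg_statistic
WHERE starelid = 'mytable'::regclass;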
refers to a non-DISTINCT output column of a DISTINCT ON subquery, or
if it refers to a function-returning-set, we cannot push it down.
But the old implementation refused to push down *any* quals if the
subquery had any such 'dangerous' outputs. Now we just look at the
output columns actually referenced by each qual expression. More code
than before, but probably no slower since we don't make unnecessary checks.
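For instance (a hypothetical query; srf() stands in for any
function returning a set):
SELECT *
FROM (SELECT DISTINCT ON (a) a, b, srf(b) AS s FROM t) AS sub
WHERE a = 42   -- references only the DISTINCT ON column: safe to push down
  AND s > 0;   -- references the set-returning output: must stay above
The old code would have pushed down neither qual here; the new code
pushes down the first one.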
some of the algorithms for higher functions. I see about a factor of ten
speedup on the 'numeric' regression test, but it's unlikely that that test
is representative of real-world applications.
initdb forced due to change of on-disk representation for NUMERIC.
effectively used to mean a default value that could also be spelled
out explicitly. (ACLs behave that way, and useconfig/datconfig
do too IIRC.)
It's a bit of a hack, but it saves table space and backend code ---
without this convention the default would have to be inserted "manually"
since we have no mechanism to supply defaults when C code is forming a
new catalog tuple.
I'm inclined to leave the code alone. But Alvaro is right that it'd be
good to point out the 'infinity' option in the CREATE USER and ALTER
USER man pages. (Doc patch please?)
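For reference, the equivalence in question (a hypothetical session):
CREATE USER alice;                          -- valuntil stored as NULL
ALTER USER alice VALID UNTIL 'infinity';    -- same effect, spelled out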
Alvaro Herrera
join is defined as:
from_item [ NATURAL ] join_type from_item
[ ON join_condition | USING ( join_column_list ) ]
However, if the join_type is an INNER or OUTER join, an ON, USING, or
NATURAL clause *must* be specified (it's not optional, as that segment
of the docs suggests).
I'm not exactly sure what the best way to fix this is, so I've attached
a patch adding a FIXME comment to the relevant section of the SGML. If
anyone has any ideas on the proper way to outline join syntax, please
speak up.
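For example (illustrative queries against two hypothetical tables a and b):
SELECT * FROM a CROSS JOIN b;            -- no join condition allowed
SELECT * FROM a JOIN b ON a.id = b.id;   -- INNER JOIN: condition required
SELECT * FROM a LEFT JOIN b USING (id);  -- OUTER JOIN: likewise
SELECT * FROM a JOIN b;                  -- syntax error: no condition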
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
changed as per discussion on the patches list).
This version should be a good bit better. It addresses all the issues
pointed out by Neil Conway. Vacuum and Analyze are now handled
separately. It now monitors for xid wraparound. The number of database
connections and queries has been significantly reduced compared to the
previous version. I have moved it from bin to contrib. More detail on
the changes is in the TODO file.
I have not tested the xid wraparound code as I have to let my AthlonXP
1600 run select 1 in a tight loop for approx. two days in order to
perform the required 500,000,000 xacts.
Matthew T. O'Connor
default datestyle. This is not portable between installations.
This patch sets DATESTYLE to ISO at the start of a pg_dump, so that the
dates written into the dump will be restorable onto any database,
regardless of how its default datestyle is set.
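The dump would then begin with something along these lines (a sketch;
the exact spelling pg_dump emits may differ):
SET DATESTYLE = ISO;
-- ... dump contents, with dates emitted like '2003-03-01 00:00:00'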
Oliver Elphick
Add ALTER SEQUENCE to modify min/max/increment/cache/cycle values
Also updated the CREATE SEQUENCE docs to mention NO MINVALUE and NO MAXVALUE.
New Files:
doc/src/sgml/ref/alter_sequence.sgml
src/test/regress/expected/sequence.out
src/test/regress/sql/sequence.sql
ALTER SEQUENCE is NOT transactional. It behaves similarly to setval().
It matches the proposed SQL200N spec, as well as Oracle in most ways --
Oracle lacks RESTART WITH for some strange reason.
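A sketch of the new command, using a hypothetical sequence (any subset
of the clauses may be given):
ALTER SEQUENCE my_seq
    INCREMENT BY 5
    MINVALUE 10 MAXVALUE 10000
    RESTART WITH 100
    CACHE 20
    CYCLE;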
--
Rod Taylor <rbt@rbt.ca>
version of crosstab. This fixes a major deficiency in real-world use of
the original version. Easiest to understand with an illustration:
Data:
-------------------------------------------------------------------
select * from cth;
id | rowid | rowdt | attribute | val
----+-------+---------------------+----------------+---------------
1 | test1 | 2003-03-01 00:00:00 | temperature | 42
2 | test1 | 2003-03-01 00:00:00 | test_result | PASS
3 | test1 | 2003-03-01 00:00:00 | volts | 2.6987
4 | test2 | 2003-03-02 00:00:00 | temperature | 53
5 | test2 | 2003-03-02 00:00:00 | test_result | FAIL
6 | test2 | 2003-03-02 00:00:00 | test_startdate | 01 March 2003
7 | test2 | 2003-03-02 00:00:00 | volts | 3.1234
(7 rows)
Original crosstab:
-------------------------------------------------------------------
SELECT * FROM crosstab(
'SELECT rowid, attribute, val FROM cth ORDER BY 1,2',4)
AS c(rowid text, temperature text, test_result text,
test_startdate text, volts text);
rowid | temperature | test_result | test_startdate | volts
-------+-------------+-------------+----------------+--------
test1 | 42 | PASS | 2.6987 |
test2 | 53 | FAIL | 01 March 2003 | 3.1234
(2 rows)
Hashed crosstab:
-------------------------------------------------------------------
SELECT * FROM crosstab(
'SELECT rowid, attribute, val FROM cth ORDER BY 1',
'SELECT DISTINCT attribute FROM cth ORDER BY 1')
AS c(rowid text, temperature int4, test_result text,
test_startdate timestamp, volts float8);
rowid | temperature | test_result | test_startdate | volts
-------+-------------+-------------+---------------------+--------
test1 | 42 | PASS | | 2.6987
test2 | 53 | FAIL | 2003-03-01 00:00:00 | 3.1234
(2 rows)
Notice that the original crosstab slides data over to the left in the
result tuple when it encounters missing data. In order to work around
this you have to make your source SQL do all sorts of contortions
(cartesian join of distinct rowid with distinct attribute; left join
that back to the real source data). The new version avoids this by
building a hash table using a second distinct attribute query.
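For the cth table above, that workaround would look roughly like this
(a sketch, not taken from the patch):
SELECT p.rowid, p.attribute, c.val
FROM (SELECT r.rowid, a.attribute
      FROM (SELECT DISTINCT rowid FROM cth) r,
           (SELECT DISTINCT attribute FROM cth) a) p
LEFT JOIN cth c ON c.rowid = p.rowid AND c.attribute = p.attribute
ORDER BY 1, 2;
This manufactures a row for every (rowid, attribute) pair, so the
original crosstab sees no gaps.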
The new version also allows for "extra" columns (see the README) and
allows the result columns to be coerced into differing datatypes if they
are suitable (as shown above).
In testing a "real-world" data set (69 distinct rowids, 27 distinct
categories/attributes, multiple missing data points) I saw about a
5-fold improvement in execution time (from about 2200 ms old, to 440 ms
new).
I left the original version intact because: 1) backward compatibility,
2) it is probably slightly faster if you know that you have no missing
attributes.
README and regression test adjustments included. If there are no
objections, please apply.
Joe Conway
now, my changes seem to work. Some possible minor bugs got squished
on the way but I can't be sure without more feedback from people who
really put the code to the test.
The new patch mostly simplifies variable handling and reduces code
duplication. Changes in the command parser eliminate some redundant
variables (boolean state + depth counter), replace some
"else if" constructs with switches, and so on. It is meant to be
applied together with my previous patch, although I hope they don't
conflict; I went back to the CVS version for this one.
One more thing I thought should perhaps be changed: an IGNOREEOF
value of n will ignore only n-1 EOFs. I didn't want to touch this
for fear of breaking existing applications, but it does seem a tad
illogical.
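To illustrate the quirk (a hypothetical psql session):
\set IGNOREEOF 3
-- Only the first two EOFs (Ctrl-D) are ignored; the third one
-- terminates psql, despite the value being 3.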
Jeroen T. Vermeulen
changes to the SQL to retrieve attributes for older versions of Postgres are
probably wise. Also, please make sure that I have mapped the storage types
to the correct storage names, as this is relatively poorly documented.
I think that this patch might need to be considered for back-porting to
7.3.3 since at the moment, people will be losing valuable information after
upgrades.
Will dump:
CREATE TABLE test (
a text,
b text,
c text,
d text
);
ALTER TABLE ONLY test ALTER COLUMN a SET STATISTICS 55;
ALTER TABLE ONLY test ALTER COLUMN a SET STORAGE PLAIN;
ALTER TABLE ONLY test ALTER COLUMN b SET STATISTICS 1000;
ALTER TABLE ONLY test ALTER COLUMN c SET STORAGE EXTERNAL;
ALTER TABLE ONLY test ALTER COLUMN d SET STORAGE MAIN;
Christopher Kings-Lynne
what is possible using integer-datetime timestamps. It does attempt
to exercise the maximum allowable timestamp range.
Also included is a small error check, applied when converting a timestamp
from external to internal format, that prevents out-of-range timestamps
from being entered.
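With the check in place, an out-of-range literal is now rejected up
front (illustrative; exact error text may vary):
SELECT timestamp without time zone 'Jan 1, 294277';
ERROR:  timestamp out of range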
Files patched:
Index: src/backend/utils/adt/timestamp.c
Added range check to prevent out of range timestamps
from being used.
Index: src/test/regress/sql/horology.sql
Index: src/test/regress/expected/horology-no-DST-before-1970.out
Index: src/test/regress/expected/horology-solaris-1947.out
Limited range of timestamps being checked to
Jan 1, 4713 BC to Dec 31, 294276
In creating this patch, I have seen some definite problems with integer
timestamps and how they react when used near their limits. For example,
the following statement gives the correct result:
SELECT timestamp without time zone 'Jan 1, 4713 BC'
+ interval '109203489 days' AS "Dec 31, 294276";
However, this statement, which is the logical inverse of the above,
gives incorrect results:
SELECT timestamp without time zone '12/31/294276'
- timestamp without time zone 'Jan 1, 4713 BC' AS "109203489 Days";
John Cochran
7.3.2). It removes some code duplication and #ifdeffing, and some
unstructured ugliness such as tacky breaks and an unneeded continue.
It also breaks up a large function into smaller functions, reduces
required nesting levels, and kills a variable or two.
Jeroen T. Vermeulen
patch fixes it -- but this patch doesn't contain tests or docs fixes; I
will send those later.
Fixed outputs:
select to_char(x, '9999.999') as x,
to_char(x, 'S9999.999') as s,
to_char(x, 'SG9999.999') as sg,
to_char(x, 'MI9999.999') as mi,
to_char(x, 'PL9999.999') as pl,
to_char(x, 'PLMI9999.999') as plmi,
to_char(x, '9999.999SG') as sg2,
to_char(x, '9999.999PL') as pl2,
to_char(x, '9999.999MI') as mi2 from num;
Karel Zak
> >
> > - Add check in pg_dump to see if the value returned is the max /min
> > values and replace with NO MAXVALUE, NO MINVALUE.
> >
> > - Change START and INCREMENT to use START WITH and INCREMENT BY syntax.
> > This makes it a touch easier to port to other databases with sequences
> > (Oracle). PostgreSQL supports both syntaxes already.
>
> + char bufm[100],
> + bufx[100];
>
> This seems to be an arbitrary size. Why not set it to the actual maximum
> length?
>
> Also:
>
> + snprintf(bufm, 100, INT64_FORMAT, SEQ_MINVALUE);
> + snprintf(bufx, 100, INT64_FORMAT, SEQ_MAXVALUE);
>
> sizeof(bufm), sizeof(bufx) is probably the more
> maintenance-friendly/standard way to do it.
I changed the code to use sizeof - but will wait for a response from
Peter before changing the size. The sequence code consistently uses 100
for this purpose.
Rod Taylor <rbt@rbt.ca>