The 'word' variable there is initialised from
the prs->words array, but immediately after,
that array may be reallocated, leaving
'word' pointing to freed memory.
tests) when using flex 2.5.31. The fix is to *not* try to use palloc
and pfree for allocations within the lexer; when you do that, the
yy_buffer_stack gets freed at inopportune times. The code is already
set up to do manual deallocation, so I see no particular advantage to
using palloc anyway.
This was the last piece of code that took it upon itself to reset the
random number sequence --- now we only have srandom() in postmaster start,
backend start, and explicit setseed() operations.
> patch for pg_autovacuum. This patch assumes that the stats system will
> be fixed so that all inserts, updates and deletes performed on shared
> tables regardless of what database those commands were executed from,
> will show up in the stats shown in each database.
I had to make a further change to remove the quotes from the 'last
ANALYZE' query so that it does not over-quote the relation name, so
there's a _little_ work left to get it to play well.
I have deployed it onto several boxes that should be doing some
vacuuming over the weekend, and it is now certainly hitting pg_ tables.
I would like to present a CVS-oriented patch; unfortunately, I had to
change the indentation patterns when editing some of it :-(. The
following _may_ be good; not sure...
Matthew T. O'Connor
Christopher Browne
> keywords for clarity.
Yeah, this is basically what I meant, sorry I didn't get to it quicker.
However, I tested it out a little and the patch you made doesn't work
because it produces commands like:
VACUUM ANALYZE "public.FooBar"
Which doesn't work, so I made my own patch that creates commands like:
VACUUM ANALYZE "public"."FooBar"
This allows for mixed case schema names as well as table names.
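For illustration, with a hypothetical mixed-case table:
CREATE TABLE "FooBar" (i int);
VACUUM ANALYZE "public.FooBar";    -- fails: the whole quoted string is one
                                   -- identifier naming a table called public.FooBar
VACUUM ANALYZE "public"."FooBar";  -- works: schema and table quoted separately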
Adam, can you please give this a test, as you are the person who caught
the bug in the first place?
Thanks,
Matthew T. O'Connor
ALTER TABLE mytable drop column last_column_of_table;
the timetravel trigger fails on UPDATE/DELETE with:
ERROR: parser: parse error at end of input
Here is the patch for this bug.
Böjthe Zoltán
getopt_long(). This is more or less the same problem as we saw earlier
with getaddrinfo() and struct addrinfo, and for the same reason: random
user-added libraries might contain the subroutine, but there's no
guarantee we will find the matching header files.
There is an option "-s oldname=newname", which changes the old field name of
the dbf-file to the new name in PostgreSQL. If the length of the new name is 0,
the field is skipped. But if you skip the first field of the dbf-file,
you get the wildest error messages from the backend.
dbf2pg loads the dbf-file via "COPY tablename FROM STDIN". If you skip the
first field, there is one \t too many in STDIN.
A fix could be a counter j=0 which is incremented only if a field is
actually imported (if (strlen(fields[h].db_name) > 0) j++;), and the \t is
printed only if j > 1 (that is, only if another field was already imported).
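To illustrate with a hypothetical table whose first dbf field is skipped:
the stream behind "COPY mytable FROM STDIN" should contain lines like
foo<TAB>bar
but the unpatched dbf2pg emits
<TAB>foo<TAB>bar
and the backend reads the leading tab as an extra empty column and
rejects the row.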
Another small bug, in the README:
-s start
Specify the first record-number in the xBase-file
we will insert.
should be
-e start
Specify the first record-number in the xBase-file
we will insert.
Thomas Behr
the new timetravel.c,
new timetravel.README (cut from spi/README and modified),
modified timetravel.sql.in
and modified timetravel.example.
Features:
- optionally 3 parameters for insert/update/delete user names
- works with CREATE UNIQUE INDEX ixxx ON xxx (unique_field, time_off)
  (the original version worked with such a unique index on 6.5.0-6.5.3
  and did not work on 7.3.2-7.3.3; before 6.5.0 and between 6.5.3 and
  7.3.2 I don't know; see the usage sketch after this list)
- get_timetravel(tablename) function for checking timetravel status
- the timetravel trigger does not change the oid of the active record
  (this is not entirely a good thing, because the old version
  automatically prevented parallel updates with "where oid=nnn")
Böjthe Zoltán
>>Sounds like all that's needed for your case. But to be complete, in
>>addition to changing tablefunc.c we'd have to:
>>1) come up with a new function call signature that makes sense and does
>>not cause backward compatibility problems for other people
>>2) make needed changes to tablefunc.sql.in
>>3) adjust the README.tablefunc appropriately
>>4) adjust the regression test for new functionality
>>5) be sure we don't break any of the old cases
>>
>>If you want to submit a complete patch, it would be gratefully accepted
>>-- for review at least ;-)
>
> Here's the patch, at least for steps 1-3
Nabil Sayegh
Joe Conway
- LIKE <subtable> [ INCLUDING DEFAULTS | EXCLUDING DEFAULTS ]
- Quick cleanup of analyze.c function prototypes.
- New non-reserved keywords (INCLUDING, EXCLUDING, DEFAULTS), SQL 200X
Opted not to extend for check constraints at this time.
As per the definition (it copies user-defined columns), OIDs are NOT
inherited.
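A minimal sketch of the new syntax (hypothetical tables):
CREATE TABLE parent (id int DEFAULT 0, note text);
CREATE TABLE child (LIKE parent INCLUDING DEFAULTS);
-- child gets columns id and note; id keeps its DEFAULT 0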
Doc and Source patches attached.
--
Rod Taylor <rbt@rbt.ca>
slave servers. I haven't tested it very well, so use at your own risk
(and I recommend against using it in production).
Basically, I have a central database server that has 4 summary tables
inside it replicated to a remote slave (these tables are for my mail
server authentication, so they are replicated to another server tuned
for many connections, so that I don't have postgres connections opened
straight to my back-end database server).
Unfortunately, I also wanted to implement a replication database server
for hot-backups. I realized, too late, that the replication process is
pretty greedy and will try to replicate all tables marked as a
"MasterAddTable".
To make a long story short, I made a patch to RServ.pm and Replicate
that allows you to specify, on the command line, a list of tables that
you want to replicate; it'll ignore all others.
I haven't finished, since this has to be integrated with CleanLog for
instance, but this should (and does) suffice for the moment.
I have yet to test it with two slaves, but at least my mail server
replication database now works (it was failing every time it tried to
replicate, for a variety of reasons).
Anyone have any suggestions on how to improve on this? (Or, if someone
more familiar with this code wants to take the ball and run with it,
you're welcome to.)
--
Michael A Nachbaur <mike@nachbaur.com>
- Don't attempt to convert partial or expressional unique indexes
- Don't attempt to convert unique indexes based on non-default
opclasses
- Untested prevention of conversion of non-btree unique indexes.
Untested because postgresql doesn't allow hash, gist, or rtree
based indexes to be unique:
rbt=# create unique index t on a using hash (col);
ERROR: DefineIndex: access method "hash" does not support UNIQUE
indexes
rbt=# create unique index t on a using gist (col);
ERROR: DefineIndex: access method "gist" does not support UNIQUE
indexes
rbt=# select version();
version
------------------------------------------------------------------------
PostgreSQL 7.4devel on i386-unknown-freebsd4.8, compiled by GCC 2.95.4
Rod Taylor
> Second argument to metaphone is supposed to set the limit on the
> number of characters to return, but it breaks on some phrases:
>
> usps=# select metaphone(a,3),metaphone(a,4),metaphone(a,20) from
> (select 'Hello world'::varchar AS a) a;
> HLW | HLWR | HLWRLT
>
> usps=# select metaphone(a,3),metaphone(a,4),metaphone(a,20) from
> (select 'A A COMEAUX MEMORIAL'::varchar AS a) a;
> AKM | AKMKS | AKMKSMMRL
>
> In every case I've found that does this, the 4th and 5th letters are
> always 'KS'.
Nice catch.
There was a bug in the original metaphone algorithm from CPAN. Patch
attached (while I was at it I updated my email address, changed the
copyright to PGDG, and removed an unnecessary palloc). Here's how it
looks now:
regression=# select metaphone(a,4) from (select 'A A COMEAUX
MEMORIAL'::varchar AS a) a;
metaphone
-----------
AKMK
(1 row)
regression=# select metaphone(a,5) from (select 'A A COMEAUX
MEMORIAL'::varchar AS a) a;
metaphone
-----------
AKMKS
(1 row)
Joe Conway
* A few bug fixes
* fixes solaris compile and crash issue
* decouple vacuum analyze and analyze thresholds
* detach from tty (daemonize)
* improved logging layout
* more conservative default configuration
* improved, expanded and updated README
please apply at 1st convenience, or before code freeze, whichever comes
first :-)
At this point I have brought pg_autovacuum and its client-side design
as far as I think it should go. It works, keeping file sizes in check,
helps performance, and gives the administrator a fair amount of
flexibility in configuring it.
Next up is to do the FSM based design that is integrated into the back
end.
p.s. Thanks to Christopher Browne for his help.
Matthew T. O'Connor
1 intarray: bugfix for int[]-int[] operation
2 intarray: split _int.c into several files (_int.c itself is now unused)
3 intarray (gist__intbig_ops opclass): use special type for index storage
4 ltree (gist__ltree_ops opclass), intarray (gist__intbig_ops): optimize
GiST penalty and picksplit interface functions, which now use Hamming
distance.
Teodor Sigaev
yy_fatal_error() call results in elog(ERROR) not exit(). This was
already fixed in the main lexer and plpgsql; extend the same technique
to all the other dot-l files. Also, on review of the possible calls
to yy_fatal_error(), it seems safe to use elog(ERROR) not elog(FATAL).
of an index can now be a computed expression instead of a simple variable.
Restrictions on expressions are the same as for predicates (only immutable
functions, no sub-selects). This fixes problems recently introduced with
inlining SQL functions, because the inlining transformation is applied to
both expression trees so the planner can still match them up. Along the
way, improve efficiency of handling index predicates (both predicates and
index expressions are now cached by the relcache) and fix a 7.3 oversight
that didn't record dependencies of predicate expressions.
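A sketch of what this allows (hypothetical table):
CREATE TABLE people (id int, name text);
CREATE INDEX people_lower_name_idx ON people (lower(name));
-- a query written with the same expression can use the index:
SELECT * FROM people WHERE lower(name) = 'smith';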
blanks, in hopes of reducing the surprise factor for newbies. Remove
redundant operators for VARCHAR (it depends wholly on TEXT operations now).
Clean up resolution of ambiguous operators/functions to avoid surprising
choices for domains: domains are treated as equivalent to their base types
and binary-coercibility is no longer considered a preference item when
choosing among multiple operators/functions. IsBinaryCoercible now correctly
reflects the notion that you need *only* relabel the type to get from type
A to type B: that is, a domain is binary-coercible to its base type, but
not vice versa. Various marginal cleanup, including merging the essentially
duplicate resolution code in parse_func.c and parse_oper.c. Improve opr_sanity
regression test to understand about binary compatibility (using pg_cast),
and fix a couple of small errors in the catalogs revealed thereby.
Restructure "special operator" handling to fetch operators via index opclasses
rather than hardwiring assumptions about names (cleans up the pattern_ops
stuff a little).
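For instance, with a hypothetical domain, operator resolution now treats
the domain value as its base type:
CREATE DOMAIN zipcode AS text;
SELECT 'ab'::zipcode || 'cd';  -- resolves the plain text || text operator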
only remnant of this failed experiment is that the server will take
SET AUTOCOMMIT TO ON. Still TODO: provide some client-side autocommit
logic in libpq.
Both plannable queries and utility commands are now always executed
within Portals, which have been revamped so that they can handle the
load (they used to be good only for single SELECT queries). Restructure
code to push command-completion-tag selection logic out of postgres.c,
so that it won't have to be duplicated between simple and extended queries.
initdb forced due to addition of a field to Query nodes.
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code with the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the mean time,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers (see the usage sketch after this list).
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
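A sketch of the resulting user-visible behavior (hypothetical table foo):
BEGIN;
DECLARE c NO SCROLL CURSOR WITH HOLD FOR SELECT * FROM foo;
COMMIT;           -- the cursor's contents are materialized into a tuplestore
FETCH 10 FROM c;  -- the cursor remains usable after the transaction ends
CLOSE c;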
Neil Conway
changed as per discussion on the patches list).
This version should be a good bit better. It addresses all the issues
pointed out by Neil Conway. Vacuum and Analyze are now handled
separately. It now monitors for xid wraparound. The number of database
connections and queries has been significantly reduced compared to the
previous version. I have moved it from bin to contrib. More detail on
the changes is in the TODO file.
I have not tested the xid wraparound code as I have to let my AthlonXP
1600 run select 1 in a tight loop for approx. two days in order to
perform the required 500,000,000 xacts.
Matthew T. O'Connor
version of crosstab. This fixes a major deficiency in real-world use of
the original version. Easiest to understand with an illustration:
Data:
-------------------------------------------------------------------
select * from cth;
id | rowid | rowdt | attribute | val
----+-------+---------------------+----------------+---------------
1 | test1 | 2003-03-01 00:00:00 | temperature | 42
2 | test1 | 2003-03-01 00:00:00 | test_result | PASS
3 | test1 | 2003-03-01 00:00:00 | volts | 2.6987
4 | test2 | 2003-03-02 00:00:00 | temperature | 53
5 | test2 | 2003-03-02 00:00:00 | test_result | FAIL
6 | test2 | 2003-03-02 00:00:00 | test_startdate | 01 March 2003
7 | test2 | 2003-03-02 00:00:00 | volts | 3.1234
(7 rows)
Original crosstab:
-------------------------------------------------------------------
SELECT * FROM crosstab(
'SELECT rowid, attribute, val FROM cth ORDER BY 1,2',4)
AS c(rowid text, temperature text, test_result text, test_startdate
text, volts text);
rowid | temperature | test_result | test_startdate | volts
-------+-------------+-------------+----------------+--------
test1 | 42 | PASS | 2.6987 |
test2 | 53 | FAIL | 01 March 2003 | 3.1234
(2 rows)
Hashed crosstab:
-------------------------------------------------------------------
SELECT * FROM crosstab(
'SELECT rowid, attribute, val FROM cth ORDER BY 1',
'SELECT DISTINCT attribute FROM cth ORDER BY 1')
AS c(rowid text, temperature int4, test_result text, test_startdate
timestamp, volts float8);
rowid | temperature | test_result | test_startdate | volts
-------+-------------+-------------+---------------------+--------
test1 | 42 | PASS | | 2.6987
test2 | 53 | FAIL | 2003-03-01 00:00:00 | 3.1234
(2 rows)
Notice that the original crosstab slides data over to the left in the
result tuple when it encounters missing data. In order to work around
this you have to make your source sql do all sorts of contortions
(cartesian join of distinct rowid with distinct attribute; left join
that back to the real source data). The new version avoids this by
building a hash table using a second distinct attribute query.
The new version also allows for "extra" columns (see the README) and
allows the result columns to be coerced into differing datatypes if they
are suitable (as shown above).
In testing a "real-world" data set (69 distinct rowid's, 27 distinct
categories/attributes, multiple missing data points) I saw about a
5-fold improvement in execution time (from about 2200 ms old, to 440 ms
new).
I left the original version intact because: 1) backward compatibility,
2) it is probably slightly faster if you know that you have no missing
attributes.
README and regression test adjustments included. If there are no
objections, please apply.
Joe Conway
. replace CREATE OR REPLACE AGGREGATE with a separate DROP and CREATE
. add DROP for all CREATE OPERATORs
. use IMMUTABLE and STRICT instead of WITH (isStrict)
. add IMMUTABLE and STRICT to int_array_aggregate's accumulator function
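Illustrating the syntax change with a hypothetical C-language function
(the names are placeholders, not the module's actual ones):
-- old style:
CREATE FUNCTION my_accum (int4, int4) RETURNS int4
    AS 'MODULE_PATHNAME' LANGUAGE 'c' WITH (isStrict);
-- new style, with a separate DROP and CREATE:
DROP FUNCTION my_accum (int4, int4);
CREATE FUNCTION my_accum (int4, int4) RETURNS int4
    AS 'MODULE_PATHNAME' LANGUAGE 'c' IMMUTABLE STRICT;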
Gregory Stark
entire contents of the subplan into the tuplestore before we can return
any tuples. Instead, the tuplestore holds what we've already read, and
we fetch additional rows from the subplan as needed. Random access to
the previously-read rows works with the tuplestore, and doesn't affect
the state of the partially-read subplan. This is a step towards fixing
the problems with cursors over complex queries --- we don't want to
stick in Materialize nodes if they'll prevent quick startup for a cursor.
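For example, with a hypothetical expensive view, random access to rows
already read no longer forces reading the whole subplan first:
BEGIN;
DECLARE c SCROLL CURSOR FOR SELECT * FROM big_expensive_view;
FETCH FORWARD 5 FROM c;   -- reads only 5 rows from the subplan, storing them
FETCH BACKWARD 2 FROM c;  -- served from the tuplestore; subplan state untouched
CLOSE c;
COMMIT;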
ltree_73.patch.gz - for 7.3 :
Fix ~ operation bug: eg '1.1.1' ~ '*.1'
ltree_74.patch.gz - for current CVS
Fix ~ operation bug: eg '1.1.1' ~ '*.1'
Add ? operation
Optimize index storage
The last change requires dropping and re-creating all ltree indexes, so
it is only for 7.4.
Teodor Sigaev
in one of the earth functions so that latitude and longitude to
cartesian coordinates conversion will be more accurate. (Previously
a text string was built to provide as input which limited the accuracy
to the number of digits printed.)
The new functions were included in a recent patch to contrib/cube that
has not yet been accepted.
I also added check constraints to the domain 'earth' since they are now
working in 7.4.
Bruno Wolff III
directly from float8 values. (As opposed to converting the values to
strings and then parsing the strings.)
The functions are (usage sketch below):
cube(float8) returns cube
cube(float8,float8) returns cube
cube(cube,float8) returns cube
cube(cube,float8,float8) returns cube
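A usage sketch (the commented results are approximate):
SELECT cube(1.5);             -- a zero-dimensional point: '(1.5)'
SELECT cube(1.0, 2.0);        -- a one-dimensional interval from 1 to 2
SELECT cube(cube(1.0), 5.0);  -- adds a dimension (value 5) to an existing cube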
Bruno Wolff III
bison 1.875 and later as we did from earlier bison releases. Eventually
we will probably want to adopt the newer message spelling ... but not yet.
Per recent discussion on pgpatches.
Note: I didn't change the build rules for bootstrap, ecpg, or plpgsql
grammars, since these do not affect regression test results.
execution state trees, and ExecEvalExpr takes an expression state tree
not an expression plan tree. The plan tree is now read-only as far as
the executor is concerned. Next step is to begin actually exploiting
this property.
in the year. This version has only the two files required by the Darwin
startup bundle design. Plus the sh script now uses Darwin-standard
functions to start up PostgreSQL, and it checks for the presence of a
variable in /etc/hostconfig, as do other Darwin startup scripts.
I suggest that a new directory be created,
contrib/start-scripts/darwin, and that these two files be put into it.
Folks who want to use the script can read the comments inside it to
figure out how to use it.
David Wheeler
exists if and only if the locale of the postmaster
was different from C (or ru_RU.KOI8-R).
Please apply the patch to current CVS & 7.3.1.
Magnus Naeslund(f) wrote:
> Ok, I nailed the bug, but i'm not sure what the correct fix is.
> Attached tsearch_morph.diff that remedies this problem by avoiding it.
> Also there's a debug aid patch if someone would like to know how i
> finally found it out :)
>
> The problem in the lemmatize() function is that GETDICT(...) returned
> a value that was not handled (BYLOCALE).
> The value (-1) was later used as an index into the dicts[] array.
> After that everything went berserk; the stack went crazy somehow, so
> trapping the fault sent me to the wrong place, and every time i read
> the value it was positive ;)
>
> So now i just return the initial word passed to the lemmatize function,
> because i don't know what to do with it.
Magnus Naeslund
tested a patch to contrib/xml where the existing code was causing
postgres to crash when it encountered & entities in the XML. I've
enclosed a patch that John came up with to correct this problem. It
patches against 7.3 and will apply on 7.2x if the elog WARNING calls
are changed to elog NOTICE.
Michael Richards
"SET search_path" commands were added to the beginning of the script.
The attached patch should fix the problem. It probably should be
applied against the 7.3 and 7.4 branches.
Steven Singer
with regard to the extra_float_digits setting.
Since builtins.h was already included, I just deleted the extern
statement (and accompanying comments).
Bruno Wolff III
is a pgcrypto bug, as it assumed too much about the inner workings of
OpenSSL. The following patch stops pgcrypto from using the EVP* functions
for ciphers and lets it manage ciphers itself.
This patch supports Blowfish, DES and CAST5 algorithms.
Marko Kreen
points on the surface of the earth and locating points within a
specified distance using an index based on the contrib/cube package. The
new functions are all written in SQL. A couple of bugs in the old
earthdistance function based on the point datatype are fixed. A
regression test has been added for both sets of functions. The README
file has been updated to include documentation on the new stuff. There
are comments about how this package is also useful for astronomers.
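A usage sketch, assuming the function names documented in the updated
README (ll_to_earth, earth_distance, earth_box) and a hypothetical table
places(name, lat, lon):
-- distance in meters between two latitude/longitude points:
SELECT earth_distance(ll_to_earth(45.0, -122.0),
                      ll_to_earth(45.1, -122.1));
-- find places within 5000 meters, helped by a cube-based index:
SELECT name FROM places
WHERE earth_box(ll_to_earth(45.0, -122.0), 5000) @ ll_to_earth(lat, lon);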
Bruno Wolff III