identifier, while some areas do not.
The attached converts the below to "name":
conversion_name
index_name
The below have an existing, initdb-supplied entity named "name". As
such, it could be confusing for the reader to see that identifier used
in the example.
domainname
typename
Rod Taylor
with advocacy and 'portal' websites.
Link to the createdb / dropdb reference pages from the tutorial page
about creating / dropping databases.
A pair of notes were asking for more info...
Rod Taylor
Regression tests for IPv6 operations added.
Documentation updated to document IPv6 bits.
Stop treating IPv4 as an "unsigned int" and IPv6 as an array of
characters. Instead, always use the array of characters, so one set
of functions fits both. This makes bitncmp(), addressOK(),
and several other functions "just work" on both address families.
Add a family() function which returns the integer 4 or 6 for IPv4 or
IPv6 (see the example below). Note that to add this new function
you will need to dump/initdb/reload or find the correct magic
to add the function to the PostgreSQL function catalogs.
IPv4 addresses always sort before IPv6.
On disk we use AF_INET for IPv4, and AF_INET+1 for IPv6 addresses.
This prevents the need for a dump and reload, but lets IPv6 parsing
work on machines without AF_INET6.
To select all IPv4 addresses from a table:
select * from foo where family(addr) = 4 ...
ORDER BY and other operations should all work.
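A fuller sketch (table and addresses hypothetical) of family() and the
IPv4-before-IPv6 sort order:
    create table foo (addr inet);
    insert into foo values ('192.168.1.1'), ('::1'), ('10.0.0.1');
    select addr, family(addr) from foo order by addr;
    -- 10.0.0.1 and 192.168.1.1 (family 4) sort before ::1 (family 6)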
Michael Graff
specific hash functions used by hash indexes, rather than the old
not-datatype-aware ComputeHashFunc routine. This makes it safe to do
hash joining on several datatypes that previously couldn't use hashing.
The sets of datatypes that are hash indexable and hash joinable are now
exactly the same, whereas before each had some that weren't in the other.
out of mind, because it'd been commented out years ago). Try to bring the
remains up to a reasonable level of currency, and give it all approximately
the same high level of abstraction.
> * Allow current datestyle to restrict dates; prevent month/day swapping
> from making invalid dates valid
> * Prevent month/day swapping of ISO dates to make invalid dates valid
character in identifiers. The first change eliminates the current need
to put spaces around parameter references, as in "x<=$2". The second
change improves compatibility with Oracle and some other RDBMSes. This
was discussed and agreed to back in January, but did not get done.
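A hedged illustration of the first change (table t and the prepared
statement are hypothetical):
    prepare pick (int, int) as
        select * from t where x>=$1 and x<=$2;  -- no spaces needed now
    execute pick(1, 10);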
not all SQL identifiers taken from command line arguments. We decided
years ago that that was a bad idea: identifiers taken from the command
line should be treated as literally correct. Remove the inconsistent
code that has crept in recently. Also fix pg_dump so that the combination
of --schema and --table does what you'd expect, namely dump exactly one
table from exactly one schema. Per gripe from Deepak Bhole of Red Hat.
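A minimal usage sketch (database and object names hypothetical):
    pg_dump --schema=accounting --table=invoices mydb
    # dumps exactly one table, accounting.invoices -- not every table
    # named "invoices", nor everything in schema "accounting"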
extensions to support our historical behavior. An aggregate belongs
to the closest query level of any of the variables in its argument,
or the current query level if there are no variables (e.g., COUNT(*)).
The implementation involves adding an agglevelsup field to Aggref,
and treating outer aggregates like outer variables at planning time.
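A hedged example of the rule (table t hypothetical): the aggregate below
sits syntactically inside the sub-select, but its only variable comes
from the outer query, so it is evaluated at the outer query level:
    select (select max(t.x)) from t;
    -- behaves like: select max(t.x) from t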
> * Allow a single index to index multiple tables (for inheritance and subtables)
408a410
> * Improve the planner to use CHECK constraints to prune the plan (for subtables)
418a421
> * Allow partitioning of table into multiple subtables
419a423
> T
of an index can now be a computed expression instead of a simple variable.
Restrictions on expressions are the same as for predicates (only immutable
functions, no sub-selects). This fixes problems recently introduced with
inlining SQL functions, because the inlining transformation is applied to
both expression trees so the planner can still match them up. Along the
way, improve efficiency of handling index predicates (both predicates and
index expressions are now cached by the relcache) and fix 7.3 oversight
that didn't record dependencies of predicate expressions.
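A small sketch of the generalization (table and column hypothetical):
    create index people_lower_name_idx on people (lower(name));
    -- lower() is immutable, so this is allowed; the planner can match it
    -- against queries like: select * from people where lower(name) = 'fred';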
blanks, in hopes of reducing the surprise factor for newbies. Remove
redundant operators for VARCHAR (it depends wholly on TEXT operations now).
Clean up resolution of ambiguous operators/functions to avoid surprising
choices for domains: domains are treated as equivalent to their base types
and binary-coercibility is no longer considered a preference item when
choosing among multiple operators/functions. IsBinaryCoercible now correctly
reflects the notion that you need *only* relabel the type to get from type
A to type B: that is, a domain is binary-coercible to its base type, but
not vice versa. Various marginal cleanup, including merging the essentially
duplicate resolution code in parse_func.c and parse_oper.c. Improve opr_sanity
regression test to understand about binary compatibility (using pg_cast),
and fix a couple of small errors in the catalogs revealed thereby.
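A hedged sketch of the domain behavior described above (domain and
values hypothetical):
    create domain zipcode as text check (value ~ '^[0-9]{5}$');
    select '12345'::zipcode = '12345'::text;
    -- works: zipcode is binary-coercible to text, so text's operators
    -- apply; text is NOT binary-coercible to zipcode, since the check
    -- could fail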
Restructure "special operator" handling to fetch operators via index opclasses
rather than hardwiring assumptions about names (cleans up the pattern_ops
stuff a little).
< * Update clients to use data types, typmod, schema.table.column names of
< result sets using new query protocol
453a452,453
> o Update clients to use data types, typmod, schema.table.column names of
> result sets using new query protocol
< * Allow clients to get data types, typmod, schema.table.column names from
< result sets, either via the backend protocol or a new QUERYINFO command
to:
> * Update clients to use data types, typmod, schema.table.column names of
> result sets using new query protocol
Win32 port is now called 'win32' rather than 'win'
add -lwsock32 on Win32
make gethostname() be used only when kerberos4 is enabled
use /port/getopt.c
new /port/opendir.c routines
disable GUC unix_socket_group on Win32
convert some keywords.c symbols to KEYWORD_P to prevent conflict
create new FCNTL_NONBLOCK macro to turn off socket blocking
create new /include/port.h file that has /port prototypes, move
out of c.h
new /include/port/win32_include dir to hold missing include files
work around ERROR being defined in Win32 includes
only remnant of this failed experiment is that the server will take
SET AUTOCOMMIT TO ON. Still TODO: provide some client-side autocommit
logic in libpq.
of Describe on a prepared statement. This was in the original 3.0
protocol proposal, but I took it out for reasons that seemed good at
the time. Put it back per yesterday's pghackers discussion.
implementation limits, do not issue an ERROR; instead issue a NOTICE and use
the max supported value. Per pgsql-general discussion of 28-Apr, this is
needed to allow easy porting from pre-7.3 releases where the limits were
higher.
Unrelated change in same area: accept GLOBAL TEMP/TEMPORARY as a synonym
for TEMPORARY, as per pgsql-hackers discussion of 15-Apr. We previously
rejected it, but that was based on a misreading of the spec --- SQL92's
GLOBAL temp tables are really closer to what we have than their LOCAL ones.
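For reference, a minimal illustration of the now-accepted spelling
(table hypothetical):
    create global temporary table scratch (id int);
    -- accepted as a synonym for: create temporary table scratch (id int);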
Both plannable queries and utility commands are now always executed
within Portals, which have been revamped so that they can handle the
load (they used to be good only for single SELECT queries). Restructure
code to push command-completion-tag selection logic out of postgres.c,
so that it won't have to be duplicated between simple and extended queries.
initdb forced due to addition of a field to Query nodes.
for tableID/columnID in RowDescription. (The latter isn't really
implemented yet though --- the backend always sends zeroes, and libpq
just throws away the data.)
initial values and runtime changes in selected parameters. This gets
rid of the need for an initial 'select pg_client_encoding()' query in
libpq, bringing us back to one message transmitted in each direction
for a standard connection startup. To allow server version to be sent
using the same GUC mechanism that handles other parameters, invent the
concept of a never-settable GUC parameter: you can 'show server_version'
but it's not settable by any GUC input source. Create 'lc_collate' and
'lc_ctype' never-settable parameters so that people can find out these
settings without need for pg_controldata. (These side ideas were all
discussed some time ago in pgsql-hackers, but not yet implemented.)
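A quick sketch of the never-settable parameters as seen from psql:
    show server_version;   -- readable, but not settable by any source
    show lc_collate;       -- previously required pg_controldata to see
    set lc_collate = 'C';  -- fails: never-settable parameter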
rewritten and the protocol is changed, but most elog calls are still
elog calls. Also, we need to contemplate mechanisms for controlling
all this functionality --- eg, how much stuff should appear in the
postmaster log? And what API should libpq expose for it?
have length words. COPY OUT reimplemented per new protocol: it doesn't
need \. anymore, thank goodness. COPY BINARY to/from frontend works,
at least as far as the backend is concerned --- libpq's PQgetline API
is not up to snuff, and will have to be replaced with something that is
null-safe. libpq uses message length words for performance improvement
(no cycles wasted rescanning long messages), but not yet for error
recovery.
with variable-width fields. No more truncation of long user names.
Also, libpq can now send its environment-variable-driven SET commands
as part of the startup packet, saving round trips to server.
some message types. In particular add text/binary flag to StartCopyIn
and StartCopyOut, so that client library can know what is expected or
forthcoming.
chapters on extending types, operators, and aggregates into the extending
functions chapter. Move the information on how to call table functions
into the queries chapter. Remove some outdated information that is
already present in a better form in other parts of the documentation.
find out about it is to read the documentation that tells you how
dangerous it is. Add default_transaction_read_only to documentation;
seems to have been overlooked in patch that added read-only transactions.
Clean up check_guc comparison script, which has been suffering bit rot.
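A short usage sketch of the newly documented parameter (table t
hypothetical):
    set default_transaction_read_only = on;
    begin;                       -- new transactions now start read-only
    insert into t values (1);    -- rejected in a read-only transaction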
page when it's read in, per pghackers discussion around 17-Feb. Add a
GUC variable zero_damaged_pages that causes the response to be a WARNING
followed by zeroing the page, rather than the normal ERROR; this is per
Hiroshi's suggestion that there needs to be a way to get at the data
in the rest of the table.
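A hedged sketch of the knob (table hypothetical; note the zeroed pages'
contents are gone for good):
    set zero_damaged_pages = on;
    select * from damaged_table;
    -- header-corrupted pages draw a WARNING and are zeroed, so the scan
    -- can continue and salvage rows from the undamaged pages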
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to look up the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code to the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers (a brief sketch follows below).
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
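A hedged sketch of the cursor behavior (table foo hypothetical; this
assumes the materialization above is the WITH HOLD cursor work):
    begin;
    declare c no scroll cursor with hold for select * from foo;
    commit;            -- results are materialized into a tuple store
    fetch 10 from c;   -- cursor remains usable across the boundary
    close c;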
Neil Conway
functions
* Document pg_conversion_is_visible() which was created in one of my
previous patches and didn't get documented for some reason
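For reference, a minimal call (the function tests schema visibility of a
conversion by OID):
    select conname, pg_conversion_is_visible(oid) from pg_conversion;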
Christopher Kings-Lynne
them as arrays of the internal datatype. This requires treating the
stavalues columns as 'anyarray' rather than 'text[]', which is not 100%
kosher but seems to work fine for the purposes we need for pg_statistic.
Perhaps in the future 'anyarray' will be allowed more generally.
some of the algorithms for higher functions. I see about a factor of ten
speedup on the 'numeric' regression test, but it's unlikely that that test
is representative of real-world applications.
initdb forced due to change of on-disk representation for NUMERIC.
effectively used to mean a default value that could also be spelled
out explicitly. (ACLs behave that way, and useconfig/datconfig
do too IIRC.)
It's a bit of a hack, but it saves table space and backend code ---
without this convention the default would have to be inserted "manually"
since we have no mechanism to supply defaults when C code is forming a
new catalog tuple.
I'm inclined to leave the code alone. But Alvaro is right that it'd be
good to point out the 'infinity' option in the CREATE USER and ALTER
USER man pages. (Doc patch please?)
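A minimal sketch of the spelled-out default (user name hypothetical):
    create user trusty valid until 'infinity';
    -- same effect as leaving valuntil NULL: the password never expires
    alter user trusty valid until '2004-01-01';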
Alvaro Herrera
join is defined as:
from_item [ NATURAL ] join_type from_item
[ ON join_condition | USING ( join_column_list ) ]
However, if the join_type is an INNER or OUTER join, an ON, USING, or
NATURAL clause *must* be specified (it's not optional, as that segment
of the docs suggests).
I'm not exactly sure what the best way to fix this is, so I've attached
a patch adding a FIXME comment to the relevant section of the SGML. If
anyone has any ideas on the proper way to outline join syntax, please
speak up.
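To make the rule concrete (tables a and b hypothetical):
    select * from a cross join b;                 -- no condition permitted
    select * from a inner join b on a.id = b.id;  -- condition required
    select * from a inner join b;                 -- syntax error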
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
Add ALTER SEQUENCE to modify min/max/increment/cache/cycle values
Also updated the CREATE SEQUENCE docs to mention NO MINVALUE and NO MAXVALUE.
New Files:
doc/src/sgml/ref/alter_sequence.sgml
src/test/regress/expected/sequence.out
src/test/regress/sql/sequence.sql
ALTER SEQUENCE is NOT transactional. It behaves similarly to setval().
It matches the proposed SQL200N spec, as well as Oracle in most ways --
Oracle lacks RESTART WITH for some strange reason.
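A brief sketch of the new command (sequence name hypothetical):
    create sequence ticket_seq;
    alter sequence ticket_seq increment by 5 maxvalue 100000 cycle;
    alter sequence ticket_seq restart with 50;  -- like setval(), not
                                                -- transactional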
--
Rod Taylor <rbt@rbt.ca>
Environment and Files section, explained exactly what -w
does)
This is a patch which allows pg_ctl to make an intelligent
guess as to the proper port when running 'psql -l' to
determine if the database has started up (the -w flag).
The environment variable PGPORT is used. If that is not found,
it checks if a specific port has been set inside the postgresql.conf
file. If it has not, it uses the port that Postgres was
compiled with.
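A hedged usage sketch (port number hypothetical):
    PGPORT=5433 pg_ctl -w start
    # -w now probes port 5433; without PGPORT it falls back to the port
    # set in postgresql.conf, then to the compiled-in default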
Greg Sabino Mullane greg@turnstep.com
> weird behavior across fork boundaries; (b) the additional memory space
> that has to be duplicated into child processes will cost something per
> child launch, even if the child never uses it. But these are only
> arguments that it might not *always* be a prudent thing to do, not that
> we shouldn't give the DBA the tool to do it if he wants. So fire away.
Here is a patch for the above, including a documentation update. It
creates a new GUC variable "preload_libraries" that accepts a list in
the form:
preload_libraries = '$libdir/mylib1:initfunc,$libdir/mylib2'
If ":initfunc" is omitted or not found, no initialization function is
executed, but the library is still preloaded. If "$libdir/mylib" isn't
found, the postmaster refuses to start.
In my testing with PL/R, it reduces the first call to a PL/R function
(after connecting) from almost 2 seconds, down to about 8 ms.
Joe Conway