than dividing them into 1GB segments as has been our longtime practice. This
requires working support for large files in the operating system; at least for
the time being, it won't be the default.
Zdenek Kotala
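
To make the segment arithmetic concrete, here is a minimal standalone
sketch of the 1GB segmentation this change makes optional.  BLCKSZ and
RELSEG_SIZE mirror the real PostgreSQL compile-time defaults (8kB
blocks, 131072 blocks per segment); locate_block() itself is invented
for this note and is not the backend's md.c code.

    #include <stdio.h>

    #define BLCKSZ      8192U       /* default block size, in bytes */
    #define RELSEG_SIZE 131072U     /* blocks per segment: 1GB / 8kB */

    /* Illustrative only: map a relation block number to the segment
     * file it lives in ("M", "M.1", "M.2", ...) and the byte offset
     * within that file.  Without segmentation, the whole relation
     * would be a single large file instead. */
    static void
    locate_block(unsigned int blkno)
    {
        unsigned int       segno = blkno / RELSEG_SIZE;
        unsigned long long offset =
            (unsigned long long) (blkno % RELSEG_SIZE) * BLCKSZ;

        if (segno == 0)
            printf("block %u: file \"M\", offset %llu\n", blkno, offset);
        else
            printf("block %u: file \"M.%u\", offset %llu\n",
                   blkno, segno, offset);
    }

    int
    main(void)
    {
        locate_block(131071);       /* last block of the first segment */
        locate_block(131072);       /* first block of file "M.1" */
        return 0;
    }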
variables to it. More need to be converted, but I wanted to get this in
before it conflicts with too much...
Other than just centralising the text-to-int conversion for parameters,
this allows the pg_settings view to contain a list of available options
and allows an error hint to show what values are allowed.
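
As a rough, self-contained illustration of that centralisation (the
identifiers below are invented for this sketch, not the backend's real
GUC structures), an enum parameter can keep its allowed values in one
table that serves the text-to-int conversion, the pg_settings listing,
and the error hint alike:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical sketch: one table per enum parameter maps each
     * accepted string to its internal int value. */
    struct enum_option
    {
        const char *name;
        int         val;
    };

    static const struct enum_option sync_method_options[] = {
        {"fsync", 0},
        {"fdatasync", 1},
        {"open_sync", 2},
        {NULL, 0}                   /* terminator */
    };

    /* Centralised lookup: returns true and sets *result on a match.
     * On failure the caller can walk the same table to build a hint
     * listing the allowed values. */
    static bool
    enum_lookup(const struct enum_option *tab, const char *value,
                int *result)
    {
        for (; tab->name != NULL; tab++)
        {
            if (strcmp(tab->name, value) == 0)
            {
                *result = tab->val;
                return true;
            }
        }
        return false;
    }

    int
    main(void)
    {
        int v;

        if (enum_lookup(sync_method_options, "fdatasync", &v))
            printf("fdatasync -> %d\n", v);
        if (!enum_lookup(sync_method_options, "bogus", &v))
            printf("bogus is not an allowed value\n");
        return 0;
    }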
With the addition of multiple autovacuum workers, our choices were to delete
the check, document the interaction with autovacuum_max_workers, or complicate
the check to try to hide that interaction. Since this restriction has never
been adequate to ensure backends can't run out of pinnable buffers, it doesn't
have a strong enough reason to exist to justify the second or third choices.
Per discussion of a complaint from Andreas Kling (see also bug #3888).
This commit also removes several documentation references to this restriction,
but I'm not sure I got them all.
>
> * Add comments on system tables/columns using the information in
> catalogs.sgml
>
> Ideally the information would be pulled from the SGML file
> automatically.
>
>
> * Allow client certificate names to be checked against the client
> hostname
>
> This is already implemented in
> libpq/fe-secure.c::verify_peer_name_matches_certificate(), but the
> code is commented out; a rough sketch of such a check appears
> after this diff.
> * Prevent malicious functions from being executed with the permissions
> of unsuspecting users
>
> Index functions are safe, so VACUUM and ANALYZE are safe too.
> Triggers, CHECK and DEFAULT expressions, and rules are still vulnerable.
> http://archives.postgresql.org/pgsql-hackers/2008-01/msg00268.php
>
> o Have CONSTRAINT cname NOT NULL preserve the constraint name
>
> Right now pg_attribute.attnotnull records the NOT NULL status
> of the column, but does not record the constraint name
>
<
< o To better utilize resources, restore data, primary keys, and
< indexes for a single table before restoring the next table
<
< Hopefully this will allow the CPU-I/O load to be more uniform
< for simultaneous restores. The idea is to start data restores
< for several objects, and once the first object is done, to move
< on to its primary keys and indexes. Over time, simultaneous
< data loads and index builds will be running.
< * pg_dump
> * pg_dump / pg_restore
> o Allow pg_dump to utilize multiple CPUs and I/O channels by dumping
> multiple objects simultaneously
>
> The difficulty with this is getting multiple dump processes to
> produce a single dump output file; a toy sketch of one approach
> appears after this diff.
> http://archives.postgresql.org/pgsql-hackers/2008-02/msg00205.php
>
> o Allow pg_restore to utilize multiple CPUs and I/O channels by
> restoring multiple objects simultaneously
>
> This might require a pg_restore flag to indicate how many
> simultaneous operations should be performed. Only pg_dump's
> -Fc format has the necessary dependency information.
>
> o To better utilize resources, restore data, primary keys, and
> indexes for a single table before restoring the next table
>
> Hopefully this will allow the CPU-I/O load to be more uniform
> for simultaneous restores. The idea is to start data restores
> for several objects, and once the first object is done, to move
> on to its primary keys and indexes. Over time, simultaneous
> data loads and index builds will be running.
>
> o To better utilize resources, allow pg_restore to check foreign
> keys simultaneously, where possible
> o Allow pg_restore to create all indexes of a table
> concurrently, via a single heap scan
>
> This requires a pg_dump -Fc file because that format contains
> the required dependency information.
> http://archives.postgresql.org/pgsql-general/2007-05/msg01274.php
>
> o Allow pg_restore to load different parts of the COPY data
> simultaneously
< single heap scan, and have a restore of a pg_dump somehow use it
> single heap scan, and have pg_restore use it
< http://archives.postgresql.org/pgsql-general/2007-05/msg01274.php
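
For the client-certificate item above (libpq/fe-secure.c), here is a
minimal sketch of what such a check can look like.  The OpenSSL calls
are real, but the function itself is invented for this note; it does
an exact, case-insensitive match on the certificate's commonName and
deliberately handles no wildcards.

    #include <stdbool.h>
    #include <strings.h>
    #include <openssl/objects.h>
    #include <openssl/x509.h>

    /* Sketch only: extract the peer certificate's commonName and
     * compare it, case-insensitively, to the expected hostname. */
    static bool
    cert_cn_matches_host(X509 *cert, const char *host)
    {
        char cn[256];

        if (X509_NAME_get_text_by_NID(X509_get_subject_name(cert),
                                      NID_commonName, cn, sizeof(cn)) < 0)
            return false;           /* certificate has no commonName */

        return strcasecmp(cn, host) == 0;
    }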
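
For the parallel pg_dump item, the cited difficulty is combining
concurrent workers' output into one file.  Below is a toy standalone
sketch of one approach, invented for this note: fork a worker per
object, give each worker its own pipe, and have the parent drain the
pipes in a fixed order so the combined stream stays coherent.  A real
tool would need temp files or out-of-order reads so that large worker
outputs cannot fill the pipe buffers and stall.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 3

    int
    main(void)
    {
        int  pipes[NWORKERS][2];
        char buf[4096];

        for (int i = 0; i < NWORKERS; i++)
        {
            pid_t pid;

            if (pipe(pipes[i]) != 0)
                return 1;

            pid = fork();
            if (pid < 0)
                return 1;
            if (pid == 0)
            {
                /* Worker: drop inherited pipe ends, then "dump" one
                 * object's worth of data into this worker's pipe. */
                int len;

                for (int j = 0; j <= i; j++)
                {
                    close(pipes[j][0]);
                    if (j < i)
                        close(pipes[j][1]);
                }
                len = snprintf(buf, sizeof(buf),
                               "-- dump of object %d --\n", i);
                (void) write(pipes[i][1], buf, (size_t) len);
                close(pipes[i][1]);
                _exit(0);
            }
            close(pipes[i][1]);     /* parent keeps only read ends */
        }

        /* Parent: splice each worker's output, in order, into a
         * single output stream. */
        for (int i = 0; i < NWORKERS; i++)
        {
            ssize_t n;

            while ((n = read(pipes[i][0], buf, sizeof(buf))) > 0)
                fwrite(buf, 1, (size_t) n, stdout);
            close(pipes[i][0]);
        }

        while (wait(NULL) > 0)      /* reap the workers */
            ;
        return 0;
    }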