problems with char array sizes, having set a couple of constants to 0 for
unlimited query length and row length. This additional patch cleans those
problems up by defining a new constant (STD_STATEMENT_LEN) as 65536 and
using it in place of MAX_STATEMENT_LEN.
Another constant (MAX_MESSAGE_LEN) was defined as 2*BLCKSZ, but is now
65536. This is used to define the length of the message buffer in a number
of places and as I understand it (probably not that well!) therefore also
places a limit on the query length. Fixing this properly is beyond my
capabilities but 65536 should hopefully be large enough for most people.
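For reference, a rough sketch of the two new fixed sizes and the sort of
buffers they govern (the constant values are from the description above, but
where exactly the driver uses them is assumed here, not taken from the patch):

    #include <stdio.h>

    /* sketch only: fixed sizes replacing the old "0 = unlimited" values */
    #define STD_STATEMENT_LEN  65536    /* used in place of MAX_STATEMENT_LEN */
    #define MAX_MESSAGE_LEN    65536    /* previously 2*BLCKSZ */

    int main(void)
    {
        static char statement[STD_STATEMENT_LEN];   /* query text buffer */
        static char message[MAX_MESSAGE_LEN];       /* message buffer */

        printf("statement: %zu bytes, message: %zu bytes\n",
               sizeof(statement), sizeof(message));
        return 0;
    }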
Apologies for being over-enthusiastic and posting 3 patches in one day
rather than 1 better tested one!
Regards,
Dave Page
> Sent: 24 January 2001 16:51
> To: Dave Page
> Subject: Re: [PATCHES] ODBC Patch for OJs/Large Querys & Rows
>
>
> > SQL_OJ_LEFT = Left outer joins are supported.
>
> Yes.
<snip>
In addition to my earlier patch, this one adds support for SQLGetInfo
SQL_OJ_CAPABILITIES to the ODBC driver.
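As a rough illustration only (not the driver's actual code), SQL_OJ_CAPABILITIES
is answered with a bitmask built from the standard ODBC SQL_OJ_* flags; which
bits the patch actually reports is an assumption here:

    #include <stdio.h>
    #include <sql.h>
    #include <sqlext.h>

    /* illustration: a capability mask a driver might report */
    static SQLUINTEGER
    oj_capabilities(void)
    {
        return SQL_OJ_LEFT | SQL_OJ_RIGHT | SQL_OJ_FULL | SQL_OJ_NESTED;
    }

    int main(void)
    {
        printf("OJ capability mask: 0x%lx\n",
               (unsigned long) oj_capabilities());
        return 0;
    }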
Dave Page
following but it does *not* check whether the user is connected to
PostgreSQL 7.0.x or 7.1 first (as would be required for some of the
features) - the driver doesn't do this at all AFAIK, and it's beyond my
capabilities to implement such checking in code that doesn't look like it
was written by my 1-year-old daughter!
1) The driver now reports no maximum query length (SQL_MAX_QUERY_SIZE).
2) The driver now reports no maximum row length (SQL_MAX_ROW_SIZE).
3) The driver now reports that Outer Joins are supported (SQL_OUTER_JOINS),
but still does not report OJ capabilities (SQL_OJ_CAPABILITIES); see the
example call after this list.
4) The version number has been incremented to 7.1.0000 in psqlodbc.h *and*
psqlodbc.rc
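To show roughly how an application sees 2) and 3) once the driver reports
them, here is a small SQLGetInfo sketch (standard ODBC calls; "hdbc" is
assumed to be an already-connected handle, and this is not code from the
patch itself):

    #include <stdio.h>
    #include <sql.h>
    #include <sqlext.h>

    static void
    show_limits(SQLHDBC hdbc)
    {
        SQLUINTEGER max_row = 0;
        char        outer_joins[2] = "";
        SQLSMALLINT len;

        /* 0 now means "no fixed maximum row length" */
        SQLGetInfo(hdbc, SQL_MAX_ROW_SIZE, &max_row, sizeof(max_row), &len);

        /* "Y" now means outer joins are supported */
        SQLGetInfo(hdbc, SQL_OUTER_JOINS, outer_joins, sizeof(outer_joins), &len);

        printf("max row size: %lu, outer joins: %s\n",
               (unsigned long) max_row, outer_joins);
    }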
Regards,
Dave Page
objects that Thomas pointed out might be a problem.
PPS. I have included and updated the comments from the original patch
request to reflect the changes made in this revised patch.
> Attached is a set of patches for a couple of bugs dealing with
> timestamps in JDBC.
>
> Bug#1) Incorrect timestamp stored in DB if client timezone different
> than DB.
> The buggy implementation of setTimestamp() in PreparedStatement simply
> used the toString() method of the java.sql.Timestamp object to convert
> to a string to send to the database. The format of this is yyyy-MM-dd
> hh:mm:ss.SSS which doesn't include any timezone information. Therefore
> the DB assumes its timezone since none is specified. That is OK if the
> timezone of the client and server are the same, however if they are
> different the wrong timestamp is received by the server. For example if
> the client is running in timezone GMT and wants to send the timestamp
> for noon to a server running in PST (GMT-8 hours), then the server will
> receive 2000-01-12 12:00:00.0 and interpret it as 2000-01-12
> 12:00:00-08, which is 2000-01-12 20:00:00 in GMT. The fix is to send a
> format to the server that includes the timezone offset. For simplicity's
> sake the fix uses a SimpleDateFormat object with its timezone set to GMT
> so that '+00' can be used as the timezone for postgresql. This is done
> as SimpleDateFormat doesn't support formatting timezones in the way
> postgresql expects.
>
> Bug#2) Incorrect handling of partial seconds in getting timestamps from
> the DB
>
> When the SimpleDateFormat object parses a string with a format like
> yyyy-MM-dd hh:mm:ss.SS it expects the fractional seconds to be three
> decimal places (time precision in java is milliseconds = three decimal
> places). This seems like a bug in java to me, but it is unlikely to be
> fixed anytime soon, so the postgresql code needed modification to
> support the java behaviour. So for example a string of '2000-01-12
> 12:00:00.12-08' coming from the database was being converted to a
> timestamp object with a value of 2000-01-12 12:00:00.012GMT-08:00. The
> fix was to check for a '.' in the string and, if one is found, append
> an extra zero to the fractional seconds part.
>
>
> I also did some cleanup in ResultSet.getTimestamp(). This method has
> had multiple patches applied some of which resulted in code that was no
> longer needed. For example the ISO timestamp format that postgresql
> uses specifies the timezone as an offset like '-08'. Code was added at
> one point to convert the postgresql format to the java one, which is
> GMT-08:00; however, the old code, which no longer did anything, was left
> around. So there was code that looked for yyyy-MM-dd hh:mm:sszzzzzzzzz and
> yyyy-MM-dd hh:mm:sszzz. This second format would never be encountered
> because zzz (i.e. -08) would be converted into the former (also note
> that the SimpleDateFormat object treats zzzzzzzzz and zzz the same, the
> number of z's does not matter).
>
>
> There was another problem/fix mentioned on the email lists today by
> mcannon@internet.com which is also fixed by this patch:
>
> Bug#3) Fractional seconds lost when getting timestamp from the DB
> A patch by Jan Thomea handled the case of yyyy-MM-dd hh:mm:sszzzzzzzzz
> but not the fractional seconds version yyyy-MM-dd hh:mm:ss.SSzzzzzzzzz.
> The code is fixed to handle this case as well.
Barry Lind
privkey.pem cert.pem.pw" if you just use "privkey.pem" in the following
openssl command (e.g. "openssl rsa -in privkey.pem -out cert.pem").
But there is nothing wrong with it as it is now, as far as I can see.
//Magnus
to the use of getpwuid when running in standalone mode.
this patch allocates some persistent storage (using
strdup) to store the username obtained with getpwuid
in src/backend/main/main.c. this is necessary because
later on, getpwuid is called again (in ValidateBinary).
the man pages for getpwuid on SCO OpenServer, FreeBSD,
and Darwin all have words to this effect (this is from
the SCO OpenServer man page):
Note
====
All information is contained in a static area, so it must
be copied if it is to be saved. Otherwise, it may be
overwritten on subsequent calls to these routines.
in particular, on my platform, the storage used to hold
the pw_name from the first call is overwritten such that
it looks like an empty username. this causes a problem
later on in SetSessionUserIdFromUserName.
i'd assume this isn't a problem on most platforms because
getpwuid is called with the same UID both times, and the
same thing ends up happening to that static storage each
time. however, that's not guaranteed, and is _not_ what
happens on my platform (at least :).
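a minimal sketch of the idea, not the actual main.c change (the surrounding
context and exact placement are assumed):

    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        struct passwd *pw = getpwuid(geteuid());
        char *username = NULL;

        if (pw)
            username = strdup(pw->pw_name);   /* copy out of the static area */

        /* later getpwuid() calls may clobber pw->pw_name,
           but the strdup'd copy stays valid */
        if (username)
            printf("user: %s\n", username);
        return 0;
    }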
this is for the version of 7.1 available via anon cvs as
of Tue Jan 23 15:14:00 2001 PST:
.../src/backend/main/main.c,v 1.37 2000/12/31 18:04:35 tgl Exp
-michael thornburgh, zenomt@armory.com
timing, I know :)) At the moment the digest() function returns a
hexadecimal-coded hash, but I want it to return pure binary. I
have also included functions encode() and decode() which support
'base64' and 'hex' encodings, so if anyone needs digest() in hex
he can do encode(digest(...), 'hex').
The main reason for it is "to do one thing and do it well" :)
Another reason is that if someone needs to do a really large amount of
digesting, in the end he wants to store the binary, not the hexadecimal,
result. It is really silly to convert it to hex and then back to binary
again. As I said, if someone needs hex he can get it.
Well, the real reason is that I am doing encrypt()/decrypt()
functions, and _they_ return binary. For testing I like to see
it in hex occasionally, but it is really wrong to let them
return hex. Only now did it catch my eye that the hex-coding in
digest() is wrong. When doing digest() I thought about the 'common
case', but hacking with psql is probably _not_ the common case :)
Marko Kreen
and psql) again. Changes are:
1) psql requires the includes of "io.h" and "fcntl.h" in command.c in order
to make a call to open() work (io.h for _open(), fcntl.h for the O_xxx
constants).
2) PG_VERSION is no longer defined in version.h[.in], but in configure.in.
Since we don't do configure on native win32, we need to put it in
config.h.win32 :-(
3) Added define of SYSCONFDIR to config.h.win32 - libpq won't compile
without it. This functionality is *NOT* tested - it's just defined as "" for
now. May work, may not.
4) DEF_PGPORT renamed to DEF_PGPORT_STR
I have done the "basic tests" on it - it connects to a database, and I can
run queries. Haven't tested any of the fancier functions (yet).
However, I stepped on a much bigger problem when fixing psql to work. It no
longer works when linked against the .DLL version of libpq (which the
Makefile does for it). I have left it linked against this version anyway,
pending the comments I get on this mail :-)
The problem is that there are strings being allocated from libpq.dll using
PQExpBuffers (for example, initPQExpBuffer() on line 92 of input.c). These
are being allocated using the malloc function used by libpq.dll. This
function *may* be different from the malloc function used by psql.exe - only
the resulting pointer must be valid. And with the default linking methods,
it *WILL* be different. Later, psql.exe tries to free() this string, at
which point it crashes because the free() function can't find the allocated
block (it's on the allocated blocks list used by the runtime lib of
libpq.dll).
Shouldn't the right thing to do be to have psql call termPQExpBuffer() on
the data instead? As it is now, gets_fromFile() will just return the pointer
received from the PQExpBuffer.data (this may well be present at several
places - this is the one I was bitten by so far). Isn't that kind of
"accessing the internals of the PQExpBuffer structure" wrong? Instead,
perhaps it shuold make a copy of the string, adn then termPQExpBuffer() it?
In that case, the string will have been allocated from within the same
library as the free() is called.
I can get it to work just fine by making that change around line 100 of
input.c, and the same a bit further down in the same function.
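A rough sketch of what that kind of change could look like (the function
shape is only loosely modelled on psql's gets_fromFile(); the names and
details here are assumed, not taken from input.c):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "pqexpbuffer.h"

    /*
     * Sketch only: return a copy made with the caller's own malloc, and
     * let termPQExpBuffer() release the storage libpq allocated, so that
     * psql.exe never free()s memory that was malloc()ed inside libpq.dll.
     */
    static char *
    get_line_copy(FILE *source)
    {
        PQExpBufferData buf;
        char            line[1024];
        char           *result;

        initPQExpBuffer(&buf);
        if (fgets(line, sizeof(line), source) == NULL)
        {
            termPQExpBuffer(&buf);
            return NULL;
        }
        appendPQExpBufferStr(&buf, line);

        result = strdup(buf.data);      /* copy with our allocator */
        termPQExpBuffer(&buf);          /* free libpq's storage via libpq */
        return result;
    }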
But, as I said above, this may be the case in more places in the code?
Perhaps someone more familiar with it could comment on that?
What do you think should be done about this? Personally, I go by the "If you
allocate a piece of memory using an interface, use the same interface to
free it", but the question is how to make it work :-)
Also, AFAIK this only affects psql.exe, so the changes made to libpq in
this patch are required no matter how the other issue is handled.
Regards,
Magnus
than forcing 'plain'. This probably does not matter right now, but I
think it needs to be consistent with the regular (not-functional) index
case, where attstorage is copied from the underlying table. Clean up
some other dead and infelicitous code too.
Op, so that the sequence 'a_expr Op Op a_expr' will be parsed as
a_expr Op (Op a_expr) not (a_expr Op) Op a_expr as formerly. In other
words, prefer treating user-defined operators as prefix operators to
treating them as postfix operators, when there is an ambiguity.
Also clean up a couple of other infelicities in production priority
assignment --- for example, BETWEEN wasn't being given the intended
priority, but that of AND.
The query used for checking foreign key triggers
returns too many results when there is more than one foreign
key in a table. It happens because only the table's oid is used to
link the pg_trigger row with the INSERT check to the pg_trigger
rows with the UPDATE/DELETE checks.
I think it should be enough to add the following conditions
to the WHERE clause of that query:
AND pt.tgconstrname = pg_trigger.tgconstrname
AND pt.tgconstrname = pg_trigger_1.tgconstrname
/Constantin
bothering to check the return value --- which meant that in case the
update or delete failed because of a concurrent update, you'd not find
out about it, except by observing later that the transaction produced
the wrong outcome. There are now subroutines simple_heap_update and
simple_heap_delete that should be used anyplace that you're not prepared
to do the full nine yards of coping with concurrent updates. In
practice, that seems to mean absolutely everywhere but the executor,
because *noplace* else was checking.
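A minimal sketch of the simplified calling pattern (backend-internal code
that only compiles inside the PostgreSQL source tree; the surrounding
function here is assumed, not part of the change itself):

    #include "postgres.h"
    #include "access/heapam.h"

    /*
     * simple_heap_update() raises an error itself if the row was
     * concurrently updated, so the caller has no return code to
     * check (or forget to check).
     */
    static void
    update_catalog_tuple(Relation rel, HeapTuple newtup)
    {
        simple_heap_update(rel, &newtup->t_self, newtup);
    }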
attributes in a FieldSelect node --- all the places that manipulate
these work just fine with system attribute numbers. OK, it's a new
feature, so shoot me ...