openldap/servers/slapd/back-sql/rdbms_depend/timesten/dnreverse/dnreverse.cpp


A big bunch of improvements, contributed by Sam Drake and Raj Damani. A summary of the changes is cited below. The patch still needs some cosmetic changes to be made, but is ready for testing.

-----Original Message-----
From: Sam Drake [mailto:drake@timesten.com]
Sent: Saturday, April 07, 2001 10:40 PM
To: 'mitya@seismic.ru'
Cc: openldap-devel@OpenLDAP.org
Subject: RE: Slapd frontend performance issues

FYI, here is a short description of the changes I made. I'll package up the changes asap, but it may take a couple of days. The performance numbers quoted in this report were seen at my location with a 100,000 object database ... the slower numbers I mentioned earlier were reported by a customer with a 1,000,000 object database. I also can't explain the very poor performance I saw with OpenLDAP and LDBM with a 100,000 object database.

...Sam Drake / TimesTen Performance Software

----------

Work Performed

OpenLDAP 2.0.9, including back-sql, was built successfully on Solaris 8 using gcc. The LDAP server itself, slapd, passed all tests bundled with OpenLDAP. OpenLDAP was built using Sleepycat LDBM release 3.1.17 as the "native" storage manager.

The experimental back-sql facility in slapd was also built successfully. It was built using Oracle release 8.1.7 and the Oracle ODBC driver and ODBC Driver Manager from Merant. Rudimentary testing was performed with the data and examples provided with back-sql, and back-sql was found to be functional.

Slapd and back-sql were then tested with TimesTen, using TimesTen 4.1.1. Back-sql was not immediately functional with TimesTen due to a number of SQL limitations in the TimesTen product. Functional issues encountered were:

1. Back-sql issued SELECT statements including the construct "UPPER(?)". While TimesTen supports UPPER, it does not support the use of parameters as input to builtin functions. Back-sql was modified to convert the parameter to upper case prior to giving it to the underlying database ... a change that is appropriate for all databases.

2. Back-sql issued SELECT statements using the SQL CONCAT function. TimesTen does not support this function. Back-sql was modified to concatenate the necessary strings itself (in "C" code) prior to passing the parameters to SQL. This change is also appropriate for all databases, not just TimesTen.

Once these two issues were resolved, back-sql could successfully process LDAP searches using the sample data and examples provided with back-sql. While performance was not measured at this point, numerous serious performance problems were observed with the back-sql code and the generated SQL. In particular:

1. In the process of implementing an LDAP search, back-sql will generate and execute a SQL query for all object classes stored in back-sql. During the course of generating each SQL query, it is common for back-sql to determine that a particular object class cannot possibly have any members satisfying the search. For example, this can occur if the query searches an attribute of the LDAP object that does not exist in the SQL schema. In this case, back-sql would generate and issue the SQL query anyway, including a clause such as "WHERE 1=0" in the generated SELECT. The overhead of parsing, optimizing and executing the query is non-trivial, and the answer (the empty set) is known in advance.

Solution: Back-sql was modified to stop executing a SQL query when it can be predetermined that the query will return no rows.

2. Searches in LDAP are fundamentally case-insensitive ("abc" is equal to "aBc"). However, in SQL this is not normally the case. Back-sql thus generated SQL SELECT statements including clauses of the form "WHERE UPPER(attribute) = 'JOE'". Even if an index is defined on the attribute in the relational database, the index cannot be used to satisfy the query, as the index is case sensitive. The relational database is then forced to scan all rows in the table in order to satisfy the query ... an expensive and non-scalable proposition.

Solution: Back-sql was modified to allow the schema designer to add additional "upper cased" columns to the SQL schema. These columns, if present, contain an upper cased version of the "standard" field, and will be used preferentially for searching. Such columns can be provided for all searchable columns, some columns, or no columns. An application using database "triggers" or similar mechanisms can automatically maintain these upper cased columns when the standard column is changed.

3. In order to implement the hierarchical nature of LDAP object hierarchies, OpenLDAP uses suffix searches in SQL. For example, to find all objects in the subtree "o=TimesTen,c=us", a SQL SELECT statement of the form "WHERE UPPER(dn) LIKE '%O=TIMESTEN,C=US'" would be employed. Aside from the UPPER issue discussed above, a second performance problem in this query is the use of suffix search. In TimesTen (and most relational databases), indexes can be used to optimize exact-match searches and prefix searches. However, suffix searches must be performed by scanning every row in the table ... an expensive and non-scalable proposition.

Solution: Back-sql was modified to optionally add a new "dn_ru" column to the ldap_entries table. This additional column, if present, contains a byte-reversed and upper cased version of the DN. This allows back-sql to generate indexable prefix searches. This column is also easily maintained automatically through the use of triggers.

Results

A simple database schema was generated holding the LDAP objects and attributes specified by our customer. An application was written to generate test databases. Both TimesTen and Oracle 8.1.7 were populated with 100,000 entry databases.

Load Times

Using "slapadd" followed by "slapindex", loading and indexing 100,000 entries in an LDBM database ran for 19 minutes 10 seconds.

Using a C++ application that used ODBC, loading 100,000 entries into a disk based RDBMS took 17 minutes 53 seconds.

Using a C++ application that used ODBC, loading 100,000 entries into TimesTen took 1 minute 40 seconds.

Search Times

The command "timex timesearch.sh '(cn=fname210100*)'" was used to test search times. This command issues the same LDAP search 4000 times over a single LDAP connection. Both the client and server (slapd) were run on the same machine.

With TimesTen as the database, 4000 queries took 14.93 seconds, for a rate of 267.9 per second.

With a disk based RDBMS as the database, 4000 queries took 77.79 seconds, for a rate of 51.42 per second.

With LDBM as the database, 1 query takes 76 seconds, or 0.076 per second. Something is clearly broken.
2001-08-03 01:28:59 +08:00
// (c) Copyright 1999-2001 TimesTen Performance Software. All rights reserved.
#include <stdlib.h>
#include <string.h>		/* strlen, strcpy */
#include <ctype.h>		/* toupper */
#include <signal.h>
#ifdef _WIN32
#include <windows.h>		/* Sleep */
#else
#include <sys/time.h>		/* select, struct timeval */
#endif
#include <TTConnectionPool.h>
#include <TTConnection.h>
#include <TTCmd.h>
#include <TTXla.h>
TTConnectionPool pool;
TTXlaConnection conn;
TTConnection conn2;
TTCmd assignDn_ru;
TTCmd getNullDNs;
//----------------------------------------------------------------------
// This class contains all the logic to be run whenever the
// LDAP_ENTRIES table is changed. Whenever a row is inserted, or a
// row's DN is updated, the DN_RU column is recomputed as a
// byte-reversed, upper cased copy of the DN.
//----------------------------------------------------------------------
class LDAPEntriesHandler: public TTXlaTableHandler {
private:
// Definition of the columns in the table
int Id;
int Dn;
int Oc_map_id;
int Parent;
int Keyval;
int Dn_ru;
protected:
public:
LDAPEntriesHandler(TTXlaConnection& conn, const char* ownerP, const char* nameP);
~LDAPEntriesHandler();
virtual void HandleDelete(ttXlaUpdateDesc_t*);
virtual void HandleInsert(ttXlaUpdateDesc_t*);
virtual void HandleUpdate(ttXlaUpdateDesc_t*);
static void ReverseAndUpper(char* dnP, int id, bool commit=true);
};
LDAPEntriesHandler::LDAPEntriesHandler(TTXlaConnection& conn,
const char* ownerP, const char* nameP) :
TTXlaTableHandler(conn, ownerP, nameP)
{
Id = Dn = Oc_map_id = Parent = Keyval = Dn_ru = -1;
// We are looking for several particular named columns. We need to get
// the ordinal position of the columns by name for later use.
Id = tbl.getColNumber("ID");
if (Id < 0) {
cerr << "target table has no 'ID' column" << endl;
exit(1);
}
Dn = tbl.getColNumber("DN");
if (Dn < 0) {
cerr << "target table has no 'DN' column" << endl;
exit(1);
}
Oc_map_id = tbl.getColNumber("OC_MAP_ID");
if (Oc_map_id < 0) {
cerr << "target table has no 'OC_MAP_ID' column" << endl;
exit(1);
}
Parent = tbl.getColNumber("PARENT");
if (Parent < 0) {
cerr << "target table has no 'PARENT' column" << endl;
exit(1);
}
Keyval = tbl.getColNumber("KEYVAL");
if (Keyval < 0) {
cerr << "target table has no 'KEYVAL' column" << endl;
exit(1);
}
Dn_ru = tbl.getColNumber("DN_RU");
if (Dn_ru < 0) {
cerr << "target table has no 'DN_RU' column" << endl;
exit(1);
}
}
LDAPEntriesHandler::~LDAPEntriesHandler()
{
}
void LDAPEntriesHandler::ReverseAndUpper(char* dnP, int id, bool commit)
{
TTStatus stat;
char dn_rn[512];
int i;
int j;
// Reverse and upper case the given DN (guarding against overflow
// of the fixed-size buffer)
if (strlen(dnP) >= sizeof(dn_rn)) {
cerr << "DN for id " << id << " is too long, skipping" << endl;
return;
}
for (j = 0, i = (int) strlen(dnP) - 1; i > -1; j++, i--) {
dn_rn[j] = toupper(*(dnP + i));
}
dn_rn[j] = '\0';
// Update the database
try {
assignDn_ru.setParam(1, (char*) &dn_rn[0]);
assignDn_ru.setParam(2, id);
assignDn_ru.Execute(stat);
}
catch (TTStatus stat) {
cerr << "Error updating id " << id << " ('" << dnP << "' to '"
<< dn_rn << "'): " << stat;
exit(1);
}
// Commit the transaction
if (commit) {
try {
conn2.Commit(stat);
}
catch (TTStatus stat) {
cerr << "Error committing update: " << stat;
exit(1);
}
}
}
void LDAPEntriesHandler::HandleInsert(ttXlaUpdateDesc_t* p)
{
char* dnP;
int id;
row.Get(Dn, &dnP);
cerr << "DN '" << dnP << "': Inserted ";
row.Get(Id, &id);
ReverseAndUpper(dnP, id);
}
void LDAPEntriesHandler::HandleUpdate(ttXlaUpdateDesc_t* p)
{
char* newDnP;
char* oldDnP;
char oDn[512];
int id;
// row is 'old'; row2 is 'new'
row.Get(Dn, &oldDnP);
strcpy(oDn, oldDnP);
row.Get(Id, &id);
row2.Get(Dn, &newDnP);
cerr << "old DN '" << oDn << "' / new DN '" << newDnP << "' : Updated ";
if (strcmp(oDn, newDnP) != 0) {
// The DN field changed, update it
cerr << "(new DN: '" << newDnP << "')";
ReverseAndUpper(newDnP, id);
}
else {
// The DN field did NOT change, leave it alone
}
cerr << endl;
}
void LDAPEntriesHandler::HandleDelete(ttXlaUpdateDesc_t* p)
{
char* dnP;
row.Get(Dn, &dnP);
cerr << "DN '" << dnP << "': Deleted ";
}
//----------------------------------------------------------------------
int pleaseStop = 0;
extern "C" {
void
onintr(int sig)
{
pleaseStop = 1;
cerr << "Stopping...\n";
}
};
//----------------------------------------------------------------------
int
main(int argc, char* argv[])
{
char* ownerP;
TTXlaTableList list(&conn); // List of tables to monitor
// Handlers, one for each table we want to monitor
LDAPEntriesHandler* sampP = NULL;
// Misc stuff
TTStatus stat;
ttXlaUpdateDesc_t ** arry;
int records;
SQLUBIGINT oldsize;
int j;
if (argc < 2) {
cerr << "syntax: " << argv[0] << " <username>" << endl;
exit(3);
}
ownerP = argv[1];
signal(SIGINT, onintr); /* signal for CTRL-C */
#ifdef _WIN32
signal(SIGBREAK, onintr); /* signal for CTRL-BREAK */
#endif
// Before we do anything related to XLA, first we connect
// to the database. This is the connection we will use
// to perform non-XLA operations on the tables.
try {
cerr << "Connecting..." << endl;
conn2.Connect("DSN=ldap_tt", stat);
}
catch (TTStatus stat) {
cerr << "Error connecting to TimesTen: " << stat;
exit(1);
}
try {
assignDn_ru.Prepare(&conn2,
"update ldap_entries set dn_ru=? where id=?",
"", stat);
getNullDNs.Prepare(&conn2,
"select dn, id from ldap_entries "
"where dn_ru is null "
"for update",
"", stat);
conn2.Commit(stat);
}
catch (TTStatus stat) {
cerr << "Error preparing update: " << stat;
exit(1);
}
// If there are any entries with a NULL reversed/upper cased DN,
// fix them now.
try {
cerr << "Fixing NULL reversed DNs" << endl;
getNullDNs.Execute(stat);
for (int k = 0;; k++) {
getNullDNs.FetchNext(stat);
if (stat.rc == SQL_NO_DATA_FOUND) break;
char* dnP;
int id;
getNullDNs.getColumn(1, &dnP);
getNullDNs.getColumn(2, &id);
// cerr << "Id " << id << ", Dn '" << dnP << "'" << endl;
LDAPEntriesHandler::ReverseAndUpper(dnP, id, false);
if (k % 1000 == 0)
cerr << ".";
}
getNullDNs.Close(stat);
conn2.Commit(stat);
}
catch (TTStatus stat) {
cerr << "Error updating NULL rows: " << stat;
exit(1);
}
// Go ahead and start up the change monitoring application
cerr << "Starting change monitoring..." << endl;
try {
conn.Connect("DSN=ldap_tt", stat);
}
catch (TTStatus stat) {
cerr << "Error connecting to TimesTen: " << stat;
exit(1);
}
/* set and configure size of buffer */
conn.setXlaBufferSize((SQLUBIGINT) 1000000, &oldsize, stat);
if (stat.rc) {
cerr << "Error setting buffer size " << stat << endl;
exit(1);
}
// Make a handler to process changes to the ldap_entries table and
// add the handler to the list of all handlers
sampP = new LDAPEntriesHandler(conn, ownerP, "ldap_entries");
if (!sampP) {
cerr << "Could not create LDAPEntriesHandler" << endl;
exit(3);
}
list.add(sampP);
// Enable transaction logging for the table we're interested in
sampP->EnableTracking(stat);
// Get updates. Dispatch them to the appropriate handler.
// This loop will handle updates to all the tables.
while (pleaseStop == 0) {
conn.fetchUpdates(&arry, 1000, &records, stat);
if (stat.rc) {
cerr << "Error fetching updates" << stat << endl;
exit(1);
}
// Interpret the updates
for(j=0;j < records;j++){
ttXlaUpdateDesc_t *p;
p = arry[j];
list.HandleChange(p, stat);
} // end for each record fetched
if (records) {
cerr << "Processed " << records << " records\n";
}
if (records == 0) {
#ifdef _WIN32
Sleep(250);
#else
struct timeval t;
t.tv_sec = 0;
t.tv_usec = 250000; // .25 seconds
select(0, NULL, NULL, NULL, &t);
#endif
}
} // end while pleasestop == 0
// When we get to here, the program is exiting.
list.del(sampP); // Take the table out of the list
delete sampP;
conn.setXlaBufferSize(oldsize, NULL, stat);
return 0;
}