* The original code performs n normalizations
* and n * ( n - 1 ) / 2 matches, which hide as many
* normalizations. The new code performs the same
* number of normalizations ( n ) plus
* n * ( n - 1 ) / 2 mem compares, each far less
* expensive than an entire match, assuming a match
* is equivalent to a normalization and a mem compare ...
*
* This is far more memory expensive than the previous
* code, but it can heavily improve performance when
* big chunks of data are added (a typical example is
* a group with thousands of DN-syntax members); on my
* system, for members made of 5-RDN DNs:
*   members    orig          bvmatch (dirty)   new
*   1000       0m38.456s     0m0.553s          0m0.608s
*   2000       2m33.341s     0m0.851s          0m1.003s
* Moreover, 100 groups with 10000 members each were
* added in 37m27.933s (an analogous LDIF file was
* loaded into Active Directory in 38m28.682s, BTW).
*
* Maybe we could switch to the new algorithm only when
* the number of values exceeds a given threshold?
*/
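
As a concrete illustration of the comment above, here is a minimal,
self-contained sketch of the normalize-once-then-memcmp duplicate
check; normalize() and the local berval definition are stand-ins for
the real slapd routines and lber.h types, not the actual code:

    #include <stdlib.h>
    #include <string.h>

    struct berval {                 /* stand-in for the lber.h type */
        unsigned long   bv_len;
        char            *bv_val;
    };

    /* hypothetical: returns a freshly allocated normalized copy of v */
    extern struct berval *normalize( struct berval *v );

    /*
     * Returns 1 if vals[0..n-1] contains a duplicate value, 0 if not,
     * -1 on allocation failure. Costs n normalizations up front plus
     * n * ( n - 1 ) / 2 memcmps; the old approach ran a full match,
     * which re-normalizes, for each of those pairs.
     */
    static int
    has_duplicates( struct berval **vals, int n )
    {
        struct berval   **norm;
        int             i, j, dup = 0;

        norm = malloc( n * sizeof( *norm ) );
        if ( norm == NULL ) return -1;

        for ( i = 0; i < n; i++ )
            norm[ i ] = normalize( vals[ i ] );   /* n normalizations */

        for ( i = 0; i < n - 1 && !dup; i++ )
            for ( j = i + 1; j < n; j++ )
                if ( norm[ i ]->bv_len == norm[ j ]->bv_len
                    && memcmp( norm[ i ]->bv_val, norm[ j ]->bv_val,
                        norm[ i ]->bv_len ) == 0 )
                {
                    dup = 1;
                    break;
                }

        for ( i = 0; i < n; i++ ) {         /* the extra memory cost:   */
            free( norm[ i ]->bv_val );      /* n normalized copies kept */
            free( norm[ i ] );              /* alive for the whole scan */
        }
        free( norm );

        return dup;
    }

The threshold idea from the last paragraph would amount to keeping
both paths and picking between them on n: above the threshold, run
the precomputed-normalization scan; below it, match pairwise as before.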
Changed AttributeDescription.{ad_cname,ad_lang} to struct berval everywhere
Deleted ad_free() everywhere
Added ad_mutex to init.c
The AttributeDescriptions are in a linked list hanging off of the
corresponding AttributeType.
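
A rough picture of the resulting layout (a sketch; names beyond
ad_cname and ad_lang are from memory of slap.h and may differ):

    typedef unsigned long ber_len_t;        /* as in lber.h */

    struct berval {                         /* as in lber.h */
        ber_len_t   bv_len;
        char        *bv_val;
    };

    typedef struct slap_attr_desc {
        struct slap_attr_desc   *ad_next;   /* list hanging off the type */
        struct slap_attr_type   *ad_type;   /* the AttributeType itself  */
        struct berval           ad_cname;   /* was char *, now a berval  */
        struct berval           ad_lang;    /* was char *, now a berval  */
    } AttributeDescription;

    /*
     * Each AttributeType keeps the head of the ad_next list of the
     * descriptions built on it; a global ad_mutex (initialized in
     * init.c) serializes insertions. Entries live for the life of
     * the server, which is why ad_free() could be deleted.
     */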
by slapd/tools/*; slap_mods_free is needed by ldbm_back_modrdn after
fixing ITS#1184 (at present, -DMULTIATTRVAL_RDN must be defined when
compiling back-ldbm/modrdn.c to enable the new code).
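
Schematically, the compile-time gate in back-ldbm/modrdn.c looks like
the fragment below; the two helper calls are hypothetical placeholders,
not the real functions:

        /* build with: cc -DMULTIATTRVAL_RDN ... back-ldbm/modrdn.c */
    #ifdef MULTIATTRVAL_RDN
        /* new code: split the new RDN into all of its AVAs, apply
         * each as a modification, then release them */
        rc = apply_rdn_mods( be, e, new_rdn, &mods );   /* hypothetical */
        slap_mods_free( mods );
    #else
        /* old code: a single-AVA RDN is assumed */
        rc = apply_single_rdn_mod( be, e, new_rdn );    /* hypothetical */
    #endif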