ballocator_doc.txt: New file.

2004-03-11  Dhruv Matani  <dhruvbird@HotPOP.com>

	* docs/html/ext/ballocator_doc.txt: New file.
	* include/Makefile.am (ext_headers): Add
	${ext_srcdir}/bitmap_allocator.h .
	* include/Makefile.in: Regenerate (by hand, since I didn't have
	automake de jure on hand).
	* include/ext/bitmap_allocator.h: New file.
	* testsuite/performance/20_util/allocator/list_sort_search.cc: New test.
	* testsuite/performance/20_util/allocator/map_mt_find.cc: Likewise.
	* testsuite/performance/20_util/allocator/producer_consumer.cc: Add
	test for the bitmap_allocator<>.
	* testsuite/performance/20_util/allocator/insert.cc: Likewise.
	* testsuite/performance/20_util/allocator/insert_insert.cc: Likewise.
	* testsuite/performance/20_util/allocator/map_thread.cc: Likewise.

From-SVN: r79366
This commit is contained in:
Dhruv Matani 2004-03-12 03:28:12 +00:00 committed by Loren J. Rittle
parent a8dad789a5
commit 009368dba6
11 changed files with 1593 additions and 33 deletions

@@ -0,0 +1,374 @@
BITMAPPED ALLOCATOR
===================
2004-03-11 Dhruv Matani <dhruvbird@HotPOP.com>
---------------------------------------------------------------------
As the name suggests, this allocator uses a bit-map for its
book-keeping: a single bit keeps track of whether each block of memory
has been allocated or not. A 1 bit indicates free, while 0 indicates
allocated. This convention was chosen so that a whole collection of
bits can easily be checked for a free block at once. This kind of
bitmapped strategy works best for single object allocations, and with
the STL's type-parameterized allocators we do not need to choose any
size for the block represented by a single bit: it is simply the size
of the type around which the allocator has been parameterized. Thus,
close to optimal performance results. Hence, this allocator should be
used for node based containers, which call the allocate function with
an argument of 1.
The bitmapped allocator's internal pool grows exponentially: the
blocks acquired from the Free List Store double in size every time the
bitmapped allocator runs out of memory.
--------------------------------------------------------------------
The macro __GTHREADS decides whether to use mutex protection around
every allocation/deallocation. Its state is picked up automatically
from the gthr abstraction layer.
----------------------------------------------------------------------
What is the Free List Store?
----------------------------
The Free List Store (referred to as the FLS for the remainder of this
document) is the global memory pool shared by all instances of the
bitmapped allocator, whatever type they are instantiated for. It
maintains a sorted list of all free memory blocks given back to it by
the bitmapped allocator, and is also responsible for handing out
memory to the bitmapped allocator when it asks for more.
Internally, there is a free list threshold indicating the maximum
number of free lists that the FLS can hold (cache). Currently, this
value is set to 64. So, if more than 64 free lists come in, some of
them are given back to the OS using operator delete, and at any given
time the free list's size does not exceed 64 entries. This cap exists
because a binary search is used to locate an entry in the free list
when a request for memory comes along; the run-time cost of that
search grows with the size of the list, but for 64 entries lg(64) == 6
comparisons are enough to locate the correct free list if it exists.
When the free list has reached its threshold, the larger of the new
block and the largest block already in the list is given back to the
OS. This is done because it reduces external fragmentation and allows
the OS to reuse the larger blocks later in an orderly fashion,
possibly merging them. Also, on some systems, large blocks are
obtained via calls to mmap, so giving them back to free system
resources becomes most important.
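The cap-and-insert behaviour described above can be sketched as
follows. This is only an illustration: bare block sizes stand in for
the real free-list entries, and insert_free_block is a hypothetical
helper, not the header's code.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

const std::size_t kMaxFreeLists = 64;   // the FLS threshold

// Keep the cached free blocks sorted so that a binary search
// (std::lower_bound) can locate a candidate block later.  At the cap,
// the larger of (new block, largest cached block) is "given back to
// the OS" -- here, simply not stored.
void insert_free_block(std::vector<std::size_t>& fls, std::size_t block_size)
{
    if (fls.size() >= kMaxFreeLists)
    {
        if (block_size >= fls.back())
            return;                      // drop the new block
        fls.pop_back();                  // drop the largest cached block
    }
    fls.insert(std::lower_bound(fls.begin(), fls.end(), block_size),
               block_size);
}
```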
The function _S_should_i_give implements the policy that determines
whether a given block of memory should be handed to the allocator for
the request that it has made. This matters because we may not always
have an exact fit for the memory size that the allocator requests. We
accept inexact fits mainly to prevent external fragmentation, at the
cost of a little internal fragmentation, and the amount of internal
fragmentation tolerated is decided by this function. I can see 3
possibilities right now. Please add more as and when you find better
strategies.
1. Equal size check. Return true only when the 2 blocks are of equal
size.
2. Difference threshold: return true only when the _block_size is
greater than or equal to the _required_size and exceeds it by a
difference of less than some THRESHOLD value; otherwise return
false.
3. Percentage threshold: return true only when the _block_size is
greater than or equal to the _required_size and exceeds it by a
percentage of less than some THRESHOLD value; otherwise return
false.
Currently, (3) is used, with a value of 36% maximum wastage per
super block.
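A minimal sketch of strategy (3), mirroring the check that
_S_should_i_give performs in the header (the standalone function name
here is a stand-in):

```cpp
#include <cassert>

// Policy (3): accept a cached block only if it is large enough and
// wastes less than a fixed percentage of its own size.
bool should_give(unsigned int block_size, unsigned int required_size)
{
    const unsigned int max_wastage_percentage = 36;
    return block_size >= required_size
        && (block_size - required_size) * 100 / block_size
               < max_wastage_percentage;
}
```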
--------------------------------------------------------------------
1) What is a super block? Why is it needed?
A super block is the block of memory acquired from the FLS from
which the bitmap allocator carves out memory for single objects and
satisfies the user's requests. These super blocks come in sizes that
are powers of 2 and multiples of 32 (_Bits_Per_Block). Yes, both at
the same time! That's because the next super block acquired will be
2 times the previous one, and all super blocks also have to be
multiples of the _Bits_Per_Block value.
2) How does it interact with the free list store?
The super block is contained in the FLS, and the FLS is responsible
for getting / returning super blocks to and from the OS using
operator new as defined by the C++ standard.
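Under those two constraints, the n-th super block acquired (counting
from zero) holds 32 * 2^n objects. A small illustrative helper, not
part of the header:

```cpp
#include <cassert>

// Number of objects in the n-th super block: 32 * 2^n, which is
// simultaneously a power of 2 times 32 (so a multiple of
// _Bits_Per_Block) and double the previous acquisition.  The data
// area is this count times sizeof(value_type) bytes.
unsigned long super_block_objects(unsigned int n)
{
    return 32UL << n;
}
```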
---------------------------------------------------------------------
How does the allocate function work?
------------------------------------
The allocate function is specialized for single object allocation
ONLY. Thus, ONLY if n == 1 will the bitmap_allocator's specialized
algorithm be used; otherwise, the request is satisfied directly by
calling operator new.
Suppose n == 1. Then the allocator does the following:
1. Checks to see whether a free block exists somewhere in a region
of memory close to the last satisfied request. If so, then that
block is marked as allocated in the bit-map and given to the
user. If not, then (2) is executed.
2. Is there a free block anywhere after the current block, right up to
the end of the memory that we have? If so, that block is found,
the same procedure is applied as above, and the block is returned
to the user. If not, then (3) is executed.
3. Is any block free anywhere in the memory that we own?
This is checked via (a) the use count for each super block,
and if that fails, (b) the individual bit-maps for each super
block. Note: here we never touch any of the memory that the
user will be given, and we confine all memory accesses to a
small region of memory! This helps reduce cache misses. If this
succeeds, we apply the same procedure on that bit-map as in (1)
and return that block of memory to the user. However, if this
process fails, then we resort to (4).
4. This process involves refilling the internal exponentially growing
memory pool. The said effect is achieved by calling _S_refill_pool,
which does the following:
(a). Gets more memory from the Global Free List of the
required size.
(b). Adjusts the size for the next call to itself.
(c). Writes the appropriate headers in the bit-maps.
(d). Sets the use count for that super-block just allocated
to 0 (zero).
(e). All of the above amounts to maintaining the basic
invariant for the allocator. If the invariant is
maintained, we are sure that all is well.
Now, the same process is applied on the newly acquired free blocks,
which are dispatched accordingly.
Thus, you can clearly see that the allocate function is nothing but a
combination of the next-fit and first-fit algorithms, optimized ONLY
for single object allocations.
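Steps (1)-(3) can be condensed into a sketch of the next-fit /
first-fit scan over the bitmap words (1 == free, 0 == allocated),
ignoring the per-super-block bookkeeping. The names bitmap_allocate
and hint are hypothetical; this is not the header's allocate.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Scan resumes at the word that satisfied the previous request
// (next-fit), then wraps around to the beginning (first-fit).
// Returns the global block index, or -1 if the pool needs refilling.
long bitmap_allocate(std::vector<unsigned int>& bitmaps, std::size_t& hint)
{
    const std::size_t n = bitmaps.size();
    for (std::size_t k = 0; k < n; ++k)
    {
        std::size_t i = (hint + k) % n;
        if (bitmaps[i] != 0)            // tests 32 blocks at once
        {
            unsigned int bit = 0;
            while (!(bitmaps[i] & (1u << bit)))
                ++bit;                  // generic bsf, as in the header
            bitmaps[i] &= ~(1u << bit); // 0 marks the block allocated
            hint = i;
            return static_cast<long>(i) * 32 + bit;
        }
    }
    return -1;                          // pool exhausted: refill needed
}
```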
-------------------------------------------------------------------------
How does the deallocate function work?
--------------------------------------
The deallocate function again is specialized for single objects ONLY.
For all n > 1, operator delete is called without further ado, and the
deallocate function returns.
However, for n == 1 a series of steps is performed:
1. We first need to locate the super-block which holds the memory
location given to us by the user. For that purpose, we maintain a
static variable _S_last_dealloc_index, which holds the index into
the vector of block pairs of the last super-block from which memory
was freed. We use this strategy in the hope that the user will
deallocate memory in a region close to where he/she deallocated the
last time around. If the check for belongs_to succeeds, then we
determine the bit-map for the given pointer, locate the index into
that bit-map, and mark that bit as free by setting it.
2. If _S_last_dealloc_index does not point to the memory block
that we're looking for, then we do a linear search on the blocks
stored in the vector of block pairs. This vector in code is called
_S_mem_blocks. When the corresponding super-block is found, we
apply the same procedure as in (1) to mark the block as
free in the bit-map.
Now, whenever a block is freed, the use count of that particular super
block goes down by 1. When this use count hits 0, we remove that super
block from the list of all valid super blocks stored in the
vector. While doing this, we also make sure that the basic invariant
is maintained by making sure that _S_last_request and
_S_last_dealloc_index point to valid locations within the vector.
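Steps (1) and (2) can be sketched as follows. find_super_block, the
char-based block pairs, and the standalone belongs_to are illustrative
stand-ins, not the header's code.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

typedef std::pair<const char*, const char*> BlockPair;

// Inclusive range check, using the same comparison functors the
// header uses for pointer comparisons.
bool belongs_to(const BlockPair& bp, const char* p)
{
    return std::greater_equal<const char*>()(p, bp.first)
        && std::less_equal<const char*>()(p, bp.second);
}

// Try the cached index first, then fall back to a linear search of
// the (first, last) block pairs; refresh the hint on a hit.
long find_super_block(const std::vector<BlockPair>& blocks,
                      std::size_t& last_dealloc_index, const char* p)
{
    if (last_dealloc_index < blocks.size()
        && belongs_to(blocks[last_dealloc_index], p))
        return static_cast<long>(last_dealloc_index);   // hint hit
    for (std::size_t i = 0; i < blocks.size(); ++i)
        if (belongs_to(blocks[i], p))
        {
            last_dealloc_index = i;                     // refresh hint
            return static_cast<long>(i);
        }
    return -1;                                          // not ours
}
```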
--------------------------------------------------------------------
Data Layout for a Super Block:
==============================
Each Super Block will be of some size that is a multiple of the number
of Bits Per Block. Typically, this value is chosen as Bits_Per_Byte X
sizeof(unsigned int). On an X86 system, this gives the figure
8 X 4 = 32. Thus, each Super Block will be of size 32 X Some_Value.
This Some_Value is sizeof(value_type). For now, let it be called 'K'.
Thus, finally, Super Block size is 32 X K bytes.
This value of 32 has been chosen because each unsigned int has 32-bits
and Maximum use of these can be made with such a figure.
Consider a block of size 32 ints.
In memory, it would look like this:
---------------------------------------------------------------------
| 136 | 0 | 4294967295 | Data-> Space for 32-ints |
---------------------------------------------------------------------
The first column represents the size of the block in bytes as seen by
the Bitmap Allocator. Internally, a global free list is used to keep
track of the free blocks used and given back by the bitmap
allocator. It is this Free List Store that is responsible for writing
and managing this information. The actual number of bytes allocated
in this case would be: 4 + 4 + 4 + 32*4 = 140 bytes, but the first 4
bytes are an addition by the Free List Store, so the Bitmap Allocator
sees only 136 bytes. These first 4 bytes, of which the bitmapped
allocator is not aware, hold the value 136.
What do the remaining values represent?
---------------------------------------
The 2nd 4 in the expression is the sizeof(unsigned int), because the
Bitmapped Allocator maintains a use count for each Super Block, which
is initially set to 0 (as indicated in the diagram). This is
incremented every time a block is removed from this super block
(allocated), and decremented whenever it is given back. So, when the
use count falls to 0, the whole super block will be given back to the
Free List Store.
The value 4294967295 is the integer whose bit representation has all
bits set: 11111111111111111111111111111111, i.e. all 32 blocks free.
The 3rd 4 is the size of the bitmap itself, which covers 32 bits,
i.e. 4 bytes, or 1 X sizeof(unsigned int).
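The layout arithmetic above, written out for the 32-block case (the
helper name is illustrative):

```cpp
#include <cassert>

// Byte layout of the example super block, for K == sizeof(int) == 4:
// FLS size prefix + use count + bitmap word + 32 data slots.
unsigned int super_block_bytes(unsigned int k)
{
    const unsigned int bits_per_block = 32;
    const unsigned int num_bitmaps = 1;   // 32 blocks fit in one word
    return 4                              // FLS size prefix (holds 136)
         + 4                              // use count (the 0)
         + num_bitmaps * 4                // the bitmap (the 4294967295)
         + bits_per_block * k;            // data: 32 blocks of K bytes
}
```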
--------------------------------------------------------------------
Another issue is whether to keep all the bitmaps in a separate area
in memory, or to keep them near the actual blocks that will be
given out or allocated for the client. After some testing, I've
decided to keep these bitmaps close to the actual blocks. This will
help in 2 ways:
1. Constant time access to the bitmaps themselves, since no kind of
look-up is needed to find the correct bitmap list or its
equivalent.
2. This also preserves the cache as far as possible.
So in effect, this kind of an allocator might prove beneficial from a
purely cache point of view. But this allocator has been written to try
to iron out the defects of the node_allocator, wherein the nodes get
scattered about in memory if they are not returned in exactly the
reverse order, or in the same order, in which they were allocated. Also,
the new_allocator's book-keeping overhead is too high for small objects
and single object allocations, though it preserves the locality of
blocks very well when they are returned to the allocator.
-------------------------------------------------------------------
Expected overhead per block would be 1 bit in memory. Also, once
the address of the free list has been found, the cost for
allocation/deallocation would be negligible, and is supposed to be
constant time. For these very reasons, it is very important to
minimize the linear time costs, which include finding a free list
with a free block while allocating, and finding the corresponding
free list for a block while deallocating. Therefore, I have decided
that the growth of the internal pool for this allocator will be
exponential as compared to linear for node_allocator. There, linear
time works well, because we are mainly concerned with speed of
allocation/deallocation and memory consumption, whereas here, the
allocation/deallocation part does have some linear/logarithmic
complexity components in it. Thus, to try and minimize them would
be a good thing to do at the cost of a little bit of memory.
Another thing to be noted is that the pool size will double every time
the internal pool gets exhausted and all the free blocks have been
given away. The initial size of the pool is sizeof(unsigned int)*8,
which is the number of bits in an integer and fits exactly in a CPU
register. Hence the term exponential growth of the internal pool.
---------------------------------------------------------------------
After reading all this, you may still have a few questions about the
internal working of this allocator, like my friend had!
Well here are the exact questions that he posed:
1) The "Data Layout" section is cryptic. I have no idea of what you
are trying to say. Layout of what? The free-list? Each bitmap? The
Super Block?
The layout of a Super Block of a given size. In the example, a super
block of size 32 X 1 is taken. The general formula for calculating
the size of a super block is 32*sizeof(value_type)*2^n, where n
ranges from 0 to 32 for 32-bit systems.
2) And since I just mentioned the term `each bitmap', what in the
world is meant by it? What does each bitmap manage? How does it
relate to the super block? Is the Super Block a bitmap as well?
Good question! Each bitmap is part of a Super Block, which is made up
of 3 parts, as I have mentioned earlier. Re-iterating: 1. the use
count, 2. the bit-map for that Super Block, 3. the actual memory
that will eventually be given to the user. Each bitmap is a multiple
of 32 in size. If there are 32*(2^3) blocks of single objects to be
given out, there will be 32*(2^3) bits present, with each 32 bits
managing the allocated / free status for 32 blocks. Since each
unsigned int contains 32 bits, one unsigned int can manage up to 32
blocks' status. Each bit-map is thus made up of a number of unsigned
ints, whose exact number for a super-block of a given size I have
just mentioned.
3) How do the allocate and deallocate functions work in regard to
bitmaps?
The allocate and deallocate functions manipulate the bitmaps and have
nothing to do with the memory that is given to the user. As I have
mentioned earlier, a 1 in the bitmap's bit field indicates free,
while a 0 indicates allocated. This lets us check 32 bits at a time,
testing for equality with 0, to see whether there is at least one
free block among those 32 blocks. Now, the allocate function, given a
memory block, will find the corresponding bit in the bitmap and will
reset it (i.e. make it 0). And when the deallocate function is
called, it will locate that bit and set it again, to indicate that
the particular block corresponding to this bit in the bit-map is not
being used by anyone, and may be used to satisfy future requests.
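The header implements exactly these two manipulations as
_S_bit_allocate and _S_bit_free; standalone versions:

```cpp
#include <cassert>

// Clearing a bit marks the block allocated; setting it marks the
// block free again.  These mirror _S_bit_allocate and _S_bit_free
// in the header.
void bit_allocate(unsigned int* pbmap, unsigned int pos)
{
    *pbmap &= ~(1u << pos);
}

void bit_free(unsigned int* pbmap, unsigned int pos)
{
    *pbmap |= 1u << pos;
}
```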
----------------------------------------------------------------------
(Tech-Stuff. Please stay out if you are not interested in the
selection of certain constants. This has nothing to do with the
algorithm per se, only with some values that must be chosen correctly
to ensure that the allocator performs well in a real world scenario,
and maintains a good balance between memory consumption and
allocation/deallocation speed.)
The formula for calculating the maximum wastage as a percentage:
(32 X k + 1) / (2 X (32 X k + 1 + 32 X c)) X 100.
Where,
k => The constant overhead per node. eg. for list, it is 8
bytes, and for map it is 12 bytes.
c => The size of the base type on which the map/list is
instantiated. Thus, suppose type1 is int and type2 is
double; they are related by the relation sizeof(double) ==
2*sizeof(int). Thus, all types must have this double-size
relation for this formula to work properly.
Plugging-in: For List: k = 8 and c = 4 (int and double), we get:
33.376%
For map/multimap: k = 12, and c = 4 (int and double), we get:
37.524%
Thus, knowing these values, and based on the sizeof(value_type), we
may create a function that returns the Max_Wastage_Percentage for us
to use.
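The formula evaluates directly (the helper name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// Direct evaluation of the maximum-wastage formula above:
// (32*k + 1) / (2 * (32*k + 1 + 32*c)) * 100
double max_wastage_percentage(unsigned int k, unsigned int c)
{
    double numer = 32.0 * k + 1.0;
    return numer / (2.0 * (numer + 32.0 * c)) * 100.0;
}
```

Plugging in k = 8, c = 4 gives the 33.376% quoted for list, and
k = 12, c = 4 gives the 37.524% quoted for map/multimap.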

@@ -202,6 +202,7 @@ ext_srcdir = ${glibcxx_srcdir}/include/ext
ext_builddir = ./ext
ext_headers = \
${ext_srcdir}/algorithm \
${ext_srcdir}/bitmap_allocator.h \
${ext_srcdir}/debug_allocator.h \
${ext_srcdir}/demangle.h \
${ext_srcdir}/enc_filebuf.h \

@@ -411,6 +411,7 @@ ext_srcdir = ${glibcxx_srcdir}/include/ext
ext_builddir = ./ext
ext_headers = \
${ext_srcdir}/algorithm \
${ext_srcdir}/bitmap_allocator.h \
${ext_srcdir}/debug_allocator.h \
${ext_srcdir}/demangle.h \
${ext_srcdir}/enc_filebuf.h \

@@ -0,0 +1,823 @@
// Bitmapped Allocator. -*- C++ -*-
// Copyright (C) 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// As a special exception, you may use this file as part of a free software
// library without restriction. Specifically, if other files instantiate
// templates or use macros or inline functions from this file, or you compile
// this file and link it with other files to produce an executable, this
// file does not by itself cause the resulting executable to be covered by
// the GNU General Public License. This exception does not however
// invalidate any other reasons why the executable file might be covered by
// the GNU General Public License.
#if !defined _BITMAP_ALLOCATOR_H
#define _BITMAP_ALLOCATOR_H 1
#include <cstddef>
//For std::size_t, and ptrdiff_t.
#include <utility>
//For std::pair.
#include <algorithm>
//std::find_if.
#include <vector>
//For the free list of exponentially growing memory blocks. At max,
//size of the vector should be not more than the number of bits in an
//integer or an unsigned integer.
#include <functional>
//For greater_equal, and less_equal.
#include <new>
//For operator new.
#include <bits/gthr.h>
//For __gthread_mutex_t, __gthread_mutex_lock and __gthread_mutex_unlock.
#include <ext/new_allocator.h>
//For __gnu_cxx::new_allocator for std::vector.
#include <cassert>
#define NDEBUG
//#define CHECK_FOR_ERRORS
namespace __gnu_cxx
{
class _Mutex {
__gthread_mutex_t _M_mut;
//Prevent Copying and assignment.
_Mutex (_Mutex const&);
_Mutex& operator= (_Mutex const&);
public:
_Mutex ()
{
#if !defined __GTHREAD_MUTEX_INIT
__GTHREAD_MUTEX_INIT_FUNCTION(&_M_mut);
#else
__gthread_mutex_t __mtemp = __GTHREAD_MUTEX_INIT;
_M_mut = __mtemp;
#endif
}
~_Mutex ()
{
//Gthreads does not define a Mutex Destruction Function.
}
__gthread_mutex_t *_M_get() { return &_M_mut; }
};
class _Lock {
_Mutex& _M_mt;
//Prevent Copying and assignment.
_Lock (_Lock const&);
_Lock& operator= (_Lock const&);
public:
_Lock (_Mutex& __mref) : _M_mt(__mref)
{
__gthread_mutex_lock(_M_mt._M_get());
}
~_Lock () { __gthread_mutex_unlock(_M_mt._M_get()); }
};
namespace __aux_balloc {
static const unsigned int _Bits_Per_Byte = 8;
static const unsigned int _Bits_Per_Block = sizeof(unsigned int) * _Bits_Per_Byte;
template <typename _Addr_Pair_t>
inline size_t __balloc_num_blocks (_Addr_Pair_t __ap)
{
return (__ap.second - __ap.first) + 1;
}
template <typename _Addr_Pair_t>
inline size_t __balloc_num_bit_maps (_Addr_Pair_t __ap)
{
return __balloc_num_blocks(__ap) / _Bits_Per_Block;
}
//T should be a pointer type.
template <typename _Tp>
class _Inclusive_between : public std::unary_function<typename std::pair<_Tp, _Tp>, bool> {
typedef _Tp pointer;
pointer _M_ptr_value;
typedef typename std::pair<_Tp, _Tp> _Block_pair;
public:
_Inclusive_between (pointer __ptr) : _M_ptr_value(__ptr) { }
bool operator () (_Block_pair __bp) const throw ()
{
if (std::less_equal<pointer> ()(_M_ptr_value, __bp.second) &&
std::greater_equal<pointer> ()(_M_ptr_value, __bp.first))
return true;
else
return false;
}
};
//Used to pass a Functor to functions by reference.
template <typename _Functor>
class _Functor_Ref :
public std::unary_function<typename _Functor::argument_type, typename _Functor::result_type> {
_Functor& _M_fref;
public:
typedef typename _Functor::argument_type argument_type;
typedef typename _Functor::result_type result_type;
_Functor_Ref (_Functor& __fref) : _M_fref(__fref) { }
result_type operator() (argument_type __arg) { return _M_fref (__arg); }
};
//T should be a pointer type, and A is the Allocator for the vector.
template <typename _Tp, typename _Alloc>
class _Ffit_finder : public std::unary_function<typename std::pair<_Tp, _Tp>, bool> {
typedef typename std::vector<std::pair<_Tp, _Tp>, _Alloc> _BPVector;
typedef typename _BPVector::difference_type _Counter_type;
typedef typename std::pair<_Tp, _Tp> _Block_pair;
unsigned int *_M_pbitmap;
unsigned int _M_data_offset;
public:
_Ffit_finder () : _M_pbitmap (0), _M_data_offset (0) { }
bool operator() (_Block_pair __bp) throw()
{
//Set the _rover to the last unsigned integer, which is the
//bitmap to the first free block. Thus, the bitmaps are in exact
//reverse order of the actual memory layout. So, we count down
//the bitmaps, which is the same as moving up the memory.
//If the used count stored at the start of the Bit Map headers
//is equal to the number of Objects that the current Block can
//store, then there is definitely no space for another single
//object, so just return false.
_Counter_type __diff = __gnu_cxx::__aux_balloc::__balloc_num_bit_maps (__bp);
assert (*(reinterpret_cast<unsigned int*>(__bp.first) - (__diff + 1)) <=
__gnu_cxx::__aux_balloc::__balloc_num_blocks (__bp));
if (*(reinterpret_cast<unsigned int*>(__bp.first) - (__diff + 1)) ==
__gnu_cxx::__aux_balloc::__balloc_num_blocks (__bp))
return false;
unsigned int *__rover = reinterpret_cast<unsigned int*>(__bp.first) - 1;
for (_Counter_type __i = 0; __i < __diff; ++__i)
{
_M_data_offset = __i;
if (*__rover)
{
_M_pbitmap = __rover;
return true;
}
--__rover;
}
return false;
}
unsigned int *_M_get () { return _M_pbitmap; }
unsigned int _M_offset () { return _M_data_offset * _Bits_Per_Block; }
};
//T should be a pointer type.
template <typename _Tp, typename _Alloc>
class _Bit_map_counter {
typedef typename std::vector<std::pair<_Tp, _Tp>, _Alloc> _BPVector;
typedef typename _BPVector::size_type _Index_type;
typedef _Tp pointer;
_BPVector& _M_vbp;
unsigned int *_M_curr_bmap;
unsigned int *_M_last_bmap_in_block;
_Index_type _M_curr_index;
public:
//Use the 2nd parameter with care. Make sure that such an entry
//exists in the vector before passing that particular index to
//this ctor.
_Bit_map_counter (_BPVector& Rvbp, int __index = -1) : _M_vbp(Rvbp)
{
this->_M_reset(__index);
}
void _M_reset (int __index = -1) throw()
{
if (__index == -1)
{
_M_curr_bmap = 0;
_M_curr_index = (_Index_type)-1;
return;
}
_M_curr_index = __index;
_M_curr_bmap = reinterpret_cast<unsigned int*>(_M_vbp[_M_curr_index].first) - 1;
assert (__index <= (int)_M_vbp.size() - 1);
_M_last_bmap_in_block = _M_curr_bmap -
((_M_vbp[_M_curr_index].second - _M_vbp[_M_curr_index].first + 1) / _Bits_Per_Block - 1);
}
//Dangerous Function! Use with extreme care. Pass to this
//functions ONLY those values that are known to be correct,
//otherwise this will mess up big time.
void _M_set_internal_bit_map (unsigned int *__new_internal_marker) throw()
{
_M_curr_bmap = __new_internal_marker;
}
bool _M_finished () const throw()
{
return (_M_curr_bmap == 0);
}
_Bit_map_counter& operator++ () throw()
{
if (_M_curr_bmap == _M_last_bmap_in_block)
{
if (++_M_curr_index == _M_vbp.size())
{
_M_curr_bmap = 0;
}
else
{
this->_M_reset (_M_curr_index);
}
}
else
{
--_M_curr_bmap;
}
return *this;
}
unsigned int *_M_get ()
{
return _M_curr_bmap;
}
pointer base () { return _M_vbp[_M_curr_index].first; }
unsigned int _M_offset ()
{
return _Bits_Per_Block * ((reinterpret_cast<unsigned int*>(this->base()) - _M_curr_bmap) - 1);
}
unsigned int _M_where () { return _M_curr_index; }
};
}
//Generic Version of the bsf instruction.
typedef unsigned int _Bit_map_type;
static inline unsigned int _Bit_scan_forward (_Bit_map_type __num)
{
unsigned int __ret_val = 0;
while (__num % 2 == 0)
{
++__ret_val;
__num >>= 1;
}
return __ret_val;
}
struct _OOM_handler {
static std::new_handler _S_old_handler;
static bool _S_handled_oom;
typedef void (*_FL_clear_proc)(void);
static _FL_clear_proc _S_oom_fcp;
_OOM_handler (_FL_clear_proc __fcp)
{
_S_oom_fcp = __fcp;
_S_old_handler = std::set_new_handler (_S_handle_oom_proc);
_S_handled_oom = false;
}
static void _S_handle_oom_proc()
{
_S_oom_fcp();
std::set_new_handler (_S_old_handler);
_S_handled_oom = true;
}
~_OOM_handler ()
{
if (!_S_handled_oom)
std::set_new_handler (_S_old_handler);
}
};
std::new_handler _OOM_handler::_S_old_handler;
bool _OOM_handler::_S_handled_oom = false;
_OOM_handler::_FL_clear_proc _OOM_handler::_S_oom_fcp = 0;
class _BA_free_list_store {
struct _LT_pointer_compare {
template <typename _Tp>
bool operator() (_Tp* __pt, _Tp const& __crt) const throw()
{
return *__pt < __crt;
}
};
#if defined __GTHREADS
static _Mutex _S_bfl_mutex;
#endif
static std::vector<unsigned int*> _S_free_list;
typedef std::vector<unsigned int*>::iterator _FLIter;
static void _S_validate_free_list(unsigned int *__addr) throw()
{
const unsigned int Max_Size = 64;
if (_S_free_list.size() >= Max_Size)
{
//Ok, the threshold value has been reached.
//We determine which block to remove from the list of free
//blocks.
if (*__addr >= *_S_free_list.back())
{
//Ok, the new block is greater than or equal to the last
//block in the list of free blocks. We just free the new
//block.
operator delete((void*)__addr);
return;
}
else
{
//Deallocate the last block in the list of free lists, and
//insert the new one in it's correct position.
operator delete((void*)_S_free_list.back());
_S_free_list.pop_back();
}
}
//Just add the block to the list of free lists
//unconditionally.
_FLIter __temp = std::lower_bound(_S_free_list.begin(), _S_free_list.end(),
*__addr, _LT_pointer_compare ());
//We may insert the new free list before _temp;
_S_free_list.insert(__temp, __addr);
}
static bool _S_should_i_give(unsigned int __block_size, unsigned int __required_size) throw()
{
const unsigned int Max_Wastage_Percentage = 36;
if (__block_size >= __required_size &&
(((__block_size - __required_size) * 100 / __block_size) < Max_Wastage_Percentage))
return true;
else
return false;
}
public:
typedef _BA_free_list_store _BFL_type;
static inline void _S_insert_free_list(unsigned int *__addr) throw()
{
#if defined __GTHREADS
_Lock __bfl_lock(*&_S_bfl_mutex);
#endif
//Call _S_validate_free_list to decide what should be done with this
//particular free list.
_S_validate_free_list(--__addr);
}
static unsigned int *_S_get_free_list(unsigned int __sz) throw (std::bad_alloc)
{
#if defined __GTHREADS
_Lock __bfl_lock(*&_S_bfl_mutex);
#endif
_FLIter __temp = std::lower_bound(_S_free_list.begin(), _S_free_list.end(),
__sz, _LT_pointer_compare());
if (__temp == _S_free_list.end() || !_S_should_i_give (**__temp, __sz))
{
_OOM_handler __set_handler(_BFL_type::_S_clear);
unsigned int *__ret_val = reinterpret_cast<unsigned int*>
(operator new (__sz + sizeof(unsigned int)));
*__ret_val = __sz;
return ++__ret_val;
}
else
{
unsigned int* __ret_val = *__temp;
_S_free_list.erase (__temp);
return ++__ret_val;
}
}
//This function just clears the internal Free List, and gives back
//all the memory to the OS.
static void _S_clear()
{
#if defined __GTHREADS
_Lock __bfl_lock(*&_S_bfl_mutex);
#endif
_FLIter __iter = _S_free_list.begin();
while (__iter != _S_free_list.end())
{
operator delete((void*)*__iter);
++__iter;
}
_S_free_list.clear();
}
};
#if defined __GTHREADS
_Mutex _BA_free_list_store::_S_bfl_mutex;
#endif
std::vector<unsigned int*> _BA_free_list_store::_S_free_list;
template <class _Tp> class bitmap_allocator;
// specialize for void:
template <> class bitmap_allocator<void> {
public:
typedef void* pointer;
typedef const void* const_pointer;
// reference-to-void members are impossible.
typedef void value_type;
template <class U> struct rebind { typedef bitmap_allocator<U> other; };
};
template <class _Tp> class bitmap_allocator : private _BA_free_list_store {
public:
typedef size_t size_type;
typedef ptrdiff_t difference_type;
typedef _Tp* pointer;
typedef const _Tp* const_pointer;
typedef _Tp& reference;
typedef const _Tp& const_reference;
typedef _Tp value_type;
template <class U> struct rebind { typedef bitmap_allocator<U> other; };
private:
static const unsigned int _Bits_Per_Byte = 8;
static const unsigned int _Bits_Per_Block = sizeof(unsigned int) * _Bits_Per_Byte;
static inline void _S_bit_allocate(unsigned int *__pbmap, unsigned int __pos) throw()
{
unsigned int __mask = 1 << __pos;
__mask = ~__mask;
*__pbmap &= __mask;
}
static inline void _S_bit_free(unsigned int *__pbmap, unsigned int __Pos) throw()
{
unsigned int __mask = 1 << __Pos;
*__pbmap |= __mask;
}
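The two helpers above encode the bitmap convention used throughout: a set bit (1) marks a free block, a cleared bit (0) an allocated one. A minimal sketch of that convention, together with a portable stand-in for the _Bit_scan_forward primitive used later (names here are illustrative):

```cpp
// Clear the bit => the block is now allocated.
inline void bit_allocate(unsigned int* pbmap, unsigned int pos)
{ *pbmap &= ~(1u << pos); }

// Set the bit => the block is free again.
inline void bit_free(unsigned int* pbmap, unsigned int pos)
{ *pbmap |= (1u << pos); }

// Index of the lowest set bit; v must be nonzero.
inline unsigned int lowest_set_bit(unsigned int v)
{
  unsigned int pos = 0;
  while (!(v & 1u)) { v >>= 1; ++pos; }
  return pos;
}
```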
static inline void *_S_memory_get(size_t __sz) throw (std::bad_alloc)
{
return operator new(__sz);
}
static inline void _S_memory_put(void *__vptr) throw ()
{
operator delete(__vptr);
}
typedef typename std::pair<pointer, pointer> _Block_pair;
typedef typename __gnu_cxx::new_allocator<_Block_pair> _BPVec_allocator_type;
typedef typename std::vector<_Block_pair, _BPVec_allocator_type> _BPVector;
#if defined CHECK_FOR_ERRORS
//Complexity: O(lg(N)), where N is the number of blocks of size
//sizeof(value_type).
static void _S_check_for_free_blocks() throw()
{
typedef typename __gnu_cxx::__aux_balloc::_Ffit_finder<pointer, _BPVec_allocator_type> _FFF;
_FFF __fff;
typedef typename _BPVector::iterator _BPiter;
_BPiter __bpi = std::find_if(_S_mem_blocks.begin(), _S_mem_blocks.end(),
__gnu_cxx::__aux_balloc::_Functor_Ref<_FFF>(__fff));
assert(__bpi == _S_mem_blocks.end());
}
#endif
//Complexity: O(1), but internally depends upon the complexity of
//the function _BA_free_list_store::_S_get_free_list. The part
//where the bitmap headers are written has worst case complexity
//O(X), where X is the number of blocks of size sizeof(value_type)
//within the newly acquired block. This bound is tight.
static void _S_refill_pool() throw (std::bad_alloc)
{
#if defined CHECK_FOR_ERRORS
_S_check_for_free_blocks();
#endif
const unsigned int __num_bit_maps = _S_block_size / _Bits_Per_Block;
const unsigned int __size_to_allocate = sizeof(unsigned int) +
_S_block_size * sizeof(value_type) + __num_bit_maps*sizeof(unsigned int);
unsigned int *__temp =
reinterpret_cast<unsigned int*>(_BA_free_list_store::_S_get_free_list(__size_to_allocate));
*__temp = 0;
++__temp;
//The Header information goes at the Beginning of the Block.
_Block_pair __bp = std::make_pair(reinterpret_cast<pointer>(__temp + __num_bit_maps),
reinterpret_cast<pointer>(__temp + __num_bit_maps)
+ _S_block_size - 1);
//Fill the Vector with this information.
_S_mem_blocks.push_back(__bp);
unsigned int __bit_mask = 0; //0 Indicates all Allocated.
__bit_mask = ~__bit_mask; //1 Indicates all Free.
for (unsigned int __i = 0; __i < __num_bit_maps; ++__i)
__temp[__i] = __bit_mask;
//On some implementations, operator new might throw bad_alloc, or
//malloc might fail if the size passed is too large; therefore, we
//limit the size passed to malloc or operator new.
_S_block_size *= 2;
}
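The layout arithmetic in _S_refill_pool can be stated on its own: a newly acquired block holds one use-count word, then one bitmap word per sizeof(unsigned int)*8 objects, then the objects themselves. A hedged sketch of that computation (pool_bytes is an illustrative name):

```cpp
const unsigned int bits_per_block = sizeof(unsigned int) * 8;

// Total bytes needed for a pool of block_size objects of obj_size bytes.
unsigned int pool_bytes(unsigned int block_size, unsigned int obj_size)
{
  unsigned int num_bit_maps = block_size / bits_per_block;
  return sizeof(unsigned int)                 // use-count header
       + num_bit_maps * sizeof(unsigned int)  // bitmap words
       + block_size * obj_size;               // the objects themselves
}
```

Note the pool doubles on every refill (`_S_block_size *= 2`), so the header overhead shrinks relative to the payload as the allocator warms up.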
static _BPVector _S_mem_blocks;
static unsigned int _S_block_size;
static __gnu_cxx::__aux_balloc::_Bit_map_counter<pointer, _BPVec_allocator_type> _S_last_request;
static typename _BPVector::size_type _S_last_dealloc_index;
#if defined __GTHREADS
static _Mutex _S_mut;
#endif
public:
bitmap_allocator() throw()
{ }
bitmap_allocator(const bitmap_allocator&) { }
template <typename _Tp1> bitmap_allocator(const bitmap_allocator<_Tp1>&) throw()
{ }
~bitmap_allocator() throw()
{ }
//Complexity: Worst case complexity is O(N), but that is hardly ever
//hit. If and when this particular case is encountered, the next few
//calls are guaranteed to have a worst case complexity of O(1)!
//That's why this function performs very well on average. You can
//consider this function to have the complexity commonly referred
//to as: Amortized Constant time.
static pointer _S_allocate_single_object()
{
#if defined __GTHREADS
_Lock _bit_lock(*&_S_mut);
#endif
//The algorithm is something like this: The _S_last_request variable
//points to the last accessed Bit Map. Starting there, we try to
//find a free block in the current bitmap, or succeeding bitmaps,
//until the last bitmap is reached. If no free block turns up, we
//resort to the First Fit method. But, again, the First Fit is used
//only up to the point where we started the previous linear search.
while (!_S_last_request._M_finished() && *(_S_last_request._M_get()) == 0)
{
++_S_last_request;
}
if (_S_last_request._M_finished())
{
//Fall Back to First Fit algorithm.
typedef typename __gnu_cxx::__aux_balloc::_Ffit_finder<pointer, _BPVec_allocator_type> _FFF;
_FFF __fff;
typedef typename _BPVector::iterator _BPiter;
_BPiter __bpi = std::find_if(_S_mem_blocks.begin(), _S_mem_blocks.end(),
__gnu_cxx::__aux_balloc::_Functor_Ref<_FFF>(__fff));
if (__bpi != _S_mem_blocks.end())
{
//Search was successful. Ok, now mark the first bit from
//the right as 0, meaning Allocated. This bit is obtained
//by calling _M_get() on __fff.
unsigned int __nz_bit = _Bit_scan_forward(*__fff._M_get());
_S_bit_allocate(__fff._M_get(), __nz_bit);
_S_last_request._M_reset(__bpi - _S_mem_blocks.begin());
//Now, get the address of the bit we marked as allocated.
pointer __ret_val = __bpi->first + __fff._M_offset() + __nz_bit;
unsigned int *__puse_count = reinterpret_cast<unsigned int*>(__bpi->first) -
(__gnu_cxx::__aux_balloc::__balloc_num_bit_maps(*__bpi) + 1);
++(*__puse_count);
return __ret_val;
}
else
{
//Search was unsuccessful. We add more memory to the pool
//by calling _S_refill_pool().
_S_refill_pool();
//Reset the _S_last_request structure to point to the first
//free block's bit map.
_S_last_request._M_reset(_S_mem_blocks.size() - 1);
//Now, mark that bit as allocated.
}
}
//_S_last_request holds a pointer to a valid bit map that points
//to a free block in memory.
unsigned int __nz_bit = _Bit_scan_forward(*_S_last_request._M_get());
_S_bit_allocate(_S_last_request._M_get(), __nz_bit);
pointer __ret_val = _S_last_request.base() + _S_last_request._M_offset() + __nz_bit;
unsigned int *__puse_count = reinterpret_cast<unsigned int*>
(_S_mem_blocks[_S_last_request._M_where()].first) -
(__gnu_cxx::__aux_balloc::__balloc_num_bit_maps(_S_mem_blocks[_S_last_request._M_where()]) + 1);
++(*__puse_count);
return __ret_val;
}
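The cached-cursor strategy described in the comments above (resume scanning from the last request, fall back to first fit only when the end is reached) can be reduced to a toy sketch. This is a simplified, hypothetical model, not the allocator's actual data structure:

```cpp
#include <cstddef>
#include <vector>

struct cursor_search
{
  std::vector<unsigned int> words;  // each word: set bits are free slots
  std::size_t cursor = 0;           // where the previous search stopped

  // Index of a word with a free bit, or words.size() on failure.
  std::size_t find_free()
  {
    for (std::size_t i = cursor; i < words.size(); ++i)  // resume scan
      if (words[i] != 0) { cursor = i; return i; }
    for (std::size_t i = 0; i < cursor; ++i)             // first-fit fallback
      if (words[i] != 0) { cursor = i; return i; }
    return words.size();
  }
};
```

Successive hits near the cursor cost O(1); only a miss pays the O(N) scan, which is the amortized-constant claim made above.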
//Complexity: O(1), but internally the complexity depends upon the
//complexity of the function(s) _S_allocate_single_object and
//_S_memory_get.
pointer allocate(size_type __n)
{
if (__n == 1)
return _S_allocate_single_object();
else
return reinterpret_cast<pointer>(_S_memory_get(__n * sizeof(value_type)));
}
//Complexity: Worst case complexity is O(N) where N is the number of
//blocks of size sizeof(value_type) within the free lists that the
//allocator holds. However, this worst case is hit only when the
//user supplies a bogus argument to hint. If the hint argument is
//sensible, then the complexity drops to O(lg(N)), and in extreme
//cases, even drops to as low as O(1). So, if the user supplied
//argument is good, then this function performs very well.
pointer allocate(size_type __n, typename bitmap_allocator<void>::const_pointer)
{
return allocate(__n);
}
void deallocate(pointer __p, size_type __n) throw()
{
if (__n == 1)
_S_deallocate_single_object(__p);
else
_S_memory_put(__p);
}
//Complexity: O(lg(N)), but the worst case is hit quite often! I
//need to do something about this. I'll be able to work on it only
//when I have some solid figures from a few real apps.
static void _S_deallocate_single_object(pointer __p) throw()
{
#if defined __GTHREADS
_Lock _bit_lock(*&_S_mut);
#endif
typedef typename _BPVector::iterator iterator;
typedef typename _BPVector::difference_type diff_type;
diff_type __diff;
int __displacement;
assert(_S_last_dealloc_index >= 0);
if (__gnu_cxx::__aux_balloc::_Inclusive_between<pointer>(__p)(_S_mem_blocks[_S_last_dealloc_index]))
{
assert(_S_last_dealloc_index <= _S_mem_blocks.size() - 1);
//Initial Assumption was correct!
__diff = _S_last_dealloc_index;
__displacement = __p - _S_mem_blocks[__diff].first;
}
else
{
iterator _iter = (std::find_if(_S_mem_blocks.begin(), _S_mem_blocks.end(),
__gnu_cxx::__aux_balloc::_Inclusive_between<pointer>(__p)));
assert(_iter != _S_mem_blocks.end());
__diff = _iter - _S_mem_blocks.begin();
__displacement = __p - _S_mem_blocks[__diff].first;
_S_last_dealloc_index = __diff;
}
//Get the position of the iterator that has been found.
const unsigned int __rotate = __displacement % _Bits_Per_Block;
unsigned int *__bit_mapC = reinterpret_cast<unsigned int*>(_S_mem_blocks[__diff].first) - 1;
__bit_mapC -= (__displacement / _Bits_Per_Block);
_S_bit_free(__bit_mapC, __rotate);
unsigned int *__puse_count = reinterpret_cast<unsigned int*>
(_S_mem_blocks[__diff].first) -
(__gnu_cxx::__aux_balloc::__balloc_num_bit_maps(_S_mem_blocks[__diff]) + 1);
assert(*__puse_count != 0);
--(*__puse_count);
if (!*__puse_count)
{
_S_block_size /= 2;
//We may safely remove this block.
_Block_pair __bp = _S_mem_blocks[__diff];
_S_insert_free_list(__puse_count);
_S_mem_blocks.erase(_S_mem_blocks.begin() + __diff);
//We reset the _S_last_request variable to reflect the erased
//block. We do this to protect future requests: the last block
//has been removed from a particular memory chunk, which in turn
//has been returned to the free list and erased from the vector,
//so the size of the vector is reduced by 1.
if ((diff_type)_S_last_request._M_where() >= __diff--)
{
_S_last_request._M_reset(__diff);
// assert(__diff >= 0);
}
//The index into the vector of the region of memory that might
//hold the next address passed to deallocate may have been
//invalidated by the erase above, so we restore that invariant
//too.
if (_S_last_dealloc_index >= _S_mem_blocks.size())
{
_S_last_dealloc_index = (__diff != -1 ? __diff : 0);
assert(_S_last_dealloc_index >= 0);
}
}
}
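The index arithmetic at the heart of _S_deallocate_single_object maps the freed object's displacement from the start of its block to a bitmap word (displacement / bits-per-word) and a bit within it (displacement % bits-per-word). A minimal sketch (locate_bit is an illustrative name):

```cpp
const unsigned int bits_per_word = sizeof(unsigned int) * 8;

// Split a block-relative displacement into (bitmap word, bit) coordinates.
void locate_bit(unsigned int displacement,
                unsigned int& word_index, unsigned int& bit_index)
{
  word_index = displacement / bits_per_word;  // which bitmap word
  bit_index  = displacement % bits_per_word;  // which bit within it
}
```

In the real code the word index is then subtracted from the pointer just below the block start, since the bitmap words sit in front of the data blocks.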
pointer address(reference r) const { return &r; }
const_pointer address(const_reference r) const { return &r; }
size_type max_size(void) const throw() { return (size_type()-1)/sizeof(value_type); }
void construct (pointer p, const_reference _data)
{
new (p) value_type (_data);
}
void destroy (pointer p)
{
p->~value_type();
}
};
template <typename _Tp>
typename bitmap_allocator<_Tp>::_BPVector bitmap_allocator<_Tp>::_S_mem_blocks;
template <typename _Tp>
unsigned int bitmap_allocator<_Tp>::_S_block_size = bitmap_allocator<_Tp>::_Bits_Per_Block;
template <typename _Tp>
typename __gnu_cxx::bitmap_allocator<_Tp>::_BPVector::size_type
bitmap_allocator<_Tp>::_S_last_dealloc_index = 0;
template <typename _Tp>
__gnu_cxx::__aux_balloc::_Bit_map_counter
<typename bitmap_allocator<_Tp>::pointer, typename bitmap_allocator<_Tp>::_BPVec_allocator_type>
bitmap_allocator<_Tp>::_S_last_request(_S_mem_blocks);
#if defined __GTHREADS
template <typename _Tp>
__gnu_cxx::_Mutex
bitmap_allocator<_Tp>::_S_mut;
#endif
template <typename _Tp1, typename _Tp2>
bool operator== (const bitmap_allocator<_Tp1>&, const bitmap_allocator<_Tp2>&) throw()
{
return true;
}
template <typename _Tp1, typename _Tp2>
bool operator!= (const bitmap_allocator<_Tp1>&, const bitmap_allocator<_Tp2>&) throw()
{
return false;
}
}
#endif //_BITMAP_ALLOCATOR_H
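Assuming the header above is installed as <ext/bitmap_allocator.h> (as the Makefile.am change in this commit arranges), the allocator drops into any standard container through its allocator parameter; node-based containers issue single-object requests and so hit the bitmapped fast path:

```cpp
#include <list>
#include <ext/bitmap_allocator.h>

// Sum 1..10 through a list whose nodes come from the bitmap allocator.
int sum_list()
{
  std::list<int, __gnu_cxx::bitmap_allocator<int> > l;
  for (int i = 1; i <= 10; ++i)
    l.push_back(i);  // each node goes through _S_allocate_single_object
  int s = 0;
  for (std::list<int, __gnu_cxx::bitmap_allocator<int> >::iterator it
         = l.begin(); it != l.end(); ++it)
    s += *it;
  return s;
}
```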


@ -43,6 +43,7 @@
#include <ext/mt_allocator.h>
#include <ext/new_allocator.h>
#include <ext/malloc_allocator.h>
#include <ext/bitmap_allocator.h>
#include <cxxabi.h>
#include <testsuite_performance.h>
@ -146,6 +147,7 @@ int main(void)
typedef __gnu_cxx::malloc_allocator<test_type> m_alloc_type;
typedef __gnu_cxx::new_allocator<test_type> n_alloc_type;
typedef __gnu_cxx::__mt_alloc<test_type> so_alloc_type;
typedef __gnu_cxx::bitmap_allocator<test_type> bit_alloc_type;
#ifdef TEST_B0
test_container(vector<test_type, m_alloc_type>());
@ -156,47 +158,62 @@ int main(void)
#ifdef TEST_B2
test_container(vector<test_type, so_alloc_type>());
#endif
#ifdef TEST_B3
test_container(vector<test_type, bit_alloc_type>());
#endif
#ifdef TEST_B4
test_container(list<test_type, m_alloc_type>());
#endif
#ifdef TEST_B4
#ifdef TEST_B5
test_container(list<test_type, n_alloc_type>());
#endif
#ifdef TEST_B5
#ifdef TEST_B6
test_container(list<test_type, so_alloc_type>());
#endif
#ifdef TEST_B7
test_container(list<test_type, bit_alloc_type>());
#endif
#ifdef TEST_B6
#ifdef TEST_B8
test_container(deque<test_type, m_alloc_type>());
#endif
#ifdef TEST_B7
#ifdef TEST_B9
test_container(deque<test_type, n_alloc_type>());
#endif
#ifdef TEST_B8
#ifdef TEST_B10
test_container(deque<test_type, so_alloc_type>());
#endif
#ifdef TEST_B11
test_container(deque<test_type, bit_alloc_type>());
#endif
typedef less<test_type> compare_type;
#ifdef TEST_B9
#ifdef TEST_B12
test_container(map<test_type, test_type, compare_type, m_alloc_type>());
#endif
#ifdef TEST_B10
#ifdef TEST_B13
test_container(map<test_type, test_type, compare_type, n_alloc_type>());
#endif
#ifdef TEST_B11
#ifdef TEST_B14
test_container(map<test_type, test_type, compare_type, so_alloc_type>());
#endif
#ifdef TEST_B15
test_container(map<test_type, test_type, compare_type, bit_alloc_type>());
#endif
#ifdef TEST_B12
#ifdef TEST_B16
test_container(set<test_type, compare_type, m_alloc_type>());
#endif
#ifdef TEST_B13
#ifdef TEST_B17
test_container(set<test_type, compare_type, n_alloc_type>());
#endif
#ifdef TEST_B14
#ifdef TEST_B18
test_container(set<test_type, compare_type, so_alloc_type>());
#endif
#ifdef TEST_B19
test_container(set<test_type, compare_type, bit_alloc_type>());
#endif
#ifdef TEST_T0
test_container(vector<test_type, m_alloc_type>(), true);
@ -207,47 +224,62 @@ int main(void)
#ifdef TEST_T2
test_container(vector<test_type, so_alloc_type>(), true);
#endif
#ifdef TEST_T3
test_container(vector<test_type, bit_alloc_type>(), true);
#endif
#ifdef TEST_T4
test_container(list<test_type, m_alloc_type>(), true);
#endif
#ifdef TEST_T4
#ifdef TEST_T5
test_container(list<test_type, n_alloc_type>(), true);
#endif
#ifdef TEST_T5
#ifdef TEST_T6
test_container(list<test_type, so_alloc_type>(), true);
#endif
#ifdef TEST_T7
test_container(list<test_type, bit_alloc_type>(), true);
#endif
#ifdef TEST_T6
#ifdef TEST_T8
test_container(deque<test_type, m_alloc_type>(), true);
#endif
#ifdef TEST_T7
#ifdef TEST_T9
test_container(deque<test_type, n_alloc_type>(), true);
#endif
#ifdef TEST_T8
#ifdef TEST_T10
test_container(deque<test_type, so_alloc_type>(), true);
#endif
#ifdef TEST_T11
test_container(deque<test_type, bit_alloc_type>(), true);
#endif
typedef less<test_type> compare_type;
#ifdef TEST_T9
#ifdef TEST_T12
test_container(map<test_type, test_type, compare_type, m_alloc_type>(), true);
#endif
#ifdef TEST_T10
#ifdef TEST_T13
test_container(map<test_type, test_type, compare_type, n_alloc_type>(), true);
#endif
#ifdef TEST_T11
#ifdef TEST_T14
test_container(map<test_type, test_type, compare_type, so_alloc_type>(), true);
#endif
#ifdef TEST_T15
test_container(map<test_type, test_type, compare_type, bit_alloc_type>(), true);
#endif
#ifdef TEST_T12
#ifdef TEST_T16
test_container(set<test_type, compare_type, m_alloc_type>(), true);
#endif
#ifdef TEST_T13
#ifdef TEST_T17
test_container(set<test_type, compare_type, n_alloc_type>(), true);
#endif
#ifdef TEST_T14
#ifdef TEST_T18
test_container(set<test_type, compare_type, so_alloc_type>(), true);
#endif
#ifdef TEST_T19
test_container(set<test_type, compare_type, bit_alloc_type>(), true);
#endif
return 0;
}


@ -43,6 +43,7 @@
#include <ext/mt_allocator.h>
#include <ext/new_allocator.h>
#include <ext/malloc_allocator.h>
#include <ext/bitmap_allocator.h>
#include <cxxabi.h>
#include <testsuite_performance.h>
@ -117,6 +118,7 @@ int main(void)
typedef __gnu_cxx::malloc_allocator<test_type> m_alloc_type;
typedef __gnu_cxx::new_allocator<test_type> n_alloc_type;
typedef __gnu_cxx::__mt_alloc<test_type> so_alloc_type;
typedef __gnu_cxx::bitmap_allocator<test_type> bit_alloc_type;
#ifdef TEST_S0
test_container(vector<test_type, m_alloc_type>());
@ -127,37 +129,52 @@ int main(void)
#ifdef TEST_S2
test_container(vector<test_type, so_alloc_type>());
#endif
#ifdef TEST_S3
test_container(vector<test_type, bit_alloc_type>());
#endif
#ifdef TEST_S4
test_container(list<test_type, m_alloc_type>());
#endif
#ifdef TEST_S4
#ifdef TEST_S5
test_container(list<test_type, n_alloc_type>());
#endif
#ifdef TEST_S5
#ifdef TEST_S6
test_container(list<test_type, so_alloc_type>());
#endif
#ifdef TEST_S7
test_container(list<test_type, bit_alloc_type>());
#endif
#ifdef TEST_S6
#ifdef TEST_S8
test_container(deque<test_type, m_alloc_type>());
#endif
#ifdef TEST_S7
#ifdef TEST_S9
test_container(deque<test_type, n_alloc_type>());
#endif
#ifdef TEST_S8
#ifdef TEST_S10
test_container(deque<test_type, so_alloc_type>());
#endif
#ifdef TEST_S11
test_container(deque<test_type, bit_alloc_type>());
#endif
typedef less<test_type> compare_type;
#ifdef TEST_S9
#ifdef TEST_S12
test_container(map<test_type, test_type, compare_type, m_alloc_type>());
#endif
#ifdef TEST_S10
#ifdef TEST_S13
test_container(map<test_type, test_type, compare_type, n_alloc_type>());
#endif
#ifdef TEST_S11
#ifdef TEST_S14
test_container(map<test_type, test_type, compare_type, so_alloc_type>());
#endif
#ifdef TEST_S15
test_container(map<test_type, test_type, compare_type, bit_alloc_type>());
#endif
#ifdef TEST_S12
test_container(set<test_type, compare_type, m_alloc_type>());
@ -168,6 +185,9 @@ int main(void)
#ifdef TEST_S14
test_container(set<test_type, compare_type, so_alloc_type>());
#endif
#ifdef TEST_S15
test_container(set<test_type, compare_type, bit_alloc_type>());
#endif
return 0;
}


@ -0,0 +1,125 @@
// Copyright (C) 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// As a special exception, you may use this file as part of a free software
// library without restriction. Specifically, if other files instantiate
// templates or use macros or inline functions from this file, or you compile
// this file and link it with other files to produce an executable, this
// file does not by itself cause the resulting executable to be covered by
// the GNU General Public License. This exception does not however
// invalidate any other reasons why the executable file might be covered by
// the GNU General Public License.
// 2004-03-11 Dhruv Matani <dhruvbird@HotPOP.com>
#include <typeinfo>
#include <sstream>
#include <ext/mt_allocator.h>
#include <ext/malloc_allocator.h>
#include <cxxabi.h>
#include <testsuite_performance.h>
#include <ext/bitmap_allocator.h>
using namespace std;
using __gnu_cxx::malloc_allocator;
using __gnu_cxx::__mt_alloc;
using __gnu_cxx::bitmap_allocator;
typedef int test_type;
using namespace __gnu_cxx;
#include <list>
#include <map>
#include <algorithm>
#include <cstdlib>
using namespace std;
template <typename Alloc>
int Test_Allocator ()
{
typedef list<int, Alloc> My_List;
My_List il1;
int const Iter = 150000;
int ctr = 3;
while (ctr--)
{
for (int i = 0; i < Iter; ++i)
il1.push_back (rand()%500001);
//Search for random values that may or may not belong to the list.
for (int i = 0; i < 50; ++i)
std::find (il1.begin(), il1.end(), rand()%100001);
il1.sort ();
//Search for random values that may or may not belong to the list.
for (int i = 0; i < 50; ++i)
{
typename My_List::iterator _liter = std::find (il1.begin(), il1.end(), rand()%100001);
if (_liter != il1.end())
il1.erase (_liter);
}
il1.clear ();
}
return Iter;
}
template <typename Alloc>
void do_test ()
{
using namespace __gnu_test;
int status;
Alloc obj;
time_counter time;
resource_counter resource;
clear_counters(time, resource);
start_counters(time, resource);
int test_iterations = Test_Allocator<Alloc>();
stop_counters(time, resource);
std::ostringstream comment;
comment << "iterations: " << test_iterations <<endl;
comment << "type: " << abi::__cxa_demangle(typeid(obj).name(),
0, 0, &status);
report_header(__FILE__, comment.str());
report_performance(__FILE__, string(), time, resource);
}
int main ()
{
#ifdef TEST_S0
do_test<new_allocator<int> >();
#endif
#ifdef TEST_S1
do_test<malloc_allocator<int> >();
#endif
#ifdef TEST_S2
do_test<bitmap_allocator<int> >();
#endif
#ifdef TEST_S3
do_test<__mt_alloc<int> >();
#endif
}


@ -0,0 +1,148 @@
// Copyright (C) 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// As a special exception, you may use this file as part of a free software
// library without restriction. Specifically, if other files instantiate
// templates or use macros or inline functions from this file, or you compile
// this file and link it with other files to produce an executable, this
// file does not by itself cause the resulting executable to be covered by
// the GNU General Public License. This exception does not however
// invalidate any other reasons why the executable file might be covered by
// the GNU General Public License.
// 2004-03-11 Dhruv Matani <dhruvbird@HotPOP.com>
#include <new>
#include <map>
#include <cstdlib>
#include <string>
#include <pthread.h>
#include <typeinfo>
#include <sstream>
#include <ext/mt_allocator.h>
#include <ext/malloc_allocator.h>
#include <cxxabi.h>
#include <testsuite_performance.h>
#include <ext/bitmap_allocator.h>
using namespace std;
using __gnu_cxx::malloc_allocator;
using __gnu_cxx::__mt_alloc;
using __gnu_cxx::bitmap_allocator;
typedef int test_type;
using namespace __gnu_cxx;
using namespace std;
bool less_int (int x1, int x2) { return x1<x2; }
#if defined USE_FUNCTION_COMPARE
#define COMPARE_T typeof(&less_int)
#define COMPARE_F &less_int
#else
#define COMPARE_T std::less<int>
#define COMPARE_F std::less<int>()
#endif
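The COMPARE_T switch above selects between std::less<int> and a plain function pointer; both satisfy the map's comparator requirements and impose the same ordering, as this small sketch shows (function names are illustrative):

```cpp
#include <functional>
#include <map>

bool less_int_fn(int a, int b) { return a < b; }

// Smallest key under the functor comparator.
int smallest_key_functor()
{
  std::map<int, int, std::less<int> > m;
  m[3] = 0; m[1] = 0; m[2] = 0;
  return m.begin()->first;  // ordered ascending by std::less
}

// Smallest key under a function-pointer comparator: same ordering.
int smallest_key_fnptr()
{
  std::map<int, int, bool(*)(int, int)> m(&less_int_fn);
  m[3] = 0; m[1] = 0; m[2] = 0;
  return m.begin()->first;
}
```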
template <typename Alloc>
void *find_proc (void *_vptr)
{
map<int, string, COMPARE_T, Alloc> *_mptr =
reinterpret_cast<map<int, string, COMPARE_T, Alloc>*>(_vptr);
for (int i = 0; i < 700000; ++i)
{
int Finder = rand() % 2000000;
_mptr->find (Finder);
}
return _vptr;
}
template <typename Alloc>
int do_test ()
{
COMPARE_T _comp = COMPARE_F;
map<int, string, COMPARE_T, Alloc> imap (_comp);
int x = 2;
pthread_t thr[3];
const int Iter = 1000000;
while (x--)
{
for (int i = 0; i < 300000; ++i)
imap.insert (make_pair (rand()%1000000, "_test_data"));
for (int i = 0; i < 100000; ++i)
imap.insert (make_pair (rand()%2000000, "_test_data"));
pthread_create (&thr[0], NULL, find_proc<Alloc>, (void*)&imap);
pthread_create (&thr[1], NULL, find_proc<Alloc>, (void*)&imap);
pthread_create (&thr[2], NULL, find_proc<Alloc>, (void*)&imap);
pthread_join (thr[0], 0);
pthread_join (thr[1], 0);
pthread_join (thr[2], 0);
imap.clear ();
}
return Iter;
}
template <typename Alloc>
void exec_tests ()
{
using namespace __gnu_test;
int status;
COMPARE_T _comp = COMPARE_F;
map<int, string, COMPARE_T, Alloc> obj (_comp);
time_counter time;
resource_counter resource;
clear_counters(time, resource);
start_counters(time, resource);
int test_iterations = do_test<Alloc>();
stop_counters(time, resource);
std::ostringstream comment;
comment << "iterations: " << test_iterations <<endl;
comment << "type: " << abi::__cxa_demangle(typeid(obj).name(),
0, 0, &status);
report_header(__FILE__, comment.str());
report_performance(__FILE__, string(), time, resource);
}
int main()
{
#ifdef TEST_T0
exec_tests<new_allocator<int> >();
#endif
#ifdef TEST_T1
exec_tests<malloc_allocator<int> >();
#endif
#ifdef TEST_T2
exec_tests<bitmap_allocator<int> >();
#endif
#ifdef TEST_T3
exec_tests<__mt_alloc<int> >();
#endif
}


@ -40,6 +40,7 @@
#include <ext/mt_allocator.h>
#include <ext/new_allocator.h>
#include <ext/malloc_allocator.h>
#include <ext/bitmap_allocator.h>
#include <cxxabi.h>
#include <testsuite_performance.h>
@ -47,6 +48,7 @@ using namespace std;
using __gnu_cxx::__mt_alloc;
using __gnu_cxx::new_allocator;
using __gnu_cxx::malloc_allocator;
using __gnu_cxx::bitmap_allocator;
// The number of iterations to be performed.
int iterations = 10000;
@ -120,6 +122,10 @@ int main(void)
test_container(map<int, int, less<const int>,
__mt_alloc< pair<const int, int> > >());
#endif
#ifdef TEST_T5
test_container(map<int, int, less<const int>,
bitmap_allocator< pair<const int, int> > >());
#endif
return 0;
}


@ -41,6 +41,7 @@
#include <ext/mt_allocator.h>
#include <ext/new_allocator.h>
#include <ext/malloc_allocator.h>
#include <ext/bitmap_allocator.h>
#include <cxxabi.h>
#include <testsuite_performance.h>
@ -49,6 +50,7 @@ using namespace std;
using __gnu_cxx::__mt_alloc;
using __gnu_cxx::new_allocator;
using __gnu_cxx::malloc_allocator;
using __gnu_cxx::bitmap_allocator;
using abi::__cxa_demangle;
typedef int test_type;
@ -56,6 +58,7 @@ typedef less<test_type> compare_type;
typedef malloc_allocator<test_type> malloc_alloc_type;
typedef new_allocator<test_type> new_alloc_type;
typedef __mt_alloc<test_type> so_alloc_type;
typedef bitmap_allocator<test_type> bit_alloc_type;
// The number of iterations to be performed.
int iterations = 10000;
@ -292,6 +295,10 @@ int main(void)
#ifdef TEST_T3
test_container(vector<test_type, so_alloc_type>());
#endif
#ifdef TEST_T4
test_container(vector<test_type, bit_alloc_type>());
#endif
#ifdef TEST_T5
test_container(list<test_type, malloc_alloc_type>());
@ -302,6 +309,10 @@ int main(void)
#ifdef TEST_T7
test_container(list<test_type, so_alloc_type>());
#endif
#ifdef TEST_T8
test_container(list<test_type, bit_alloc_type>());
#endif
#ifdef TEST_T9
test_container(map<test_type, test_type, compare_type, malloc_alloc_type>());
@ -312,6 +323,10 @@ int main(void)
#ifdef TEST_T11
test_container(map<test_type, test_type, compare_type, so_alloc_type>());
#endif
#ifdef TEST_T12
test_container(map<test_type, test_type, compare_type, bit_alloc_type>());
#endif
return 0;
}