- Organize the documentation into "chapters".
- Each chapter includes many documentation pages, reference pages organized as modules, and a quick reference page.
- The "Chapters" tree is created using the defgroup/ingroup mechanism, even for the documentation pages (i.e., .dox files for which I added an \eigenManualPage macro that we can switch between \page or \defgroup ).
- Add a "General topics" entry for all pages that do not fit well in the previous "chapters".
- The high-level structure is managed by a new eigendoxy_layout.xml file.
- remove the "index" and quite useless pages (namespace list, class hierarchy, member list, file list, etc.)
- add the javascript search-engine.
- add the "treeview" panel.
- remove \tableofcontents (replaced by a custom \eigenAutoToc macro so they can easily be re-enabled if needed).
- add javascript to automatically generate a TOC from the h1/h2 tags of the current page, and put the TOC in the left side panel.
- overload various javascript functions generated by doxygen to:
- remove the root of the treeview
- remove links to section/subsection from the treeview
- automatically expand the "Chapters" section
- automatically expand the current section
- adjust the height of the treeview to take into account the TOC
- always use the default .css file; eigendoxy.css now only includes our modifications
- use Doxyfile to specify our logo
- remove cross references to unsupported modules (temporarily)
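To make the page mechanism concrete, a documentation page now looks roughly like the sketch below; the argument order of \eigenManualPage (mirroring \page) and the group name are placeholders, not taken from the actual sources:

  /** \eigenManualPage TopicExample Example topic
      \ingroup SomeChapter_Group

      \eigenAutoToc

      <h1>First section</h1>
      The left-panel TOC is generated from these h1/h2 tags.
  */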
construction of generic expressions working
for both dense and sparse matrices. A nicer solution
would be to use CwiseBinaryOp for any kind of matrix.
To this end we either need to change the overall design
so that the base class(es) depend on the kind of matrix,
or we could add a template parameter to each expression
type (e.g., int Kind = ei_traits<MatrixType>::Kind)
allowing each expression to be specialized for each kind
of matrix (see the sketch after this list).
* Extend AutoDiffScalar to work with sparse vector expressions
for the derivatives.
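A minimal sketch of the second option; the IsDense/IsSparse tags are hypothetical, and deducing the kind from the left operand is just a placeholder choice:

  enum { IsDense = 0, IsSparse = 1 };               // hypothetical kind tags

  template<typename MatrixType> struct ei_traits;   // specialized per matrix type

  // Each expression gets an extra Kind parameter deduced from its operands,
  // so that e.g. a sparse-aware evaluation path can be selected:
  template<typename BinaryOp, typename Lhs, typename Rhs,
           int Kind = ei_traits<Lhs>::Kind>
  class CwiseBinaryOp { /* generic (dense) path */ };

  template<typename BinaryOp, typename Lhs, typename Rhs>
  class CwiseBinaryOp<BinaryOp, Lhs, Rhs, IsSparse> { /* sparse specialization */ };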
That means a lot of features which were available for sparse matrices
via the dense (and super slow) implementation are no longer available.
All features which make sense for sparse matrices (i.e., can be implemented
efficiently) will be implemented soon, but don't expect to see an API as
rich as for the dense path.
Other changes:
* no block(), row(), col() anymore.
* instead, use .innerVector() to get a column or row vector of a matrix (see the sketch after this list).
* .segment(), start(), end() will be back soon; not sure about block()
* faster cwise product
* extend unit tests
* add support for generic sum reduction and dot product
* optimize cwise()*: this is a special case of CwiseBinaryOp where
we only have to process the coeffs which are non-zero in *both* matrices.
Perhaps there exist other binary operations like that?
* add an LDL^T factorization with solver using code from T. Davis's LDL
library (LGPL 2.1+)
* various bug fixes in the triangular solver, matrix product, etc.
* improve cmake files for the supported libraries
* split the sparse unit test
* etc.
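For illustration, where one previously wrote A.col(j) on a column-major sparse matrix, the inner-vector based equivalent looks roughly like this (a minimal sketch assuming innerVector(j) returns a sparse expression assignable to a SparseVector; exact signatures may differ):

  #include <Eigen/Sparse>

  void copy_column(const Eigen::SparseMatrix<double>& A, int j)
  {
    // column-major storage: an inner vector is a column, so this plays
    // the role of the former A.col(j)
    Eigen::SparseVector<double> cj = A.innerVector(j);
  }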
as described on the wiki (one map per N columns)
Here's some bench results for the 4 currently supported map impl:
std::map       => 18.3385 s  (581 MB)
gnu::hash_map  =>  6.52574 s (555 MB)
google::dense  =>  2.87982 s (315 MB)
google::sparse => 15.7441 s  (165 MB)
These are the times in seconds (and memory consumption) to insert/look up
10 million coeffs with random coords inside a 10000^2 matrix,
with one map per packet of 64 columns => google::dense really rocks!
Note that for the key value I use the index of the column within the packet
(between 0 and 63) times the number of rows, with the default hash
function... so maybe there is room for improvement here.
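For reference, the kind of insertion being timed looks roughly like the sketch below for the std::map variant; the exact key layout is an assumption (the note above only mentions the column-within-packet term, to which the row index presumably has to be added so that keys stay unique):

  #include <cstdlib>
  #include <map>
  #include <vector>

  int main()
  {
    const int rows = 10000, cols = 10000, packet = 64;
    // one map per packet of 64 columns
    std::vector<std::map<int,double> > maps((cols + packet - 1) / packet);
    for (int k = 0; k < 10000000; ++k)
    {
      int r = std::rand() % rows;
      int c = std::rand() % cols;
      // key: column index within the packet (0..63) times the number of rows
      // (+ row index, assumed, so coefficients of the same column do not collide)
      int key = (c % packet) * rows + r;
      maps[c / packet][key] += 1.0;   // insert or update the coefficient
    }
    return 0;
  }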
solver from SuiteSparse (a.k.a. cholmod). It seems to be even faster
than SuperLU and it was much simpler to interface! Well,
the factorization is faster, but for the solve part SuperLU is
quite a bit faster. On the other hand, the solve part represents only a
fraction of the whole procedure. Moreover, I benched random matrices
that do not represent real cases, and I'm not at all sure
I use both libraries with their best settings!
* several fixes (transpose, matrix product, etc...)
* Added a basic Cholesky factorization
* Added a low-level hybrid dense/sparse vector class
to help write code involving intensive reads/writes
in a fixed vector (see the sketch after this list). It is currently
used to implement the matrix product itself as well as in the Cholesky
factorization.
might be twice as fast for small fixed-size matrices
* added a sparse triangular solver (sparse version
of inverseProduct)
* various other improvements in the Sparse module
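Conceptually, the hybrid dense/sparse vector mentioned above works like the hypothetical sketch below (a simplified illustration, not the actual class): a dense array gives O(1) random writes, while a list of touched indices lets the final read-out visit only the non-zeros.

  #include <vector>

  struct AccumulatorVector    // hypothetical name, simplified illustration
  {
    std::vector<double> values;    // dense storage, one slot per inner index
    std::vector<bool>   used;      // which slots have been written
    std::vector<int>    indices;   // touched slots, for sparse traversal

    explicit AccumulatorVector(int size) : values(size, 0.0), used(size, false) {}

    double& coeffRef(int i)        // O(1) write access
    {
      if (!used[i]) { used[i] = true; indices.push_back(i); }
      return values[i];
    }
  };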
* added complete implementation of sparse matrix product
(with a little glue in Eigen/Core)
* added an exhaustive bench of sparse products including GMM++ and MTL4
=> Eigen outperforms in all transposed/density configurations !
* added some glue to Eigen/Core (SparseBit, ei_eval, Matrix)
* add two new sparse matrix types:
HashMatrix: based on std::map (for random writes)
LinkedVectorMatrix: array of linked vectors
(for outer coherent writes, e.g. to transpose a matrix)
* add a SparseSetter class to easily set/update any kind of matrix, e.g.:
  {
    SparseSetter<MatrixType,RandomAccessPattern> wrapper(mymatrix);
    for (...) wrapper->coeffRef(rand(),rand()) = rand();
  }
* automatic shallow copy for RValue
* and a lot of mess !
plus:
* remove the remaining ArrayBit-related stuff
* don't use alloca in the product for very large memory allocations (see the sketch below)
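The alloca change boils down to the usual pattern sketched below (a generic illustration, not Eigen's actual code; the threshold value is arbitrary): small temporaries stay on the stack, large ones fall back to the heap to avoid overflowing it.

  #include <cstdlib>
  #include <alloca.h>   // platform specific; e.g. _alloca in <malloc.h> on MSVC

  void product_kernel(std::size_t n)
  {
    const std::size_t kStackLimit = 16384;   // bytes; arbitrary threshold
    const std::size_t bytes = n * sizeof(double);
    double* buffer;
    if (bytes <= kStackLimit)
      buffer = static_cast<double*>(alloca(bytes));      // small: cheap stack allocation
    else
      buffer = static_cast<double*>(std::malloc(bytes)); // large: heap allocation
    // ... use buffer as a temporary for the product ...
    if (bytes > kStackLimit)
      std::free(buffer);   // alloca'd memory is released automatically on return
  }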