Question 1: why do *=scalar and /=scalar work right away?
Same oddity in DynamicSparseMatrix, where operators += and -= work without
having to redefine them. Why?
* try to be clever in matrix ctors and operator=: be lazy when we can, always allow
copying a row vector into a column vector, check the template parameters,
and try to factor the code better
* add missing copy ctor in UnalignedType
* fix bug in the traits of DiagonalProduct
* renaming: EIGEN_TUNE_FOR_CPU_CACHE_SIZE
* update the dox a little
That means a lot of features which were available for sparse matrices
via the dense (and super slow) implementation are no longer available.
All features which make sense for sparse matrices (i.e., can be implemented efficiently) will be
implemented soon, but don't expect an API as rich as for the dense path.
Other changes:
* no block(), row(), col() anymore.
* instead, use .innerVector() to get a column or row vector of a matrix (see the sketch after this list).
* .segment(), start(), and end() will be back soon; not sure about block()
* faster cwise product
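For illustration, a hedged usage sketch (assuming a column-major SparseMatrix, so an
inner vector is a column; for a row-major matrix it would be a row):
  SparseMatrix<double> mat(1000,1000);
  // was:  mat.col(j)
  // now:  mat.innerVector(j)   // expression for the j-th column of mat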
* Add Eigen/StdVector header.
Including it #includes <vector> and "Core", and generates a partial
specialization of std::vector<T> for T=Eigen::Matrix<...>
that works even with vectorizable fixed-size Eigen types
(working around a design issue in the C++ STL). A usage sketch follows below.
* Add unit test
CCMAIL: alex.stapleton@gmail.com
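For illustration, a minimal usage sketch of the new header (the whole point being
that no special allocator is needed anymore for vectorizable fixed-size types):
  #include <Eigen/StdVector>           // pulls in <vector> and Eigen/Core
  std::vector<Eigen::Vector4f> v(10);  // safe: uses the generated partial specialization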
only used as a fallback for now, needs benchmarking.
Also notice that some malloc() impls do waste memory to keep track of alignment
and other stuff (check MSDN's page on malloc).
* expand test_dynalloc to cover low-level aligned alloc funcs. Remove the old
#ifdef EIGEN_VECTORIZE...
* rewrite the logic choosing an aligned alloc, some new stuff:
  * malloc() is already aligned on FreeBSD and Windows x64 (plus Apple already)
  * _mm_malloc() is used only if EIGEN_VECTORIZE
  * posix_memalign: correct detection according to the man page (not necessarily
    Linux-specific); don't attempt to declare it if the platform didn't declare it
    (there had to be a reason why it didn't declare it, right?)
ei_aligned_malloc now really behaves like malloc
(untyped, doesn't call the ctor);
ei_aligned_new is the typed variant that calls the ctor;
EIGEN_MAKE_ALIGNED_OPERATOR_NEW now takes the class name as a parameter (sketched below).
* extend unit tests
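A hedged sketch of the allocation helpers described above (exact signatures assumed):
  void* raw = ei_aligned_malloc(256);          // untyped, like malloc: no ctor is called
  ei_aligned_free(raw);
  Vector4f* v = ei_aligned_new<Vector4f>(16);  // typed variant: allocates and calls the ctors
  ei_aligned_delete(v, 16);
  struct MyClass {
    Vector4f member;
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW(MyClass)   // the macro now takes the class name
  };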
* add support for generic sum reduction and dot product
* optimize the cwise() product: this is a special case of CwiseBinaryOp where
we only have to process the coeffs which are non-zero in *both* matrices.
Perhaps there exist other binary operations like that?
(the former solution is still available and tested)
This plays much better with classes that already have base classes --
don't force the user to mess with multiple inheritance, which gave
much trouble with MSVC.
* Expand the unaligned assert dox page
* Minor fixes in the lazy evaluation dox page
Can't believe that 3 people wasted so much time because of a missing &!
And this time MSVC didn't catch it, so it was really copying the vector
on the stack at an unaligned location!
For that reason, we shouldn't give Matrix two empty base classes, since sizeof(Matrix) must be optimal.
So we overload operator new and delete manually rather than inheriting from an empty struct to do that.
order, one bit for enabling/disabling auto-alignment. If you want to
disable, do:
Matrix<float,4,1,Matrix_DontAlign>
The Matrix_ prefix is the only way I can see to avoid
ambiguity/pollution. The old RowMajor, ColMajor constants are
deprecated but remain for now.
* this prompted several improvements in matrix_storage. ei_aligned_array was
renamed to ei_matrix_array and moved there. The %16==0 tests are now
centralized in one place there.
* unalignedassert test: updated
* update FindEigen2.cmake from KDElibs
* determinant test: use VERIFY_IS_APPROX to fix false positives; add
testing of one big matrix
* Matrix: always inherit WithAlignedOperatorNew, regardless of
vectorization or not
* rename ei_alloc_stack to ei_aligned_stack_alloc
* mixingtypes test: disable vectorization as SSE intrinsics don't allow
mixing types and we just get compile errors there.
cachefriendlyproduct, which should be banned as well since, depending on the
platform, they can give a malloc, and they could happen even with (large
enough) fixed-size matrices. Corresponding fix in Product.h:
cachefriendly is now only used for dynamic matrices -- fixed size, no
matter how large, doesn't use the cachefriendly product. We don't need
to care (in my opinion) about performance for large fixed sizes, as large
fixed sizes are a bad idea in the first place and it is more important to
be able to clearly guarantee that fixed size never causes a malloc.
* fixes for mistakes (especially in the cast() methods in Geometry) revealed by the new "mixing types" test
* dox love, including a section on coeff access in core and an overview in geometry
* fix issues in Product revealed by this test
* in Dot.h forbid mixing of different types (at least for now, might allow real.dot(complex) in the future).
* use _mm_malloc/_mm_free on platforms other than Linux or MSVC (e.g., Cygwin, OS X)
* replace a lot of inline keywords by EIGEN_STRONG_INLINE to compensate for
poor MSVC inlining
* actually GCC 4.3.0 has a bug: "deprecated" placed at the end
of a function prototype doesn't have any effect; moving it to
the start of the prototype makes it actually work!
* finish porting the cholesky unit-test to the new LLT/LDLT,
after the above fix revealed a deprecated warning
Derived to MatrixBase.
* the optimization of eval() for Matrix now consists in a partial
specialization of ei_eval, which returns a reference type for Matrix.
No overriding of eval() in Matrix anymore. Consequence: careful,
ei_eval is no longer guaranteed to give a plain matrix type!
For that, use ei_plain_matrix_type, or the PlainMatrixType typedef.
* so lots of changes to adapt to that everywhere. Hope this doesn't
break (too much) MSVC compilation.
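Roughly, the partial specialization described above looks like this (a simplified
sketch; the real code has more cases):
  template<typename T> struct ei_eval              // generic case: evaluate into a plain matrix
  { typedef typename ei_plain_matrix_type<T>::type type; };
  template<typename Scalar, int Rows, int Cols, int Options, int MaxRows, int MaxCols>
  struct ei_eval<Matrix<Scalar,Rows,Cols,Options,MaxRows,MaxCols> >
  { typedef const Matrix<Scalar,Rows,Cols,Options,MaxRows,MaxCols>& type; };  // a Matrix is already plain: hand back a reference type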
* add code examples for the new image() stuff.
* lower the precision a bit for floats in the unit tests, as
we were already doing some workarounds in inverse.cpp and we got some
failing tests.
* somehow the NICE_RANDOM stuff wasn't being used anymore and
tests were sometimes failing again. Fixed by #including Eigen/Array
instead of cherry-picking just Random.h.
* little fixes in the unaligned assert page
* add Transform::operator= taking a rotation.
An old remnant was left commented out. Why was it disabled?
* slight optimization in operator= taking a translation
* slight optimization (perhaps) in the new memory assertion
for this very nasty bug (unaligned member in dynamically allocated struct)
that our friends at Krita just encountered:
http://bugs.kde.org/show_bug.cgi?id=177133
CCBUG:177133
- in matrix-matrix product, static assert that the two scalar types are the same.
- Similarly in CwiseBinaryOp. POTENTIALLY CONTROVERSIAL: we no longer allow binary
ops to take two different scalar types. The functors that we defined take two args
of the same type anyway; we still allow the return type to be different, though.
Again, the reason is that different scalar types are incompatible with vectorization.
Better to have the user realize explicitly what mixing different numeric types costs
in terms of performance.
See the comment in the CwiseBinaryOp constructor; a sketch follows this list.
- This allowed fixing a little mistake in test/regression.cpp, which mixed float and double
- Remove redundant semicolons (;) after static asserts
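A hedged sketch of the kind of check this adds (macro/trait names and the message
name are assumptions; the message must be one of the predefined static-assert messages):
  EIGEN_STATIC_ASSERT((ei_is_same_type<typename Lhs::Scalar, typename Rhs::Scalar>::ret),
                      YOU_MIXED_DIFFERENT_NUMERIC_TYPES)
  // so:  MatrixXf f;  MatrixXd d;
  //      f * d;                  // no longer compiles
  //      f.cast<double>() * d;   // explicit cast: the conversion cost is visible to the user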
* add an LDL^T factorization with solver using code from T. Davis's LDL
library (LGPL 2.1+)
* various bug fixes in the triangular solver, matrix product, etc.
* improve cmake files for the supported libraries
* split the sparse unit test
* etc.
Some naming questions:
- for "extend" we could also think of: "expand", "union", "add"
- same for "clamp": "crop", "intersect"
- same for "contains": "isInside", "intersect"
=> ah "intersect" is conflicting, so that eliminates this one !
* remove the automatic resizing feature of operator=
* add a function Matrix::set() to be used when the previous
behavior is wanted (see the sketch after this list)
* the default constructor of dynamic-size matrices now
creates a "null" matrix (data=0, rows = cols = 0)
instead of a 1x1 matrix
* fix UnixX typos ;)
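A short sketch of the new semantics (per the items above):
  MatrixXf a(2,2), b(3,3);
  // a = b;   // operator= no longer resizes a: sizes must match
  a.set(b);   // set() resizes a to 3x3 and copies b, i.e. the old operator= behavior
  MatrixXf c; // default ctor: "null" matrix (data=0, rows = cols = 0), no longer 1x1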
as described on the wiki (one map per N columns)
Here are some bench results for the 4 currently supported map impls:
  std::map       => 18.3385 s  (581 MB)
  gnu::hash_map  =>  6.52574 s (555 MB)
  google::dense  =>  2.87982 s (315 MB)
  google::sparse => 15.7441 s  (165 MB)
This is the time in seconds (and memory consumption) to insert/look up
10 million coeffs with random coords inside a 10000^2 matrix,
with one map per packet of 64 columns => google::dense really rocks!
Note that I use as the key value the index of the column in the packet (between 0 and 63)
times the number of rows, and I used the default hash function... so maybe there is
room for improvement here.
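For reference, a hedged sketch of the key computation described above (the exact
formula, in particular the "+ row" part, is an assumption):
  // one map per packet of 64 columns; within a packet:
  int key = (col % 64) * rows + row;  // column index in the packet times #rows, plus the row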
solver from SuiteSparse (as is cholmod). It seems to be even faster
than SuperLU and it was much simpler to interface! Well,
the factorization is faster, but for the solve part, SuperLU is
quite a bit faster. On the other hand, the solve part represents only a
fraction of the whole procedure. Moreover, I benchmarked random matrices
that do not represent real cases, and I'm not at all sure
I use both libraries with their best settings!
* rename Cholesky to LLT
* rename CholeskyWithoutSquareRoot to LDLT
* rename MatrixBase::cholesky() to llt()
* rename MatrixBase::choleskyNoSqrt() to ldlt()
* make {LLT,LDLT}::solve() API consistent with other modules (a usage sketch follows below)
Note that we are going to keep source compatibility until the next beta release.
E.g., the "old" Cholesky* classes, etc. are still available for some time.
To be clear, Eigen beta2 should be (hopefully) source compatible with beta1,
and so beta2 will contain all the deprecated API of beta1. Those features marked
as deprecated will be removed in beta3 (or in the final 2.0 if there is no beta3!).
Also includes various updates in sparse Cholesky.
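A minimal usage sketch of the renamed entry points (the solve() signature shown is
assumed to be the usual Eigen 2 pointer-result style):
  MatrixXd A;    // symmetric positive definite, filled elsewhere
  VectorXd b, x; // right-hand side and solution
  A.llt().solve(b, &x);   // was: A.cholesky().solve(b, &x)
  A.ldlt().solve(b, &x);  // was: A.choleskyNoSqrt().solve(b, &x)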
* several fixes (transpose, matrix product, etc...)
* Added a basic cholesky factorization
* Added a low level hybrid dense/sparse vector class
to help writing code involving intensive read/write
in a fixed vector. It is currently used to implement
the matrix product itself as well as in the Cholesky
factorization.
However, for matrices larger than 5, it seems there is consistently a quite large error in a very
few coefficients. I don't know what's going on, but it's certainly not due to numerical issues only.
(Also note that the test with the pseudo eigenvectors fails in the same way.)
* eigenvectors => pseudoEigenvectors
* added pseudoEigenvalueMatrix (usage sketched after this list)
* clear the documentation
* added respective unit test
Still missing: a proper eigenvectors() function.
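A short sketch of the renamed/added accessors (per the list above):
  MatrixXd A = MatrixXd::Random(6,6);
  EigenSolver<MatrixXd> es(A);
  MatrixXd V = es.pseudoEigenvectors();      // was: eigenvectors()
  MatrixXd D = es.pseudoEigenvalueMatrix();  // real, block-diagonal
  // for a diagonalizable A, A*V should be approximately V*D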
based on the former.
* opengl_demo: make IcoSphere better (vertices are instantiated only once) and
remove the generation of a big geometry for the fancy spheres...
* add a WithAlignedOperatorNew class with an overloaded operator new
* make Matrix (and Quaternion, Transform, Hyperplane, etc.) use it
if needed, such that "*(new Vector4) = xpr" does not fail anymore.
* Please make sure your classes that have fixed-size Eigen vector
or matrix members inherit from WithAlignedOperatorNew
* add an ei_new_allocator STL memory allocator to use with STL containers.
This allocator really calls operator new on your types (unlike GCC's
new_allocator). Example:
  std::vector<Vector4f> data(10);
will segfault if vectorization is enabled; instead use:
  std::vector<Vector4f,ei_new_allocator<Vector4f> > data(10);
NOTE: you only have to worry if you deal with fixed-size matrix types
with "sizeof(matrix_type)%16==0"...
few bits left of the decimal point, and for floating-point types it will never return zero.
This replaces the custom functions in test/main.h, so one no longer needs
to think about that when writing tests.
This allows code factorization and generic template specialization
of functions.
* added any_rotation * {Translation,Scaling,Transform} product methods (sketched below)
* rewrote the actually broken ToRotationMatrix helper class into
a global ei_toRotationMatrix function.
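For illustration, a hedged sketch of the new products (typedef names and result
types are assumptions):
  AngleAxisf aa(0.79f, Vector3f::UnitZ());     // some rotation
  Transform3f t1 = aa * Translation3f(1,2,3);  // any_rotation * Translation
  Transform3f t2 = aa * Scaling3f(2);          // any_rotation * Scaling
  Transform3f t3 = aa * t1;                    // any_rotation * Transform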