namespace Eigen {
/** \page TutorialAdvancedLinearAlgebra Tutorial 3/3 - Advanced linear algebra
\ingroup Tutorial
<div class="eimainmenu">\ref index "Overview"
| \ref TutorialCore "Core features"
| \ref TutorialGeometry "Geometry"
| \b Advanced \b linear \b algebra
    | \ref TutorialSparse "Sparse matrix"
</div>
\b Table \b of \b contents
     - \ref TutorialAdvSolvers
- \ref TutorialAdvLU
- \ref TutorialAdvCholesky
- \ref TutorialAdvQR
- \ref TutorialAdvEigenProblems
\section TutorialAdvSolvers Solving linear problems
This part of the tutorial focuses on solving linear problems of the form \f$ A \mathbf{x} = b \f$,
where both \f$ A \f$ and \f$ b \f$ are known and \f$ \mathbf{x} \f$ is the unknown. Moreover, \f$ A \f$
is assumed to be a square matrix. Of course, the methods described here can also be used whenever an expression
involves the product of an inverse matrix with a vector or another matrix: \f$ A^{-1} B \f$.
Eigen offers several algorithms for this problem; the best choice mainly depends on the nature of
the matrix \f$ A \f$, such as its shape, size and numerical properties.
\subsection TutorialAdvSolvers_Triangular Triangular solver
If the matrix \f$ A \f$ is triangular (upper or lower) and invertible (i.e., all its diagonal
coefficients are nonzero), then the problem can be solved directly using MatrixBase::solveTriangular(), or better,
MatrixBase::solveTriangularInPlace(). Here is an example:
<table class="tutorial_code"><tr><td>
\include MatrixBase_marked.cpp
</td>
<td>
output:
\include MatrixBase_marked.out
</td></tr></table>
See MatrixBase::solveTriangular() for more details.
\subsection TutorialAdvSolvers_Inverse Direct inversion (for small matrices)
If the matrix \f$ A \f$ is small (size \f$ \leq 4 \f$) and invertible, then a good approach is to compute
the inverse of \f$ A \f$ directly, and then obtain the solution \f$ x \f$ as \f$ \mathbf{x} = A^{-1} b \f$. With Eigen,
this can be implemented like this:
\code
#include <Eigen/LU>
Matrix4f A = Matrix4f::Random();
Vector4f b = Vector4f::Random();
Vector4f x = A.inverse() * b;
\endcode
Note that the function inverse() is defined in the LU module.
See MatrixBase::inverse() for more details.
\subsection TutorialAdvSolvers_Symmetric Cholesky (for positive definite matrices)
If the matrix \f$ A \f$ is \b positive \b definite, then
the best method is a Cholesky decomposition.
Such positive definite matrices often arise when solving overdetermined problems in a least squares sense (see below).
Eigen offers two different Cholesky decompositions: an \f$ LL^T \f$ decomposition where L is a lower triangular matrix,
and an \f$ LDL^T \f$ decomposition where L is lower triangular with unit diagonal and D is a diagonal matrix.
The latter avoids square roots and is therefore slightly more stable than the former.
\code
#include <Eigen/Cholesky>
MatrixXf D = MatrixXf::Random(8,4);
MatrixXf A = D.transpose() * D;
VectorXf b = A * VectorXf::Random(4);
VectorXf x;
A.llt().solve(b,&x); // using a LLT factorization
A.ldlt().solve(b,&x); // using a LDLT factorization
\endcode
When the value of the right hand side \f$ b \f$ is not needed anymore, it is faster to use
the \em in \em place API, e.g.:
\code
A.llt().solveInPlace(b);
\endcode
In that case the value of \f$ b \f$ is replaced by the result \f$ x \f$.
If the linear problem has to be solved for several right hand sides \f$ b_i \f$ but the same matrix \f$ A \f$,
then it is highly recommended to perform the decomposition of \f$ A \f$ only once, e.g.:
\code
// ...
LLT<MatrixXf> lltOfA(A);
lltOfA.solveInPlace(b0);
lltOfA.solveInPlace(b1);
// ...
\endcode
\sa Cholesky_Module, LLT::solve(), LLT::solveInPlace(), LDLT::solve(), LDLT::solveInPlace(), class LLT, class LDLT.
\subsection TutorialAdvSolvers_LU LU decomposition (for most cases)
If the matrix \f$ A \f$ does not fit in any of the previous categories, or if you are unsure about the numerical
stability of your problem, then you can use the LU solver based on the decomposition of the same name: see the section \ref TutorialAdvLU below. Actually, Eigen's LU module does not implement a standard LU decomposition, but rather a rank-revealing LU decomposition
with full pivoting, which has much better numerical stability.
The API of the LU solver is the same as the Cholesky one, except that there is no \em in \em place variant:
\code
#include <Eigen/LU>
MatrixXf A = MatrixXf::Random(20,20);
VectorXf b = VectorXf::Random(20);
VectorXf x;
A.lu().solve(b, &x);
\endcode
Again, the LU decomposition can be stored so that it can be reused, or used to perform other operations such as computing the kernel:
\code
// ...
LU<MatrixXf> luOfA(A);
luOfA.solve(b, &x);
// ...
\endcode
See the section \ref TutorialAdvLU below.
\sa class LU, LU::solve(), LU_Module
\subsection TutorialAdvSolvers_SVD SVD solver (for singular matrices and special cases)
Finally, Eigen also offers a solver based on the singular value decomposition (SVD). Again, the API is the
same as with Cholesky or LU:
\code
#include <Eigen/SVD>
MatrixXf A = MatrixXf::Random(20,20);
VectorXf b = VectorXf::Random(20);
VectorXf x;
A.svd().solve(b, &x);
SVD<MatrixXf> svdOfA(A);
svdOfA.solve(b, &x);
\endcode
\sa class SVD, SVD::solve(), SVD_Module
<a href="#" class="top">top</a>\section TutorialAdvLU LU
Eigen provides a rank-revealing LU decomposition with full pivoting, which has very good numerical stability.
You can obtain the LU decomposition of a matrix by calling \link MatrixBase::lu() lu() \endlink, which is the easiest way if you're going to use the LU decomposition only once, as in
\code
#include <Eigen/LU>
MatrixXf A = MatrixXf::Random(20,20);
VectorXf b = VectorXf::Random(20);
VectorXf x;
A.lu().solve(b, &x);
\endcode
Alternatively, you can construct a named LU decomposition, which allows you to reuse it for more than one operation:
\code
#include <Eigen/LU>
MatrixXf A = MatrixXf::Random(20,20);
Eigen::LU<MatrixXf> lu(A);
cout << "The rank of A is " << lu.rank() << endl;
if(lu.isInvertible()) {
cout << "A is invertible, its inverse is:" << endl << lu.inverse() << endl;
}
else {
cout << "Here's a matrix whose columns form a basis of the kernel a.k.a. nullspace of A:"
<< endl << lu.kernel() << endl;
}
\endcode
\sa LU_Module, LU::solve(), class LU
<a href="#" class="top">top</a>\section TutorialAdvCholesky Cholesky
todo
\sa Cholesky_Module, LLT::solve(), LLT::solveInPlace(), LDLT::solve(), LDLT::solveInPlace(), class LLT, class LDLT
<a href="#" class="top">top</a>\section TutorialAdvQR QR
todo
\sa QR_Module, class QR
<a href="#" class="top">top</a>\section TutorialAdvEigenProblems Eigen value problems
todo
\sa class SelfAdjointEigenSolver, class EigenSolver
*/
}