Update documentation for matrix decompositions and least squares solvers.
commit 7e6f94961c
parent 5c6b3efead
@@ -30,14 +30,17 @@ computing least squares solutions:
 </table>
 
 This is an example from the page \link TutorialLinearAlgebra Linear algebra and decompositions \endlink.
+If you just need to solve the least squares problem, but are not interested in the SVD per se, a
+faster alternative method is CompleteOrthogonalDecomposition.
 
 \section LeastSquaresQR Using the QR decomposition
 
 The solve() method in QR decomposition classes also computes the least squares solution. There are
-three QR decomposition classes: HouseholderQR (no pivoting, so fast but unstable),
-ColPivHouseholderQR (column pivoting, thus a bit slower but more accurate) and FullPivHouseholderQR
-(full pivoting, so slowest and most stable). Here is an example with column pivoting:
+three QR decomposition classes: HouseholderQR (no pivoting, fast but unstable if your matrix is
+not full rank), ColPivHouseholderQR (column pivoting, thus a bit slower but more stable) and
+FullPivHouseholderQR (full pivoting, so slowest and slightly more stable than ColPivHouseholderQR).
+Here is an example with column pivoting:
 
 <table class="example">
 <tr><th>Example:</th><th>Output:</th></tr>
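A minimal standalone sketch of the column-pivoting least-squares solve described above; the matrix sizes and entries are made up for illustration and are not those of the bundled example:

\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Hypothetical over-determined system: 4 equations, 2 unknowns.
  Eigen::MatrixXf A = Eigen::MatrixXf::Random(4, 2);
  Eigen::VectorXf b = Eigen::VectorXf::Random(4);

  // Column-pivoting QR: a good speed/stability compromise for least squares.
  Eigen::VectorXf x = A.colPivHouseholderQr().solve(b);
  std::cout << "Least-squares solution:\n" << x << std::endl;
}
\endcode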
@@ -61,9 +64,11 @@ Finding the least squares solution of \a Ax = \a b is equivalent to solving the
 </tr>
 </table>
 
-If the matrix \a A is ill-conditioned, then this is not a good method, because the condition number
+This method is usually the fastest, especially when \a A is "tall and skinny". However, if the
+matrix \a A is even mildly ill-conditioned, this is not a good method, because the condition number
 of <i>A</i><sup>T</sup><i>A</i> is the square of the condition number of \a A. This means that you
-lose twice as many digits using normal equation than if you use the other methods.
+lose roughly twice as many digits of accuracy using the normal equation, compared to the more stable
+methods mentioned above.
 
 */
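The trade-off described in this hunk is easy to see in code. A minimal sketch of the normal-equations approach (sizes and data are hypothetical), using the Cholesky-based LDLT solver:

\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Hypothetical "tall and skinny" system: 6 equations, 3 unknowns.
  Eigen::MatrixXf A = Eigen::MatrixXf::Random(6, 3);
  Eigen::VectorXf b = Eigen::VectorXf::Random(6);

  // Solve the normal equation (A^T A) x = A^T b via Cholesky (LDLT).
  // Fast, but cond(A^T A) = cond(A)^2, so accuracy suffers if A is ill-conditioned.
  Eigen::VectorXf x = (A.transpose() * A).ldlt().solve(A.transpose() * b);
  std::cout << x << std::endl;
}
\endcode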
@@ -99,7 +99,7 @@ To get an overview of the true relative speed of the different decompositions, c
 <td><em>-</em></td>
 </tr>
 
-<tr class="alt">
+<tr>
 <td>LLT</td>
 <td>Positive definite</td>
 <td>Very fast</td>
@@ -111,7 +111,7 @@ To get an overview of the true relative speed of the different decompositions, c
 <td>Blocking</td>
 </tr>
 
-<tr>
+<tr class="alt">
 <td>LDLT</td>
 <td>Positive or negative semidefinite<sup><a href="#note1">1</a></sup></td>
 <td>Very fast</td>
@@ -14,7 +14,7 @@ QR, %SVD, eigendecompositions... After reading this page, don't miss our
 \f[ Ax \: = \: b \f]
 Where \a A and \a b are matrices (\a b could be a vector, as a special case). You want to find a solution \a x.
 
-\b The \b solution: You can choose between various decompositions, depending on what your matrix \a A looks like,
+\b The \b solution: You can choose between various decompositions, depending on the properties of your matrix \a A,
 and depending on whether you favor speed or accuracy. However, let's start with an example that works in all cases,
 and is a good compromise:
 <table class="example">
@@ -34,7 +34,7 @@ Vector3f x = dec.solve(b);
 
 Here, ColPivHouseholderQR is a QR decomposition with column pivoting. It's a good compromise for this tutorial, as it
 works for all matrices while being quite fast. Here is a table of some other decompositions that you can choose from,
-depending on your matrix and the trade-off you want to make:
+depending on your matrix, the problem you are trying to solve, and the trade-off you want to make:
 
 <table class="manual">
 <tr>
@@ -128,11 +128,13 @@ depending on your matrix and the trade-off you want to make:
 </table>
 To get an overview of the true relative speed of the different decompositions, check this \link DenseDecompositionBenchmark benchmark \endlink.
 
 All of these decompositions offer a solve() method that works as in the above example.
 
-For example, if your matrix is positive definite, the above table says that a very good
-choice is then the LLT or LDLT decomposition. Here's an example, also demonstrating that using a general
-matrix (not a vector) as right hand side is possible.
+If you know more about the properties of your matrix, you can use the above table to select the best method.
+For example, a good choice for solving linear systems with a non-symmetric matrix of full rank is PartialPivLU.
+If you know that your matrix is also symmetric and positive definite, the above table says that
+a very good choice is the LLT or LDLT decomposition. Here's an example, also demonstrating that using a general
+matrix (not a vector) as right hand side is possible:
 
 <table class="example">
 <tr><th>Example:</th><th>Output:</th></tr>
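To make the added advice concrete, here is a minimal sketch (with made-up values) of an LDLT solve with a matrix right-hand side, as the rewritten paragraph announces:

\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Hypothetical symmetric positive-definite matrix and a matrix right-hand side.
  Eigen::Matrix2f A, b;
  A << 2, -1,
      -1,  3;
  b << 1, 2,
       3, 1;

  // LDLT (robust Cholesky): very fast for positive/negative semidefinite matrices.
  Eigen::Matrix2f x = A.ldlt().solve(b);
  std::cout << "Solution:\n" << x << std::endl;
}
\endcode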
@@ -146,7 +148,34 @@ For a \ref TopicLinearAlgebraDecompositions "much more complete table" comparing
 supports many other decompositions), see our special page on
 \ref TopicLinearAlgebraDecompositions "this topic".
 
-\section TutorialLinAlgSolutionExists Checking if a solution really exists
+\section TutorialLinAlgLeastsquares Least squares solving
+
+The most general and accurate method to solve under- or over-determined linear systems
+in the least squares sense is the SVD decomposition. Eigen provides two implementations.
+The recommended one is the BDCSVD class, which scales well for large problems
+and automatically falls back to the JacobiSVD class for smaller problems.
+For both classes, their solve() method solves the linear system in the least-squares
+sense.
+
+Here is an example:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+<td>\include TutorialLinAlgSVDSolve.cpp </td>
+<td>\verbinclude TutorialLinAlgSVDSolve.out </td>
+</tr>
+</table>
+
+An alternative to the SVD, which is usually faster and about as accurate, is CompleteOrthogonalDecomposition.
+
+Again, if you know more about the problem, the table above contains methods that are potentially faster.
+If your matrix is full rank, HouseholderQR is the method of choice. If your matrix is full rank and well conditioned,
+using the Cholesky decomposition (LLT) on the matrix of the normal equations can be faster still.
+Our page on \link LeastSquares least squares solving \endlink has more details.
+
+
+\section TutorialLinAlgSolutionExists Checking if a matrix is singular
 
 Only you know what error margin you want to allow for a solution to be considered valid.
 So Eigen lets you do this computation for yourself, if you want to, as in this example:
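A minimal sketch contrasting the three least-squares routes this hunk introduces; the data is random and purely illustrative:

\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXf A = Eigen::MatrixXf::Random(3, 2);
  Eigen::VectorXf b = Eigen::VectorXf::Random(3);

  // Most general and accurate: SVD (thin U/V are sufficient for solving).
  Eigen::VectorXf x_svd =
      A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);

  // Usually faster and about as accurate: complete orthogonal decomposition.
  Eigen::VectorXf x_cod = A.completeOrthogonalDecomposition().solve(b);

  // Faster still, but requires A to have full rank: Householder QR.
  Eigen::VectorXf x_qr = A.householderQr().solve(b);

  std::cout << x_svd << "\n\n" << x_cod << "\n\n" << x_qr << std::endl;
}
\endcode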
@@ -179,11 +208,11 @@ very rare. The call to info() is to check for this possibility.
 \section TutorialLinAlgInverse Computing inverse and determinant
 
 First of all, make sure that you really want this. While inverse and determinant are fundamental mathematical concepts,
-in \em numerical linear algebra they are not as popular as in pure mathematics. Inverse computations are often
+in \em numerical linear algebra they are not as useful as in pure mathematics. Inverse computations are often
 advantageously replaced by solve() operations, and the determinant is often \em not a good way of checking if a matrix
 is invertible.
 
-However, for \em very \em small matrices, the above is not true, and inverse and determinant can be very useful.
+However, for \em very \em small matrices, the above may not be true, and inverse and determinant can be very useful.
 
 While certain decompositions, such as PartialPivLU and FullPivLU, offer inverse() and determinant() methods, you can also
 call inverse() and determinant() directly on a matrix. If your matrix is of a very small fixed size (at most 4x4) this
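As a pocket illustration of the paragraph above, a sketch (values made up) of calling determinant() and inverse() directly on a small fixed-size matrix:

\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix3f A;
  A << 1, 2, 1,
       2, 1, 0,
      -1, 1, 2;

  // Cheap and convenient for very small fixed-size matrices (at most 4x4).
  std::cout << "determinant: " << A.determinant() << "\n";
  std::cout << "inverse:\n" << A.inverse() << std::endl;
}
\endcode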
@@ -198,28 +227,6 @@ Here is an example:
 </tr>
 </table>
 
-\section TutorialLinAlgLeastsquares Least squares solving
-
-The most accurate method to do least squares solving is with a SVD decomposition.
-Eigen provides two implementations.
-The recommended one is the BDCSVD class, which scale well for large problems
-and automatically fall-back to the JacobiSVD class for smaller problems.
-For both classes, their solve() method is doing least-squares solving.
-
-Here is an example:
-<table class="example">
-<tr><th>Example:</th><th>Output:</th></tr>
-<tr>
-<td>\include TutorialLinAlgSVDSolve.cpp </td>
-<td>\verbinclude TutorialLinAlgSVDSolve.out </td>
-</tr>
-</table>
-
-Another methods, potentially faster but less reliable, are to use a Cholesky decomposition of the
-normal matrix or a QR decomposition. Our page on \link LeastSquares least squares solving \endlink
-has more details.
-
-
 \section TutorialLinAlgSeparateComputation Separating the computation from the construction
 
 In the above examples, the decomposition was computed at the same time that the decomposition object was constructed.
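The section beginning at this context line covers decoupling construction from computation; a minimal sketch of that pattern (matrix values hypothetical):

\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix2f A, b;
  A << 2, -1,
      -1,  3;
  b << 1, 2,
       3, 1;

  // Construct an empty decomposition object first...
  Eigen::ColPivHouseholderQR<Eigen::Matrix2f> dec;
  // ...and compute the decomposition later; the same object can be
  // reused for another matrix via a further call to compute().
  dec.compute(A);
  Eigen::Matrix2f x = dec.solve(b);
  std::cout << x << std::endl;
}
\endcode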