namespace Eigen {

/** \eigenManualPage TutorialLinearAlgebra Linear algebra and decompositions

This page explains how to solve linear systems and compute various decompositions such as LU,
QR, %SVD, and eigendecompositions. After reading this page, don't miss our
\link TopicLinearAlgebraDecompositions catalogue \endlink of dense matrix decompositions.

\eigenAutoToc

\section TutorialLinAlgBasicSolve Basic linear solving

\b The \b problem: You have a system of equations that you have written as a single matrix equation
\f[ Ax \: = \: b \f]
where \a A and \a b are matrices (\a b could be a vector, as a special case). You want to find a solution \a x.

\b The \b solution: You can choose among various decompositions, depending on what your matrix \a A looks like
and on whether you favor speed or accuracy. However, let's start with an example that works in all cases
and is a good compromise:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgExSolveColPivHouseholderQR.cpp </td>
<td>\verbinclude TutorialLinAlgExSolveColPivHouseholderQR.out </td>
</tr>
</table>

In this example, the colPivHouseholderQr() method returns an object of class ColPivHouseholderQR. Since here the
matrix is of type Matrix3f, this line could have been replaced by:
\code
ColPivHouseholderQR<Matrix3f> dec(A);
Vector3f x = dec.solve(b);
\endcode

Here, ColPivHouseholderQR is a QR decomposition with column pivoting. It's a good compromise for this tutorial, as it
works for all matrices while being quite fast. Here is a table of some other decompositions that you can choose from,
depending on your matrix and the trade-off you want to make:

<table class="manual">
<tr>
<th>Decomposition</th>
<th>Method</th>
<th>Requirements on the matrix</th>
<th>Speed</th>
<th>Accuracy</th>
</tr>
<tr>
<td>PartialPivLU</td>
<td>partialPivLu()</td>
<td>Invertible</td>
<td>++</td>
<td>+</td>
</tr>
<tr class="alt">
<td>FullPivLU</td>
<td>fullPivLu()</td>
<td>None</td>
<td>-</td>
<td>+++</td>
</tr>
<tr>
<td>HouseholderQR</td>
<td>householderQr()</td>
<td>None</td>
<td>++</td>
<td>+</td>
</tr>
<tr class="alt">
<td>ColPivHouseholderQR</td>
<td>colPivHouseholderQr()</td>
<td>None</td>
<td>+</td>
<td>++</td>
</tr>
<tr>
<td>FullPivHouseholderQR</td>
<td>fullPivHouseholderQr()</td>
<td>None</td>
<td>-</td>
<td>+++</td>
</tr>
<tr class="alt">
<td>LLT</td>
<td>llt()</td>
<td>Positive definite</td>
<td>+++</td>
<td>+</td>
</tr>
<tr>
<td>LDLT</td>
<td>ldlt()</td>
<td>Positive or negative semidefinite</td>
<td>+++</td>
<td>++</td>
</tr>
</table>

All of these decompositions offer a solve() method that works as in the above example.

For example, if your matrix is positive definite, the above table says that a very good
choice is the LDLT decomposition. Here's an example, also demonstrating that using a general
matrix (not a vector) as the right-hand side is possible.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgExSolveLDLT.cpp </td>
<td>\verbinclude TutorialLinAlgExSolveLDLT.out </td>
</tr>
</table>

For a \ref TopicLinearAlgebraDecompositions "much more complete table" comparing all the decompositions supported
by Eigen, see our special page on that topic.

\section TutorialLinAlgSolutionExists Checking if a solution really exists

Only you know what error margin you want to allow for a solution to be considered valid.
So Eigen lets you do this computation for yourself, if you want to, as in this example:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgExComputeSolveError.cpp </td>
<td>\verbinclude TutorialLinAlgExComputeSolveError.out </td>
</tr>
</table>

\section TutorialLinAlgEigensolving Computing eigenvalues and eigenvectors

You need an eigendecomposition here; see the available such decompositions on \ref TopicLinearAlgebraDecompositions "this page".
Make sure to check whether your matrix is self-adjoint, as is often the case in these problems. Here's an example using
SelfAdjointEigenSolver; it could easily be adapted to general matrices using EigenSolver or ComplexEigenSolver.

The computation of eigenvalues and eigenvectors does not necessarily converge, but such a failure to converge is
very rare. The call to info() is there to check for this possibility.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgSelfAdjointEigenSolver.cpp </td>
<td>\verbinclude TutorialLinAlgSelfAdjointEigenSolver.out </td>
</tr>
</table>

\section TutorialLinAlgInverse Computing inverse and determinant

First of all, make sure that you really want this. While inverse and determinant are fundamental mathematical concepts,
in \em numerical linear algebra they are not as popular as in pure mathematics. Inverse computations are often
advantageously replaced by solve() operations, and the determinant is often \em not a good way of checking if a matrix
is invertible.

However, for \em very \em small matrices, the above is not true, and inverse and determinant can be very useful.


While certain decompositions, such as PartialPivLU and FullPivLU, offer inverse() and determinant() methods, you can also
call inverse() and determinant() directly on a matrix. If your matrix is of a very small fixed size (at most 4x4), this
allows Eigen to avoid performing an LU decomposition and instead use formulas that are more efficient on such small matrices.

Here is an example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgInverseDeterminant.cpp </td>
<td>\verbinclude TutorialLinAlgInverseDeterminant.out </td>
</tr>
</table>

\section TutorialLinAlgLeastsquares Least squares solving

The best way to do least-squares solving is with an SVD decomposition. Eigen provides one as the JacobiSVD class,
and its solve() method performs least-squares solving.


Here is an example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgSVDSolve.cpp </td>
<td>\verbinclude TutorialLinAlgSVDSolve.out </td>
</tr>
</table>

Another way, potentially faster but less reliable, is to use an LDLT decomposition
of the normal matrix. In any case, just read any reference text on least squares, and it will be very easy for you
to implement any linear least-squares computation on top of Eigen.

\section TutorialLinAlgSeparateComputation Separating the computation from the construction

In the above examples, the decomposition was computed at the same time that the decomposition object was constructed.
There are, however, situations where you might want to separate these two things: for example, if you don't know,
at the time of construction, the matrix that you will want to decompose, or if you want to reuse an existing
decomposition object.

What makes this possible is that:
\li all decompositions have a default constructor,
\li all decompositions have a compute(matrix) method that does the computation, and that may be called again
on an already-computed decomposition, reinitializing it.

For example:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgComputeTwice.cpp </td>
<td>\verbinclude TutorialLinAlgComputeTwice.out </td>
</tr>
</table>

Finally, you can tell the decomposition constructor to preallocate storage for decomposing matrices of a given size,
so that when you subsequently decompose such matrices, no dynamic memory allocation is performed (of course, if you
are using fixed-size matrices, no dynamic memory allocation happens at all). This is done by just
passing the size to the decomposition constructor, as in this example:
\code
HouseholderQR<MatrixXf> qr(50,50);
MatrixXf A = MatrixXf::Random(50,50);
qr.compute(A); // no dynamic memory allocation
\endcode

\section TutorialLinAlgRankRevealing Rank-revealing decompositions

Certain decompositions are rank-revealing, i.e. able to compute the rank of a matrix. These are typically
also the decompositions that behave best in the face of a non-full-rank matrix (which, in the square case, means a
singular matrix). In \ref TopicLinearAlgebraDecompositions "this table" you can see, for all our decompositions,
whether they are rank-revealing or not.

Rank-revealing decompositions offer at least a rank() method. They can also offer convenience methods such as isInvertible(),
and some also provide methods to compute the kernel (null space) and image (column space) of the matrix, as is the
case with FullPivLU:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgRankRevealing.cpp </td>
<td>\verbinclude TutorialLinAlgRankRevealing.out </td>
</tr>
</table>

Of course, any rank computation depends on the choice of an arbitrary threshold, since practically no
floating-point matrix is \em exactly rank-deficient. Eigen picks a sensible default threshold, which depends
on the decomposition but is typically the diagonal size times machine epsilon. While this is the best default we
could pick, only you know the right threshold for your application. You can set this by calling setThreshold()
on your decomposition object before calling rank() or any other method that needs to use such a threshold.
The decomposition itself, i.e. the compute() method, is independent of the threshold: you don't need to recompute the
decomposition after you've changed the threshold.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgSetThreshold.cpp </td>
<td>\verbinclude TutorialLinAlgSetThreshold.out </td>
</tr>
</table>

*/

}