namespace Eigen {

/** \eigenManualPage TopicLinearAlgebraDecompositions Catalogue of dense decompositions

This page presents a catalogue of the dense matrix decompositions offered by Eigen.
For an introduction to linear solvers and decompositions, check this \link TutorialLinearAlgebra page \endlink.

\section TopicLinAlgBigTable Catalogue of decompositions offered by Eigen

<table class="manual-vl">
  <tr>
    <th class="meta"></th>
    <th class="meta" colspan="5">Generic information, not Eigen-specific</th>
    <th class="meta" colspan="3">Eigen-specific</th>
  </tr>

  <tr>
    <th>Decomposition</th>
    <th>Requirements on the matrix</th>
    <th>Speed</th>
    <th>Algorithm reliability and accuracy</th>
    <th>Rank-revealing</th>
    <th>What it can compute (besides linear solving)</th>
    <th>Linear solver provided by Eigen</th>
    <th>Maturity of Eigen's implementation</th>
    <th>Optimizations</th>
  </tr>

  <tr>
    <td>PartialPivLU</td>
    <td>Invertible</td>
    <td>Fast</td>
    <td>Depends on condition number</td>
    <td>-</td>
    <td>-</td>
    <td>Yes</td>
    <td>Excellent</td>
    <td>Blocking, Implicit MT</td>
  </tr>

  <tr class="alt">
    <td>FullPivLU</td>
    <td>-</td>
    <td>Slow</td>
    <td>Proven</td>
    <td>Yes</td>
    <td>-</td>
    <td>Yes</td>
    <td>Excellent</td>
    <td>-</td>
  </tr>

  <tr>
    <td>HouseholderQR</td>
    <td>-</td>
    <td>Fast</td>
    <td>Depends on condition number</td>
    <td>-</td>
    <td>Orthogonalization</td>
    <td>Yes</td>
    <td>Excellent</td>
    <td>Blocking</td>
  </tr>

  <tr class="alt">
    <td>ColPivHouseholderQR</td>
    <td>-</td>
    <td>Fast</td>
    <td>Good</td>
    <td>Yes</td>
    <td>Orthogonalization</td>
    <td>Yes</td>
    <td>Excellent</td>
    <td><em>Soon: blocking</em></td>
  </tr>

  <tr>
    <td>FullPivHouseholderQR</td>
    <td>-</td>
    <td>Slow</td>
    <td>Proven</td>
    <td>Yes</td>
    <td>Orthogonalization</td>
    <td>Yes</td>
    <td>Average</td>
    <td>-</td>
  </tr>

  <tr class="alt">
    <td>LLT</td>
    <td>Positive definite</td>
    <td>Very fast</td>
    <td>Depends on condition number</td>
    <td>-</td>
    <td>-</td>
    <td>Yes</td>
    <td>Excellent</td>
    <td>Blocking</td>
  </tr>

  <tr>
    <td>LDLT</td>
    <td>Positive or negative semidefinite<sup><a href="#note1">1</a></sup></td>
    <td>Very fast</td>
    <td>Good</td>
    <td>-</td>
    <td>-</td>
    <td>Yes</td>
    <td>Excellent</td>
    <td><em>Soon: blocking</em></td>
  </tr>

  <tr><th class="inter" colspan="9">\n Singular value and eigenvalue decompositions</th></tr>

  <tr>
    <td>JacobiSVD (two-sided)</td>
    <td>-</td>
    <td>Slow (but fast for small matrices)</td>
    <td>Excellent-Proven<sup><a href="#note3">3</a></sup></td>
    <td>Yes</td>
    <td>Singular values/vectors, least squares</td>
    <td>Yes (and does least squares)</td>
    <td>Excellent</td>
    <td>R-SVD</td>
  </tr>

  <tr class="alt">
    <td>SelfAdjointEigenSolver</td>
    <td>Self-adjoint</td>
    <td>Fast-average<sup><a href="#note2">2</a></sup></td>
    <td>Good</td>
    <td>Yes</td>
    <td>Eigenvalues/vectors</td>
    <td>-</td>
    <td>Good</td>
    <td><em>Closed forms for 2x2 and 3x3</em></td>
  </tr>

  <tr>
    <td>ComplexEigenSolver</td>
    <td>Square</td>
    <td>Slow-very slow<sup><a href="#note2">2</a></sup></td>
    <td>Depends on condition number</td>
    <td>Yes</td>
    <td>Eigenvalues/vectors</td>
    <td>-</td>
    <td>Average</td>
    <td>-</td>
  </tr>

  <tr class="alt">
    <td>EigenSolver</td>
    <td>Square and real</td>
    <td>Average-slow<sup><a href="#note2">2</a></sup></td>
    <td>Depends on condition number</td>
    <td>Yes</td>
    <td>Eigenvalues/vectors</td>
    <td>-</td>
    <td>Average</td>
    <td>-</td>
  </tr>

  <tr>
    <td>GeneralizedSelfAdjointEigenSolver</td>
    <td>Square</td>
    <td>Fast-average<sup><a href="#note2">2</a></sup></td>
    <td>Depends on condition number</td>
    <td>-</td>
    <td>Generalized eigenvalues/vectors</td>
    <td>-</td>
    <td>Good</td>
    <td>-</td>
  </tr>

  <tr><th class="inter" colspan="9">\n Helper decompositions</th></tr>

  <tr>
    <td>RealSchur</td>
    <td>Square and real</td>
    <td>Average-slow<sup><a href="#note2">2</a></sup></td>
    <td>Depends on condition number</td>
    <td>Yes</td>
    <td>-</td>
    <td>-</td>
    <td>Average</td>
    <td>-</td>
  </tr>

  <tr class="alt">
    <td>ComplexSchur</td>
    <td>Square</td>
    <td>Slow-very slow<sup><a href="#note2">2</a></sup></td>
    <td>Depends on condition number</td>
    <td>Yes</td>
    <td>-</td>
    <td>-</td>
    <td>Average</td>
    <td>-</td>
  </tr>

  <tr>
    <td>Tridiagonalization</td>
    <td>Self-adjoint</td>
    <td>Fast</td>
    <td>Good</td>
    <td>-</td>
    <td>-</td>
    <td>-</td>
    <td>Good</td>
    <td><em>Soon: blocking</em></td>
  </tr>

  <tr class="alt">
    <td>HessenbergDecomposition</td>
    <td>Square</td>
    <td>Average</td>
    <td>Good</td>
    <td>-</td>
    <td>-</td>
    <td>-</td>
    <td>Good</td>
    <td><em>Soon: blocking</em></td>
  </tr>

</table>
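
As a quick, purely illustrative sketch of how the solver classes listed above are used (the matrix \c A and right-hand side \c b below are placeholders, not part of the catalogue), one can write:

\code
#include <Eigen/Dense>
#include <iostream>

int main()
{
  // Placeholder data, for illustration only.
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
  Eigen::VectorXd b = Eigen::VectorXd::Random(4);

  // PartialPivLU assumes A is invertible (see the "Requirements" column).
  Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);
  Eigen::VectorXd x = lu.solve(b);
  std::cout << "solution x:\n" << x << "\n";

  // FullPivLU is slower but rank-revealing.
  Eigen::FullPivLU<Eigen::MatrixXd> flu(A);
  std::cout << "rank of A: " << flu.rank() << "\n";
}
\endcode

The same <tt>solve()</tt> pattern applies to every decomposition whose "Linear solver provided by Eigen" entry is "Yes".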

\b Notes:
<ul>
<li><a name="note1">\b 1: </a>There exist two variants of the LDLT algorithm. Eigen's produces a pure diagonal D matrix, and therefore cannot handle indefinite matrices, unlike LAPACK's, which produces a block-diagonal D matrix.</li>
<li><a name="note2">\b 2: </a>Eigenvalues, SVD and Schur decompositions rely on iterative algorithms. Their convergence speed depends on how well the eigenvalues are separated.</li>
<li><a name="note3">\b 3: </a>Our JacobiSVD is two-sided, which yields proven and optimal precision for square matrices. For non-square matrices, a QR preconditioner has to be applied first. The default choice, ColPivHouseholderQR, is already very reliable, but if you want proven accuracy, use FullPivHouseholderQR instead (see the sketch after this list).</li>
</ul>
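
To make note 3 concrete, here is a minimal, illustrative sketch of selecting the QR preconditioner explicitly (the matrix \c A and vector \c b are placeholders); the fully pivoting preconditioner requires computing the full, not thin, unitaries:

\code
#include <Eigen/Dense>

int main()
{
  // Placeholder data: a non-square example, for illustration only.
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(6, 4);
  Eigen::VectorXd b = Eigen::VectorXd::Random(6);

  // Default preconditioner (ColPivHouseholderQR); thin unitaries are enough.
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
  Eigen::VectorXd x = svd.solve(b);   // least-squares solution of A x = b

  // "Proven" variant: full-pivoting QR preconditioner, full unitaries required.
  Eigen::JacobiSVD<Eigen::MatrixXd, Eigen::FullPivHouseholderQRPreconditioner>
      svd_full(A, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::VectorXd x_full = svd_full.solve(b);
}
\endcode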

\section TopicLinAlgTerminology Terminology

<dl>
  <dt><b>Selfadjoint</b></dt>
  <dd>For a real matrix, selfadjoint is a synonym for symmetric. For a complex matrix, selfadjoint is a synonym for \em hermitian.
      More generally, a matrix \f$ A \f$ is selfadjoint if and only if it is equal to its adjoint \f$ A^* \f$. The adjoint is also called the \em conjugate \em transpose.</dd>
  <dt><b>Positive/negative definite</b></dt>
  <dd>A selfadjoint matrix \f$ A \f$ is positive definite if \f$ v^* A v > 0 \f$ for any nonzero vector \f$ v \f$.
      In the same vein, it is negative definite if \f$ v^* A v < 0 \f$ for any nonzero vector \f$ v \f$. A numerical way to check these properties with Eigen is sketched after this list.</dd>
  <dt><b>Positive/negative semidefinite</b></dt>
  <dd>A selfadjoint matrix \f$ A \f$ is positive semidefinite if \f$ v^* A v \ge 0 \f$ for any nonzero vector \f$ v \f$.
      In the same vein, it is negative semidefinite if \f$ v^* A v \le 0 \f$ for any nonzero vector \f$ v \f$.</dd>

  <dt><b>Blocking</b></dt>
  <dd>Means the algorithm can work per block, thereby guaranteeing that performance scales well for large matrices.</dd>
  <dt><b>Implicit Multi Threading (MT)</b></dt>
  <dd>Means the algorithm can take advantage of multicore processors via OpenMP. "Implicit" means the algorithm itself is not parallelized, but that it relies on parallelized matrix-matrix product routines.</dd>
  <dt><b>Explicit Multi Threading (MT)</b></dt>
  <dd>Means the algorithm is explicitly parallelized to take advantage of multicore processors via OpenMP.</dd>
  <dt><b>Meta-unroller</b></dt>
  <dd>Means the algorithm is automatically and explicitly unrolled for very small fixed-size matrices.</dd>
</dl>
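
As a small, illustrative sketch tying these terms to the table above (the matrix \c A below is constructed only for the example), the LLT and LDLT classes can be used to probe these properties numerically:

\code
#include <Eigen/Dense>
#include <iostream>

int main()
{
  // Placeholder data: A is selfadjoint and positive semidefinite by construction.
  Eigen::MatrixXd M = Eigen::MatrixXd::Random(5, 5);
  Eigen::MatrixXd A = M * M.transpose();

  // LLT requires a positive definite matrix; a failure reported by info()
  // means A is not positive definite (up to rounding errors).
  Eigen::LLT<Eigen::MatrixXd> llt(A);
  std::cout << "LLT succeeded: " << (llt.info() == Eigen::Success) << "\n";

  // LDLT also accepts positive or negative semidefinite matrices (see note 1 above).
  Eigen::LDLT<Eigen::MatrixXd> ldlt(A);
  std::cout << "LDLT succeeded: " << (ldlt.info() == Eigen::Success) << "\n";
}
\endcode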

*/

}