To avoid floating point overflow problems which arise in power methods like Lanczos, an initial pass is made through the input matrix to estimate the sum of all of the singular values. This latter value, being the sum of all of the singular values, upper-bounds the largest singular value and is used to rescale the entire matrix, effectively forcing the largest singular value to be strictly less than one and transforming floating point overflow problems into floating point underflow (i.e., very small singular values become invisible: they appear to be zero, and the algorithm terminates early).
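A minimal sketch of this rescaling idea, with hypothetical names (`frobeniusNorm`, `scaleInPlace` are illustrative helpers, not Mahout API): any quantity that upper-bounds the largest singular value works as the scale factor. The Frobenius norm is one easy-to-compute choice, since it equals the square root of the sum of squared singular values and therefore bounds the largest one.

```java
// Illustrative sketch only: rescale a dense matrix by an upper bound on its
// largest singular value so that all singular values fall strictly below 1.0.
public class MatrixRescale {

    // Frobenius norm = sqrt(sum of squared entries)
    //                = sqrt(sum of squared singular values),
    // so it is an upper bound on the largest singular value.
    static double frobeniusNorm(double[][] m) {
        double sumSq = 0.0;
        for (double[] row : m) {
            for (double v : row) {
                sumSq += v * v;
            }
        }
        return Math.sqrt(sumSq);
    }

    // Divide every entry by the scale factor in place.
    static void scaleInPlace(double[][] m, double scale) {
        for (double[] row : m) {
            for (int j = 0; j < row.length; j++) {
                row[j] /= scale;
            }
        }
    }
}
```

After this scaling, power iterations on the matrix cannot overflow, at the cost that singular values near the bottom of the spectrum may underflow to zero, which is exactly the trade-off described above.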
This implementation uses {@link org.apache.mahout.math.matrix.linalg.EigenvalueDecomposition} to do the eigenvalue extraction from the small (desiredRank x desiredRank) tridiagonal matrix. Numerical stability is achieved via brute force: re-orthogonalization against all previous eigenvectors is performed after every pass. This can be made smarter if (when!) it proves to be a major bottleneck; of course, this step can also be parallelized.
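The brute-force re-orthogonalization step can be sketched as classical Gram-Schmidt: subtract from the candidate vector its projection onto every previously accepted basis vector. This is a hypothetical illustration (the names `dot` and `orthogonalize` are not Mahout API), assuming the stored basis vectors have unit length.

```java
// Illustrative sketch of brute-force re-orthogonalization: project the
// candidate vector away from every previously accepted basis vector.
public class Reorthogonalizer {

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) {
            s += a[i] * b[i];
        }
        return s;
    }

    // Orthogonalize v in place against each vector in basis.
    // Assumes each basis vector is already normalized to unit length.
    static void orthogonalize(double[] v, double[][] basis) {
        for (double[] b : basis) {
            double proj = dot(v, b);
            for (int i = 0; i < v.length; i++) {
                v[i] -= proj * b[i];
            }
        }
    }
}
```

Because each pass touches every previous vector, the cost grows quadratically with the number of extracted vectors, which is why the text flags this step as a potential bottleneck and a natural candidate for parallelization.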