iterativeSolver
Works with VSimBase, VSimEM, VSimPD, VSimPA, and VSimVE licenses.
Solves a system of linear equations using iterative methods.
Since VSim uses the Aztec library for its iterative solvers, many of these options are identical to those described in the AztecOO User Guide, which may be consulted for additional information.
iterativeSolver Parameters
- BaseSolver (code block, required)
BaseSolver parameters include:
- kind (required)
One of the following options. Keep in mind that every problem responds differently to these solver and preconditioner options, so experimentation is encouraged to find the most efficient and accurate combination:
gmres
Generalized minimal residual (GMRES) method. GMRES is an all-purpose solver that works for problems where convergence to a solution may be an issue. The method works by seeking a vector \(x_n\in K_n(A,b) = \mathrm{span}(b, Ab, A^2b, \ldots, A^{n-1}b)\) such that the residual \(r_n = Ax_n - b\) is minimized in norm at each iteration. Thus, the following two attributes are also required with this option (see the sketch after this list):

kspace (integer, default = 30)
: The size of the basis for the Krylov vector space \(K_n(A,b)\).

orthog (string, one of classic or modified)
: The type of Gram-Schmidt orthogonalization used in forming the basis of \(K_n(A,b)\). If roundoff errors occur with classic, switch to modified.
cg
Conjugate gradient (CG) method. This is a powerful, scalable, and efficient method for symmetric positive-definite (SPD) matrices, but it should not be used for non-SPD matrices. CG exploits the fact that the exact solution \(x_*\) of the system uniquely minimizes the quadratic function \(f(x) = \frac{1}{2}x^TAx - x^Tb\) (its gradient \(\nabla f(x) = Ax - b\) vanishes precisely at \(x_*\)), and approximates that minimizer by searching along successive directions built from the gradient of \(f\).
cgs
Conjugate gradient squared (CGS) method. Similar to BiCGSTAB and works best when computation of \(A^T\) is impractical. However, the method is more susceptible to round-off errors and exhibits erratic convergence as a result.
tfqmr
Transpose-free quasi-minimal residual (TFQMR) method. Aims to reduce \(A\) to a tridiagonal matrix and solve the resulting system by way of least-squares solvers. Uses look-ahead algorithms to avoid breakdowns in the underlying iterative process that BiCGSTAB uses. Slightly more expensive than BiCGSTAB, but sidesteps some of its stability issues.
bicgstab
Biconjugate gradient stabilized (BiCGSTAB) method. Another all-purpose solver, good for when the system requires a large number of iterations to converge. Requires less storage than GMRES in that BiCGSTAB stores sequences of the residual and search vectors (which are mutually orthogonal) instead of orthogonalized residual vectors and the data associated with them. BiCGSTAB is not recommended for SPD systems, as CG will prove to be roughly twice as fast in that case. If the user is experiencing a breakdown with CGS, switching to BiCGSTAB may be advisable.
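As a concrete illustration of the solver attributes above, a BaseSolver block selecting GMRES might look like the following sketch; the values are illustrative (kspace at its documented default of 30, orthog chosen as modified), not recommendations for any particular problem:

<BaseSolver>
  kind = gmres
  kspace = 30          # size of the Krylov basis
  orthog = modified    # switch from classic if roundoff errors occur
</BaseSolver>

For an SPD system, replacing kind = gmres with kind = cg (and omitting kspace and orthog, which are GMRES attributes) would typically converge roughly twice as fast.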
- Preconditioner (code block, required)
Preconditioner parameters include:
- kind (required)
One of the following options:
- none
No preconditioner.
- jacobi
Jacobi preconditioner \(P=\mathrm{diag}(A)\). Works best with diagonally dominant matrices.
- neumann
Neumann preconditioner \(P=(2I - P_J^{-1}A)P_J^{-1}\) where \(P_J=\mathrm{diag}(A)\). Slightly more expensive than Jacobi but handles parallelization much more elegantly.
- leastSquares
Least-squares preconditioner \(P = A^T\). Transforms the system into the normal equations \(A^TAx = A^Tb\) and will return a precise solution if the original system \(Ax = b\) is consistent. Works best if the original system is known to be inconsistent or overconstrained (i.e. \(A\) has more rows than columns).
- symmetricGaussSeidel
Symmetric Gauss-Seidel preconditioner \(P = (L+D)D^{-1}(L+D)^T\) where \(L\) is the lower triangular part of \(A\) and \(D = \mathrm{diag}(A)\). This preconditioner is scalable since it requires only one storage vector; however, its elements cannot be computed in parallel.
- multigrid
Multigrid preconditioner. This is the most powerful preconditioner option and works well with large problems. Requires the following parameters to be specified (see the example block after this list):
mgDefaults (string)
: Choose from DD (domain decomposition), SA (smoothed aggregation), DD-ML, or DD-Maxwell. The SA option is strongly recommended for symmetric matrices.

maxLevels (integer)
: Maximum number of levels of smoother application before attempting a direct solve on the smaller problem. This parameter incurs the most communication overhead for parallel problems, so decreasing it to 3 or 4 may be advisable if the application is highly parallelized.

smootherType (string)
: The type of smoother to use at each multigrid level. GaussSeidel works well in general, but other options may increase the speed of the simulation. Other selections include Jacobi, Chebyshev, and SymBlockGaussSeidel. If mgDefaults is set to either DD or DD-ML, the Aztec option is also available.

smootherSweeps (integer)
: Number of times to smooth at each multigrid level. Three is usually sufficient, but depending on the matrix properties fewer passes may be necessary.

smootherPrePost (string)
: The multigrid solving algorithm calls for smoothing steps both before and after the solve, which is computationally wise in most cases. To perform only one of these steps and possibly accelerate the solve, specify the step to perform here (choose from before, after, or both).

coarseType (string)
: The solver to use at the coarsest level of the preconditioner. Select from a Jacobi solver (good for diagonally dominant matrices) or KLU (an LU direct solver that works well with sparse matrices).

dampingFactor (double)
: The damping factor for smoothed aggregation. Tech-X recommends 1.333 as a good choice for most applications, but this can be adjusted up or down to speed up the solves.

threshold (integer)
: Threshold for multigrid aggregation at each level. Setting this to 0 should be sufficient for most applications.

increaseDecrease (string)
: Choose from increase or decrease. If increase, the finest convergence level will be at level 0; if decrease, it will be at level maxLevels - 1. This option should not affect the convergence rate of the solver in either case.
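Putting these together, a fully specified multigrid Preconditioner block might look like the following sketch; every value is illustrative, taken from the defaults and recommendations above rather than tuned for any particular problem:

<Preconditioner>
  kind = multigrid
  mgDefaults = SA             # recommended for symmetric matrices
  maxLevels = 4               # lowered to reduce parallel communication
  smootherType = GaussSeidel
  smootherSweeps = 3
  smootherPrePost = both
  coarseType = KLU            # direct LU solve at the coarsest level
  dampingFactor = 1.333
  threshold = 0
  increaseDecrease = increase
</Preconditioner>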
- tolerance (real, default = 1e-8)
Tolerance to be used in determining convergence.
- maxIterations (integer, default = 1000)
Maximum number of iterations to reach convergence.
- output (string, default = none)
Level of Aztec output for viewing convergence.
last prints the number of iterations required for convergence, the final residual, and the time elapsed computing the accepted solution. summary is similar, but also prints information about the solve being done.
- convergenceMetric (string, default = r0)
The residual measure used for determining convergence. One of:
- r0: \(\|r\|_2 / \|r_0\|_2\)
- rhs: \(\|r\|_2 / \|b\|_2\)
- Anorm: \(\|r\|_2 / \|A\|_\infty\)
- noscaled: \(\|r\|_2\)
- sol: \(\|r\|_\infty / (\|A\|_\infty\|x\|_1 + \|b\|_\infty)\)
This specifies the measure that will be compared to tolerance to determine whether iteration is sufficient. r0 reduces the error of the initial guess by the tolerance. rhs reduces the error \((Ax-b)\) until it is less than tolerance times \(b\) (the right-hand side). r0 is the safest metric for most applications. However, for repeated similar solves in which the iterative solutions do not vary greatly over time (e.g. slowly varying simulations), the rhs metric may accelerate the solve if the previous solution is similar to the next one (see the sketch below). This is not recommended if the right-hand side of the equation is zero (i.e. the system is homogeneous). The noscaled option will also have a similar effect, but it requires setting the tolerance to the absolute unscaled error rather than the relative error. Currently, the sol option does not work.
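As an illustration of these convergence controls, the hypothetical block below (the solver name and values are placeholders) uses the rhs metric for a slowly varying simulation and requests per-solve output:

<LinearSolver slowlyVaryingSolver>
  kind = iterativeSolver
  <BaseSolver>
    kind = bicgstab
  </BaseSolver>
  <Preconditioner>
    kind = jacobi
  </Preconditioner>
  tolerance = 1.e-6
  maxIterations = 500
  convergenceMetric = rhs   # scale the residual by the right-hand side
  output = last             # report iterations, final residual, and time
</LinearSolver>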
Example iterativeSolver Block
<LinearSolver mySolver>
  kind = iterativeSolver
  <BaseSolver>
    kind = gmres
  </BaseSolver>
  <Preconditioner>
    kind = multigrid
    mgDefaults = SA
  </Preconditioner>
  tolerance = 1.e-10
  maxIterations = 1000
</LinearSolver>