iterativeSolver

Works with VSimBase, VSimEM, VSimPD, VSimPA, and VSimMD licenses.

Solves a linear system of equations \(Ax = b\) using iterative methods.

Since VSim uses the Aztec library for iterative solvers, many of these options are identical to options described in the AztecOO User Guide, which may be consulted for additional information.

iterativeSolver Parameters

BaseSolver (code block, required)

BaseSolver parameters include:

kind (required)

One of the following options. Keep in mind that every problem responds differently to these solvers and preconditioner options, so experimentation is encouraged to find the most efficient and accurate combination:

  • gmres

    Generalized minimal residual (GMRES) method. GMRES is an all-purpose solver that works for problems where convergence to a solution may be an issue. The method works by seeking a vector \(x_n\in K_n(A,b) = \mathrm{span}(b, Ab, A^2b, \ldots, A^{n-1}b)\) that minimizes the norm of the residual \(r_n = b - Ax_n\) at each iteration. The following two attributes are therefore also required with this option (a complete example BaseSolver block follows this list):

    • kspace (integer, default = 30):
      The size of the Krylov vector space \(K_n(A,b)\) used.
    • orthog (string, one of classic or modified):
      The type of Gram-Schmidt orthogonalization used in forming \(K_n(A,b)\). If roundoff errors occur with classic, switch to modified.
  • cg

    Conjugate gradient (CG) method. This is a powerful, scalable, and efficient method for symmetric positive-definite (SPD) matrices, but it should not be used for non-SPD matrices. CG exploits the fact that the exact solution \(x_*\) of the system uniquely minimizes the quadratic function \(f(x) = \frac{1}{2}x^TAx - x^Tb\), and approximates it by searching along successive conjugate directions constructed from the gradient of \(f\).

  • cgs

    Conjugate gradient squared (CGS) method. Similar to BiCGSTAB, and works best when multiplication by \(A^T\) is impractical. However, the method is more susceptible to round-off errors and can exhibit erratic convergence as a result.

  • tfqmr

    Transpose-free quasi-minimal residual (TFQMR) method. Reduces \(A\) to a tridiagonal matrix and solves the resulting system by way of least-squares solvers. Uses look-ahead algorithms to avoid breakdowns in the underlying iterative process. Slightly more expensive than BiCGSTAB, but sidesteps some of that method's stability issues.

  • bicgstab

    Biconjugate gradient stabilized (BiCGSTAB) method. Another all-purpose solver, useful when the system requires a large number of iterations to converge. BiCGSTAB requires less storage than GMRES because it stores sequences of residual and search vectors (which are mutually orthogonal) rather than orthogonalized residual vectors and their associated data. BiCGSTAB is not recommended for SPD systems, where CG will prove to be roughly twice as fast. If the user is experiencing a breakdown with CGS, switching to BiCGSTAB may be advisable.
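For instance, a BaseSolver block selecting GMRES together with its two required attributes might look like the following sketch (the values shown are illustrative, not recommendations):

<BaseSolver>
  kind = gmres
  kspace = 50
  orthog = modified
</BaseSolver>

Here kspace enlarges the Krylov space from its default of 30, and orthog = modified guards against the roundoff errors that can occur with classic.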

Preconditioner (code block, required)

Preconditioner parameters include:

kind (required) - one of

  • none

    No preconditioner.

  • jacobi

    Jacobi preconditioner \(P=\mathrm{diag}(A)\). Works best with diagonally dominant matrices.

  • neumann

    Neumann preconditioner \(P=(2I - P_J^{-1}A)P_J^{-1}\) where \(P_J=\mathrm{diag}(A)\). Slightly more expensive than Jacobi but handles parallelization much more elegantly.

  • leastSquares

    Least-squares preconditioner \(P = A^T\). Transforms the system into the normal equations \(A^TAx = A^Tb\) and will return a precise solution if the original system \(Ax = b\) is consistent. Works best if the original system is known to be inconsistent or overconstrained (i.e. \(A\) has more rows than columns).

  • symmetricGaussSeidel

    Symmetric Gauss-Seidel preconditioner \(P = (L+D)D^{-1}(L+D)^T\) where \(L\) is the lower triangular part of \(A\) and \(D = \mathrm{diag}(A)\). This preconditioner is scalable since it requires only one storage vector; however, its elements cannot be computed in parallel.

  • multigrid

    Multigrid preconditioner. This is the most powerful preconditioner option and works well with large problems. It requires the following parameters to be specified (a complete example Preconditioner block follows this list):

    • mgDefaults (string): Choose from DD (domain decomposition), SA, DD-ML, or DD-Maxwell. The SA option is strongly recommended for symmetric matrices.
    • maxLevels (integer): Maximum number of levels of smoother application before attempting a direct solve on the smaller problem. This parameter incurs the most communication overhead for parallel problems, so decreasing it to 3 or 4 may be advisable if the application is highly parallelized.
    • smootherType (string): The type of smoother to use at each multigrid level. GaussSeidel works well in general; other selections, which may speed up the simulation, include Jacobi, Chebyshev, and SymBlockGaussSeidel. If mgDefaults is set to either DD or DD-ML, the Aztec option is also available.
    • smootherSweeps (integer): Number of times to smooth at each multigrid level. Three is usually sufficient, but fewer passes may suffice depending on the matrix properties.
    • smootherPrePost (string): The multigrid algorithm calls for smoothing steps both before and after the solve, which is computationally worthwhile in most cases. To perform only one of these steps, which may accelerate the solve, specify the step here (choose from before, after, or both).
    • coarseType (string): The solver to use at the coarsest level of the preconditioner. Select from a Jacobi solver (good for diagonally dominant matrices) or KLU (an LU direct solver that works well with sparse matrices).
    • dampingFactor (double): The damping factor for smoothed aggregation. Tech-X recommends 1.333 as a good choice for most applications, but this can be adjusted up or down to speed up the solves.
    • threshold (integer): The aggregation threshold used at each multigrid level. Setting this to 0 should be sufficient for most applications.
    • increaseDecrease (string): Choose from increase or decrease. If increase, the finest convergence level will be at level 0; if decrease, the finest convergence level will be at level maxLevels - 1. This option should not affect the convergence rate of the solver in either case.
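
Putting these parameters together, a multigrid Preconditioner block might look like the following sketch (the values shown are illustrative and follow the recommendations above; the best settings are problem dependent):

<Preconditioner>
  kind = multigrid
  mgDefaults = SA
  maxLevels = 4
  smootherType = GaussSeidel
  smootherSweeps = 3
  smootherPrePost = both
  coarseType = KLU
  dampingFactor = 1.333
  threshold = 0
  increaseDecrease = increase
</Preconditioner>
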
tolerance (real, default = 1.0e-06)

Tolerance to be used in determining convergence.

maxIterations (integer, default = 1000)

Maximum number of iterations to reach convergence.

convergenceMetric (string, default = r0)

The residual measure used for determining convergence. One of

  • r0
  • rhs
  • Anorm
  • noscaled
  • sol

See the AztecOO User Guide for detailed descriptions. In short, r0 is the safest choice. For repeated, similar solves (as in a slowly varying simulation), rhs may speed the solve if the previous solution is close to the next one; however, this is not recommended if the right-hand side of the equation is zero. The noscaled option has a similar effect, but it requires setting the tolerance to the absolute, unscaled error rather than a relative error, as in the sketch below.
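
As an illustration, an iterativeSolver block using the unscaled residual test might look like the following sketch; the tolerance value shown is illustrative and is interpreted as an absolute residual norm:

<LinearSolver mySolver>
  kind = iterativeSolver
  <BaseSolver>
    kind = gmres
  </BaseSolver>
  <Preconditioner>
    kind = none
  </Preconditioner>
  convergenceMetric = noscaled
  tolerance = 1.e-08
  maxIterations = 1000
</LinearSolver>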

Example iterativeSolver Block

<LinearSolver mySolver>
  kind = iterativeSolver
  <BaseSolver>
    kind = gmres
  </BaseSolver>
  <Preconditioner>
    kind = multigrid
    mgDefaults = SA
  </Preconditioner>
  tolerance = 1.e-10
  maxIterations = 1000
</LinearSolver>