Iterative Methods

An iterative solver may be appropriate in certain situations and can be selected by setting the Linear Solver/Method to Iterative. The iterative solver may be one of several different methods, but they all share some characteristics that differentiate them from the direct solvers discussed previously. Although iterative solvers are generally very memory-efficient compared with direct factorization, they must essentially start over to obtain a matrix solution for each new excitation vector, so for problems that involve many excitations (such as a driven frequency RF3p simulation with many ports, or when you are computing a fast frequency sweep) using an iterative solver can greatly increase the solve time. Moreover, iterative solvers may not converge for all problems, meaning that they may not obtain a solution with the specified residual error before reaching the maximum number of iterations allowed. Generally, iterative solvers work better on statics problems than on frequency-domain problems, and for many statics problems they are the solver of choice. For frequency-domain electromagnetic problems the direct solver is usually a better option, but iterative solvers may be competitive when there are relatively few ports and frequencies and the matrix is large enough that factoring it is impractical or impossible on the available hardware. If Automatic is available and chosen, the solver selects an appropriate method for the analysis.
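
The following sketch illustrates these trade-offs using SciPy's conjugate gradient solver rather than Analyst itself; the test matrix, tolerance, and iteration cap are illustrative assumptions, and the tolerance keyword is named rtol in recent SciPy releases (tol in older ones). The solve stops either when the requested residual tolerance is met or when the iteration limit is reached, and it must be repeated from scratch for each new right-hand side (excitation vector).

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
# Symmetric positive-definite test matrix (1-D Laplacian stencil).
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

for b in (np.ones(n), np.arange(n, dtype=float)):   # one solve per excitation vector
    x, info = cg(A, b, rtol=1e-8, maxiter=500)      # each solve starts over from scratch
    if info == 0:
        print("converged, residual =", np.linalg.norm(b - A @ x))
    else:
        print("did not reach the requested residual after", info, "iterations")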

To improve the convergence of iterative solvers it is common to apply a transformation to the matrix equation of the form N A Nᵀx = Nb. Common preconditioners supported in Analyst include Jacobi (diagonal scaling) and Gauss-Seidel.
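
As a small illustration of the diagonal-scaling idea (a NumPy sketch on an arbitrary 3x3 matrix, not Analyst's implementation): with N = diag(A)^(-1/2), the scaled operator N A Nᵀ has a unit diagonal, the transformed system is solved for an intermediate vector y, and the original solution is recovered as x = Nᵀy.

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

N = np.diag(1.0 / np.sqrt(np.diag(A)))  # Jacobi (diagonal) scaling
A_scaled = N @ A @ N.T                  # unit diagonal, typically better conditioned
y = np.linalg.solve(A_scaled, N @ b)    # solve the transformed system
x = N.T @ y                             # map back to the original unknowns
print(np.allclose(A @ x, b))            # True: same solution as A x = b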

Another strategy that is used to improve iterative solver convergence is to add unknowns to the problem that result in a beneficial change to the matrix properties. In Analyst, scalar unknowns are added to the nodes of the mesh for this purpose when conjugate gradient-based solvers are chosen. This gives a slightly larger matrix, but one that typically solves much more rapidly than the original matrix equation.

Of the available methods, only GMRES guarantees that the residual error does not increase from one iteration to the next, but CG is generally the most efficient.
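
This residual behavior can be observed with a small SciPy experiment (a sketch on an arbitrary symmetric positive-definite test matrix, not an Analyst run; the rtol keyword is named tol in older SciPy releases): the residual norms reported by GMRES never increase, while CG typically does less work per iteration.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, gmres

n = 200
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

gmres_res = []                          # residual norm recorded at each GMRES inner iteration
gmres(A, b, rtol=1e-10, restart=n,
      callback=lambda rnorm: gmres_res.append(rnorm), callback_type="pr_norm")

cg_res = []                             # true residual norm after each CG iteration
cg(A, b, rtol=1e-10,
   callback=lambda xk: cg_res.append(np.linalg.norm(b - A @ xk)))

print("GMRES residual never increases:",
      all(r2 <= r1 for r1, r2 in zip(gmres_res, gmres_res[1:])))
print("CG iterations:", len(cg_res), "GMRES iterations:", len(gmres_res))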
