5.1.4.2. Linear Solver

You may set the linear solver method to Direct, Iterative, or Automatic. The Direct linear solver options involve an LU decomposition of the system matrix A, i.e., computation of a lower triangular matrix L and an upper triangular matrix U such that A = LU. Once the factorization is complete, the solution to the matrix equation Ax = b is obtained in two steps. First, Ly = b is solved for y (forward substitution). Then Ux = y is solved for x (backward substitution). When the HMLU direct solver is used, the forward-backward substitution is applied within an iterative loop to improve the solution accuracy. The advantages of a direct solver over an iterative solver (discussed below) are robustness and the speed of generating solutions once the factorization has been computed: the subsequent forward-backward substitution (FBS), used to obtain a solution for a given right-hand side (excitation), is very fast. The disadvantage is that the factorization itself is computationally expensive, with memory requirements that can exceed 10 times the storage requirements for the system matrix. The Automatic setting chooses the solver best suited for the problem and allows for fail-over to a different solver if necessary.
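The division of labor between the one-time factorization and the cheap per-excitation solve can be sketched as follows. This is an illustrative toy example only (Doolittle LU without pivoting, in pure Python), not the HMLU solver itself; the function names are invented for this sketch.

```python
# Toy sketch of LU factorization followed by forward-backward substitution.
# Not Analyst's HMLU solver; no pivoting, dense matrices only.
def lu_decompose(A):
    """Return L (unit lower triangular) and U such that A = L U."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve_lu(L, U, b):
    """Solve A x = b given A = L U, via forward then backward substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # forward: L y = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):            # backward: U x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]
L, U = lu_decompose(A)            # expensive step, done once
x = solve_lu(L, U, [10.0, 8.0])   # fast FBS, repeated per excitation
```

Because `solve_lu` reuses the stored factors, each additional right-hand side costs only the FBS, which is why a direct solver pays off when a problem has many excitations.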

The Iterative solver may use any of a number of methods, described as follows. Although iterative solvers are generally very memory-efficient compared with direct factorization, they must essentially start over to obtain a matrix solution for each new excitation vector, so for problems that involve many excitations (such as a driven-frequency RF3p simulation with many ports, or a fast frequency sweep), using an iterative solver can greatly increase the solve time. Moreover, iterative solvers may not converge for all problems, meaning that they may not obtain a solution with the specified residual error before reaching the maximum number of iterations allowed. Generally, iterative solvers work better on statics problems than on frequency-domain problems, and for many statics problems they are the solver of choice. For frequency-domain electromagnetic problems the direct solver is usually the better option, but iterative solvers may be competitive in cases where there are relatively few excitations and the matrix is large enough that factoring it is impractical or impossible on the available hardware. If Automatic is available and chosen, an appropriate method is selected automatically for the analysis.

To improve the convergence of iterative solvers it is common to apply a transformation (a pre-conditioner) to the matrix equation, of the form N A N^T x' = N b, where the solution is recovered as x = N^T x'. Common pre-conditioners supported in Analyst include Jacobi (diagonal scaling) and Gauss-Seidel.
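As a hypothetical illustration of the Jacobi (diagonal scaling) case, taking N = D^(-1/2), where D is the diagonal of a symmetric matrix A, makes the transformed matrix N A N^T have a unit diagonal, which typically improves its conditioning. The function name below is invented for this sketch:

```python
# Jacobi (diagonal scaling) pre-conditioning sketch for a symmetric A:
# with N = diag(A[i][i] ** -0.5), the product N A N^T has a unit diagonal
# and off-diagonal entries scaled by 1/sqrt(A[i][i] * A[j][j]).
def jacobi_transform(A):
    n = len(A)
    d = [A[i][i] ** -0.5 for i in range(n)]  # entries of the diagonal N
    return [[d[i] * A[i][j] * d[j] for j in range(n)] for i in range(n)]

A = [[100.0, 1.0], [1.0, 0.04]]  # badly scaled symmetric matrix
M = jacobi_transform(A)          # M = N A N^T, diagonal of all ones
```

The right-hand side is scaled by the same N, and the solver iterates on the better-conditioned M instead of A.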

Another strategy that is used to improve iterative solver convergence is to add unknowns to the problem that result in a beneficial change to the matrix properties. In Analyst, scalar unknowns are added to the nodes of the mesh for this purpose when conjugate gradient-based solvers are chosen. This gives a slightly larger matrix, but one that typically solves much more rapidly than the original matrix equation.

• CG: The standard preconditioned conjugate-gradient method. This method is the default iterative solver. It is very efficient and works well on most problems. The default pre-conditioner is Jacobi (diagonal scaling).

• BiStabCG: Stabilized variant of the biconjugate-gradient method.

• CGS: Conjugate-gradient squared method.

• GMR: Generalized minimum residual method.

Of these, only GMR has the property that the residual error decreases at every iteration, but CG is generally the most efficient.
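The basic structure of the CG method above can be sketched as follows. This is a minimal, unpreconditioned conjugate-gradient loop for symmetric positive-definite systems, written as a pure-Python illustration; Analyst's CG solver additionally applies its default Jacobi pre-conditioner, which this sketch omits.

```python
# Minimal unpreconditioned conjugate-gradient sketch for a symmetric
# positive-definite system A x = b. Illustrative only.
def cg(A, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual r = b - A x (x starts at zero)
    p = r[:]                       # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:    # stop when the residual norm is small
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

Note that each iteration costs one matrix-vector product and a few vector updates, and the whole loop must be rerun for each new right-hand side, which is the trade-off against the direct solver's one-time factorization.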
