This chapter provides a general overview of the Cadence® AWR Design Environment® platform simulation principles and practices.
The AWR Design Environment platform determines what simulators to run based on the types of measurements configured in the project, so control blocks are not necessary on schematics to define simulations. Because of this architecture, unnecessary simulation is avoided. Once a design is simulated, it does not simulate again until the design changes. You can specify simulation of only a specific EM structure by right-clicking the EM document in the Project Browser and choosing Simulate.
The following figure shows the various types of simulations you can set up.
The most common measurements are those on graphs. However, the other objects listed also cause simulation.
Annotations: simulation results displayed directly on a schematic, such as DC currents. See “Annotations” for details.
EM Structures: listed separately since they always simulate if they are enabled and have changed since their last simulation. They are not measurement-driven because their simulation times are typically long and their results are stored in the project for use as subcircuits. The only way to ensure that an EM structure does not simulate is to disable it.
Output Equations: identical to measurements on graphs except they are assigned to a variable that can then be operated on by equations. See “Using Output Equations” for details. Output equations are easily forgotten because the Output Equations window must be open to know that they are in a project.
Graph Measurements: the most common type used in designs. These plot simulation output on various types of graphs. See “Graphs, Measurements, and Output Files” for details.
Optimizer Goals: another way you can set up measurements. These goals are typically measurements on existing graphs, but you can set up measurements only available to the optimize process. See “Optimization” for details.
Yield Goals: another way you can set up measurements. These goals are typically measurements on existing graphs, but you can set up measurements only available to the yield process. See “Yield Analysis” for details.
Output Files: simulation results written to a file. See “Working with Output Files ” for details.
Data Sets: for storing/restoring simulation results for a given type of simulation. See “Data Sets” for details.
For larger designs, it can be difficult to track all of the measurements configured for a project. Simulation filters let you prevent specific types of simulations from running. See “Simulation Filters” for details.
When creating or modifying measurements, the available simulators display under Simulator in the Add/Modify Measurement dialog box. Depending on the type of measurement and the licensing options with which the program is started, there may be several simulators available for the measurement.
For example, with a full set of available features, the following options display for linear simulation.
The following options display for nonlinear simulation.
The following options display for AC simulation.
Note that EM simulations are not set using this mechanism; they are set when an EM structure is created. See “EM: EM Editor” for details.
Cadence® AWR® Microwave Office® software provides the capability to sweep parameters in any type of simulation.
There are four basic types of sweep controls: frequency, power (for example, swept power ports), voltage/current (for example, swept voltage and current sources) and variable. Each type is discussed in this section.
The following schematic is referred to throughout this section.
There are three ways to set up one-dimensional frequency sweeps in the AWR Design Environment software.
Project frequencies
Document frequencies
SWPFRQ frequencies
Project frequencies are the default frequencies used in any new project. Double-click the Project Options node in the Project Browser and click the Frequencies tab to edit your project frequency list. Click OK when complete. See “Project Options Dialog Box: Frequencies Tab” for details on this dialog box.
Document frequencies are a second frequency list that you can set on each simulation document (schematic, data file, EM structure, and netlist). For data files, these are the frequencies included in the data file. For the others, they are set in the options for that document. To access the document frequencies, right-click the document in the Project Browser, choose Options, and click the Frequencies tab. By default, the document frequencies are identical to the project frequencies. Note the Use project defaults check box in the upper left of the dialog box; this check box must be cleared to enable setting frequencies local to each document.

You can specify a third frequency sweep by using a SWPFRQ block on a schematic (from the Simulation Control category in the Element Browser). The swept frequency values are entered in the Values column of the SWPFRQ Element Options dialog box. If you want to specify a frequency sweep other than a linear sweep, there are built-in equations that you can easily use with the SWPFRQ block. See “Built-in Functions” for details.
When adding measurements, you can choose which frequency set to use. In the Add/Modify Measurement dialog box, under the Sweep Freq option, click the arrow at the right to choose between the document and project frequencies.
The frequency range and number of points for the frequency range you are selecting also display.
There are many frequency sweep control types, but the three most commonly used are:
FDOC - document frequencies (the frequencies that are local to an individual schematic, data file, EM structure, etc.). By default, a measurement uses the document frequencies as the swept parameter.
FPRJ - project frequencies (the frequencies that are global to the project and set in the Project Options dialog box).
Sweep defined by the SWPFRQ block - control placed in a schematic (identified by the ID of the control, typically FSWPx where x is an integer).
In addition to the three commonly used frequency sweep types, other possible frequency sweep types are:
FDOCN - this option is available if the document referred to by the measurement contains noise data (such as a Touchstone S-parameter file with a noise data block). The frequencies that are swept are the frequencies at which the noise data are defined.
F_SYMB - this option is available if the document contains a port that produces a digital bit stream and the symbol rate is specified (for example, PORT_ARBS, PORT_PRBS). In this case, F_SYMB is the only option available, regardless of other frequency sweep controls that may be associated with the document.
F_OSC - this option is available if the document contains an oscillator analysis element that defines a frequency range (such as OSCAPROBE). In this case, F_OSC is the only option available, regardless of other frequency sweep controls that may be associated with the document.
F_SPEC - this option is available if the document contains an element that has a frequency parameter (such as PORTF). In this case, F_SPEC is the only option available, regardless of other frequency sweep controls that may be associated with the document.
For DC-only simulations, no frequency sweep option is presented.
The frequency sweeps described above apply to all cases where only one frequency is independently swept. This includes nonlinear simulations where an additional tone is swept as a function of the tone 1 frequency; for example, a tracking LO that varies with a mixer's input frequency to produce a constant output frequency.
Less common nonlinear simulations require the frequency of an additional tone to be swept independently in the same simulation, resulting in nested frequency sweeps. For example, in a multi-channel frequency conversion circuit, the LO frequency may need to be stepped to a new value after the input frequency is swept over its band. In such cases, Cadence® AWR® APLAC® HB simulators allow the frequency of the additional tone to be swept using the general variable sweep control. See “Creating a Sweep with Variable Sweep Control (SWPVAR)” for more information.
Some electrical elements such as the swept power ports are used as sweep controls. If at least one of these elements is present in a schematic, the swept parameter associated with that element is presented as a sweep option when adding or modifying a measurement for that schematic.
The following shows the same schematic, except Port 1 is changed to a swept power port. When adding or modifying a measurement, this additional power sweep displays as one of the swept parameter options. The available options specify how the swept variables are displayed.
Swept voltage and current sources (both AC and DC) can be used as sweep sources, and work in the same manner as swept power ports discussed previously.
In addition to frequency, power, voltage and current, any parameter and variable can be swept for a simulation.
To create a new sweep for a parameter or variable:
Right-click the parameter or variable and choose the sweep setup command.
In the Sweep Setup dialog box, specify the sweep values, then click OK to add the SWPVAR element to the document. When setting up a sweep for a layout variable or parameter, the SWPVAR element is placed in the associated schematic window.
Set the SWPVAR element UnitType parameter if necessary (see “Using Units with the Swept Variable Control (SWPVAR)”).
Add/modify the appropriate measurement and specify how the swept variable results are to display.
You can also manually set up the sweep by adding a SWPVAR element (in the Elements Browser Simulation Control category). The following shows an example of using the SWPVAR simulation control in a schematic.
The swept parameter associated with the SWPVAR control is available as a sweep option when adding or modifying a measurement for that schematic.
To modify an existing sweep, right-click the parameter and choose the sweep setup command to display the Sweep Setup dialog box for changes. You can also disable or enable a sweep from the same menu. When a sweep is disabled, the corresponding SWPVAR is disabled.

When renaming or deleting a swept variable, the associated SWPVAR does not change. If you rename a swept variable, it is no longer swept because there is no SWPVAR associated with the new variable name; the SWPVAR still refers to the old variable name. This behavior is different when editing variables in EM layout: there, renaming or deleting a swept variable deletes the associated SWPVAR.
Sweep types are given the following priority/ordering during simulation:
Frequency
Power
Voltage/Current
Variable
Within each category, sweeps are sorted alphabetically by ID.
In the Add/Modify Measurement dialog box, the sweeps are listed based on this ordering.
The UnitType parameter of the SWPVAR control sets the unit type for the values specified by the Values parameter. Note that the values are in base units (MKS units: meters, ohms, volts, amps, and so on). For example, if the Values parameter is set to stepped(1, 10, 1) and UnitType is set to Inductance, the variable is swept from 1 to 10 Henries in steps of 1 Henry.

To simplify unit entry, you can append SPICE-convention unit modifiers to values. For example, you can enter "1n" instead of "1e-9" to specify 1 nanohenry. In the previous example, you can set the Values parameter to stepped(1n, 10n, 1n) or stepped(1, 10, 1)*1n to sweep the variable from 1 to 10 nH in steps of 1 nH. These modifiers follow SPICE rules: they are not case-sensitive, they must directly follow the number with no space in between, and any characters directly following the modifier are ignored.
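As an illustration of the modifier rules just described, the following Python sketch converts a SPICE-style value to base units. This is a hypothetical helper for experimentation, not part of the AWR software.

```python
import re

# Standard SPICE scale factors; "meg" must be distinguished from "m".
_SUFFIXES = {
    "t": 1e12, "g": 1e9, "meg": 1e6, "k": 1e3,
    "m": 1e-3, "u": 1e-6, "n": 1e-9, "p": 1e-12, "f": 1e-15,
}

def parse_spice_value(text):
    """Convert a SPICE-style value such as '1n' or '4.7MEG' to a float."""
    m = re.fullmatch(r"([0-9.eE+-]+)\s*([a-zA-Z]*)", text.strip())
    if not m:
        raise ValueError("not a SPICE-style number: %r" % text)
    number, tail = float(m.group(1)), m.group(2).lower()
    # Check 'meg' before 'm'; characters after a matched modifier
    # (e.g. the 'H' in '10nH') are ignored, per SPICE rules.
    for suffix in ("meg", "t", "g", "k", "m", "u", "n", "p", "f"):
        if tail.startswith(suffix):
            return number * _SUFFIXES[suffix]
    return number

print(parse_spice_value("1n"))     # 1e-09
print(parse_spice_value("10nH"))   # roughly 1e-08
print(parse_spice_value("4.7MEG")) # roughly 4.7e6
```

Note that the modifiers are matched case-insensitively and the trailing unit letters are discarded, matching the behavior described above.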
The x-axes of the graphs in the following figures illustrate the results of using the SWPVAR control with and without units.
Finally, all dependent parameters should use base units. To specify base units, click the Schematics/Diagrams tab on the Project Options dialog box, then select the Schematic dependent parameters use base units check box.

The AWR Microwave Office equation system provides an extremely flexible means of specifying the values to sweep. Several examples follow. For more information about specifying vector values using the equation system, see “Equation Syntax”.
Basic vector syntax - specify a vector of values by listing all of the values within curly brackets. For example:
x={3, 5, 7}
stepped function - specify a vector of values. For example, to specify a vector of values that range from 0 to 10 in steps of 2, use the following syntax:
x=stepped(0, 10, 2)
points function - specify a vector of values. For example, to specify a vector of values that range from 1 to 10 with a total of 10 points, use the following syntax:
x=points(1, 10, 10)
data_file function - the values used for a swept variable can come from a data file. For example, if a text data file named "df1.txt" contains a column of numbers, a vector of values can be specified using the following syntax:
x=data_file("df1.txt", "", "")
vfile function - used in a similar manner as the data_file function:
x=vfile("df1.txt")
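For readers who want to experiment with these value lists outside the AWR environment, the stepped and points functions can be approximated in Python. These are illustrative analogues, not the AWR implementations.

```python
def stepped(start, stop, step):
    """Values from start to stop inclusive, spaced by step."""
    n = int(round((stop - start) / step)) + 1
    return [start + i * step for i in range(n)]

def points(start, stop, count):
    """count values evenly spaced from start to stop inclusive."""
    if count == 1:
        return [start]
    delta = (stop - start) / (count - 1)
    return [start + i * delta for i in range(count)]

print(stepped(0, 10, 2))  # [0, 2, 4, 6, 8, 10]
print(points(1, 10, 10))  # [1.0, 2.0, ... , 10.0]
```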
This section discusses how to specify the display of swept parameter data. The example shown previously illustrates how to view different "slices" of multi-dimensional, swept-parameter data.
The following figure shows the display options available for a frequency sweep.
Use for x-axis - Specifies that the frequency sweep is used for the x-axis. If no sweep parameter is specified, then the first sweep parameter that either has a value assigned to it, or takes its value from Select with tuner is used for the horizontal axis. Only the single, selected value is plotted.
Plot all traces - If another swept parameter is being used for the x-axis, selecting this option displays the swept data evaluated at every frequency value (a separate trace for each frequency value displays on the graph).
Select with tuner - Adds the frequency parameter to the tuner. You can then use the slider control to view the sweep at each frequency value separately.
Freq = xxx - Represents each individual frequency. If another swept parameter is being used for the x-axis, selecting a single frequency displays the swept data at only the selected frequency.
The following figure shows the display options available for a variable sweep.
Use for x-axis - Specifies that the variable sweep is used for the x-axis. If no sweep parameter is specified, then the first sweep parameter that either has a value assigned to it, or takes its value from Select with tuner is used for the horizontal axis. Only the single, selected value is plotted.
Plot all traces - If another swept parameter (such as frequency) is being used for the x-axis, selecting this option displays the swept data evaluated at every variable value (a separate trace for each variable value displays on the graph).
Select with tuner - Adds the variable parameter to the tuner. Only the trace(s) associated with the selected value display.
Disable sweep - By default, every permutation in a multi-dimensional sweep simulation is simulated. Selecting this option prevents simulation at these sweep values. Since simulations are not performed at these values, this option can be helpful in reducing simulation time. Anything that refers to that variable uses the actual variable value during the simulation.
Var = xxx - Each individual swept variable value is presented. If another swept parameter is used for the x-axis (such as frequency), selecting a single value displays the swept data at only the selected value.
The following figure shows the results of an S-parameter measurement, S(1,1), for the schematic. Document frequencies are used for the x-axis. Each value of the swept variable Lswp, defined by the SWPVAR block, is plotted by setting Plot all traces. A frequency sweep is performed for each value of the swept variable Lswp.
The following figure shows similar results, except the swept inductance variable, Lswp, is used for the x-axis. A sweep is performed at each frequency value.
The following figure shows the simulation results when the swept inductance variable, Lswp, is used for the x-axis and a single frequency (5 GHz) is selected.
The following figure shows the simulation results when frequency is used for the x-axis and the value of Lswp at which the frequency sweep is performed is selected by using the tuner (SWPVAR.SWP1 is set to Select with tuner).
You can display parameter (variable) markers on graph traces. Parameter marker settings are located on the Markers tab of the Graph Options dialog box (see “Graph Options Dialog Box: Markers Tab”). Parameter markers are designated with a 'p' and differ from normal data/trace markers. Parameter markers allow you to identify traces that are associated with a particular swept variable value.
The measurements on graphs use the following notation:

DocName.Simulator.~DisabledSweeps.$FreqSwp:MeasName(MeasArgs)[Swp1 display, Swp2 display, ...]

where:

DocName - the name of the document (schematic, data file, etc.) referred to by the measurement.
Simulator - the name of the simulator (APLAC, Spectre, or other) used by the measurement. This field is omitted for the Default Linear and Harmonic Balance (Legacy) simulators.
~DisabledSweeps - the IDs of any SWPVAR controls whose sweeps are disabled (multiple sweeps may be disabled).
$FreqSwp - the name of the frequency sweep control ($FPRJ, $FSWP, etc.). If the document frequencies ($FDOC) are used for the frequency sweep, the $FDOC identifier does not display in the measurement notation because $FDOC is the default frequency sweep (this also keeps the measurement notation short).
MeasName - the name of the measurement.
MeasArgs - the arguments for the measurement (this may be empty).
[Swp1 display, Swp2 display, ...] - each element of this set indicates how each sweep should display. The number of elements equals the number of swept parameters present in the document. The possible options are:
X - use this sweep for the x-axis.
* - plot all traces.
T - select with tuner.
~ - disable sweep.
Positive integer number - indicates the one-based index of a single sweep value, used to display a particular trace associated with the swept parameter.
For example, consider the following measurement associated with this schematic:
Schematic1.AP_HB.~SWP2.$FPRJ:DB(|Pcomp(PORT_2,1_0)|)[*,X,T,~]
Here, the document name is "Schematic1", the simulator is APLAC HB, the sweep associated with the SWPVAR control with the ID of SWP2 is disabled, the project frequencies $FPRJ are used as the swept frequencies, and Pcomp is the measurement (in dBm) with two arguments. Inside the square brackets, * indicates that traces are plotted for each frequency. X indicates that the second sweep (the power at port 1 in this case) is used for the x-axis. T indicates that the third sweep (SWPVAR.SWP1) value is selected with the tuner. Finally, ~ indicates that the fourth sweep (SWPVAR.SWP2) is disabled.
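To make the notation concrete, the following Python sketch parses strings of this form. The parser and its field names are illustrative assumptions for working through examples, not an AWR API.

```python
import re

# Fields mirror the notation described above:
# DocName.Simulator.~DisabledSweeps.$FreqSwp:MeasName(MeasArgs)[sweeps]
NOTATION = re.compile(
    r"^(?P<doc>[^.~$:]+)"             # DocName
    r"(?:\.(?P<sim>[^.~$:]+))?"       # optional Simulator
    r"(?:\.~(?P<disabled>[^.$:]+))?"  # optional ~DisabledSweeps
    r"(?:\.\$(?P<freq>[^:]+))?"       # optional $FreqSwp
    r":(?P<meas>[^(\[]+)"             # MeasName
    r"\((?P<args>.*)\)"               # MeasArgs (may nest parentheses)
    r"(?:\[(?P<sweeps>[^\]]*)\])?$"   # sweep display options
)

def parse_measurement(text):
    m = NOTATION.match(text)
    if not m:
        raise ValueError("unrecognized measurement notation")
    d = m.groupdict()
    d["sweeps"] = d["sweeps"].split(",") if d["sweeps"] else []
    return d

parsed = parse_measurement(
    "Schematic1.AP_HB.~SWP2.$FPRJ:DB(|Pcomp(PORT_2,1_0)|)[*,X,T,~]")
print(parsed["doc"], parsed["sim"], parsed["sweeps"])
```

Running the parser on the example string above recovers the document name, simulator, disabled sweep, frequency control, and the four sweep display options.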
As noted, there are various ways to display swept parameters, such as using them for the x-axis, selecting with the tuner, or selecting individual sweep points. Additionally, you can use a marker on a different graph to specify a sweep index on the current graph. As long as the marker is added to a graph that uses a measurement with the same data source name, the marker is one of the choices for the swept data.
The following example circuit has a very simple input and output matching network for a FET device. The simulation is set up to run IV curves as well as S-parameters. The IV curves are shown in the following figure.
Using default settings for plotting S(1,1) plots one trace for each bias point.
If you view the measurement controls for the sweeps for this measurement, you see that for the "IVCURVE.IV1.SWP" and "IVCURVE.IV1.STEP" swept variables, the setting is "Plot all traces". Note the values that are available in the drop-down menu for the sweeps.
You can add a marker to the IV curves data on the graph named "Bias" by pressing Ctrl + M.

Now, when you look at the available values for the sweeps for the S-parameter measurement, one of the options is the marker added to the IV curves.
Notice the new "Marker: m1@Bias" option. The syntax indicates marker 1 on the "Bias" graph. If both the IVCURVE sweeps are set to this value, then only one trace displays on the Smith Chart for the S-parameters.
After the simulation is complete, you can click the marker (not the marker display frame) and drag it to watch the S-parameter display update with data from the simulation point closest to the marker location. In this example, using the marker to choose the bias for the S-parameters is much more intuitive than other approaches, since you can visually see the mode of operation in which the transistor is working.
The number of sweeps obviously affects the total number of points used in a simulation. For example, assume that a project has three sweeps - one frequency sweep and two variable sweeps. If there are 10 frequency points, 20 points for the first variable and 30 points for the second variable, the total number of simulation points is:
Total number of simulation points = 10 x 20 x 30 = 6,000
If the simulation involves a nonlinear measurement on a highly nonlinear circuit, the simulation time can become large.
Now assume that the frequency sweep is used for the x-axis and that single values are chosen for display for both variable sweeps, as shown in the following figure. Even though single values are specified for the variable sweeps, all 6,000 points are still simulated.
To mitigate the large number of simulation points, you can disable the two variable sweeps and set the variables to the single values of interest. This reduces the number of simulation points to 10, the number of frequency points.
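The point-count arithmetic above can be sketched in a few lines of Python. This is only an illustration of how sweep dimensions multiply and how disabling sweeps reduces the count, not AWR code.

```python
from math import prod

# Sweep lengths from the example: 10 frequencies, 20 and 30 variable values.
sweeps = {"freq": 10, "var1": 20, "var2": 30}

def simulation_points(sweeps, disabled=()):
    """Total points: the product of all enabled sweep lengths."""
    return prod(n for name, n in sweeps.items() if name not in disabled)

print(simulation_points(sweeps))                             # 6000
print(simulation_points(sweeps, disabled=("var1", "var2")))  # 10
```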
You can set up a variable to sweep that is used on an element that is set up for extraction. For information on this special setup, see “Extraction and Swept Variables”.
The Variable Browser displays the current value of variables and element parameters for schematic elements in the active project. To open the Variable Browser choose View > Variable Browser, or click the Variable Browser button on the toolbar. The Variable Browser provides different ways to find the desired variable and manage your project's variables, such as changing the sort order of the variables, filtering on specific criteria such as an element type in a certain schematic, and viewing Independent or All (independent and dependent) variables based on the tab selected, as shown in the following figures.
When the All tab is selected, buttons control whether to display element parameters and equations, equations only, or element parameters only. If element parameters are displayed, two additional buttons control whether all element parameters or only the ID parameter display, and whether to display secondary parameters.
The Variable Browser uses the generic property grid to work with the data. See “Using Property Grids” for details on the generic property grid functionality.
Models in the AWR Design Environment platform have primary and secondary parameters. For example, the BSIM3 MOSFET model has primary parameters L and W for the geometry of the transistor, while hundreds of additional secondary parameters control the details of how the model works.
Click the secondary parameters button on the Element Options dialog box toolbar to toggle the display of secondary parameters in the property grid. For example, without this button clicked (secondary parameters hidden) you see the following parameters. With this button clicked (secondary parameters displayed), you see many additional parameters.
Note that secondary parameters display grayed in the property grid.
The Variable Browser lists all the variables in your project. In larger designs it can take a lot of time to navigate to a specific element or equation that you want to inspect based on the entries in the Variable Browser. To simplify this process you can click on the entry for an element or equation in the Element column, and the document that contains it opens with the element or equation centered in the view.
You can enter one or more user-defined tags in the Tag column to associate variables with certain performance characteristics in their design. For example, if you have an input matching network you can tag a set of parameters with "s11" to remind you that they affect s11 of your amplifier circuit.
Filtering is accomplished with a sub-string search that displays all variables that contain a tag that matches the filter text. If tagging is set up properly, filtering and sorting allow you to see all of the variables that affect certain performance characteristics. For more information about filtering and sorting, see “Property Grid Filtering Text Boxes” . You can filter on tagged variables in both the Variable Browser and the Tuner. For more information about filtering items displayed in the Tuner, see “Additional Tuning Details”.
Click the search button in the All tab view of the Variable Browser to search on text. A floating window opens with various search and replace options, as shown in the following figure. You can only replace text in cells that are editable. You can also use the undo and redo buttons to revert and reapply text changes.

You can specify in the Variable Browser that variables and independent equations are tuned or optimized by selecting the Tune and/or Optimize check box for each. If tuning or optimization is enabled, Cadence recommends that the variable also be constrained. To enable constraints, select the Constrained check box for the variable, and then in the Lower and Upper columns enter the lower and upper bounds for that variable, respectively. You can also enter the lower and upper bounds as a percentage or absolute delta of the current value by following the value with a "%" or "#" sign, respectively. For example, to enter 10 percent you type "10%", and to enter +/-10 you type "10#". Doing so in the respective column sets the constraint for that column; doing so in the Value column sets the limit for both the Upper and Lower constraints. You can also set the allowable step size for that variable in the corresponding Step Size column. Lastly, to clear any constraints on a variable or group of variables, type "0%" or "0#" as the Value for that variable or group of variables. For more information about tuning and optimization, see “Tuning”.
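The "%" and "#" shorthand described above can be illustrated with a small Python helper. The function name and behavior here are assumptions made for illustration; the actual constraint handling is internal to the Variable Browser.

```python
def bounds_from_shorthand(value, text):
    """Return (lower, upper) for a Value-column entry like '10%' or '10#'.

    '10%' means +/-10 percent of the current value;
    '10#' means an absolute delta of +/-10.
    """
    if text.endswith("%"):
        delta = abs(value) * float(text[:-1]) / 100.0
    elif text.endswith("#"):
        delta = float(text[:-1])
    else:
        raise ValueError("expected a '%' or '#' shorthand")
    return (value - delta, value + delta)

print(bounds_from_shorthand(50.0, "10%"))  # (45.0, 55.0)
print(bounds_from_shorthand(50.0, "10#"))  # (40.0, 60.0)
```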
The Variable Browser provides dynamic feedback based on the current value of a variable and the constraints that are set for that variable. The Lower and Upper boxes display in red if the variable is at or beyond the constraint value set for that variable, and they display in yellow if the current value of the variable is closer to either the Upper or Lower bound than five percent of the delta between the Upper and Lower bounds.
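The color-feedback rule above can be written out as a short sketch. The exact comparisons used by the software are not documented here, so the boundary handling in this function is an assumption.

```python
def constraint_color(value, lower, upper):
    """Red at or beyond a bound; yellow when the value is within five
    percent of the upper-to-lower delta of either bound; otherwise none."""
    if value <= lower or value >= upper:
        return "red"
    margin = 0.05 * (upper - lower)
    if value - lower < margin or upper - value < margin:
        return "yellow"
    return "none"

print(constraint_color(5.0, 0.0, 10.0))   # none
print(constraint_color(0.3, 0.0, 10.0))   # yellow
print(constraint_color(10.0, 0.0, 10.0))  # red
```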
The Variable Browser allows you to control the statistical analysis for a variable. By default, the yield-related columns are hidden. To view them, toggle the associated toolbar button as shown in the following figure.
For more information about setting up a variable for yield, see “Yield Analysis”.
You can use the Tuner to tune a parameter value and observe the resulting response on the graph of that circuit.
To tune a parameter value and observe the resulting response:
Click on the schematic window to make it active.
Click the Tune Tool button on the toolbar.
Move the cursor over the desired element parameter or equation variable to see the complete name. A dark cross cursor displays in a circle.
Click to activate the parameter for tuning. The parameter displays in an alternate color. To turn off tuning, click the active parameter again. Alternatively, you can change the tune state in the Tune column of the Element Options dialog box Parameters tab or in the Variable Browser.
Choose the tune command or click the Tune button on the toolbar. The Tuner displays. For more information about the Tuner, see “Tuner Dialog Box”.

Click a tuning button and slide the tuning bar left and right. Observe the simulation change on the graph as the variables are tuned.
The following tips and information are helpful for tuning operations:
Environment settings in the AWR Design Environment platform control how variables and parameters set for tuning display. See “Environment Options Dialog Box: Colors Tab ” for information on how to set these colors.
Press Ctrl while dragging the tuning bar to continually update displayed variables and parameters and layout views.
Press Shift while dragging a tuning bar to update results only when you release the mouse button. In this mode, you can move to a specific value without needing to simulate at all of the points in between.
To remove a variable from the tuner, clear the check box in the Tune column of the Tuner dialog box.
Filter displayed variables for tuning based on their tags. For more information about tagging a variable, see “Tagging”.
The Step control on the tuner always steps by that value above or below the tuning nominal value as you move the tuner up or down. Because of this, some step values cannot precisely reach the Max or Min values. For example, if Max=9, Min=0.5, Nom=2, and Step=2.5, you cannot take steps of 2.5 from 2 and land precisely on 0.5 and 9, so the tuner comes as close as possible.
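The stepping-and-clamping behavior in the Max=9, Min=0.5, Nom=2, Step=2.5 example can be sketched as follows. This is an illustration of the described behavior, not the tuner's actual implementation, and the function name is made up.

```python
def tuner_values(nominal, step, lo, hi):
    """All values reachable by stepping up/down from nominal,
    clamped to [lo, hi] when a step overshoots a bound."""
    values = {max(lo, min(hi, nominal))}
    v = nominal
    while v > lo:                 # step down from nominal
        v -= step
        values.add(max(lo, v))
    v = nominal
    while v < hi:                 # step up from nominal
        v += step
        values.add(min(hi, v))
    return sorted(values)

print(tuner_values(2.0, 2.5, 0.5, 9.0))  # [0.5, 2.0, 4.5, 7.0, 9.0]
```

Note that 0.5 and 9.0 are reached only by clamping; the raw steps from 2.0 would land on -0.5 and 9.5.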
Optimization is a process during which the AWR Design Environment platform automatically adjusts designated circuit parameters such as circuit-element values, transmission-line lengths, and similar quantities to achieve user-specified performance goals such as minimum noise figure and flat gain. The circuit parameters to be adjusted must be variables or parameters with an assigned numeric value (not other variables) that have been selected for optimization. A variable or parameter can be selected for optimization by editing its properties, or by setting the optimization property in the Variable Browser.
Optimization is controlled by an error function, which provides a numerical error value based on the difference between calculated and desired performance. The optimization process attempts to find the minimum of this function. Optimization is an iterative process: The AWR Design Environment software calculates the error function, modifies the variables, and calculates again. The optimizers use algorithms that cause the performance to move closer to the goals (so the error decreases) after each iteration.
An important feature of the AWR Design Environment software is that noise, linear and nonlinear performance can be optimized simultaneously.
The AWR Design Environment software optimizers adjust the values of the optimization variables to achieve performance goals defined by the goals displayed under Optimizer Goals in the Project Browser. They do this by minimizing the following mean-square error function:
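The error-function equation appears as a figure in the original document and is not reproduced in this text. Based on the symbol definitions that follow, it can be written in this general form (a reconstruction, so the exact normalization may differ from the software's):

```latex
E=\sum_{n=1}^{N}\frac{W_n}{Q_n}\sum_{q=1}^{Q_n}\bigl|G_n(f_q)-M_n(f_q)\bigr|^{L_n}
```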
where the f
_{q} are the analysis
frequencies and
|G
_{n}-M
_{n}|
is the error in a parameter. M_{n} is the magnitude of an S
parameter, noise figure, IM level, or similar quantity,
W
_{n} is a weight, and
L
_{n} is the order of the norm.
Clearly, if parameter M
_{j} has a large
weight W_{j},
the error for the
parameter contributes heavily to the error function, so an optimizer reduces the
error in M
_{j} more than in other less
heavily-weighted parameters.
In the previous equation, N
is the number of goals that
are specified for the optimization, and
Q
_{n} is the number of frequency
points that falls within the goals range.
The L
factor used as part of the error function
described previously is related to the definition of the norm of the error
vector (i.e. minimizing the L2
norm is equivalent to
setting the value of L
to 2). For the purpose of this
discussion, the error vector is considered a vector of scalar values that
represent the error of a single parameter at a set of frequency points. The true
error function is a sum of the error function values determined by each
goal.
The most common value for L
is 2, which is equivalent to
a least squares optimization. A value of 1 optimizes the scalar difference,
while values higher than 2 tend to put a higher weight on the largest
difference.
An exception is that a value of L=
0 is used to specify
the equivalent of the infinity norm which is defined as the maximum element in
the error vector. A value of L=
0 effectively causes the
optimization to minimize the maximum deviation from the goal (this has been
called a min-max optimizer in some programs).
The definition of the L
factor can be set individually
for each goal. This allows the optimization of one goal using a least-squares
criteria (L
=2) and the optimization of another goal using a
minimize the maximum criteria (L
=0).
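The error function described above can be sketched in a few lines of code. The following Python fragment is an illustrative reimplementation for experimentation, not AWR's actual code; the goal and measured values are invented for the example.

```python
import numpy as np

def goal_cost(M, G, W=1.0, L=2):
    """Cost contribution of one goal over its in-range frequency points.

    M, G: measured and goal values at each frequency point in the goal range.
    W is the weight; L is the order of the norm (L=0 means the min-max norm).
    """
    err = np.abs(np.asarray(G, dtype=float) - np.asarray(M, dtype=float))
    if L == 0:                       # infinity norm: penalize the worst point
        return W * err.max()
    return W * np.sum(err ** L)

def total_cost(goals):
    """The true error function is the sum of the per-goal costs."""
    return sum(goal_cost(**g) for g in goals)

# Hypothetical goals: a least-squares gain goal and a min-max ripple goal.
goals = [
    dict(M=[11.2, 11.9, 12.3], G=[12.0, 12.0, 12.0], W=1.0, L=2),
    dict(M=[0.4, 0.1, 0.6],    G=[0.0, 0.0, 0.0],    W=2.0, L=0),
]
cost = total_cost(goals)
```

Note how the second goal, with L=0, contributes only its single worst-case point to the total, while the first contributes the weighted sum of squared errors over all of its points.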
When optimizing circuits, you should follow these guidelines:
During optimization, the circuit is analyzed at each frequency in the project frequency list, so minimizing the number of analysis frequencies makes the optimization proceed very rapidly. In a narrowband circuit, it is often adequate to use only the band-edge frequencies, and even in broadband circuits it is rarely necessary to use more than a few frequency points.
An optimizer terminates in any of the following scenarios:
after the desired number of iterations
when the error function drops to zero
when interrupted (when restarted, the optimizer simply begins with the variables it had when it terminated)
when the optimizer determines that no further improvement is possible.
Optimizers often run for a long time while appearing idle. Be assured, the optimizer is working on your problem.
The selection of frequencies affects the optimization in much the same way as the selection of weights: a range having many frequency points is effectively weighted more heavily than one having only a few points.
Optimization is a powerful and useful tool that must be used properly. The most common reason an optimizer fails to yield reasonable results is that the user attempts to make it do something that is impossible, such as forcing a simultaneous conjugate match on a circuit having K<1, or something that is unreasonable, such as optimizing an amplifier simultaneously for noise figure, input and output match, and gain when the initial design is sloppy. The use of conflicting goals impedes or prevents the optimizer's progress, and it is often not obvious that goals are conflicting. For example, the following set of goals usually gives poor results when applied to the design of a transistor amplifier:
Myamp:DB(|s[2,1]|) = 12
Myamp:DB(|s[2,2]|) = -30
Myamp:DB(NF) < 1.0
This set of constraints instructs the optimizer to minimize the noise figure and output return loss while achieving a specific gain. This set of goals may seem reasonable, but in fact it puts three constraints on the matching circuit, while only two can be met by the two degrees of freedom (source and load impedance) available. Such requirements are, of course, often real and necessary, but they require trade-offs based on engineering judgment. The optimizer may be helpful in making those trade-offs, but it cannot circumvent the need for them.
Minimizing noise and minimizing IM distortion are inherently conflicting goals. In fact, any optimization involving IM performance is likely to introduce a subtle conflict with any other performance parameter.
Local optimizers find only the local minimum of the error function; they generally cannot find the global minimum, unless, of course, it is the same as the local one. As a result, the most successful optimization occurs when the initial design is good enough that the best local minimum is the global minimum, and when you are willing to experiment with optimization weights and frequency ranges, to limit the number of variables, and to try a different optimizer when one fails to yield satisfactory results. The following example is a sensible approach to the design of a low-noise transistor amplifier:
Design the input circuit and optimize it for noise figure using the transistor described by its S-parameters and noise statements alone.
Calculate the output reflection coefficient.
Design the output matching circuit and optimize it for either a specific value of gain or a conjugate match (not both). Optimize the amplifier circuit as a whole for gain or output match, and noise figure. By optimizing the circuit in parts, the chance of achieving a successful design is much greater than it would be if the amplifier were optimized as a whole at the outset.
The following process works well for extracting FET models. The best method is to determine the two most critical items, the internal transconductance Gm and series resistance Rs, before beginning the fitting process, and to get good initial estimates of other model parameters from low-frequency Y parameters. (These can be calculated easily because series inductances can be ignored and other simplifying assumptions are reasonable.) It is much easier to fit a FET model to a set of S-parameters when Rs is fixed at the outset.
Measure Rs and the internal Gm at DC (the internal Gm is dId/dVg, where Vg is the voltage across the gate-depletion capacitance, not the gate-to-source terminal voltage). Assume initially that Rd=Rs. A good value of Rs is difficult to obtain, but it is essential for successful fitting. Remember that the internal Gm is not the same as the Gm measured at the FET's terminals.
Convert the measured S-parameters below 1 GHz to Y-parameters, and estimate the gate-to-source capacitance (Cgs), the gate-to-drain capacitance (Cgd), the drain-to-source capacitance (Cds), and output resistance (Rds) from these. (Do this analytically, not with the optimizer.) Also check Gm against Y21. If the device is packaged, you must divide the input and output capacitances between the package parasitics and Cgs and Cgd. Assume that the package capacitances at the input and output are 0.15 to 0.2 pF (70-mil ceramic package) and the package capacitance between the drain and gate terminals is approximately 0.05 to 0.1 pF.
Perform the S-parameter fitting at microwave frequencies with relatively heavy weighting on the phases. Keep Gm, Rds, and Rs constant (do not make them variables) and put fairly tight constraints on the capacitances (0.01 to 0.05 pF). Use the Random optimizer initially, and when it slows, switch to the Gradient or Simplex optimizer. You can adjust the numbers in tune mode if necessary, or experiment with weights and other optimizers.
Ensure that the topology of your FET model is meaningful. The model of a chip FET is straightforward, but a model of a packaged device can be difficult to generate. If the device is packaged, remember to include gate, source, and drain inductances within the package; these may be surprisingly high: 0.4 to 0.6 nH for a 70-mil package. Even a chip FET may have a few tens of picohenries of gate, source, and drain inductance.
If your results are nonsensical (element values are clearly incorrect), repeat the process with tighter constraints on the variables. The best device model is not necessarily the one that reproduces the measured S-parameters the best; it is the one that gives the most meaningful circuit element values.
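The analytic low-frequency estimates in the steps above can be sketched in code. The following Python fragment converts a 2-port S matrix to Y-parameters and applies the usual small-signal approximations; the S-parameter values, the frequency, and the exact extraction formulas are illustrative assumptions, not data or code from the AWR software.

```python
import numpy as np

def s_to_y(S, z0=50.0):
    """Convert a 2x2 S-parameter matrix to Y-parameters (equal reference z0)."""
    I = np.eye(2)
    return (1.0 / z0) * (I - S) @ np.linalg.inv(I + S)

f = 0.5e9                       # a low measurement frequency (hypothetical)
w = 2.0 * np.pi * f
S = np.array([[0.95 - 0.12j, 0.002 + 0.01j],    # hypothetical measured data
              [-1.80 + 0.90j, 0.70 - 0.08j]])

Y = s_to_y(S)

# Standard low-frequency estimates (series inductances ignored):
Cgd = -Y[0, 1].imag / w         # gate-to-drain capacitance
Cgs = Y[0, 0].imag / w - Cgd    # gate-to-source capacitance
Cds = Y[1, 1].imag / w - Cgd    # drain-to-source capacitance
Rds = 1.0 / Y[1, 1].real        # output resistance
Gm = abs(Y[1, 0])               # compare against the DC Gm measurement
```

Estimates of this kind are good enough as starting values; the subsequent S-parameter fitting refines the capacitances while Gm, Rds, and Rs stay fixed.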
You need to specify which element parameters or variables to use during optimization. Lower bound and upper bound limits (constraints) can also be configured for any parameter or variable. The optimizer does not increase the value of the variable above the upper bound or decrease the value below the lower bound.
To set a parameter for optimization and set up constraint limits, double-click the element on the schematic to display the Element Options dialog box. On the Parameters tab, select Optimize. To set optimization constraints, select Constrain and enter values in Lower and Upper. You can also enter constraints by using the "%" or "#" modifier as described in the “Variable Browser” section.
To set a variable for optimization and set up constraint limits, select the equation, right-click, and choose Properties to display the Edit Equation dialog box. Select Optimize, and to set optimization constraints, select Constraint and enter values in Lower bound and Upper bound.
An alternative and perhaps easier method for setting element parameters (including constraints) or variables for optimization is through the Variable Browser (choose View > Variable Browser). You can change the values and the limits by typing directly into the cells of the Variable Browser or by using any of the methods described in the “Variable Browser” section.
To select a variable for optimization, click in the Optimize column. To constrain the variable, click in the Constrained column. To select a variable for tuning, click in the Tune column.
Environment settings in the AWR Design Environment platform control how variables and parameters set for optimization are displayed. See “Environment Options Dialog Box: Colors Tab” for information on how to set these colors.
Optimization goals can be associated with any measurement or output equation in a project.
The Optimizer Goals node in the Project Browser contains subnodes for each optimization goal that you create in the AWR Design Environment platform for that project.
To add an optimization goal there must first be items in the project that can be measured (for example, schematics or EM structures). Choose Project > Add Opt Goal or right-click Optimizer Goals in the Project Browser and choose Add Optimizer Goal. You can also right-click a measurement in a rectangular graph legend, choose Add Opt Goal, and then click and drag the mouse on the graph to define start and end points for the goal. The points snap to a grid that is defined along the horizontal axis by the sweep points and along the vertical axis as one tenth of the division step size. Holding down the Shift key while dragging disables this snap-to-grid feature. The end points also snap to this grid when you edit goals by dragging them on a graph, with the Shift key again disabling the point snap.
The Goals window allows you to edit, view, and sort all optimization goals (even disabled goals) for a project. To access this window choose Simulate > Optimize to display the Optimizer dialog box, then click the Goals tab at the bottom of the dialog box. In the Goals window you can right-click the column header to choose which columns to display, rearrange columns by dragging them to another column location, change the displayed column width, and click individual column names to toggle an ascending/descending sort process.
To modify an individual optimization goal, you also can right-click the goal in the Project Browser and choose Properties to display the Modify Optimization Goal dialog box.
To delete an optimization goal, right-click the goal in the Project Browser and choose Delete Goal.
To disable/enable individual optimization goals, right-click the goal and choose Toggle Enable. When one or more goals are disabled, you can right-click on Optimizer Goals and choose Toggle All Opt Goals to reverse the disabled/enabled status of all goals.
To disable all optimization goals, right-click Optimizer Goals in the Project Browser and choose Disable All Opt Goals. You can re-enable all optimization goals by choosing Enable All Opt Goals.
The AWR Design Environment software automatically determines whether a variable that is selected for optimization has any effect on enabled optimization goals. This check determines whether the enabled goals have a dependency on the variables selected for optimization. If there is no dependency, the variable is excluded from the variables for optimization. If none of the enabled goals depend on any of the variables selected for optimization, the optimizer reports an error indicating that there is nothing to optimize. The automatic check does not change the optimization settings for individual goals, so it provides a convenient method for controlling which variables are optimized.
For example, in a project with two schematics where both have optimizable variables and optimization goals for measurements, you can control which schematic is optimized by enabling and disabling the goals for each schematic. It is not necessary to turn the optimization off for variables in schematics that you do not want to optimize. You can simply disable any goals for the schematic.
To optimize the circuit, choose Simulate > Optimize. The optimizers are controlled by entries in the Optimizer dialog box.
The Cost History is a small plot of the cost function as a function of the iteration number. This graph automatically "wraps" and scales as the iteration progresses. The various optimization methods are described as follows; all dialog box options are described in “Optimizer Dialog Box”.
Stop on simulation errors controls what the optimizer does when a simulation error is encountered. When this option is selected, if an error is found, the optimizer stops and displays the following dialog box.
In this mode, you can revert to the error state to investigate the error. When this option is not selected, simulation errors are ignored and the optimization process continues. A good example of this use is two microstrip lines connected by a step model, with the width of both lines set to optimize. Some combinations of the two widths violate the maximum difference between the widths for the step model. In this case, you might want the optimizer to skip any case where the error occurs and keep searching for a valid answer.
Selecting the associated check box in the Optimizer dialog box creates a JSON-formatted log file containing various optimization and variable information. The Status Window displays a link to the file after the optimization completes. The data in the file is split into three sections:
Setup: Contains optimizer settings information, a list of the optimization goals, and the names and initial values of variables set for optimization.
Iterations: Logs all iteration parameters and their values, the time of each iteration, and individual and total goal costs per iteration.
Summary: Provides summary information, including the stop time and the ten best iterations (iteration number, iteration cost, and parameter values).
The log file is written to the Logs directory at the path listed in the Directories dialog box. The name of the log file is auto-generated from the date and time of the optimization start, for example 2020-05-11T18:50:43.344Z_optlog.json.
You can select the desired optimization method in Optimization Methods. The AWR Design Environment software supports many optimization methods, each of which may be preferable for a given problem in terms of result quality or optimizer speed. Faster optimizers are often more sensitive to a problem's characteristics and parameters, while slower optimizers are frequently more robust.
The following table provides general guidance on selecting an optimizer. As each problem is different, your results may vary.
Optimizer Selection
Optimizer | Global/Local[1] | Max No. of Variables[2] | Robust?[3] | Discrete Variables OK?[4] | Notes/General Use Recommendation
---|---|---|---|---|---
Advanced Genetic Algorithm | Global | Many | Yes | Yes | Single-thread and parallel versions available. Rec: Many variables, very challenging problem, poor or no initial guess
Particle Swarm | Global | Many | Yes | Yes | Single-thread and parallel versions available. Rec: Many variables, very challenging problem, a good initial guess can be useful
Pointer (robust, gradient) | Global | Many | Yes | Yes | Versions may produce significantly different results. Rec: Challenging problem
Random Global | Global | Many | Yes | Yes | Rec: No initial design, highly irregular search space, other global methods fail
Differential Evolution | Global | Many | Yes | No | Rec: Design problem that has a highly-tuned solution, poor or no initial guess
Genetic (Gaussian, Uniform) | Global | Many | Yes | No | Versions often produce similar results. Rec: Challenging problem, poor or no initial guess
Kapu | Local | Many | Yes | Yes | Single-thread and parallel versions available. Rec: Many variables, an initial design of some kind (e.g., one found quickly by a Global optimizer), and/or very challenging problem
Random Local | Local | Many | Yes | Yes | Single-thread and parallel versions available. Rec: Large no. of variables, OK initial design, other local methods fail
Simulated Annealing Simplex | Local | Many | Yes | No | Rec: Good initial guess, standard Simplex gets caught on local minima
Simplex | Local | Few | Yes | No | Rec: OK initial guess and/or difficult design space, highly-tuned design
Simplex (local) | Local | Few | Yes | No | This version of Simplex starts closer to the initial design than the other. Rec: OK initial guess, difficult design space, highly-tuned design
Robust Simplex | Local | Few | Yes | Yes | Rec: OK initial guess and/or difficult design space, highly-tuned design
Conjugate Gradient | Local | Few | No | No | Rec: Few variables, relatively uncomplicated design space
Gradient | Local | Few | No | No | Rec: Few variables, relatively uncomplicated design space
Direction Set Method | Local | Few | No | No | Rec: Few variables, relatively uncomplicated design space
Discrete Local | Local | Few | Yes | Req'd | Discrete variables only. Rec: Discrete variables, but not a large number of them
Lineup | | | | Req'd | Special-purpose method, designed to increase yield on circuits with discrete-value components. Rec: Circuit design with discrete component values, yield is to be maximized
[1] Global/Local: Local optimizers generally need a good initial design, and they start out exploring this initial design. They can be effective at optimizing the performance of sensitive designs, but usually cannot handle too many local minima nor go too far from the starting point without getting trapped. Global methods usually do not rely on an initial design, and can traverse the search space as a whole. They can search large design spaces effectively, but may be very slow when converging to a final design.

[2] Max No. of Variables: Most methods can handle a few variables, and all tend to optimize more slowly as variables are added, but the performance of many methods drops off quickly as variables are added. "Few" variables means fewer than about 10, although this depends on problem difficulty. The easier it is to find a solution, the more variables an optimizer can handle. Optimizers that can handle "Many" variables do not have as great a limitation on variable count, although it is possible to overwhelm even these optimizers.

[3] Robust?: A robust optimizer is not easy to trap in local minima. This is related to the Global/Local characteristic. Optimizers that do not need a good initial guess are also not easy to trap in local minima even when near a final solution. However, some local optimizers are also not easily trapped in local minima.

[4] Discrete Variables OK?: Indicates whether or not an optimizer can accommodate discrete variables. Some optimizers are only able to operate in continuous domains.
In its simplest form, the Pointer optimizer is used like a Random or Gradient optimizer. This optimizer has been trained on a variety of circuits and often produces good results.
The Pointer optimizer requires that all optimization variables be constrained. If you do not constrain your variables, the optimizer constrains them internally, and you may not get the desired results. Verify that all of your variables are properly constrained before running the Pointer algorithms.
The Pointer optimizer combines the power and robustness of four widely used and accepted search methods - linear simplex, downhill simplex, sequential quadratic programming, and genetic algorithm.
Cocktails, or combinations of optimizers, are even more robust than simple restarts of one scheme. One robust rapidly converging cocktail is an evolution strategy followed by a simplex. Similarly, a good cocktail for smooth topographies would be the combination of a Monte Carlo (random) method with a gradient method. The gradient method is started from numerous randomly generated points in space with the best resultant point retained.
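The Monte Carlo-plus-gradient cocktail described here can be sketched with SciPy: restart a gradient-based local search from random points and retain the best result. The test surface below is invented for the example; this is not Pointer's implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(x):
    """A smooth test surface with two minima per dimension (hypothetical)."""
    return float(np.sum(x**4 - 4.0 * x**2 + 0.5 * x))

lo, hi = -3.0, 3.0
best = None
for _ in range(20):                             # Monte Carlo starting points
    x0 = rng.uniform(lo, hi, size=3)
    res = minimize(cost, x0, method="BFGS")     # gradient-based local search
    if best is None or res.fun < best.fun:
        best = res                              # retain the best resultant point
```

Each random start converges to whichever local minimum its basin contains; keeping the best of many such runs is the multistart idea behind this cocktail.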
An "optimizer" in Pointer is really a hybrid optimizer consisting of a combination of the genetic, downhill simplex, gradient, and linear simplex algorithms. The choice of algorithms and the number of iterations, restarts, and step-sizes are determined automatically.
Pointer's optimizers can be divided into three groups: genetic algorithms, downhill simplex methods, and sequential quadratic programming methods. These methods are described as follows:
Genetic algorithms use mutation or recombination and selection to minimize the objective function. They begin with a large number of points randomly distributed over the design space (at least one point for every dimension of the problem if possible). In mutation, each of the points produces a number of new points that are normally distributed around the original point. The best point out of this next generation of points is selected. In recombination, a random number of points exchange parameter values. Again, the best points are selected for the next iteration. This recombination mechanism allows points to move towards a point with a low objective function value.
A standard deviation represents the average step size. This standard deviation adds one dimension to every parameter in each point in the algorithm. Those points with the best standard deviation have the highest chance of finding the global minimum. Initially, the evolution method converges very rapidly, but eventually has trouble converging to the exact solution. It does, however, deal well with complex topographies.
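One mutation-and-selection generation of the kind described above can be sketched as a toy (1, λ) evolution strategy in which each point carries its own step-size standard deviation. All constants here (λ=10, the adaptation rate, the sphere objective) are illustrative assumptions, not values from the Pointer implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Toy objective with its minimum at the origin."""
    return float(np.sum(x**2))

# Each individual carries its parameters x plus a step-size standard
# deviation sigma, which is itself mutated and selected (self-adaptation).
x = rng.uniform(-5.0, 5.0, size=4)
sigma = 1.0

for _ in range(200):                  # (1, 10) evolution strategy generations
    children = []
    for _ in range(10):
        s = sigma * np.exp(0.3 * rng.standard_normal())  # mutate step size
        c = x + s * rng.standard_normal(4)               # mutate parameters
        children.append((objective(c), c, s))
    _, x, sigma = min(children, key=lambda t: t[0])      # select the best
```

Because the step size rides along with each point, successful step sizes survive selection, which is why such methods converge rapidly at first and cope well with complex topographies.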
The downhill simplex method is a geometrically intuitive algorithm. A simplex is defined as a body in n dimensions consisting of n+1 vertices. Specifying the location of each vertex fully defines the simplex. In two dimensions, the simplex is a triangle. In three dimensions, it is a tetrahedron. As the algorithm proceeds, the simplex makes its way downward toward the location of the minimum through a series of steps. These steps can be divided into reflections, expansions, and contractions. Most steps are reflections which consist of moving the vertex of the simplex where the objective function is largest (worst) through the opposite face of the simplex to a lower (better) point. Reflections maintain the volume of the simplex. When possible, an expansion can accompany the reflection in order to increase the size of the simplex and speed convergence by allowing larger steps. Conversely, contractions "shrink" the simplex, allowing it to settle into a minimum or pass through a small opening like the neck of an hourglass.
This method has the highest probability of finding the global minimum when you start it with big initial steps. The initial simplex then spans a greater fraction of the design space and the chances of getting trapped in a local minimum are smaller. However, for complex hyper-dimensional topographies, this method can break down.
The sequential quadratic programming (SQP) method is a generalization of Newton's method for unconstrained optimization. However, SQP can solve nonlinearly constrained optimization problems with differentiable objective and constraint functions. The search direction is the solution of a quadratic programming sub-problem, at each iteration. In this search method, the objective function is replaced by a quadratic approximation. The SQP method is used for problems with smooth objective functions (or problems that are continuously differentiable in the design space) and on small problems with up to 100 variables. Pointer uses an SQP program designed by Dr. Klaus Schittkowski.
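SQP solvers are widely available for experimentation. The following SciPy example uses the SLSQP method (an SQP implementation) on a small invented problem with a smooth objective and one nonlinear constraint; it is unrelated to the Schittkowski code used inside Pointer.

```python
from scipy.optimize import minimize

# Minimize a smooth quadratic objective subject to a nonlinear constraint:
#   minimize (x0 - 1)^2 + (x1 - 2.5)^2  subject to  x0^2 + x1^2 <= 4
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2}]

res = minimize(objective, x0=[0.0, 0.0], method="SLSQP",
               constraints=constraints)
# The solution lies on the constraint boundary, in the direction of (1, 2.5).
```

At each iteration the solver replaces the objective with a quadratic approximation and the constraints with linearizations, which is the quadratic programming sub-problem the text refers to.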
Linear methods (also called linear programming or linear optimization) are ideally suited to problems in which the objective function "O" and constraints "ci" are a linear combination of the design variables "xi".
Although the nonlinear optimizers described previously solve linear problems, they are much slower. Unfortunately, linear algorithms are unable to handle nonlinear problems. Optimization times can be greatly reduced if you can formulate your problem as a linear problem.
Pointer uses the linear simplex algorithm (not to be confused with the downhill simplex algorithm for nonlinear topographies). It is based on a Gauss-Jordan elimination procedure for solving systems of linear equations.
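For comparison, a linear problem of this form can be stated and solved directly with a linear-programming routine. The SciPy example below is illustrative; the objective and constraints are made up.

```python
from scipy.optimize import linprog

# Minimize O = -x0 - 2*x1 (equivalently, maximize x0 + 2*x1) subject to
# linear constraints in the design variables:
#   x0 + x1 <= 4,   x1 <= 3,   x0 >= 0,   x1 >= 0
res = linprog(c=[-1.0, -2.0],
              A_ub=[[1.0, 1.0], [0.0, 1.0]],
              b_ub=[4.0, 3.0],
              bounds=[(0, None), (0, None)])
# The optimum sits at a vertex of the feasible polytope: x0 = 1, x1 = 3.
```

Because both the objective and the constraints are linear combinations of the variables, the optimum always lies at a vertex of the feasible region, which is what simplex-type pivoting exploits.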
This optimizer randomly selects trial points from the entire solution space in search of the optimum. This method should only be used when the error function is highly irregular or discontinuous and other global methods fail, as it is generally much less efficient than other global optimizers such as genetic algorithms. After finding a solution with this optimizer, it is highly recommended that a local method be used to bring the final solution closer to the optimum.
This optimizer takes random steps from an initial starting point in the search space, one variable at a time. It is a very simple optimizer, but it works surprisingly well in some cases. It can do particularly well when the number of variables is large, because it operates almost as efficiently with a large number of variables as with a small number. It is very inefficient by nature, so it is only recommended when other methods fail to produce the desired results.
The Kapu optimizer is a Cadence-proprietary algorithm that blends local optimization with high resistance to local minima. It works well on a wide range of designs, from 3 to over 90 variables.
There are two parameters for this optimizer:
Quality factor: This parameter is used to determine how quickly the search converges. The larger the number, the more slowly the algorithm converges. If the quality factor is too low, the search may converge too quickly to a local minimum. However, if the quality factor is too large, it may achieve sub-optimal results for a set number of simulations. The default of 2.0 provides good results for a variety of problems.
Exploration: This parameter is the fraction of the available search space used to initialize the optimizer. A value of 0.01, for example, means that the initial variable range is within 1% of the original variable values. This can be useful if you are already close to the final solution in the search space. A value of 1.0 means that the entire search space is used to initialize the optimizer, and is expected to be most useful when the best solution is unknown. Note that this parameter only involves the initialization of the optimizer, and does not prevent the variable values from moving out of this initial range.
Recommendations:
When there are many variables but no good initial guess, it helps to use a true global method for a small number of iterations first. For example, on a design with 76 variables, just 100 iterations of the Advanced Genetic Optimizer was enough to find a starting point that was sufficient for the Kapu optimizer to find an excellent solution.
If the method is getting trapped in local minima even with a good initial guess, increase the quality factor.
The Davidon-Fletcher-Powell optimizer (also known as the Fletcher-Powell method) can be classified as a gradient method, but more precisely it is a quasi-Newton optimizer. It is generally a good optimizer for simple circuits with straightforward requirements: the larger number of function evaluations does not slow the optimization appreciably, and the optimizer converges on a solution very quickly. It is also quite good (although perhaps not as good as the simplex method) at following difficult contours. Experience shows that the Gradient optimizers often do not work as well with passive circuits as the Simplex optimizer.
The Gradient optimizer has two additional parameters you can set.
Convergence Tolerance: This parameter is used to determine convergence. If the error function does not improve by more than this value from one step of the optimization to the next, the optimizer stops.
Step Size: The step size is the amount by which the values are perturbed when the gradients are computed. For Cadence® AWR® AXIEM® 3D planar EM analysis, points are snapped to a fine grid, so you need to set this to a larger value than the default; otherwise a small perturbation might result in a zero gradient.
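The zero-gradient effect described for Step Size is easy to demonstrate with a forward-difference gradient. In this hypothetical Python sketch, the simulated response only changes when the parameter crosses a grid cell, so a perturbation smaller than the grid pitch yields a gradient of exactly zero.

```python
def snap(x, grid=0.5):
    """Model a layout parameter that the simulator snaps to a coarse grid."""
    return round(x / grid) * grid

def response(w):
    """Hypothetical cost versus line width, evaluated on the snapped width."""
    return (snap(w) - 3.0) ** 2

def fd_gradient(f, x, step):
    """Forward-difference gradient with a user-chosen step size."""
    return (f(x + step) - f(x)) / step

g_small = fd_gradient(response, 2.0, step=1e-4)  # below the grid pitch: zero
g_large = fd_gradient(response, 2.0, step=0.5)   # at the grid pitch: nonzero
```

A zero gradient tells the optimizer the variable has no effect, so it stops moving that variable; choosing a step at least as large as the grid pitch avoids this.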
The Conjugate Gradient optimizer uses an approximate derivative, determined via simulation, to determine the direction of a line search. Once a minimum is reached, the conjugate of this vector is used as the next direction to search, until convergence is reached. It works best with smaller numbers of variables and simpler requirements, as with the Gradient optimizer. It was originally developed to solve certain systems of linear equations very efficiently, so while it can have difficulty with complicated real-world requirements, it can be very fast and high-performance on well-suited search spaces.
It also has the same two parameters as the Gradient optimizer, with the same recommendations described for that optimizer.
This optimizer uses Powell’s direction set method to search for an optimum, starting with an initial guess. It performs a line search in each of a set of directions, which begins simply as a search along each dimension, updating the set of directions based on the results of the previous set. It is not a true Gradient optimizer and can be significantly more robust, but, similarly, it works best on problems without too many variables or complicated requirements.
It also has the same two parameters as the Gradient optimizer, with the same recommendations described for that optimizer.
The downhill simplex search (based on the Nelder-Mead optimizer) is relatively slow but very robust for a local optimizer. A nice property of this optimizer is that it follows difficult contours in the error function quite well, although more slowly than a Gradient optimizer. It also finds the precise optimum, in contrast to the Gradient optimizer, which tends to wander when it gets close to the optimum; therefore, the Simplex optimizer is very good for finishing an optimization after the Gradient optimizer "bottoms out." The Simplex optimizer has a fairly long initialization process, requiring a number of functional evaluations at least as large as the number of variables, and often more. After initialization, its speed is relatively insensitive to the number of variables, but its improvement of the error function per iteration may be small when a large number of variables is in use.
The Simplex optimizer creates a constellation of N+1 points on the error surface, where N is the number of variables selected for optimization. The N+1 points in the search space define a simplex. The method works by reflecting the highest point in the current simplex through the opposite face of the simplex (reflection). Other modifications to the simplex performed during the search are expansion and contraction.
The method performs downhill moves until it reaches a local minimum. To avoid early convergence at a poor local minimum, the method is restarted periodically using N new random points and the best point from the previous simplex.
The “Simplex Optimizer (Local)” variation of this optimizer varies from the Simplex optimizer version in how it is initialized. The Local variation initializes the simplex closer to the original initial design, so it may be more efficient if a good initial guess is known.
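A standard downhill simplex search is available off the shelf for experimentation. The following SciPy call runs Nelder-Mead on the classic Rosenbrock curved-valley function; the function and starting point are illustrative, and SciPy's implementation does not include the periodic random restarts described above.

```python
from scipy.optimize import minimize

def rosen(x):
    """Rosenbrock banana function: a classic curved-valley test contour."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# With N=2 variables, the simplex is a triangle of N+1 = 3 vertices.
res = minimize(rosen, x0=[-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
# The search follows the curved valley to the minimum at (1, 1).
```

The curved valley is exactly the kind of difficult contour the text mentions: the simplex crawls along it slowly but reliably, where a gradient method tends to zigzag.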
This optimizer uses the Nelder-Mead downhill simplex optimization methodology, similar to the Simplex optimizers previously described, with a key difference.
The canonical Simplex method has rigidly defined distances for finding the next point based on the points currently in its structure. This method is more robust because the sizes of the steps are permitted to vary according to the Step Variation parameter. If the Step Variation parameter is set to zero, this algorithm behaves like the canonical Simplex. If it is set between 0 and 1, the distance traveled by each step can vary from the canonical value. Higher values of the parameter produce more variation in the step sizes. This variability allows a more thorough and robust search of the space, but it can also slow the algorithm, especially on easier problems with few local minima. Cadence recommends not setting this parameter too high; a reasonable value appears to be 0.2.
The parallel version is even more robust than the single-thread version, due to its parallel sampling of the design space. Note that it tends not to converge faster. The same guideline applies to setting the Step Variation parameter. Both versions display an "Optimization stagnated" message if they restart themselves a sufficient number of times with no progress. The simplex method can converge to a very small region in the search space, and once it stagnates, even with multiple restarts, it almost never moves further. Detection criteria are used to end the optimization when this occurs, rather than continuing to use resources.
Simulated Annealing is used in conjunction with the downhill simplex method discussed previously. The Simulated Annealing method is incorporated into the downhill simplex method by adding a small temperature-dependent probabilistic deviation to the cost of each point in the simplex, and subtracting a similar deviation from the cost of any new point that is tried as a replacement for the current high point in the simplex. In effect, the algorithm always takes a downhill move, and sometimes takes an uphill move with a probability based on the current temperature. By carefully controlling the rate of temperature change, the problem can be slowly "cooled" so that the solution converges to a global optimum instead of a local optimum.
The Annealing schedule used is defined by
where K is the total number of evaluations, k is the number of evaluations made so far, and To and a are parameters that can be adjusted to tune the performance of the algorithm. The following graph shows a plot with To=100 and a=4.
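The add-to-stored, subtract-from-trial acceptance scheme described above can be sketched as follows. The exponential form of the random deviation (-log of a uniform draw) is an illustrative assumption, not the exact AWR formula:

```python
import math
import random

def perceived_cost(cost, temp, rng, stored):
    """Temperature-dependent random deviation: added to the stored simplex
    points, subtracted from a candidate replacement point. The -log(u)
    deviation is an illustrative choice, not the AWR implementation."""
    dev = -temp * math.log(rng.random() + 1e-300)  # positive, scaled by temp
    return cost + (dev if stored else -dev)

def move_accepted(current_cost, trial_cost, temp, rng):
    # Downhill moves are always accepted; uphill moves are accepted with a
    # probability that grows with the current temperature.
    return (perceived_cost(trial_cost, temp, rng, stored=False)
            < perceived_cost(current_cost, temp, rng, stored=True))
```

At zero temperature the deviations vanish and only downhill moves are taken; at higher temperatures, uphill moves are sometimes accepted, which is what lets the search escape local optima as it cools.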
Differential Evolution (DE) is a population-based optimizer (a variation of the standard Genetic Algorithms (GA)). The crucial difference between DE and GA is its scheme for generating trial parameter vectors. Basically, DE adds the weighted difference between two population vectors to a third vector.
For Population Size, a general rule is to use 5 x the number of optimizable parameters up to a maximum value of 60. A small population size yields faster results, but is more likely to stall in a local optimum. Population sizes higher than 60 are usually not beneficial, independent of the number of parameters.
Usually, the Greedy strategy yields the fastest results. When an optimization stagnates (there is no improvement for a large number of iterations), you should try the Robust setting, which generally takes longer but is less likely to stagnate.
For easy problems, a lower Crossover Probability (for example, 0.3, 0.4, or 0.5) yields faster results; if the parameters can be optimized independently, a smaller Crossover Probability is beneficial.
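The trial-vector scheme that distinguishes DE from a standard GA, together with the population-size rule of thumb above, can be sketched as follows (the function and parameter names here are illustrative, not AWR's):

```python
import random

def population_size(n_params):
    # Rule of thumb from above: 5 x the number of optimizable parameters,
    # capped at 60.
    return min(5 * n_params, 60)

def de_trial(population, target, f_weight, cross_prob, rng):
    """Build one DE trial vector: the weighted difference of two population
    vectors is added to a third, then binomial crossover mixes the result
    with the target vector."""
    a, b, c = rng.sample([p for p in population if p is not target], 3)
    mutant = [ai + f_weight * (bi - ci) for ai, bi, ci in zip(a, b, c)]
    j_rand = rng.randrange(len(target))   # guarantee one mutant gene survives
    return [m if (rng.random() < cross_prob or j == j_rand) else t
            for j, (t, m) in enumerate(zip(target, mutant))]
```

A higher crossover probability couples the parameters more strongly, which matches the guidance above that independent parameters benefit from a smaller value.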
The chromosome used for the optimization problem is the vector of continuous constrained variables that defines the search space. Each gene is represented by a single scalar value in the vector. The genetic algorithms used in the AWR Design Environment software differ somewhat from standard genetic algorithms. The typical combinatorial optimization versions of genetic algorithms create new points in the space from two previous points by gene crossover and mutation. The AWR Design Environment software versions have been modified to be better suited to continuous optimization problems. The modification involves a method of generating a number (the child gene) from two other numbers (the parent genes) in a somewhat random fashion. The generated number should be similar to either of the parent numbers, but not identical. The general algorithm was borrowed from the standard discrete optimization algorithms found in the literature. For each pair of parent genes selected at random (biased toward more fit parents), two child genes are generated. Two methods for the generation of the child genes are implemented in the AWR Design Environment software.
For each parent gene, a number is generated from a normal distribution using the parent as the mean of the distribution. The standard deviation of each distribution is taken to be the mutation rate, and is computed from a user-defined maximum mutation and a similarity ratio. The similarity ratio is computed from the similarities between the parent chromosomes (not genes) and is given by
where N is the dimension of the search space (the length of the chromosome), M=Upper-Lower, and a and b are the parents being compared. S varies from 1 (a=b) to 0 (a=0, b=M). One number is generated from each parent, and the assignment of each of the two generated genes to the child chromosomes is done at random with equal probability. If the Max Mutation rate were zero, the children would inherit identical genes from either parent.
In the second method, a gene is generated using a uniform random distribution between the two parent gene values. Once one child gene is generated, the second gene is generated as a mirror image about the center of the distribution. A random, normally distributed, zero mean deviation is then added to each gene value to provide a mutation mechanism. The standard deviation for this distribution is computed using the same method as the Genetic method above. The following figure demonstrates the procedure.
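A sketch of this second method follows. The mutation standard deviation here (maximum mutation times the similarity ratio) is an assumed stand-in, since the exact formula is only outlined above:

```python
import random

def blend_children(p1, p2, max_mutation, similarity, rng):
    """Second child-generation method, sketched: one gene drawn uniformly
    between the parent genes, the second mirrored about the interval
    midpoint, then a zero-mean normal mutation added to each.
    sigma = max_mutation * similarity is an assumption for illustration."""
    c1 = rng.uniform(min(p1, p2), max(p1, p2))
    c2 = (p1 + p2) - c1                      # mirror image about the center
    sigma = max_mutation * similarity        # assumed mutation std. deviation
    return c1 + rng.gauss(0.0, sigma), c2 + rng.gauss(0.0, sigma)
```

With zero maximum mutation, the two children always sum to the two parents (the mirror property), and each child stays inside the interval spanned by the parents.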
The advanced genetic algorithm combines various advanced methods of crossover, mutation, representation, etc. that have been found to work well on a variety of electromagnetic design problems. The algorithm has been found to be particularly effective on problems with large numbers of parameters, and design spaces involving physics simulation. It tends to find areas of interest in the search space very well, but can be slow to exploit these fully - that is, it is good at finding "hills" to climb, but is slower at climbing them once found. However, in complicated search spaces involving physical simulation, this algorithm can still be very effective even when climbing local hills, as it will be more robust than true local-search methods. Note that all variables must be constrained, though they can be discrete or continuous.
There are two parameters for this method:
Quality Factor: This parameter sets the amount of computational resources used in the optimization. The lower the number, the faster the convergence and the fewer function calls used for a given problem. The default of 2.0 has been found to provide good results for a variety of problems. Increasing the parameter often helps for very difficult or multi-goal problems. Reducing the number can help with highly-tuned, simpler problems.
Exploration: This parameter is the fraction of the available search space used to initialize the genetic algorithm. A value of 0.01, for example, means that the initial variable range will be within 1% of the original variable values. This can be useful if you are already close to the final solution in the search space. A value of 1.0 means that the entire search space is used to initialize the genetic algorithm, and is expected to be most useful when the best solution is unknown. Note that this parameter only involves the initialization of the algorithm, and does not prevent the variable values from moving out of this initial range.
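The effect of the Exploration parameter on initialization can be illustrated as follows; the exact centering and clipping conventions are assumptions made for this sketch:

```python
def init_range(value, lower, upper, exploration):
    """Initialization span implied by the Exploration parameter: a fraction
    of the available search space centered on the current value, clipped to
    the variable constraints. A sketch of the described behavior, not the
    AWR source."""
    half = 0.5 * exploration * (upper - lower)
    return max(lower, value - half), min(upper, value + half)
```

An Exploration of 1.0 initializes over the full constrained range (best when no good starting point is known), while a small value such as 0.01 initializes tightly around the current values; in either case the algorithm may later move outside the initial span.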
Recommendations:
Advanced Genetic Algorithm works best with designs with large numbers of variables, many and competing criteria, and noisy or physics-based simulations. This method also can work very well if an initial guess is not known.
This method is not easy to trap in local minima, so it works very well for antennas and other physics-based devices. However, local minima can still cause problems even when optimizing close to the final design.
This method can be slow on more “pure” mathematical design spaces, such as simple filters with lumped ideal elements. It is not necessarily the best method for designs for which a local search method like the simplex works well.
A general strategy for optimization problems where the initial guess is not known and the solution will be highly tuned is to first use the Advanced Genetic Algorithm to find an approximate solution, and then switch to a Local optimizer such as Simplex to finalize the solution. In other words, for these types of problems, the genetic optimizer finds a good hill to climb, and the Simplex optimizer then climbs it quickly.
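This two-stage "find a hill, then climb it" strategy can be sketched with simple stand-ins: random global sampling plays the genetic optimizer's role, and a shrinking-step pattern search plays the Simplex optimizer's role. Both stand-ins are illustrative substitutes, not the AWR algorithms:

```python
import random

def global_then_local(f, bounds, rng, n_global=300, n_local=100):
    # Stage 1 ("find a good hill"): sample the whole constrained space.
    best = min((tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                for _ in range(n_global)), key=f)
    # Stage 2 ("climb it quickly"): local refinement with shrinking steps.
    x = list(best)
    step = [0.1 * (hi - lo) for lo, hi in bounds]
    for _ in range(n_local):
        improved = False
        for i in range(len(x)):
            for d in (step[i], -step[i]):
                trial = list(x)
                trial[i] = min(max(trial[i] + d, bounds[i][0]), bounds[i][1])
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step = [s * 0.5 for s in step]   # refine near the optimum
    return x
```

The global stage alone lands only near the optimum; the local stage then tunes it to high precision, mirroring the recommended Advanced Genetic Algorithm-then-Simplex workflow.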
The Particle Swarm is a nature-based algorithm that mimics swarming behavior to search a design space. This method can work well with both physics-based design optimization and ideal component optimization. It can be used to search globally or locally, depending on the parameter settings.
There are two parameters for this optimizer:
Swarm Growth: This parameter is used to determine how quickly the swarm grows proportional to the number of unknowns. The larger the swarm, the more searching is done in parallel. However, a swarm that is too large may achieve sub-optimal results for a set number of simulations. The default of 2.0 has been found to give good results for a variety of problems.
Exploration: This parameter is the fraction of the available search space used to initialize the particle swarm. A value of 0.01, for example, means that the initial variable range will be within 1% of the original variable values. This can be useful if you are already close to the final solution in the search space. A value of 1.0 means that the entire search space is used to initialize the swarm, and is expected to be most useful when the best solution is unknown. Note that this parameter only involves the initialization of the swarm, and does not prevent the variable values from moving out of this initial range.
Recommendations:
Particle Swarm may not be as scalable as the Advanced Genetic Optimizer, but it is able to search an entire design space without a known initial guess at the solution, and thus is more general-purpose than a Local optimizer.
This method does not need to be combined with a Local optimizer, as it converges well on its own. However, for some problems, particularly those that use ideal or lumped components, it may be more efficient to use the particle swarm to find hills to climb, then climb them quickly with a local method like the Simplex optimizer.
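For reference, one textbook particle-swarm update step looks like the following. The inertia/cognitive/social coefficients (w, c1, c2) and their values are standard literature defaults, not AWR parameters:

```python
import random

def pso_step(positions, velocities, personal_best, global_best, rng,
             w=0.7, c1=1.5, c2=1.5):
    """One canonical particle-swarm update: each particle's velocity blends
    its inertia, a pull toward its own best point, and a pull toward the
    swarm's best point; positions then move by the new velocities."""
    for p, v, pb in zip(positions, velocities, personal_best):
        for d in range(len(p)):
            r1, r2 = rng.random(), rng.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pb[d] - p[d])            # cognitive pull
                    + c2 * r2 * (global_best[d] - p[d]))  # social pull
            p[d] += v[d]
    return positions, velocities
```

The social term is what lets the swarm converge without a separate local optimizer, consistent with the recommendation above.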
The Discrete Local Search algorithm searches over a discrete grid of variable values. This optimizer is intended to be used for an efficient search of continuous real variables, where you want the end value rounded to a user-defined step size. By restricting the search space to discrete values, the algorithm can perform a more efficient search, in addition to providing results rounded to a step size.
You must provide the step size for each variable being optimized. The step size is set in the Element Options dialog box on the Parameters tab. The Discrete Local Search optimizer is the only optimizer that uses this step size as part of the search process. If the variable is also constrained, the allowed discrete values are computed starting from the lower bound (the values are lower + i*step). Defining the step this way for constrained variables provides an easy way to ensure that the end values are on a user-defined grid.
You can also use the Discrete Local Optimizer with discrete variables. When doing so, you do not need to apply a step size. Since this optimizer is a local search method, any discrete variables should have ordered values (if the discrete variables are not ordered, it is unlikely that the search yields a good minimum).
The Discrete Local Search optimizer has two parameters you can set:
Number Grid Levels: The optimization can be performed over progressively finer grids. The ratio of the parameter sampling grid from one level to the next is always 4, so if Number Grid Levels is set to 3, the search is first performed on a grid sampled at step_size*16, then step_size*4, then step_size*1. The search is performed on a coarse grid first, then the grid is refined (by a factor of 4 in each dimension) until the final grid, with a step size multiplier of 1, is reached.
Allow Increase (0-1): This factor controls how much searching is allowed over sample points that increase the cost. If this value is set to 0, the search terminates when no cost improvement is found from any of the nearest neighbors. Setting it to 0 can reduce the total number of iterations for cases where the cost function is well behaved with a well-defined minimum. Setting this value to a higher number allows the optimizer to escape local minima (as long as they are not too deep). The higher this value, the more exploration of higher cost values. By default, this parameter is set to 0.5, which is a reasonable trade-off. Setting this value to a higher number does not guarantee a better final result, as too much exploration on the coarse grid before refining can lead to a worse local minimum. For values of this parameter of 0.5 or less, exploration of higher cost values is only done on the finest grid, which is recommended if you think the solution is anywhere near the starting point.
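The grid arithmetic described above (levels refined by a factor of 4, and values restricted to lower + i*step) can be illustrated with two small helpers:

```python
def snap(value, lower, step):
    # Round a continuous value onto the allowed grid: lower + i*step.
    return lower + round((value - lower) / step) * step

def level_steps(step, num_grid_levels):
    """Effective step size at each refinement level: the sampling ratio
    between levels is always 4, ending at the user-defined step itself."""
    return [step * 4 ** k for k in reversed(range(num_grid_levels))]
```

For example, with Number Grid Levels set to 3 and a unit step, the search grids are sampled at 16x, 4x, and finally 1x the step size.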
Since the search space grows exponentially with the number of optimizer variables, Cadence recommends that you try to minimize the number of variables used with this optimizer. This optimizer is not expected to perform well with a large number of variables. For expensive optimizations (such as EM optimization), you should usually try to restrict the number of optimization variables to just a few.
One of the advantages of this optimizer is that you can set up information for it that helps minimize the number of evaluations needed to find an optimal solution. Choose as large a variable step size as possible and as small a variable constraint range as possible. Also, try to minimize the number of variables over which you are optimizing. When you are close to the optimal solution, setting Number Grid Levels to "1" and Allow Increase to "0" should minimize the number of iterations without degrading the solution. If you are far from the solution, using more grid levels can improve the efficiency of the search significantly. If the optimizer gets stuck too early in a local minimum, you can try increasing the Allow Increase parameter.
This optimizer can also be used to ‘snap’ values to a user-defined grid in an optimal way. For example, if you want to keep an optimized line width on a 1um grid, you can use this optimizer to pick the closest values that round to 1um that minimize the cost function. In this case, you should be close to the optimal solution, so set Number Grid Levels to "1" and Allow Increase to "0".
The Lineup Optimizer is an optimization method that maximizes the yield of a circuit or system network composed of multiple elements. Each element can only take on a discrete set of values, and the Lineup Optimizer optimizes the various combinations of element values to maximize the number of lineups that meet optimization goals. The following example illustrates this optimizer.
A typical application for this optimizer is to maximize the yield of a module manufactured from components that have performance variation. For example, the following amplifier module is composed of three discrete amplifier components. The available parts for each stage are measured, with S-parameter results stored in a MDIF file. See “MDIF File Format ” for more information. There is gain variation among the parts, so that not all combinations of available parts result in the cascaded total gain of the module meeting specification. You can use the Lineup Optimizer to maximize the number of modules that can be built that meet specifications, by optimizing the combinations or line up of existing parts.
To use the Lineup Optimizer, you must add an optimizer goal. Optimized variables and parameters must be discrete. In the previous example, the element parameter PartNo is the MDIF independent variable, and it is enabled for optimization. See “Setting Element Parameters for Optimization” for more information. The MDIF file must only have one independent variable, and it cannot be multi-dimensional.
After the optimizer is finished, a CSV file with the naming convention project_name_ordering_results.csv is generated in the project directory. This file can easily be opened in a spreadsheet editor, and it represents the best lineup combinations the optimizer finds. Each row of the file represents one lineup combination, and the last column represents the optimization cost of that lineup. Lineups with zero cost meet the optimization goal.
Each optimized variable can take values from a bin of discrete values. When the optimizer starts, n lineups are evaluated, where n is the number of parts in the smallest bin. The first lineup consists of the first element of each bin, the second lineup consists of the second elements of each bin, and so on until n lineups are created. The cost is computed for all n lineups, and the pass rate is calculated. The next step is to randomly swap parts within the same bin between the lineups, and recalculate the costs and pass rate. If this swap improves the total pass rate (reduces the number of lineups with cost greater than zero), it is accepted; otherwise it is rejected (the swap is reversed), and a new swap is tried. Only the first n elements of each variable bin are included in the swap process, so if a variable has a bin size greater than n, not all of its parts are included in the optimization process. The optimizer stops swapping when the goals are met (zero cost for all lineups), the maximum number of iterations is reached, or you cancel the optimizer.
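The swap procedure just described can be sketched as follows. This is a simplified model for illustration only; the real optimizer's Swap control behavior and part-selection details are omitted:

```python
import random

def lineup_search(bins, cost_fn, max_iterations, rng):
    """Sketch of the lineup swap search: bins holds one list of parts per
    optimized variable; cost_fn maps a lineup (one part per bin) to its
    optimization cost, where cost 0 means the lineup meets the goals."""
    n = min(len(b) for b in bins)                      # lineups evaluated
    lineups = [[b[i] for b in bins] for i in range(n)]
    fails = sum(cost_fn(lu) > 0 for lu in lineups)
    for _ in range(max_iterations):
        if fails == 0:                                 # all goals met
            break
        i, j = rng.sample(range(n), 2)                 # two lineups
        k = rng.randrange(len(bins))                   # one bin to swap in
        lineups[i][k], lineups[j][k] = lineups[j][k], lineups[i][k]
        new_fails = sum(cost_fn(lu) > 0 for lu in lineups)
        if new_fails <= fails:
            fails = new_fails                          # accept the swap
        else:
            lineups[i][k], lineups[j][k] = lineups[j][k], lineups[i][k]  # revert
    return lineups, fails
```

In a toy two-stage amplifier example where one bin holds high-gain parts and the other low-gain parts, pairing a strong part with a weak one lets both lineups meet a total-gain goal even though neither all-weak lineup would pass on its own.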
The Lineup optimizer has two parameters that you can set:
Maximum Iteration is the maximum number of optimizer iterations to run before the optimizer is stopped. This number is not the same as the Optimizer Iteration count that updates as the optimizer runs. Two cost evaluations per iteration are needed for the two lineups that swapped parts, and each cost evaluation is counted as an iteration in the display, so if the maximum limit is reached, the Optimizer Iteration count is twice the Max Iteration value plus n (the number of cost evaluations needed for the initial lineups).
Swap control influences how the optimizer picks parts to swap with. If the parameter is set to 0, then the optimizer only tries to swap parts between two lineups that are both failing (for example, with a cost greater than zero). If the value is 1, then approximately half the time, one of the lineups in the swap is failing. The larger the value, the more likely the swap occurs between any two randomly chosen lineups, and not just failing lineups. The default value of 1 is normally sufficient.
The following optimizers support discrete optimization: Pointer (Robust and Gradient), Discrete Local Search, Random (Global and Local), Kapu, Particle Swarm, and Advanced Genetic Algorithm.
Variables are discrete when they reference a vector. For example, to optimize the width of a microstrip line you can declare a vector variable widths=stepped(20,50,1) and then use the vector for the w parameters of the MLIN element, as in w=widths[10]. The optimizer optimizes the index. You can declare discrete quantities with any available vector notation. The Discrete_Filter_Optimization.emp project in the \Examples subdirectory of the AWR Design Environment platform program directory is provided as a discrete optimization example.
The following optimizers have versions that enable parallel optimization: Random (Local), Kapu, Particle Swarm, Advanced Genetic Algorithm, and Robust Simplex. Parallel versions have an extra parameter: num of parallel jobs. When you start the optimizer, this number of parallel threads is launched, and runs asynchronously in the background. These threads allow the optimization to use the computational power available in machines with parallel cores much more effectively. Note that the parallel methods are generally very similar to the original single-thread optimizers, but modifications are needed to run the simulations in parallel. These changes can sometimes result in different final design performance compared to the single-thread version. Even if their speed is not needed, you should consider trying these optimizers to see if they provide better results. In particular, the parallel version of the Robust Simplex optimizer can be much better at avoiding local minima.
You can use the yield analysis capabilities within the AWR Design Environment platform to study the effects of statistical variations on circuit performance. The AWR Design Environment software can analyze the yield of a circuit for a given description of the statistical properties of the component values. The software also performs yield optimization which optimizes the yield of a circuit by modifying the nominal values of specified parameters.
There are two steps you must perform before analyzing the yield of a circuit. The first step is setting up the parameter value statistical properties. The second step is the specification of the goals used to determine if a circuit has acceptable (pass) or unacceptable (fail) performance.
You can assign statistical properties to any independent variable or parameter. To set the statistical properties of independent variables, right-click the variable in the schematic or equation window and choose Properties. The settings are similar to those described as follows; you can click the Help button for more information. To set the statistical properties for an element, double-click the element to display the Element Options dialog box, and on the Parameters tab, click the button on the toolbar to display the applicable columns.
Click the Use Statistics column to vary the parameter randomly, based on the other settings, during yield analysis or yield optimization. If Use Statistics is not selected, the statistical properties are ignored, and the parameter is not varied for yield.
Click the Yield Optimize column to allow modification of the nominal value of the parameter during a yield optimization. If the parameter value is also constrained, the yield optimizer uses these constraints to ensure that the nominal value is not changed to an unacceptable value. If a parameter is used for yield optimization, you must also select Use Statistics.
Distribution allows you to specify different statistical distributions. You can specify variation using the Tolerance and Tolerance2 columns. Tolerance2 is only used for those distributions that need a second statistical parameter. You can specify the variation as an absolute value, or as a percentage of the nominal value (add % as a suffix to the tolerance value).
Variation depends on the distribution type that is set. If Distribution is Uniform, the variation is represented by a uniform distribution that is non-zero within the nominal value +/- the Tolerance.
If Distribution is Normal, the Tolerance parameter specifies the standard deviation of the distribution.
If Distribution is Log Normal, the Tolerance parameter specifies the standard deviation of the distribution. The shape of the curve is identical to the normal distribution when the x-axis is plotted on a log scale. If not on a log scale, the distribution displays as follows.
If Distribution is Normal - Tol, Tolerance specifies the standard deviation of the distribution and Tolerance2 defines how far from the nominal value to remove from the distribution. You can use this distribution to try to use lower quality parts that would be removed from the tighter tolerance center of the distribution. The dotted line in the following figure shows the full normal distribution for reference.
If Distribution is Discrete, Tolerance specifies the total amount of spread allowed and Tolerance2 specifies the interval between the allowed discrete values.
If Distribution is Normal Clipped, Tolerance specifies the standard deviation of the distribution and Tolerance2 defines how far to keep from the nominal value in the distribution. The dotted line in the following figure shows the full normal distribution for reference.
The Yield Goals node in the Project Browser contains subnodes for each yield goal that you create in the AWR Design Environment platform for that project.
The statistical analysis requires the specification of the goals used to determine if a circuit has acceptable (pass) or unacceptable (fail) performance. You can add a yield goal by choosing Project > Add Yield Goal or by right-clicking Yield Goals in the Project Browser and choosing Add Yield Goal. Yield goals are set and edited very similarly to optimization goals (see “Setting Optimization Goals”).
To add a yield goal there must first be items in the project that can be measured (for example, schematics or EM structures). Choose Project > Add Yield Goal or right-click Yield Goals in the Project Browser and choose Add Yield Goal. You can also right-click a measurement in a rectangular graph legend, choose Add Yield Goal, and then click and drag the mouse on the graph to define start and end points for the goal. The points snap to a grid that is defined along the horizontal axis by the sweep points and along the vertical axis as one tenth of the division step size. Holding down the Shift key while dragging disables this snap-to-grid feature. The end points also snap to this grid when you edit goals by dragging them on a graph, with the Shift key disabling the point snap.
In comparison to optimization goals, there are fewer options associated with yield goals. For example, there is no equality yield goal and no weighting associated with a yield goal. The Meas > Goal type indicates that the measured value must be greater than the goal for passing performance.
You can add as many yield goals to the project as desired. During a yield analysis, if any of the yield goals indicate a violation, that trial is considered to have failed.
You do not need to add goals if you do not want to get a yield value (number of trials passed). With no goals, you see the variation of the response on graphs when the analysis runs.
In general, yield analysis runs many simulations of the circuit and collects statistical data. For each trial, all parameters enabled for statistics are set to some value other than the nominal value (details are covered in each analysis type that follows). A simulation is then performed and the yield goals are computed to determine if the trial failed or passed.
The AWR Design Environment platform has several types of yield analysis:
Yield Analysis: variables are randomly varied.
Yield Optimization: variables are randomly varied and nominal values are adjusted to optimize the yield percentage.
Corners Analysis: all combinations of variables are at their extreme values.
User Defined Corners: variables are run at user-specified values.
Parallel Analysis: simulations are run locally in parallel, or on machines in a remote simulation queue (sequentially or in parallel). See “Performing Parallel Analysis” for details.
The number of iterations and the percentage of circuits that are passing yield displays in the Yield Analysis dialog box during the analysis. The percentage passing also includes a "+/-" error component displayed after the number. You can also plot these numbers on graphs using the Yield and YldError measurements. If no yield goals are defined, the yield percentage is always 100% and the error is 0%. The following equation shows how the error is calculated.
N = C_{σ}^{2} Y(1 - Y) / ε^{2}
where N is the number of trials, Y is the estimated yield, ε is the percent error, and C_{σ} is the confidence level expressed as a number of standard deviations. In all calculations, C_{σ} is set to 2, which corresponds to a 95.4% confidence level and means that the error estimate is within a 2 standard deviation bound. You cannot change this parameter.
Solving the previous equation for ε and using 2 for C_{σ}, you get the following equation:
ε = 2 sqrt(Y(1 - Y) / N)
This equation shows exactly how the yield error is calculated at each yield iteration.
The following table solves the first equation for various values of yield and error to help you estimate how many yield iterations must be run to achieve a certain yield error value.
Yield (%) | 10% Error | 5% Error | 2% Error | 1% Error |
---|---|---|---|---|
10 | 36 | 144 | 900 | 3600 |
20 | 64 | 256 | 1600 | 6400 |
30 | 84 | 336 | 2100 | 8400 |
40 | 96 | 384 | 2400 | 9600 |
50 | 100 | 400 | 2500 | 10000 |
60 | 96 | 384 | 2400 | 9600 |
70 | 84 | 336 | 2100 | 8400 |
80 | 64 | 256 | 1600 | 6400 |
90 | 36 | 144 | 900 | 3600 |
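The table values follow from the error relation described above, with C_{σ} fixed at 2; a quick numerical check in Python:

```python
import math

def trials_needed(y, err, c_sigma=2.0):
    # N = C_sigma^2 * Y * (1 - Y) / err^2  (Y and err as fractions)
    return c_sigma ** 2 * y * (1.0 - y) / err ** 2

def yield_error(y, n_trials):
    # err = 2 * sqrt(Y * (1 - Y) / N), with C_sigma fixed at 2
    return 2.0 * math.sqrt(y * (1.0 - y) / n_trials)
```

For example, a 50% yield estimated to within 10% error requires 100 trials, matching the center of the table; note the symmetry about 50% yield, which is why the 10% and 90% rows are identical.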
Yield Analysis is typically called "Monte Carlo analysis" of the circuit. In this mode, each parameter set to have a statistical distribution is assigned a random value drawn from that distribution at each iteration.
To perform yield analysis, choose Simulate > Yield Analysis to display the Yield Analysis dialog box. In Yield Methods, select Yield Analysis as shown in the following figure.
The Maximum Iterations value determines how many iterations are run if you do not manually stop the analysis.
A peak of the component sensitivity histogram distribution that is not centered about the nominal value indicates that the yield can be improved by changing the nominal value. The yield optimization capability automatically adjusts selected nominal parameter values to improve the yield.
To perform yield optimization, choose Simulate > Yield Analysis to display the Yield Analysis dialog box. In Yield Methods, select Yield Optimization as shown in the following figure.
To run yield optimization, you must have at least one model parameter or variable set to use statistics, and its Yield Optimize column must be selected. Yield optimization uses the Ysens data (the component sensitivity histogram) for each variable to center the distribution. Therefore, you cannot run statistics on one set of parameters or variables and adjust a different set of parameters or variables to optimize the yield.
Yield optimization requires a Monte Carlo analysis for each iteration of the yield optimization process. The number of yield iterations needed for each optimization iteration depends on the desired error in the yield estimate, and on the yield value. See “Performing Yield Analysis” for details on the relationships between these values, and a table to help estimate the number of trials needed to achieve a specified error. The error used during the yield optimization is specified by Maximum error, entered as a number, not a percent. The number of trials needed for this error is computed automatically as the yield analysis continues, using the current computed yield value. Each iteration of the yield optimization finishes when there are enough trials to drop below the given error level; a lower error number requires more yield iterations for each optimization iteration. The number of yield optimization iterations is limited by Maximum Iterations, and the optimization stops earlier if no yield improvements are found. Typically only a few optimization iterations are required.
The yield analysis also allows you to specify a Dampening factor that determines the size of the corrections used for each iteration of the yield optimization algorithm. If the dampening factor is close to 1, large corrections are made (which may overshoot the optimum yield values). If the dampening factor is small, the yield optimization makes small corrections in the nominal parameter values as the yield is optimized. Usually the default dampening is sufficient. If the yield tends to "bounce around" during yield optimization, try a smaller Dampening value. A larger Dampening value may help speed the search for an optimum solution.
In Corners Analysis mode the parameter values that are set to have statistical distributions are set to either their maximum or minimum values at each iteration.
To perform Corners Analysis, choose Simulate > Yield Analysis to display the Yield Analysis dialog box, then choose Corners Analysis in Yield Methods as shown in the following figure.
When performing Corners Analysis, the maximum and minimum values for each parameter must be calculated. For uniform distributions, the maximum value is the nominal value plus the variation, and the minimum value is the nominal value minus the variation. For normal distributions, there are no absolute minimum and maximum values because a normal distribution is a continuous curve; you must therefore specify how many standard deviations to use when calculating the minimum and maximum values, using the Scale Relative to Sigma value. The extremes are the nominal value plus or minus this number of standard deviations. Corners Analysis is not supported for distribution types other than uniform and normal.
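The two corner calculations above can be sketched as follows. This is an illustrative helper, not part of the software; the function name and the default Scale Relative to Sigma value of 3 are assumptions.

```python
def corner_values(nominal, variation=None, sigma=None, scale_rel_sigma=3.0):
    """Return the (min, max) corner values for one statistical parameter.
    Uniform distribution: nominal +/- variation.
    Normal distribution:  nominal +/- scale_rel_sigma * sigma."""
    if variation is not None:        # uniform distribution
        delta = variation
    elif sigma is not None:          # normal distribution
        delta = scale_rel_sigma * sigma
    else:
        raise ValueError("specify variation (uniform) or sigma (normal)")
    return nominal - delta, nominal + delta

print(corner_values(100.0, variation=5.0))   # (95.0, 105.0)
print(corner_values(100.0, sigma=2.0))       # (94.0, 106.0) with 3-sigma scaling
```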
The number of trials required to cover all of the corner combinations is 2^N, where N is the number of varying parameters or variables. You are only guaranteed an accurate range of performance if you run the full 2^N trials, so the Maximum Iterations value is determined for you. For large values of N, you should consider using yield analysis instead to get a better sampling of performance. This analysis stops when it has reached 2^N iterations.
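The full corners set is the Cartesian product of each parameter's extremes, which is where the 2^N count comes from. A sketch (parameter names and values are illustrative):

```python
from itertools import product

def corner_trials(extremes):
    """Enumerate all 2**N combinations of (min, max) corner values.
    `extremes` maps parameter name -> (min, max)."""
    names = list(extremes)
    return [dict(zip(names, combo))
            for combo in product(*(extremes[n] for n in names))]

trials = corner_trials({"R1": (95.0, 105.0), "C1": (0.9e-12, 1.1e-12)})
print(len(trials))   # 2**2 == 4
```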
In User Defined corners mode (also called "design of experiments"), you first define all of the values to use for each parameter or equation in a text file.
To perform User Defined analysis, display the Yield Analysis dialog box, then choose the User Defined method in Yield Methods, as shown in the following figure. Maximum Iterations is not editable because the text file you specify in Data file name determines the number of parameter sets to run.
The text file that defines the parameters used in User Defined corners is stored in the AWR Design Environment platform project. You can create a new data file by right-clicking Data Files in the Project Browser and specifying Text Data Files as the type. The data file has the following syntax:
!Optional comments
!First, you must specify all the names of the variables to be used. These define the columns in the data section
V1 = "Schematic_A\R1\R"
V2 = "Schematic_A\R2\R"
!Optional statement that allows the values to be relative if set to greater than zero
RELATIVE_VALUE=0
!Each row below is a sample for the Corners Analysis. Each column corresponds to a variable defined previously
1 1
3 4
4 5
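If you generate corner sets elsewhere (for example, from a designed experiment), a small script can write them in this format. This is a hedged sketch; the file name and helper name are illustrative, and only the syntax rules described here are assumed.

```python
def write_corners_file(path, identifiers, rows, relative=False):
    """Write a user-defined corners text data file.
    identifiers: list of Document\\Element\\Parameter strings (one per column).
    rows: list of value tuples, one tuple per trial."""
    lines = ["!Generated user-defined corners file"]
    for i, ident in enumerate(identifiers, start=1):
        lines.append(f'V{i} = "{ident}"')     # V uppercase, n sequential from 1
    lines.append(f"RELATIVE_VALUE={1 if relative else 0}")
    for row in rows:                           # one sample per line
        lines.append(" ".join(str(v) for v in row))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_corners_file("corners.txt",
                   [r"Schematic_A\R1\R", r"Schematic_A\R2\R"],
                   [(1, 1), (3, 4), (4, 5)])
```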
The variable name (on the left side of =) must be Vn, where n is sequential from 1..N and represents the column in the data part of the table. For Vn, V must be uppercase. Sample values are interpreted as relative to the nominal value of the variable if RELATIVE_VALUE is defined and set to a value greater than zero (otherwise, the sample values are the actual component values). In general, the syntax is "Document Name"\"Element ID"\"Parameter Name". For equations, the "Document Name" and "Element ID" are the same, and are the name of the variable (left half of the equation). The "@" symbol also indicates an equation when in a schematic or system diagram. The following shows some example variable name formats:
!Parameter R on Resistor R1 in a schematic named Schematic_A
V1 = "Schematic_A\R1\R"
!Variable X on a schematic named Schematic_A
V2 ="@Schematic_A\\X"
!Variable Y in Global Definitions
V3 ="Global Definitions\\Y"
Parameter values are entered in base units (for example, Farads, Meters, or Henries) so that the values used in simulation do not change if you change any of the project units.
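Because data file entries are in base units, values shown in project display units must be scaled before they are written. A sketch of that conversion (the scale-factor table is standard SI plus 1 mil = 25.4e-6 m; the function name is illustrative):

```python
# Scale factors from common display units to base units (Farads, Henries, meters)
TO_BASE = {"pF": 1e-12, "nF": 1e-9, "nH": 1e-9, "uH": 1e-6,
           "um": 1e-6, "mm": 1e-3, "mil": 25.4e-6}

def to_base_units(value, unit):
    """Convert a displayed value to base units for a data file entry."""
    return value * TO_BASE[unit]

print(to_base_units(2.2, "pF"))    # capacitance entered as 2.2e-12 F
print(to_base_units(10.0, "mil"))  # length entered as 0.000254 m
```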
When setting up this analysis, every parameter or variable enabled for yield analysis must have an entry in the control file. If not, the analysis does not run and a warning that the values for a yield variable were not configured is issued, as shown in the following example.
In this example, the T parameter on the SUB1 model in Schematic 1 does not have an entry in the control file. Additionally, the W parameter on the TL1 model in Schematic 1 does not have an entry in the control file.
The measurements that display on the graphs plot traces for each statistical trial simulated during a Monte Carlo analysis. The following graph shows a plot of the input reflection coefficient for a simple circuit.
There are several options for displaying the performance variations of measurements plotted during a yield analysis. You can set these options on the Graph Options dialog box Yield Data tab. See “Graph Options Dialog Box: Yield Data Tab” for details. The options control how the traces for each trial display on the graph. You can display the traces for passed trials, failed trials, or all trials. You can also display the mean value of the traces. Other available options include the display of the range of values as a standard deviation, two standard deviations, or as the lowest and highest values computed so far. These ranges can display as a trace of min/max values, as range bars that are centered about the trace, or as a filled envelope that shows the computed range.
The colors used to display each yield iteration are controlled on the Graph Options dialog box Format tab in the Trace Color Styles section. Yield sets the style for each individual yield run and Yield Range sets the style for any traces that aggregate the yield data (for example, range, median, or standard deviation). See “Graph Options Dialog Box: Format Tab” for setting details.
Yield measurements are available for the aggregate yield data (for example, range, median, and standard deviation). These measurements provide more flexibility for plotting this data than options such as trace color, thickness, and API access. The following is a list of these measurements:
To clear the traces that accumulate on a graph as a result of a yield analysis, you can click the button in the Yield Analysis dialog box or choose the equivalent menu command. When running yield analysis, the yield data is available for the current AWR Design Environment software instance. If you want the yield data to be available after the project is closed and reopened, you need to use yield data sets. Selecting the Create data set for yield analysis option saves the yield data for later use. See “Yield Data Sets” for details on using yield data sets.
Each parameter with a statistical variation can take on a range of values. This range of values is divided into a set of discrete ranges called bins. The bins are used to keep track of the number of passes and fails as a function of the parameter value. As the yield analysis proceeds, the statistical parameter space is sampled at random, and the value of each parameter falls into one of these bins for each trial. A parameter value that falls within the range of one of the bins for a single passing or failing trial adds to the number of passing or failing trials for that bin. The percentage of trials that pass is computed from the number of passing trials and the number of failing trials.
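The binning logic just described can be sketched as follows. The bin count and edge handling here are illustrative; the software's internal binning may differ.

```python
def bin_yield(samples, lo, hi, n_bins=10):
    """Tally passing/failing trials into parameter-value bins and return
    the per-bin pass percentage (None for bins with no trials).
    samples: list of (parameter_value, passed) tuples."""
    width = (hi - lo) / n_bins
    passes = [0] * n_bins
    totals = [0] * n_bins
    for value, passed in samples:
        i = min(int((value - lo) / width), n_bins - 1)  # clamp the top edge
        totals[i] += 1
        if passed:
            passes[i] += 1
    return [100.0 * p / t if t else None for p, t in zip(passes, totals)]

pct = bin_yield([(95.2, True), (95.8, False), (104.9, True)], 95.0, 105.0, n_bins=2)
print(pct)   # [50.0, 100.0]
```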
You can plot the binned results for each statistical parameter as a histogram. The histogram displays the percentage yield as a function of the parameter value. Each bar in the histogram represents one bin. The numbers on the top of the bars display the total number of trials that fell into that bin. The height of each bar represents the percentage of values that passed the yield. To create a component sensitivity histogram, you can create a histogram graph and then add a YSens measurement to the graph (select Yield as the Measurement Type in the Add/Modify Measurement dialog box). An example histogram is shown in the following figure.
You can use the component sensitivity histogram to determine how sensitive the yield is to the parameter variations. The histogram can also indicate if the yield can be improved by changing the nominal value of the parameter. If the peak of the component sensitivity histogram distribution is centered around the nominal value of the component, then the parameter is said to be "centered". A peak of the distribution that is not centered about the nominal value indicates that the yield could be improved by changing the nominal value. If the distribution of the component sensitivity histogram is flat, then the yield is not sensitive to the value of the component. A narrow distribution indicates that the yield is sensitive to the component value.
The previous distribution indicates a variable that is perfectly centered. The following histogram shows an uncentered variable.
The following histogram shows a parameter to which yield is not sensitive.
Often it is too time consuming to view the YSens measurement for each variable in a design. Several measurements can help analyze the component sensitivity and provide a rank-ordered list of the variables that contribute the most to yield degradation. See the “Ranked Yield Improvement Estimate (by Component Variation Elimination): YRank” and “Ranked Yield Improvement Estimate (by Component Variation Elimination and Centering): YRankCenter” measurements for details on setting up these measurements.
Additionally, a Pareto measurement is available that can determine which variables most strongly influence a specified measurement. The results are rank-ordered from largest to smallest, making it easy to identify which variables have the biggest influence on a given measurement. See the “Pareto Measurement for Yield: YPareto” measurement for details.
Statistical variation produces ranges of responses for simulation results. As previously discussed, you can see all of the yield results on current measurements for your project. You can also plot specific statistics from those results (for example, range and median). You can plot the binned simulation results as well as a histogram. The histogram displays the percentage of trials in a bin as a function of the measurement y-axis values. Each bar in the histogram represents one bin, and the numbers on top of the bars display the total number of trials that fell into that bin. The height of each bar represents the percentage of the values that fell into that bin. To create these performance histograms, create a histogram graph and then add a Ymeas measurement to the graph (select Yield as the Measurement Type in the Add/Modify Measurement dialog box).
The following graph shows S(1,1) of a filter after running 500 yield iterations.
At approximately 370 MHz, there is a null where it is unclear how many of the trials are at specific values; it just looks like a solid area. The area gives you an idea of what is possible. The performance histogram helps you understand how many trials are at binned values. The following graph shows the performance histogram for the null at this frequency.
See the “Performance Histogram: YMeas” measurement to better understand the results from these measurements.
By default, the performance histogram "bins up" the data at all x-axis points. However, on the measurement, you can specify that it only bin the data between a range of x-axis values. In the previous example, the data was only collected for 370 MHz.
In the setup represented here, the frequencies of simulation are done every 10 MHz. The setting of 368.0e6 (368 MHz) is the lower frequency in Hz and the setting of 370.0e6 (370 MHz) is the upper frequency in Hz. Only the 370 MHz frequency falls into this range, so only data for that frequency is used for the performance histogram.
When running yield iterations, you can use the following steps to find the exact parameter and equation values that produce a given yield trace.
Run a yield analysis to look for interesting results.
Click and hold on a yield trace for which you want to find the parameter values.
View the Status bar in the lower left of the AWR Design Environment platform interface for a list of yield iterations that produced the chosen result.
On a tabular graph, add the YSample measurement (the only measurement parameter is the yield index read from the Status bar in the previous step).
Simulate again to view the values used to produce the chosen yield result on the tabular graph.
The following figure is an example of this data.
In general, the syntax is "Document Name"\"Element ID"\"Parameter Name". For equations, the "Document Name" and "Element ID" are the same and are the name of the variable (left half of the equation). The "@" symbol also indicates an equation when in a schematic or system diagram.
You can store the results from yield analysis in a data set, and then use the data set to view the yield data at a later time. See “Yield Data Sets” for more information on using data sets.
All data generated in yield analysis can also be written to a file in XML format. This data includes the measurement data as well as the parameter/variable values used. XML format is used since it can be easily transformed into other formatting if you need to use this data in other programs.
The file generated has three general sections and there are comments above each to help you understand what the file contains.
Measurement data for each graph with the x and y data separated.
Identifiers for the variables or parameters used in the yield analysis. The syntax is the same as that used in the User Defined corners. See “User Defined Corners (Design of Experiments)” for more information.
The values used for each variable or parameter in the yield analysis. There is one vector of values for each component identified in the previous section.
The following is a simple two resistor example with one simple measurement to show an example of this file.
<YieldData>
  <!-- The 'trial' attribute indicates the yield trial index -->
  <Measurements>
    <measure name="Schematic 1:R_SRC(1)">
      <data trial="0">
        <x_data>1e+009 2e+009</x_data>
        <y_data>310.36 310.36</y_data>
      </data>
      <data trial="1">
        <x_data>1e+009 2e+009</x_data>
        <y_data>301.567 301.567</y_data>
      </data>
      <data trial="2">
        <x_data>1e+009 2e+009</x_data>
        <y_data>298.4 298.4</y_data>
      </data>
      <data trial="3">
        <x_data>1e+009 2e+009</x_data>
        <y_data>316.759 316.759</y_data>
      </data>
    </measure>
  </Measurements>
  <Samples>
    <SampleCompNames numb_comp="3">
      <!-- The index is just the index into the vector of component values (matches the order in SampleValues) -->
      <Component index="0">Schematic 1\IN1\R</Component>
      <Component index="1">Schematic 1\IN2\R</Component>
      <Component index="2">Schematic 1\IN3\R</Component>
    </SampleCompNames>
    <SampleCompValues numb_comp="3">
      <!-- The Sample Values are ordered the same as the SampleCompNames above -->
      <!-- The pass property is used to indicate if the particular sample passed or failed yield -->
      <SampleValues trial="0" pass="1">97.9955 107.939 104.426</SampleValues>
      <SampleValues trial="1" pass="1">100.476 108.618 92.4725</SampleValues>
      <SampleValues trial="2" pass="1">98.1768 99.8825 100.341</SampleValues>
      <SampleValues trial="3" pass="1">115.092 97.0817 104.585</SampleValues>
    </SampleCompValues>
  </Samples>
</YieldData>
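Because the export is plain XML, it is straightforward to post-process in other programs. The following is a hedged sketch using Python's standard library; the element and attribute names are taken from the example above, and the embedded sample string stands in for a real exported file.

```python
import xml.etree.ElementTree as ET

def read_yield_samples(xml_text):
    """Extract component names and per-trial sample vectors from the
    exported yield-analysis XML."""
    root = ET.fromstring(xml_text)
    names = {int(c.get("index")): c.text for c in root.iter("Component")}
    trials = [{"trial": int(s.get("trial")),
               "passed": s.get("pass") == "1",
               "values": [float(v) for v in s.text.split()]}
              for s in root.iter("SampleValues")]
    return names, trials

# Minimal stand-in for a real exported file
sample = """<YieldData><Samples>
  <SampleCompNames numb_comp="1">
    <Component index="0">Schematic 1\\IN1\\R</Component>
  </SampleCompNames>
  <SampleCompValues numb_comp="1">
    <SampleValues trial="0" pass="1">97.9955</SampleValues>
  </SampleCompValues>
</Samples></YieldData>"""
names, trials = read_yield_samples(sample)
print(names[0], trials[0]["values"])
```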
When running yield analysis, your computer needs to keep various information in memory. If your computer runs out of memory during these operations, you need to change settings: display the Project Options dialog box, click the Yield Options tab, and adjust your options according to the descriptions of what yield information is lost if you turn off an option.

When running yield analysis, parameter values can produce simulation errors. For example, the width of a line might exceed the allowed limits. At the default settings, yield analysis stops when it first encounters a simulation error. You can try the following settings to determine the cause of simulation errors:
In the Yield Analysis dialog box, select the Stop on simulation error check box and clear the Simulate nominal when finished check box. With these settings, when a simulation error occurs, the errors in the Status Window provide clues as to which model is problematic.

In the Yield Analysis dialog box, clear the Stop on simulation error check box, and add the YPassFail measurement to a tabular graph (see “Iteration Status: YPassFail” for measurement details). With this setup, when you run yield analysis you can identify which trials produced a simulation error by a value of -1 for the YPassFail measurement. You can then use the YSample measurement (see “Vector of Sample Values Used in the Yield Analysis: YSample” for measurement details) to see the model parameters that produced the simulation error.
The AWR Design Environment platform has very powerful hierarchical design capabilities. This section includes hierarchical design topics.
Subcircuits are easily added to an existing schematic or system diagram. See “Adding Subcircuits to a Schematic or System Diagram ” for details. Cadence recommends adopting a "test bench" design approach where one schematic contains your design components (for example, transistors, microstrip lines, and capacitors) and then a higher level "test bench" is created to use the design as a subcircuit. All of the sources, sweeps, etc. are set up at the top level. This approach makes it very easy to set up new test benches for different types of simulation or to share a design with a co-worker.
Connections through hierarchy are determined by the PORT element or the PORT_NAME element. See “Adding and Editing Ports” for details on these elements. Ports can contain impedance information as well as signal information (for example, powers and signal types). This information is ONLY used when measurements are made on that schematic. If this schematic is used as a subcircuit, then the impedance and signal information are not used and a port is only used for determining connectivity.
Parameters from within a subcircuit can be made available to the parent schematic so values can be passed into the subcircuit, using passed parameters. See “Using Parameterized Subcircuits” for details.
Parameters can also be pushed from a top level to all levels of hierarchy. See “Using Inherited Parameters” for details.
Simulation filters and Switch Views are two advanced means of controlling what is simulated. Note that these approaches apply to all simulations including EM and transient simulations.
Simulation filters are an alternate way to configure what is simulated during a project simulation. By default, the AWR Design Environment platform is a measurement-driven environment: the simulators needed for the measurements on graphs are the simulators that are used. For large projects this can be difficult to track, and can result in lengthy simulations. You can use simulation filters to control what is simulated by specifying filters for simulator type, document name, and other criteria.
You can access the simulation filters by right-clicking the Simulation Filters node in the Project Browser, or by choosing the equivalent menu command. In the Simulation Filters dialog box, click the button to add a new filter. See “Simulation Filters Dialog Boxes” for more information. You can add many different filters, select All Documents or All Simulators, or multi-select filters. Any selected filter is applied to the overall simulation filter.
Switch Views are an alternate way to view the documents in your project. The view names are global to all documents in the project. A common Switch View use is to have an LVS (layout versus schematic) view. For each document in your system that requires a different representation for LVS, you can create an LVS Switch View. See “LVS (Layout vs Schematic) ” for LVS details. Another example of Switch View use is when you want to have different schematics for linear and nonlinear simulation.
Switch Lists control the views used during a simulation, or during LVS netlisting. The simulation Configuration you select in the Add/Modify Measurement dialog box controls which Switch List is used. You can create any number of simulation configurations.
A Switch List can contain one or more view names that are ordered in the list by priority. When Switch Lists are used, Switch Views with higher priority (those at the top) are used in preference to views with lower priority. This matching based on priority is designed to allow different documents in the system to have different Switch View names, so you can use a single global Switch List to choose which Switch Views are used for individual documents. If a document has more than one Switch View that matches one of the views in the Switch List, the view with the higher priority is used.
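The priority matching just described amounts to a first-match lookup down the Switch List. A sketch (the function and data shapes are illustrative, not part of the product API):

```python
def resolve_view(switch_list, document_views):
    """Pick the Switch View to use for one document: the first name in
    the priority-ordered Switch List that the document provides.
    Returns None to mean 'use the default view'."""
    for view in switch_list:          # highest priority first
        if view in document_views:
            return view
    return None

print(resolve_view(["EM", "LVS"], {"EM", "LVS"}))  # EM wins: higher priority
print(resolve_view(["EM", "LVS"], {"LVS"}))        # falls through to LVS
print(resolve_view(["EM", "LVS"], set()))          # None: default view used
```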
NOTE: Switch Views are not necessary for an LVS representation of distributed elements (which are shorts in LVS netlists). The distributed elements do this automatically.
When using Switch Views and Switch Lists, you may need to change the way you build schematics. Each design component that uses a Switch View must be a subcircuit rather than a direct component in a schematic. The following is an example of building a lumped element filter.
Without Switch Views and Switch Lists, you can build a schematic as shown. If you want to use Switch Views, however, you have three different options for modeling the inductor:
Distributed model for a microstrip inductor
Lumped element inductor
EM simulation
In the filter schematic, the inductor component must be a subcircuit as shown in the following figure.
The name of each Switch View is important to consider early in your design. You should establish a naming convention that makes sense for the entire design. Typically, a Switch View is named for its purpose, such as LVS or EM. In this example, the Switch Views named "LVS" are used for the proper LVS representation of each component, and the Switch Views named "EM" are used when there is an EM simulation available for the component.
Switch Views and EM extraction are not compatible. See “Extraction and Switch Views” for more information.
Any schematic, data file, or EM structure can have a Switch View. To create a Switch View, you create a new document (schematic, data file, or EM structure) or rename an existing document using the naming convention "default_name/switch_view_name", where "default_name" is the name of the existing document for which you are creating the Switch View. Switch View documents must have the same number of ports as the "default_name" schematic.
In the previous filter example you create Switch Views for the inductor in the circuit. The schematic setup for Switch Views is shown. The model in the inductor1 schematic is shown in the following figure.
You need to add a Switch View for the lumped element inductor used for the LVS netlist. Here you add a new schematic named "inductor1/LVS".
When you add this schematic, it displays differently in the Project Browser, showing you that it is a Switch View of the "inductor1" schematic.
Inside this schematic is a lumped element inductor with two ports, as shown in the following figure.
The same steps are repeated for the EM Switch View with the schematic named "inductor1/EM". The Project Browser now displays two Switch Views for the "inductor1" schematic.
The EM Switch View should look like the following figure.
When you push into a subcircuit using the Edit Subcircuit command, if there are switch views of the referenced document an “Edit Subcircuit Dialog Box ” displays to allow you to choose which document to push into.
A Switch List is the mechanism through which you pick a different Switch View for each measurement. A Switch List tells the simulator which model to substitute for the default model when performing a simulation.
To create a Switch List:
Right-click the Switch Lists node in the Project Browser and choose Manage Switch Lists, or choose Simulate > Manage Switch Lists to display the Switch Lists dialog box.
Under Switch Lists, click the button. In the New Switch List dialog box, enter a name for the new switch list, then click the button to confirm. The name you enter is the name chosen when adding measurements to the project.
Select the new switch list name under Switch Lists, and then under Design Views click the button.
In the Edit Switch List dialog box, ensure that Select top level design for the Switch List specifies a schematic that is using a Switch View, as this populates the Available list. You can also click the button to add additional Switch View names.
From the Available list, select any view name and click the button to add the view to the current View List. Note that you can add several Switch View names to a view list. When you do, the first Switch View found in this list is used in simulation. Use the and buttons to re-order the Switch List.
Click the button to add the View List to your project.

Repeat these steps to create Switch Lists for the other Switch Views in your project.
When you add or edit a measurement in the Add/Modify Measurement dialog box, in Configuration you select the Switch List name you want for the measurement. The options available are Default or any Switch List names you configured.
For example, to create a Switch List to use the LVS Switch Views:
Create a Switch List named "For LVS".
Under Design Views, click the button to display the Edit Switch List dialog box. Select Filter from Select top level design for the Switch List since this schematic uses a subcircuit that has Switch Views.
Click LVS in the Available list, then click the button to add it to the View List, and confirm your changes.
When complete, the new Switch List displays under Design Views in the Switch Lists dialog box and you can use it in simulation. You can select this Switch List and edit or delete it using the buttons at the bottom of the dialog box.
For example, to look at the magnitude of S21 of the filter using the lumped element inductor Switch View, select For LVS as the Configuration when adding the measurement from the Add/Modify Measurement dialog box.
In this example, a second Switch List is set up to use any EM models. The following graph shows results with the "For LVS" and "Electromagnetic" Switch Lists.
The Scripted APLAC simulator is created by placing a SCRIPT block in the schematic and setting its SIM parameter to “Yes” (the default). If a schematic contains one or more SCRIPT blocks with the SIM parameter set to “Yes”, a Scripted APLAC simulation is created with that schematic as a data source. Subcircuit hierarchy is maintained in the netlist; subcircuits are netlisted out as APLAC subcircuits (DefModels), and there may be several instances of the same subcircuit with different parameter values.
You need to write a Prepare statement if one is needed; a Prepare statement is always required when running harmonic balance or noise simulations, and only one Prepare statement is allowed per netlist. You also need to define the analysis setup and the plotting commands. Analysis type and frequency or other parameters, as well as any loops, are typically defined using the Sweep statement. The APLAC simulator plot command is Show, which must be used within the Sweep … EndSweep block. When plotting nodal voltages, you need to refer to the node names. You can use a named connector (NCONN) in the schematic to specify a unique node name that can be referenced in the APLAC simulator scripts. Any curves requested by Show are automatically exported into the AWR Design Environment platform, and the graphical result windows are added to the project.
You can add SCRIPT blocks in the schematic to utilize APLAC simulator scripting capabilities. Here, APLAC simulator scripting refers to all APLAC input language statements such as defining variables, performing numerical computations, defining simulation and optimization tasks, displaying simulation results graphically, file I/O, and others. The SCRIPT block is in the Simulation Control category of the Elements Browser.
The APLAC netlist written by the AWR Design Environment software has the following structure:
Prepare statement (if present, Prepare must always be the first item in the netlist)
“Before circuit description” SCRIPT blocks
Circuit description (all the elements in the schematic except SCRIPT blocks)
Sweep statement(s) defining the analysis and the measurements
“After circuit description” SCRIPT blocks
The SCRIPT block POSITION parameter defines whether the SCRIPT block belongs in the “Before circuit description” or the “After circuit description” group. The ORDER parameter defines the netlisting order of the SCRIPT blocks within each group, such that the SCRIPT block with the greatest order is netlisted last. The SIM parameter defines whether the SCRIPT block is allowed to spawn a Scripted APLAC simulation or not.
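The POSITION/ORDER rules above can be illustrated with a small sketch. The tuple representation of SCRIPT blocks is invented for illustration only; the actual netlister is internal to the software.

```python
def netlist_order(script_blocks):
    """Order SCRIPT blocks the way the netlister does: 'before' blocks,
    then the circuit description and Sweep statements, then 'after'
    blocks. Within each group, sort by ascending ORDER so the block
    with the greatest order is netlisted last.
    Each block is a (name, position, order) tuple."""
    before = sorted((b for b in script_blocks if b[1] == "before"),
                    key=lambda b: b[2])
    after = sorted((b for b in script_blocks if b[1] == "after"),
                   key=lambda b: b[2])
    return ([b[0] for b in before]
            + ["<circuit description>", "<sweep statements>"]
            + [b[0] for b in after])

blocks = [("S2", "after", 2), ("S1", "before", 1), ("S3", "after", 1)]
print(netlist_order(blocks))
# ['S1', '<circuit description>', '<sweep statements>', 'S3', 'S2']
```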