Worst-case distance


In fabrication, yield is one of the most important measures. Engineers therefore try to maximize yield already in the design phase, using simulation techniques and statistical models. Often the data follows the well-known bell-shaped normal distribution, and for such distributions there is a simple, direct relationship between the design margin and the yield: if we express the specification margin in terms of the standard deviation sigma, we can immediately calculate the yield Y for this specification. For example, a single-sided specification with a 3-sigma margin corresponds to Y = Φ(3) ≈ 99.87%. The concept of worst-case distance extends this simple idea so that it can be applied to more complex problems.
The worst-case distance (WCD) is a metric originally applied in electronic design for yield optimization and design centering; nowadays it is also applied as a metric for quantifying the robustness of electronic systems and devices.
For yield optimization in electronic circuit design, the WCD relates the following yield-influencing factors to each other: the performance specification(s) of the design, the statistical variations of its performances, and the range of operating conditions.
Although the strict mathematical formalism may be complex, in a simple interpretation the WCD is the margin to the performance specification divided by the maximum possible performance variation, where the performance variations are evaluated over the space spanned by the operating range.
Note: This interpretation is valid for normally distributed variables and performances. Luckily, the specification margin of a design is almost intuitively related to the yield: a larger safety margin to the limit puts us more on the safe side, and production will contain fewer failing samples. The real advantage of the WCD is that it offers an elegant method to also treat non-normal and multivariate distributions while still providing a pictorial, intuitive understanding.

Simplest non-trivial example

In the simplest non-trivial case there is only one normally distributed performance parameter, with mean μ and standard deviation σ, and one single upper specification limit USL.
The WCD then calculates to:

    WCD = (USL − μ) / σ
In this example it is assumed that only statistical variations contribute to the observed performance spread, and that the performance parameter does not depend on the operating conditions. Once the WCD is found, the yield can be calculated from it using the error function or look-up tables.
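As a minimal sketch (in Python, assuming the single normal performance and single upper limit of this example), the WCD-to-yield conversion is just the standard normal CDF, expressed here via the error function:

    from math import erf, sqrt

    def yield_from_wcd(wcd):
        # Standard normal CDF evaluated at the WCD: Y = Phi(WCD)
        return 0.5 * (1.0 + erf(wcd / sqrt(2.0)))

    print(yield_from_wcd(3.0))  # ~0.99865, i.e. Y ≈ 99.87%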
For the discussion of cases more complex than the above-mentioned example, see Antreich et al., 1993. In design environments the WCD calculation is not done analytically but numerically. Most WCD algorithms start with a short Monte-Carlo analysis and then use optimization techniques to find the point in the statistical variable space that hits the specification border with minimum vector length. For cases with many statistical variables, there is usually a filtering step after the MC run. The more points are spent in the MC run, the better the optimization starting point, and the more reliable the optional filtering step.
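The following Python sketch illustrates this numerical procedure on a hypothetical performance model (the function performance() and its spec limit are made-up stand-ins; a real flow would call a circuit simulator instead):

    import numpy as np
    from scipy.optimize import minimize

    SPEC_LIMIT = 30.0  # hypothetical upper specification limit

    def performance(x):
        # Made-up nonlinear performance model, for illustration only
        return 10.0 * x[0] + 2.0 * x[1] ** 2

    # Short Monte-Carlo run in the normalized statistical space; the sample
    # closest to (or beyond) the spec border serves as starting point.
    rng = np.random.default_rng(0)
    samples = rng.standard_normal((200, 2))
    x0 = samples[np.argmax([performance(s) for s in samples])]

    # Minimize the squared vector length while staying on the spec border;
    # the length of the resulting vector is the WCD.
    res = minimize(lambda x: x @ x, x0, method="SLSQP",
                   constraints={"type": "eq",
                                "fun": lambda x: performance(x) - SPEC_LIMIT})
    print("WCD =", np.linalg.norm(res.x), "at worst-case point", res.x)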

Relation to process capability index

In the above-mentioned one-dimensional example the WCD is closely related to the process capability index Cpk, which is used in process control and from which the process yield can be derived:

    Cpk = (USL − μ) / (3σ), i.e. WCD = 3 · Cpk
Note: The Cpk is also defined for having both a lower and an upper specification limit, but for the WCD we have to treat both specifications separately.
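A small numeric check of this relation, using the same illustrative numbers as the offset-voltage example below (mean 0, sigma 10 mV, upper limit 30 mV):

    mu, sigma, usl = 0.0, 10e-3, 30e-3

    wcd = (usl - mu) / sigma        # margin in units of sigma
    cpk = (usl - mu) / (3 * sigma)  # one-sided capability index
    print(wcd, 3 * cpk)             # both print 3.0, i.e. WCD = 3*Cpk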

Limitations of the WCD concept

If we run a WCD analysis on multiple specifications, we obtain at least as many WCDs as specifications, and usually the worst case dominates the yield. However, the assumption that the lowest WCD accurately represents the total yield is violated in several difficult cases, e.g. with nonlinear specifications or with many highly competing specifications.
Examples:
For a specification like offset voltage < 30 mV, a normal distribution with mean = 0 and sigma = 10 mV gives a WCD of 3, which is equivalent to Y = 99.87%.
However, for a specification like |Voffset| < 30 mV we would again get WCD = 3, but the yield loss is now twice as high, because the fail region is now split into two parts.
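The effect is easy to verify numerically; this sketch compares the one-sided and the two-sided specification for the numbers above:

    from math import erf, sqrt

    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    y_one_sided = phi(3.0)              # Voffset  <  30 mV: Y ≈ 99.87%
    y_two_sided = phi(3.0) - phi(-3.0)  # |Voffset| <  30 mV: Y ≈ 99.73%
    print(1 - y_one_sided, 1 - y_two_sided)  # fail rate doubles, same WCD = 3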
As real-world designs can be very complex and highly nonlinear, there are also examples where the WCD can be far more inaccurate, e.g. for an ADC or DAC with specifications on differential nonlinearity. A WCD analysis is also very difficult for CMOS timing analysis.
On the other hand, although the WCD might be wrong compared to the true yield, it can still be a very useful optimization criterion for improving a design. The WCD concept also really defines which set of statistical parameter values to choose as the worst case, which makes it a perfect measure to start an optimization from.
However, a very important limitation lies in just finding the WCD point, i.e. the set of statistical variable values that hits the specification region: even small real-world problems can have thousands of such variables. This makes a slow brute-force search impractical, and very robust optimizers are needed to find the WCDs.
Of course, even the concept of the WCD itself is questionable to some degree; for example, it does not cover what happens beyond the WCD. Surely a design is better if it does not break completely for outliers but remains at least functional. So the WCD is a helpful piece in the whole design flow, but it does not replace understanding.
In contrast, random Monte-Carlo is a concept that comes with far fewer restricting prerequisites. It works for any mix of any kind of variables, even with an infinite number of them, or even with a random number of random variables. Advanced methods typically need to exploit extra assumptions to be faster; there is no free lunch. This is the reason why e.g. the WCD can sometimes offer a huge speed-up, but sometimes fails hopelessly.
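For comparison, here is a minimal random Monte-Carlo yield estimator for the same hypothetical performance model as in the WCD sketch above; note that it needs nothing beyond the ability to sample the variables and evaluate the performance:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    x = rng.standard_normal((n, 2))
    fails = np.count_nonzero(10.0 * x[:, 0] + 2.0 * x[:, 1] ** 2 > 30.0)
    print("estimated yield:", 1.0 - fails / n)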

Alternative concepts

The WCD allows us to simplify yield problems, but it is not the only way to do this. A simpler approach is not to find the margin in terms of sigma in the space of the statistical variables, but to evaluate the performance margin itself. This worst-case performance margin (WPM) is much easier to obtain, but the problem here is usually that, although the statistical variables might follow a normal Gaussian distribution, the performances often do not; usually they follow an unknown, more difficult distribution. For this reason, the performance margin in terms of sigma is at best a relative criterion for yield optimization. This often leads to pure Monte-Carlo methods for solving the WPM problem, whereas the WCD allows a more elegant mathematical treatment that is only partially based on Monte-Carlo.
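A sketch of a WPM estimate under these caveats, again using the hypothetical performance model from above: run a short Monte-Carlo and express the margin to the limit in units of the performance standard deviation. Since the performance distribution is generally not normal, the result is only a relative criterion:

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal((2_000, 2))
    perf = 10.0 * x[:, 0] + 2.0 * x[:, 1] ** 2  # hypothetical model

    wpm = (30.0 - perf.mean()) / perf.std()  # margin in performance sigmas
    print("WPM =", wpm)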
Random Monte-Carlo becomes inefficient for high-yield estimation, especially if the distribution type is uncertain. One method to speed up MC is to use non-random sampling schemes like Latin hypercube or low-discrepancy sampling. However, the speed-up is quite limited in real design problems. A promising newer technique is scaled-sigma sampling (SSS). With SSS there is a higher chance of hitting the fail region, and more samples there lead to a more stable statistic, thus tighter confidence intervals. In contrast to importance sampling or the WCD, SSS makes no assumptions on the shape of the fail boundary or the number of fail regions, so it is most efficient in cases with many variables, strong nonlinearity, and many difficult specifications.
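A toy illustration of the SSS idea, using the hypothetical model from above with a tighter limit and a deliberately simplified extrapolation model log P ≈ a + b/s² (the published SSS method uses a more refined model plus confidence estimation):

    import numpy as np

    rng = np.random.default_rng(3)

    def fail_rate(scale, n=200_000):
        # Sample with sigma inflated by `scale`, so fails become frequent.
        x = scale * rng.standard_normal((n, 2))
        fails = np.count_nonzero(10.0 * x[:, 0] + 2.0 * x[:, 1] ** 2 > 60.0)
        return fails / n

    scales = np.array([2.0, 3.0, 4.0])
    log_p = np.log([fail_rate(s) for s in scales])
    b, a = np.polyfit(1.0 / scales**2, log_p, 1)  # fit log P = a + b/s^2
    print("extrapolated fail rate at s = 1:", np.exp(a + b))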