Robustification
Robustification is a form of optimisation whereby a system is made less sensitive to the effects of random variability, or noise, present in that system's input variables and parameters. It is typically associated with engineering systems, but it can also be applied to a political policy, a business strategy or any other system that is subject to random variability.
Clarification on definition
Robustification as defined here is sometimes referred to as parameter design and is often associated with Taguchi methods. Within that context, robustification can also include tolerance design: finding the inputs that contribute most to the random variability of the output and controlling them. The terms design for quality and Design for Six Sigma (DFSS) are sometimes used as synonyms.
Principles
Robustification works by taking advantage of two different principles.
Non-linearities
Consider the relationship between an input variable x and the output Y of a system of interest, where Y is required to take the value 7. Suppose there are two values of x that achieve this, x = 5 and x = 30. If the tolerance for x is independent of its nominal value, then setting x = 30 gives a smaller expected tolerance on Y than setting x = 5: the gradient at x = 30 is lower than at x = 5, so the random variability in x is suppressed as it flows through to Y.
This basic principle underlies all robustification, but in practice there are typically several inputs, and the suitable point with the lowest gradient must be found on a multi-dimensional surface.
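The effect can be demonstrated with a short Monte Carlo sketch in Python. The response curve below is invented for illustration; it is chosen only so that it passes through Y = 7 at both x = 5 and x = 30, with a much shallower gradient at the second point.

```python
import numpy as np

# Invented response curve, chosen so that f(5) = f(30) = 7 and the
# gradient at x = 30 is much shallower than at x = 5.
TAU = 25 / np.log(6)             # makes f(5) equal f(30)
A = 7 / (5 * np.exp(-5 / TAU))   # makes f(5) equal 7

def f(x):
    return A * x * np.exp(-x / TAU)

rng = np.random.default_rng(0)
sigma_x = 0.5  # the same input tolerance at both nominal settings

for nominal in (5.0, 30.0):
    y = f(rng.normal(nominal, sigma_x, size=100_000))
    print(f"x = {nominal:4.1f}: mean Y = {y.mean():.2f}, std Y = {y.std():.3f}")
```

Both settings hit the target on average, but the spread of Y at x = 30 is roughly a third of the spread at x = 5, purely because of the local gradient.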
Non-constant variability
Consider a case where an output Z is the product of two inputs x and y:
Z = x y
For any target value of Z there are infinitely many combinations of nominal values of x and y that are suitable. However, if the standard deviation of x is proportional to its nominal value while the standard deviation of y is constant, then x should be reduced (to limit the random variability flowing from the right-hand side of the equation to the left-hand side) and y increased (with no expected increase in random variability, because its standard deviation is constant) to bring Z to the target value. In this way Z has the desired nominal value while its standard deviation is minimised: it has been robustified.
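A minimal simulation makes this concrete. The noise model below is assumed for illustration: the standard deviation of x is taken as 10% of its nominal value, the standard deviation of y as a constant 0.5, and the target for Z as 100.

```python
import numpy as np

rng = np.random.default_rng(0)

def std_of_Z(x_nom, y_nom, n=200_000):
    # Assumed noise model: std of x proportional to its nominal value,
    # std of y constant regardless of its nominal value.
    x = rng.normal(x_nom, 0.1 * x_nom, size=n)
    y = rng.normal(y_nom, 0.5, size=n)
    return (x * y).std()

# Two nominal pairs with the same product, hence the same nominal Z = 100.
for x_nom, y_nom in ((20.0, 5.0), (5.0, 20.0)):
    print(f"x = {x_nom:4.1f}, y = {y_nom:4.1f}: std Z = {std_of_Z(x_nom, y_nom):.1f}")
```

Shrinking x and growing y keeps Z on target while reducing its standard deviation from about 14 to about 10.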
By taking advantage of the two principles above, one can optimise a system so that the nominal value of its output is kept at the desired level while the likelihood of any deviation from that nominal value is minimised, despite the presence of random variability in the input variables.
Methods
There are three distinct methods of robustification, though a practitioner might use a mix of them to obtain the best balance of results, resources and time.
Experimental
The experimental approach is probably the most widely known. It involves identifying the variables that can be adjusted and the variables that are to be treated as noise. An experiment is then designed to investigate how changes to the nominal values of the adjustable variables can limit the transfer of noise from the noise variables to the output. This approach is attributed to Taguchi and is often associated with Taguchi methods. While many have found it to provide impressive results, the techniques have also been criticised as statistically erroneous and inefficient, and the time and effort required can be significant.
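The sketch below illustrates the crossed-array layout typical of this approach. Everything in it is invented: the process function stands in for physical experimental runs, and the factor levels are arbitrary. Each combination of control settings (the inner array) is exposed to every noise setting (the outer array), and a nominal-the-best signal-to-noise ratio summarises how robustly it performs.

```python
import itertools
import numpy as np

def process(a, b, noise):
    # Stand-in for a physical experimental run; invented for illustration.
    return a * (b + noise) + 0.1 * a**2

control_a = [1.0, 2.0, 3.0]      # inner array: adjustable factor levels
control_b = [4.0, 6.0, 8.0]
noise_levels = [-0.5, 0.0, 0.5]  # outer array: imposed noise levels

for a, b in itertools.product(control_a, control_b):
    runs = np.array([process(a, b, n) for n in noise_levels])
    # Nominal-the-best signal-to-noise ratio: larger means more robust.
    sn = 10 * np.log10(runs.mean() ** 2 / runs.var())
    print(f"a = {a}, b = {b}: mean = {runs.mean():6.2f}, S/N = {sn:5.1f} dB")
```

In practice the settings with the highest signal-to-noise ratio are chosen first, and a remaining adjustment factor is then used to bring the mean output back on target.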
Another experimental method used for robustification is the operating window. It was developed in the United States before the wave of quality methods from Japan reached the West, but still remains unknown to many.[1] In this approach, the noise of the inputs is continually increased as the system is modified to reduce sensitivity to that noise. This both improves robustness and provides a clearer measure of the variability flowing through the system. After optimisation, the random variability of the inputs is controlled and reduced, and the system exhibits improved quality.
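A simplified sketch of the idea, with an entirely invented system model and failure criterion, is shown below: for each candidate design the noise amplitude is increased until the failure rate exceeds a threshold, and the amplitude reached (the width of the operating window) serves as the robustness measure.

```python
import numpy as np

def fails(design, noise_amp, n=20_000, seed=0):
    # Invented system model and failure criterion, for illustration only:
    # the system "fails" when its output drifts more than 1.0 from target.
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-noise_amp, noise_amp, size=n)
    output = design * np.sin(noise) + noise**2 / design
    return np.mean(np.abs(output) > 1.0) > 0.05   # >5% failure rate

def operating_window(design):
    # Widen the noise until the design starts failing too often.
    amp = 0.0
    while amp < 5.0 and not fails(design, amp + 0.05):
        amp += 0.05
    return amp

for design in (0.5, 1.0, 2.0):
    print(f"design = {design}: operating window = {operating_window(design):.2f}")
```

The design with the widest window tolerates the most input noise.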
Analytical
The analytical approach relies initially on the development of an analytical model of the system of interest. The expected variability of the output is then found using a method such as the propagation of error or functions of random variables.[2] These typically produce an algebraic expression that can be analysed for optimisation and robustification. The approach is only as accurate as the model developed, and constructing such a model can be very difficult, if not impossible, for complex systems.
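As a sketch of the algebra involved, the standard first-order propagation-of-error formula can be applied to the Z = x y example from earlier, here via the sympy library; x and y are assumed independent.

```python
import sympy as sp

x, y, sx, sy = sp.symbols("x y sigma_x sigma_y", positive=True)
Z = x * y  # the model from the earlier example

# First-order propagation of error for independent inputs:
# var(Z) ~= (dZ/dx)^2 var(x) + (dZ/dy)^2 var(y)
var_Z = sp.diff(Z, x) ** 2 * sx**2 + sp.diff(Z, y) ** 2 * sy**2
print(sp.sqrt(var_Z))   # the algebraic standard deviation of Z
```

The resulting expression, sqrt(y² σ_x² + x² σ_y²), can then be minimised subject to the nominal value of Z staying on target.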
The analytical approach might also be used in conjunction with a surrogate model based on the results of experiments or numerical simulations of the system.
Numerical
In the numerical approach a model is run a number of times, as part of a Monte Carlo simulation or a numerical propagation of errors, to predict the variability of the outputs. Numerical optimisation methods such as hill climbing or evolutionary algorithms are then used to find the optimum nominal values for the inputs. This approach typically requires less human time and effort than the other two, but it can be very demanding on computational resources during simulation and optimisation.
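A minimal end-to-end sketch, reusing the Z = x y example: here, for variety, both standard deviations are assumed constant (σ_x = 1.0, σ_y = 0.5) so that the optimum falls strictly between the bounds, and the nominal value of y is tied to x to keep Z on a target of 100.

```python
import numpy as np
from scipy.optimize import minimize_scalar

TARGET = 100.0

def simulated_std(x_nom, n=200_000):
    # Monte Carlo estimate of the output spread of one candidate design.
    # A fixed seed makes the noisy objective deterministic ("common
    # random numbers"), which keeps the optimiser stable.
    rng = np.random.default_rng(0)
    y_nom = TARGET / x_nom   # tie y to x so the nominal output stays on target
    x = rng.normal(x_nom, 1.0, size=n)   # assumed sigma_x = 1.0
    y = rng.normal(y_nom, 0.5, size=n)   # assumed sigma_y = 0.5
    return (x * y).std()

# One-dimensional numerical optimisation over the nominal value of x.
result = minimize_scalar(simulated_std, bounds=(2.0, 50.0), method="bounded")
print(f"robust design: x = {result.x:.1f}, y = {TARGET / result.x:.1f}, "
      f"std Z = {result.fun:.1f}")
```

Propagation of error predicts the minimum near x = (4 × 10⁴)^(1/4) ≈ 14.1, which the simulation reproduces; in higher dimensions the same pattern applies with a multivariate optimiser or an evolutionary algorithm.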
Footnotes
- ^ See the Clausing (2004) reference for more details.
- ^ See the 'Probabilistic Design' link in the external links for more information.
References
- Clausing, D. (1994). Total Quality Development: A Step-By-Step Guide to World-Class Concurrent Engineering. American Society of Mechanical Engineers. ISBN 0791800350.
- Clausing, D. (2004). "Operating Window: An Engineering Measure for Robustness". Technometrics 46 (1), pp. 25–31.
- Siddall, J. N. (1982). Optimal Engineering Design. CRC. ISBN 0824716337.