Fundamentals of Optimization
Optimization is the art of getting the greatest value, whether it's finding the best deal on a new pair of shoes or operating a catalytic cracker at its point of maximum efficiency. It can be a relatively simple exercise when the choices are limited and the cost of each is clear, but some optimization problems can involve thousands of variables requiring sophisticated mathematical analysis.

However, certain steps are common to all optimization problems:
Identify the cost to be minimized or the benefit to be maximized;
Determine the manipulatable variables or independent parameters most affecting the cost or benefit; and
Search for the solution or optimal parameters yielding the lowest cost or highest benefit.
Television signal optimization


For example, suppose that the strength of a satellite TV signal drops off exponentially as the receiving dish is rotated increasingly out of line with the satellite. The cost function for this optimization problem is shown in the "Single-variable quadratic cost function" example, assuming for the moment that only the dish elevation is adjustable.
In the unlikely event that the satellite's position is already known, this problem can be solved easily by rotating the dish to point directly at the satellite, thereby reducing the cost function to zero. However, trial-and-error is a far more common technique for more complex optimization problems, either because the cost function is not explicitly known or because the optimal parameters are difficult to calculate analytically.
Trial-and-error techniques work well so long as a set of possible solutions can be guessed and the costs of each compared. With the satellite dish, any guess that points the dish towards the southern U.S. sky would be reasonable. The cost of each can be measured directly using a meter that becomes audibly louder as the signal strength increases.
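As a minimal sketch (not from the article), the compare-the-guesses approach might look like the following, with a hypothetical quadratic cost standing in for the signal-strength meter:

```python
# Hypothetical cost: squared misalignment from an optimal elevation of
# 30 degrees (unknown to the searcher). In practice this value would
# come from a signal-strength meter, not a formula.
def cost(elevation_deg):
    return (elevation_deg - 30.0) ** 2

# Guess a set of plausible solutions: elevations across the visible sky.
candidates = [e * 0.5 for e in range(181)]  # 0.0, 0.5, ..., 90.0 degrees

# Compare the cost of each guess and keep the best one.
best = min(candidates, key=cost)
```

A finer grid of candidates gives a more precise answer, at the price of taking more meter readings.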
Trial-and-error is more effective if the results of prior guesses can be used to make future estimates more accurate. For the dish-pointing problem, that would entail noting the relative volume of previous meter readings and rotating the dish in the direction that yields a louder reading, as opposed to randomly pointing the dish in the hope of finding a stronger signal. Mathematically, that procedure corresponds to the gradient method: essentially moving the best guess "downhill" along the cost function to reach the parameter yielding the lowest cost.
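A sketch of the gradient method on the same hypothetical single-variable quadratic cost, using a finite-difference slope as the numerical analogue of comparing meter readings on either side of the current position:

```python
def cost(e):
    # Hypothetical quadratic cost: squared misalignment from 30 degrees.
    return (e - 30.0) ** 2

def gradient_descent(e0, step=0.1, iters=200):
    # Repeatedly move the best guess "downhill" along the cost function.
    e = e0
    h = 1e-4  # small probe distance for estimating the local slope
    for _ in range(iters):
        slope = (cost(e + h) - cost(e - h)) / (2 * h)
        e -= step * slope
    return e

elevation = gradient_descent(80.0)  # converges toward 30 degrees
```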
Complications
On the other hand, checking hundreds of randomly selected parameters for the one yielding the lowest cost also can be useful, since plotting the parameters against their associated costs yields an empirical cost-function chart. Then, the optimal parameter can be read right from the chart.
This Monte Carlo method is especially useful if the cost function happens to have more than one local minimum; that is, multiple points on the cost function that are lower than their neighbors but are not the lowest overall. See the "Local minimums" example. Like the gradient method, systematic trial-and-error techniques tend to home in on one of the local minimums without seeking better solutions elsewhere. That's not as much of a problem with the Monte Carlo method or any other optimization technique that considers essentially all possible solutions.
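A sketch of the Monte Carlo idea on a made-up cost function with two minimums, a shallow local one (say, a weaker neighboring satellite) and a deeper global one. Random sampling across the whole range finds the global minimum, whereas a downhill search started near the local minimum would get stuck there:

```python
import random

def cost(e):
    # Hypothetical cost with a shallow local minimum near 10 degrees
    # and a deeper global minimum near 60 degrees.
    return min((e - 10.0) ** 2 + 5.0, (e - 60.0) ** 2)

random.seed(0)  # fixed seed so the sketch is repeatable

# Monte Carlo: check many randomly selected parameters across the
# whole range, rather than following the local slope.
guesses = [random.uniform(0.0, 90.0) for _ in range(500)]
best = min(guesses, key=cost)  # lands near the global minimum at 60
```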


Constraints can also complicate an optimization problem. These conditions preclude a particular solution, even though the solution might otherwise yield the lowest overall cost. For example, a poorly located satellite dish might encounter physical barriers that would prevent it from being pointed directly towards the strongest satellite. See the "Constraints" example.
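A constrained version of the earlier compare-the-guesses sketch, with a hypothetical physical barrier as the constraint. Note that the constrained optimum lands on the boundary of the feasible region, not at the unconstrained minimum:

```python
def cost(e):
    # Hypothetical quadratic cost minimized at 30 degrees.
    return (e - 30.0) ** 2

def feasible(e):
    # Hypothetical physical barrier: the dish cannot be lowered
    # below 40 degrees of elevation.
    return e >= 40.0

candidates = [e * 0.5 for e in range(181)]  # 0 to 90 degrees

# Search only the guesses the constraint allows.
best = min((e for e in candidates if feasible(e)), key=cost)
# The barrier precludes the true optimum at 30; best ends up at 40.
```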
Multivariable optimization
Optimization problems are trickier when more than one parameter can be manipulated to lower the cost or increase the benefit. Consider a satellite dish adjustable for azimuth and elevation. See the "Two-variable quadratic cost function" example.
This problem can be solved easily by optimizing each parameter independently—set the elevation to an arbitrary value and rotate the dish to its optimal azimuth, then rotate the dish to its optimal elevation without changing the azimuth. One-at-a-time optimization works in this case because each parameter makes a separate contribution to the cost function and each is constrained independently of the other.
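The one-at-a-time procedure can be sketched on a hypothetical separable cost, where each parameter contributes its own term:

```python
def cost(elevation, azimuth):
    # Hypothetical separable cost: each parameter contributes
    # independently, so one-at-a-time optimization works.
    return (elevation - 30.0) ** 2 + (azimuth - 200.0) ** 2

elevations = [e * 0.5 for e in range(181)]  # 0-90 degrees
azimuths = [a * 0.5 for a in range(721)]    # 0-360 degrees

# Hold the elevation at an arbitrary value and optimize the azimuth...
az = min(azimuths, key=lambda a: cost(45.0, a))
# ...then optimize the elevation with the azimuth held fixed.
el = min(elevations, key=lambda e: cost(e, az))
```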
However, most multivariable optimization problems involve constraints that are functions of more than one parameter, such as "the total of flow rate A plus flow rate B must not exceed 100 gpm," rather than "flow rate A must not exceed 70 gpm" and "flow rate B must not exceed 30 gpm." Altering one parameter's value changes limits on the others, so individual parameters cannot be optimized separately.
Such problems typically require linear programming, which uses a simplified benefit function rather than a cost function to reduce the total number of possible solutions. See the "Two-variable linear programming" example. Linear programming also provides the simplex method, an algorithm that searches for the optimal solution by manipulating all parameters in concert rather than individually.
An angled downspout adds an additional constraint (10b), where the constants a, b, and c depend on the geometry of the downspout and the dish. The other two constraints are the same as for the previous example, but the cost function has been replaced with a linear objective or benefit function B(e, a). As shown in equation (7), it is a sum of the parameters rather than a sum of squared errors. It is structured to maximize the signal rather than minimize the cost of a poor signal. The higher up the inclined plane a possible solution lies, the closer it comes to being optimal. The feasible region—as defined by the constraints—is all of the positions assumable by the dish. Linear programming problems are always constructed so that the parameters closest to optimum will be found at one corner of the feasible region. In this case, that would literally be the lower corner of the downspout, where the dish can come as close as possible to direct alignment with the satellite.
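The corner-point property can be illustrated with a toy linear program (made-up coefficients and vertices, not the article's equations): because both the objective and the constraints are linear, the optimum must lie at one of the feasible region's corners, so checking only the corners suffices.

```python
def benefit(e, a):
    # Hypothetical linear benefit function B(e, a).
    return 2.0 * e + 3.0 * a

# Corner points (vertices) of a hypothetical polygonal feasible region,
# as would be obtained by intersecting the linear constraints.
vertices = [(0.0, 0.0), (50.0, 0.0), (50.0, 20.0), (30.0, 40.0), (0.0, 40.0)]

# The optimum of a linear program always lies at a vertex, so only the
# vertices need to be compared.
best_vertex = max(vertices, key=lambda v: benefit(*v))
```

The simplex method reaches the same corner by walking from vertex to adjacent vertex rather than enumerating them all, which matters when the feasible region has many corners.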
Control applications
Many of these techniques are applicable to feedback-controller design and operation. To determine the optimal control effort, virtually all controllers use some sort of trial-and-error algorithm that minimizes a cost based on the error between the setpoint and the process variable. A PID controller, for example, uses the current error, the sum of past errors, and the difference between the last two errors as measures of how well it has minimized its cost. If the error isn't zero yet, it computes another guess for the optimal control effort using some variation on the gradient method.
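One discrete PID update step might be sketched as follows (the gains are hypothetical, not tuned for any particular process):

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1):
    # One discrete PID update: the control effort combines the current
    # error, the running sum of past errors, and the difference from
    # the previous error. Gains kp, ki, kd are hypothetical.
    integral = state["integral"] + error
    derivative = error - state["prev_error"]
    effort = kp * error + ki * integral + kd * derivative
    return effort, {"integral": integral, "prev_error": error}

state = {"integral": 0.0, "prev_error": 0.0}
effort, state = pid_step(1.0, state)  # first step after a unit error
```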
Minimum variance controllers use quadratic cost functions, integrating the squared error over time. In addition to past history, they use process models to guess what future control efforts will be required to minimize cost. Minimum variance controllers can also be designed with cost functions that penalize excessive controleffort variations as well as deviations from the setpoint.
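The quadratic cost described above, the squared error integrated over time, can be computed from a sampled response as a simple discrete sum (the response values here are made up):

```python
def integrated_squared_error(setpoint, measurements, dt=1.0):
    # Quadratic cost: squared error summed (integrated) over time,
    # approximated from sampled process-variable readings.
    return sum((setpoint - pv) ** 2 for pv in measurements) * dt

# Hypothetical response converging toward a setpoint of 50.
ise = integrated_squared_error(50.0, [40.0, 46.0, 49.0, 50.0])
```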
PID loop tuning is a multivariable optimization problem with three parameters:
Proportional gain, P;
Integral gain, I; and
Derivative gain, D.
Manual tuning procedures adjust each parameter one at a time to maximize the performance of the closed loop, measured in terms of rise time, settling time, overshoot, and so on. Unfortunately, since the benefit function is far from linear, PID tuning is not suited to the simultaneous parameter adjustments afforded by linear programming.
Linear programming is used more often for setpoint selection when multiple controllers are attempting to achieve competing objectives simultaneously. Linear programming can determine the setpoints that maximize overall profitability of a multivariable process, provided each individual controller is successful in forcing its process variable to match its assigned setpoint. Furthermore, if all loops' operating ranges are known, constraints can be placed on the problem to guarantee that controllers are assigned only achievable setpoints.