You are working with the text-only light edition of "H.Lohninger: Teach/Me Data Analysis, Springer-Verlag, Berlin-New York-Tokyo, 1999. ISBN 3-540-14743-8".
Optimization Methods
The importance of optimization in data analysis is reflected in the
large number of methods and algorithms developed to solve optimization
problems. Here is a short survey of optimization methods:
- Closed-form mathematical solutions. These are available only if
the function to be optimized is known in closed mathematical form. Maxima
or minima can then be calculated by differentiating the function and setting
the first derivative to zero.
- Brute force approach. The optimum is found by evaluating all possible
parameter combinations. This approach is feasible only for a restricted
phase space.
- Gradient descent methods. These methods are based on the classical idea
of stepping down a gradient in order to find a minimum. Gradient descent
methods tend to get caught in local minima.
- Monte Carlo methods. The phase space is searched by random walks.
- Combination approaches. Genetic algorithms combine gradient descent
and Monte Carlo methods. They are most efficient for large phase spaces.
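
The closed-form case above can be illustrated with a minimal worked example (the function is chosen for illustration only): to minimize f(x) = x^2 - 4x + 1, differentiate, set f'(x) = 2x - 4 = 0, and solve to obtain x = 2.

```python
# Closed-form optimization: minimize f(x) = x^2 - 4x + 1.
# Differentiating gives f'(x) = 2x - 4; setting the derivative to zero
# yields x = 2, with the minimum value f(2) = -3.

def f(x):
    return x**2 - 4*x + 1

x_opt = 2.0              # root of f'(x) = 2x - 4
print(x_opt, f(x_opt))   # 2.0 -3.0
```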
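The brute force approach can be sketched as an exhaustive grid search; the two-parameter response function below is a hypothetical example, not taken from the text.

```python
import itertools

# Brute force: evaluate the response function at every point of a small,
# discretized phase space and keep the best parameter combination.
def response(x, y):
    return (x - 1)**2 + (y + 2)**2   # hypothetical response function

grid = [i / 10 for i in range(-50, 51)]   # 101 grid points per axis
best = min(itertools.product(grid, grid), key=lambda p: response(*p))
print(best)   # (1.0, -2.0)
```

With only two parameters and 101 levels each, this already requires 10201 evaluations; the count grows exponentially with the number of parameters, which is why the method needs a restricted phase space.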
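Gradient descent can be sketched in a few lines; the quadratic function and the step size are illustrative assumptions (on a function with several minima, the same loop would settle into whichever local minimum is downhill from the starting point).

```python
# Gradient descent on f(x) = (x - 3)^2: repeatedly step against the
# gradient f'(x) = 2(x - 3) until the step size becomes negligible.
def grad(x):
    return 2 * (x - 3)

x = 0.0        # starting point
rate = 0.1     # learning rate (step size factor)
for _ in range(1000):
    step = rate * grad(x)
    x -= step
    if abs(step) < 1e-9:
        break
print(x)   # converges to ~3.0
```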
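A Monte Carlo search can be sketched as a random walk through phase space that remembers the best point visited; the response function and step size are illustrative assumptions.

```python
import random

# Monte Carlo search: take random steps through phase space and keep
# the best point seen so far (no gradient information is needed).
def response(x, y):
    return (x - 1)**2 + (y + 2)**2   # hypothetical response function

random.seed(42)                      # fixed seed for reproducibility
x, y = 0.0, 0.0
best_xy, best_val = (x, y), response(x, y)
for _ in range(20000):
    x += random.uniform(-0.1, 0.1)   # random walk step in each parameter
    y += random.uniform(-0.1, 0.1)
    val = response(x, y)
    if val < best_val:
        best_xy, best_val = (x, y), val
print(best_xy, best_val)
```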
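A genetic algorithm, as a combination approach, can be sketched as follows; population size, selection scheme, and mutation parameters are illustrative choices, and the response function is hypothetical.

```python
import random

# Minimal genetic algorithm sketch: a population of candidate parameter
# vectors evolves by selection, crossover, and mutation. Keeping the best
# parents in each generation (elitism) guarantees monotone improvement.
def response(params):
    x, y = params
    return (x - 1)**2 + (y + 2)**2   # hypothetical response function

random.seed(0)
pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(30)]
init_best = min(map(response, pop))
for generation in range(100):
    pop.sort(key=response)           # rank by fitness (lower = better)
    parents = pop[:10]               # selection: keep the best third
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]  # crossover
        if random.random() < 0.3:    # occasional random mutation
            i = random.randrange(2)
            child[i] += random.gauss(0, 0.5)
        children.append(child)
    pop = parents + children
best = min(pop, key=response)
print(best, response(best))
```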
Before selecting a specific optimization method, an important
constraint has to be considered: it makes a big difference whether the
value of the response function can be obtained by inserting the
parameters into a mathematical equation, or whether a real-world
experiment has to be performed with the new parameter set (as is the
case, for example, in the optimization of processes in the chemical
industry).
Last Update: 2006-Jan-17