The concept of optimization is intrinsically tied to humanity's desire to excel. Though we may not consciously recognize it, and though the optimization process takes different forms in different fields of endeavor, this drive to do better than before consumes much of our energy, whether we are athletes, artists, businessmen, or engineers. The field we are concerned with is that of engineering and, while the emphasis here is on mechanical, aeronautical, and civil engineering, this is by no means the limit of applicability of optimization processes. We will consider the use of numerical optimization techniques to devise a rational, directed design procedure. Although these methods provide a computational tool for design, there is much more to be gained from this study. Indeed, numerical optimization provides us with a new design philosophy. It gives us an ordered approach to design decisions where before we relied heavily on intuition and experience. It is this order which makes the techniques presented so very attractive, because it provides insight into the actual design process. However, this should not be construed to suggest that the design process can be reduced to a few computer runs, or that our intuition and experience are unimportant. Rather, the computer can now be used to relieve us of the tedium of repetitive calculations, freeing us to spend time on the truly creative aspects of engineering design.

Garret N. Vanderplaats

The best-known local optimization algorithms are the gradient (steepest-descent) method and Newton's method. The first is easy to implement and has low-cost iterations, but its convergence to a solution is slow. The second has the desirable property of quadratic convergence, but it requires evaluation of second partial derivatives (the Hessian matrix) and a matrix inversion, which makes it well suited only to the end-game of a search; moreover, Newton's method breaks down when the Hessian is indefinite. The objective of this paper is therefore to obtain a new algorithm without these compromises, free of the weaknesses of both the Newton and gradient methods. The explicit Hessian matrix is replaced by fast Hessian-vector products computed with directional operators (see Barak A. Pearlmutter, Fast Exact Multiplication by the Hessian, Siemens Corporate Research [1]), and the matrix inversion is replaced by minimization of the second-order Taylor series using an internal CG (conjugate-gradient) optimization algorithm.
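The two replacements described above can be sketched together as a "Hessian-free" Newton step: the Newton system H d = -g is solved by conjugate gradients, which only ever needs products Hv, never the matrix H itself. The sketch below is illustrative, not the library's actual API; all names are hypothetical, and the Hessian-vector product is approximated here by a finite difference of the gradient, whereas Pearlmutter's directional operator computes it exactly.

```python
# Hypothetical sketch of a Hessian-free Newton-CG step (not the library's API).
# The Hessian is never formed: CG consumes only Hessian-vector products Hv,
# approximated here as (grad(x + eps*v) - grad(x)) / eps.

def hessian_vec(grad_f, x, v, eps=1e-6):
    """Approximate H(x) @ v via a finite difference of the gradient."""
    g0 = grad_f(x)
    g1 = grad_f([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(g1, g0)]

def cg_solve(matvec, b, tol=1e-10, max_iter=50):
    """Conjugate gradients for A x = b, using only matrix-vector products."""
    x = [0.0] * len(b)
    r = list(b)                       # residual b - A x, with x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def newton_cg_step(grad_f, x):
    """One Newton step: solve H d = -g by CG, then move to x + d."""
    g = grad_f(x)
    d = cg_solve(lambda v: hessian_vec(grad_f, x, v), [-gi for gi in g])
    return [xi + di for xi, di in zip(x, d)]

# Example objective: f(x, y) = (x - 3)^2 + (y + 1)^2 + x*y, a convex
# quadratic whose minimizer is (14/3, -10/3); one Newton step suffices.
def grad_f(x):
    return [2.0 * (x[0] - 3.0) + x[1], 2.0 * (x[1] + 1.0) + x[0]]

x = newton_cg_step(grad_f, [0.0, 0.0])
```

Because CG solves the Newton system to high accuracy on this quadratic, the single step lands essentially on the minimizer; on a general nonconvex function the same step would be embedded in an outer iteration with a line search or trust region.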

SourceForge Theory and Papers Download Page

SourceForge C++ Optimization Library Download Page


[0] E. Polak, Optimization: Algorithms and Consistent Approximations.

[1] Barak A. Pearlmutter, Fast Exact Multiplication by the Hessian, Siemens Corporate Research.