!!!! PAGE UNDER CONSTRUCTION !!!!
WHY OPENDDPT IS BETTER THAN THE STANDARD TRAINING PROCESS
OpenDDPT is a free open-source program hosted on SourceForge. The objective of this project is to provide an accessible and easy-to-use tool that solves the problem of dynamic nonlinear programming. In other words, OpenDDPT is software dedicated to the solution of general nonlinear and time-variant equation systems. One subproblem of nonlinear programming is neural network training, and on this page we describe why the OpenDDPT training process is better and more flexible than other neural network training programs, supporting the claim with some comparative benchmarks. We start by describing the standard training theory behind neural networks.
The neural network training process used by most programs has an easy interpretation: gradient descent in the space of the weights. These programs compute the gradient from the error surface (or residuum), which is the sum of the quadratic errors over every example presented to the neural network. An example of an error surface is shown in picture 3. A point on this surface is identified by a complete set of weights. At this point the standard training process measures the surface slope along each axis (called the partial derivative of the surface with respect to each weight), which represents how the error surface changes when a weight is modified by a small quantity. It then modifies each weight proportionally to the slope along its direction, which moves the network in a descent direction along the error surface.
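The procedure above can be sketched in a few lines. The example below is a minimal toy illustration, not OpenDDPT code: a single sigmoid unit with two weights and a made-up three-example dataset, where the slope along each weight axis is measured numerically (by nudging each weight by a small quantity, exactly as the text describes) and the weights are then moved proportionally to the negative slope.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical toy dataset: ((input1, input2), target) pairs.
DATA = [((0.0, 1.0), 0.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 1.0)]

def error(w):
    # Quadratic error sum over all examples: the "error surface"
    # evaluated at the point identified by the weight vector w.
    return sum((sigmoid(w[0] * x1 + w[1] * x2) - t) ** 2
               for (x1, x2), t in DATA)

def gradient(w, eps=1e-6):
    # Slope of the surface along each weight axis, measured by
    # modifying each weight by a small quantity (central differences).
    g = []
    for i in range(len(w)):
        wp, wm = list(w), list(w)
        wp[i] += eps
        wm[i] -= eps
        g.append((error(wp) - error(wm)) / (2 * eps))
    return g

def train(w, lr=0.5, steps=200):
    # Move each weight proportionally to the slope in its direction,
    # descending along the error surface.
    for _ in range(steps):
        g = gradient(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

w = train([0.0, 0.0])
```

Real training programs compute the same gradient analytically via backpropagation rather than by finite differences; the numerical version is shown only because it mirrors the "measure the slope along each axis" description directly.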
Picture 2: The analytic form of the error surface of a neural network, where g(x) is the nonlinear activation function.
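Since the picture itself is not reproduced here, the following is a hedged sketch of the analytic form such a caption usually refers to, assuming a single-layer network with weights $w_i$, inputs $x_{k,i}$, and targets $t_k$ for example $k$ (the symbols $t_k$ and $x_{k,i}$ are our notation, not taken from the original figure):

\[
E(\mathbf{w}) \;=\; \sum_{k} \Bigl( t_k - g\Bigl( \sum_{i} w_i \, x_{k,i} \Bigr) \Bigr)^{2}
\]

that is, the quadratic error summed over every example, with $g$ the nonlinear activation function mentioned in the caption.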

The standard training process uses backpropagation to subdivide the gradient evaluation among the network units, so that the estimate for each weight can be computed by the unit the weight is attached to, using only local information.
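This locality can be made concrete with a small sketch, again a toy illustration rather than OpenDDPT code: a network with one hidden sigmoid unit feeding one sigmoid output unit. Each unit updates its own weights using only quantities available locally, namely its inputs, its output, and the error signal (delta) passed back to it from the layer above. All names and the specific topology are our own illustrative choices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(w_hidden, w_out, x, target, lr=0.1):
    # Forward pass: hidden unit, then output unit.
    h = sigmoid(w_hidden[0] * x[0] + w_hidden[1] * x[1])
    y = sigmoid(w_out * h)

    # Backward pass: the output unit's local error signal,
    # derived from its own output and the target only.
    delta_out = (y - target) * y * (1 - y)
    # The hidden unit receives its error signal through w_out and
    # combines it with its own local output h.
    delta_h = delta_out * w_out * h * (1 - h)

    # Each weight update uses only the owning unit's delta and input.
    new_w_out = w_out - lr * delta_out * h
    new_w_hidden = [w_hidden[0] - lr * delta_h * x[0],
                    w_hidden[1] - lr * delta_h * x[1]]
    return new_w_hidden, new_w_out, y
```

The point of the decomposition is that no unit ever needs the full error surface: the global gradient evaluation is subdivided into per-unit pieces, each computed from local information plus one backpropagated error signal.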
