


!!!! PAGE UNDER CONSTRUCTION !!!!

WHY OPENDDPT IS BETTER THAN THE STANDARD TRAINING PROCESS


OpenDDPT is a free open source program hosted by SourceForge. The objective of this project is to provide an accessible and easy-to-use tool that solves the problem of dynamic non-linear programming, which is to say that OpenDDPT is software dedicated to the solution of general non-linear and time-variant equation systems. Neural-network training is a sub-problem of non-linear programming, and on this web page we describe why the OpenDDPT training process is better and more flexible than that of other neural network training programs, supporting the claim with some comparative benchmarks. We start by describing the standard training theory for neural networks.




The neural network training process used by most programs has a simple interpretation: gradient descent in the space of the weights. These programs compute the gradient from the error surface (or residuum), that is, the sum of the quadratic errors of each example presented to the neural network. An example of an error surface is shown in Picture 3. A point on this surface is identified by a complete set of weights. At this point the standard training process measures the slope of the surface along each axis (the partial derivative of the surface with respect to each weight), which represents how the error surface changes when a weight is modified by a small quantity. It then modifies each weight proportionally to the slope along its direction, which moves the network in a descent direction along the error surface. A minimal sketch of this update rule is given below.
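
To make the update concrete, here is a minimal C++ sketch of gradient descent on a toy bowl-shaped error surface E(w1,w2) = (w1-a)^2 + (w2-b)^2, whose minimum lies at w1=a, w2=b as in Picture 3. The surface, the values of a and b, and the learning rate are illustrative assumptions, not taken from OpenDDPT.

#include <cstdio>

// Minimal sketch (not OpenDDPT code): plain gradient descent on a toy error
// surface E(w1, w2) = (w1 - a)^2 + (w2 - b)^2, which, like Picture 3, has its
// minimum at w1 = a, w2 = b.  The slope along each weight axis is the partial
// derivative, and each weight is moved proportionally to that slope.

int main()
{
    const double a = 1.5, b = -0.7;   // location of the minimum (illustrative values)
    double w1 = 0.0, w2 = 0.0;        // starting point in weight space
    const double lr = 0.1;            // learning rate (step size)

    for (int step = 0; step < 100; ++step) {
        double dE_dw1 = 2.0 * (w1 - a);   // partial derivative along w1
        double dE_dw2 = 2.0 * (w2 - b);   // partial derivative along w2
        w1 -= lr * dE_dw1;                // move downhill along each axis
        w2 -= lr * dE_dw2;
    }
    std::printf("w1 = %f (a = %f), w2 = %f (b = %f)\n", w1, a, w2, b);
    return 0;
}

After a hundred steps (w1,w2) is practically at (a,b); in a real network the two partial derivatives are replaced by the partial derivatives of the quadratic error sum with respect to every weight.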


Picture 2: The analytic form of the error surface of a neural network, where g(x) is the non-linear activation function.
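
Since the picture itself is not reproduced here, the equation below is a sketch of the usual analytic form of such a surface for a single-layer network; the symbols x_j^p (inputs of example p), t_k^p (targets) and w_kj (weights) are assumed names, and only g(x) comes from the caption above.

E(\mathbf{w}) = \frac{1}{2} \sum_{p} \sum_{k} \left( t_{k}^{p} - g\!\left( \sum_{j} w_{kj} \, x_{j}^{p} \right) \right)^{2}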


Picture 3: An error surface for a gradient-based search in the weight space. For w1=a and w2=b the error is minimal.


The standard training process uses backpropagation to subdivide the gradient evaluation among the network units, so that the gradient of each weight can be computed by the unit the weight is attached to, using only local information. A sketch of this per-unit computation follows.
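
The sketch below, again in plain C++ rather than OpenDDPT code, shows this locality on a tiny 2-2-1 sigmoid network: each unit computes its delta from its own output and the delta sent back by the unit it feeds, and the gradient of every weight is the product of the destination unit's delta and the source unit's output. The network size, the sigmoid activation, the single training example and the learning rate are all illustrative assumptions.

#include <cmath>
#include <cstdio>

// Minimal sketch (not OpenDDPT code): backpropagation on a tiny 2-2-1 network
// with sigmoid activation g(x).  Each unit computes its own delta from purely
// local information, and every weight gradient is delta(destination) * output(source).

double g(double x)      { return 1.0 / (1.0 + std::exp(-x)); }
double gPrime(double y) { return y * (1.0 - y); }   // sigmoid derivative, written via the unit output

int main()
{
    // Weights: hidden layer (2 units, 2 inputs each) and output unit (2 inputs).
    double wh[2][2] = {{0.1, -0.2}, {0.3, 0.4}};
    double wo[2]    = {0.5, -0.6};

    double x[2] = {1.0, 0.5};   // one training example (illustrative)
    double t    = 1.0;          // its target output
    double lr   = 0.1;          // learning rate

    for (int epoch = 0; epoch < 1000; ++epoch) {
        // Forward pass: each unit squashes its weighted input sum with g().
        double h[2];
        for (int j = 0; j < 2; ++j)
            h[j] = g(wh[j][0] * x[0] + wh[j][1] * x[1]);
        double y = g(wo[0] * h[0] + wo[1] * h[1]);

        // Backward pass: the output unit's delta uses only its own output and
        // the target; each hidden delta uses only its own output and the delta
        // propagated back through its outgoing weight.
        double deltaO = (y - t) * gPrime(y);
        double deltaH[2];
        for (int j = 0; j < 2; ++j)
            deltaH[j] = deltaO * wo[j] * gPrime(h[j]);

        // Weight update: gradient of each weight = delta(dest) * output(src).
        for (int j = 0; j < 2; ++j) {
            wo[j]    -= lr * deltaO * h[j];
            wh[j][0] -= lr * deltaH[j] * x[0];
            wh[j][1] -= lr * deltaH[j] * x[1];
        }
    }
    std::printf("trained output: %f (target %f)\n",
                g(wo[0] * g(wh[0][0]*x[0] + wh[0][1]*x[1])
                + wo[1] * g(wh[1][0]*x[0] + wh[1][1]*x[1])), t);
    return 0;
}

With more layers the same two local rules repeat unchanged, which is what lets the gradient evaluation be distributed unit by unit.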





SourceForge Download Page