Numerical optimization and its background theory form the foundation of OpenDDPT. This branch of mathematics studies the minimization of continuous surfaces over multidimensional spaces and applies to any problem that can be solved by finding the minimum points of a surface. The first benchmark therefore gives an estimate of the convergence of the core optimization algorithm. The quality of this test is ensured by comparing against other well-known, highly performant algorithms, many of which are also implemented in MATLAB.
Function            Initial Position     Gradient          BFGS  Trust-region PCG  Centi Algorithm  Objective
Powell              (3,1,0,1)            5144 (f*=9e5)     27    24                7                1e8
Maratos             (0.9999,0.0141418)   5389 (f*=0.99)    217   86                46               1e6
Rosenbrock          (1.2,1)              4278 (f*=0.039)   28    29                14               1e23
Schwefel            (100,100,100)        -                 -     20                7                0
Schwefel Perturbed  (100,100,100)        -                 -     19                7                4.9e6
Beale               (1,1)                -                 -     13                8                1.33e7
Optimization benchmarks for large-scale systems are generally done with neural networks as well. OpenDDPT, being a dynamic programming engine, can emulate a neural network, albeit with a large overhead per iteration. This benchmark is therefore not a measurement of C/C++ source-code optimization but a measurement of the computational complexity of the optimization algorithm in use. (We are not using a simple gradient descent method, as most popular neural networks do, but a new nonconvex second-order algorithm based on directional derivatives. With this algorithm, the time spent on source-code overhead is won back through improved computational complexity and convergence speed.) Comparisons are made between openddpt.sourceforge.net and fann.sourceforge.net (a very lean and fast backpropagation network). Both libraries spend comparable time on the training process, but the solutions found by OpenDDPT, in most cases, have better quality (smaller residual and better generalization). All tests were run on an AMD 64 3000+ under Linux Mandrake 10.1beta. All tests use the openddpt package, but you can use the fann library directly by replacing quality.cc in the benchmark directory with the one linked here: download fann 1.2.0 patch file
All values in the table below are expressed in dB, a logarithmic scale that makes quality comparison easy. The value on the left of the parentheses is the gain in dB on the training set, and the value on the right is for the test set. Because the scale is logarithmic, 3 dB means twice better, 6 dB four times better, and 9 dB eight times better. Green cells mean OpenDDPT has better training and test residuals. Yellow cells mean OpenDDPT has a better test value (and therefore better network generalization) but a worse training residual. Red cells mean OpenDDPT loses in all cases. Clicking on the cells of the benchmark table gives more details: residual/time graphs of the training sessions and numerical values for iterations and residuals.
