Numerical optimization and its underlying theory form the foundation of OpenDDPT. This branch of mathematics studies the minimization of continuous surfaces in multidimensional spaces and applies to any problem that can be reduced to finding the minimum points of such a surface. The first benchmark therefore gives an estimate of the convergence of the core optimization algorithm. The quality of this test is ensured by comparing against well-known, highly performant algorithms, many of which are also implemented in MATLAB.
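As a concrete illustration of this kind of test (a minimal sketch, not OpenDDPT code; the step size and stopping rule are arbitrary choices made here), the program below minimizes the classic 2-D Rosenbrock function f(x,y) = (1-x)^2 + 100(y-x^2)^2 with a plain fixed-step gradient descent from the same initial position (-1.2, 1) listed in the table below.

    // Minimal sketch (not OpenDDPT code): fixed-step gradient descent on the
    // 2-D Rosenbrock function f(x,y) = (1-x)^2 + 100*(y-x^2)^2, started from
    // (-1.2, 1) as in the benchmark table. Step size and limits are arbitrary.
    #include <cmath>
    #include <cstdio>

    static double rosenbrock(double x, double y) {
        return (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
    }

    int main() {
        double x = -1.2, y = 1.0;          // initial position from the table
        const double step = 1e-3;          // fixed learning rate
        const int max_iter = 10000;

        int it = 0;
        for (; it < max_iter; ++it) {
            // Analytic gradient of the Rosenbrock function.
            double gx = -2.0 * (1.0 - x) - 400.0 * x * (y - x * x);
            double gy = 200.0 * (y - x * x);
            if (std::sqrt(gx * gx + gy * gy) < 1e-6)
                break;                     // gradient small enough: stop
            x -= step * gx;
            y -= step * gy;
        }
        std::printf("iterations=%d  f*=%g\n", it, rosenbrock(x, y));
        return 0;
    }

A fixed-step first-order method needs thousands of such iterations on this function, which is the behaviour the GRADIENT column of the table reports; the second-order methods reach the minimum in a few tens of iterations.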


Function              Initial Position      GRADIENT           BFGS   Trustregion PCG    Centi Algorithm    Objective
Powell                (3,-1,0,1)            5144 (f*=9e-5)     27     24                 7                  1e-8
Maratos               (0.9999,0.0141418)    5389 (f*=-0.99)    217    86                 46                 -1e6
Rosenbrock            (-1.2,1)              4278 (f*=0.039)    28     29                 14                 1e-23
Schwefel              (100,100,100)         -                  -      20                 7                  0
Schwefel Perturbed    (100,100,100)         -                  -      19                 7                  4.9e-6
Beale                 (-1,1)                -                  -      13                 8                  1.33e-7



Optimization benchmarks for large-scale systems are commonly run on neural networks as well. We can therefore use OpenDDPT, which is a dynamic-programming engine, to emulate a neural network, at the cost of a large overhead per iteration. This benchmark is thus not a measurement of C/C++ source-code optimization; it measures the computational complexity of the optimization algorithm used. (We do not use a simple gradient-descent method, as most popular neural-network packages do, but a new non-convex second-order algorithm based on directional derivatives. With this algorithm the time spent on source-code overhead is recovered through better computational complexity and convergence speed.) The comparison is between openddpt.sourceforge.net and fann.sourceforge.net (a very slim and fast back-propagation network). The time spent on training is comparable for both libraries, but in most cases the solutions found by OpenDDPT have better quality (lower residual and better generalization). All tests were run on an AMD 64 3000+ under Linux Mandrake 10.1-beta, using the openddpt package; you can also run them directly against the fann library by replacing quality.cc in the benchmark directory with the file linked here: download fann 1.2.0 patch file
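The actual OpenDDPT algorithm is not reproduced here; as a rough illustration of how a directional derivative can supply second-order information without forming the full Hessian, the sketch below estimates the curvature of an objective along a search direction d by a finite difference of the gradient and uses it to scale the step. All names, the finite-difference scheme, and the non-convex fallback are assumptions made for this example only.

    // Illustration only -- NOT the OpenDDPT algorithm. Shows how a directional
    // derivative of the gradient can provide curvature information along a
    // search direction d without building the full Hessian.
    #include <cstddef>
    #include <functional>
    #include <vector>

    using Vec = std::vector<double>;
    using GradFn = std::function<Vec(const Vec&)>;

    static double dot(const Vec& a, const Vec& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    }

    // Newton-like step length along d:
    //   curvature  d'Hd  ~  (grad(x + eps*d) - grad(x)) . d / eps
    //   alpha            = -(grad(x) . d) / (d'Hd)   when curvature is positive
    double directional_step(const GradFn& grad, const Vec& x, const Vec& d,
                            double eps = 1e-6) {
        Vec g = grad(x);
        Vec xe = x;
        for (std::size_t i = 0; i < x.size(); ++i) xe[i] += eps * d[i];
        Vec ge = grad(xe);

        double curvature = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i)
            curvature += (ge[i] - g[i]) * d[i];
        curvature /= eps;

        double slope = dot(g, d);
        if (curvature <= 0.0)              // non-convex region: fall back to
            return -slope;                 // a plain gradient-type step length
        return -slope / curvature;         // second-order step length
    }

Falling back to a plain gradient-type step when the estimated curvature is non-positive is one simple way to keep such a scheme usable on non-convex problems; the rule actually used by OpenDDPT may differ.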


Benchmark     Rprop        Rprop*       Quickprop    Quickprop*   Batch        Batch*       Incremental  Incremental*
Pumadyn-32fm  (+3,+2)      (+1,+0)      (+1,+1)      (+1,+0)      (+0.2,0.0)   (+0.3,0.0)   (+0.3,0.0)   (+0,+0)
Two-spiral    (+224,+10)   (+218,+8)    (+225,+11)   (+225,+11)   (+225,+11)   (+225,+11)   (+224,+11)   (+224,+10)
Parity8       (+211,+211)  (-300,-300)  (+222,+222)  (+222,+222)  (+222,+222)  (+222,+222)  (+151,+151)  (+222,+222)
Building      (+5,+5)      (+5,+5)      (+6,+6)      (+6,+5)      (+6,+6)      (+6,+6)      (+5,+5)      (+5,+5)
Thyroid       (+3,+3)      (-2,+2)      (+10,+8)     (+2,+2)      (+10,+8)     (+9,+7)      (-6,+3)      (-7,+4)
Cancer        (+51,+3)     (-300,+3)    (+198,+3)    (+198,+3)    (+196,+3)    (+194,+3)    (+165,+3)    (-300,+3)
Mushroom      (-266,-269)  (-300,-45)   (-16,-18)    (-23,-25)    (-7,-10)     (-8,-11)     (-61,-65)    (-300,-42)
Soybean       (+34,+17)    (+34,+16)    (+16,+12)    (+14,+13)    (+14,+11)    (+14,+12)    (+13,+11)    (+13,+12)
Parity13      (+12,+12)    (+20,+20)    (+20,+20)    (+20,+20)    (+20,+20)    (+20,+20)    (-3,-3)      (+20,+20)
Robot         (+20,+2)     (+19,+3)     (+25,+3)     (+26,+3)     (+23,+3)     (+21,+3)     (+17,+3)     (+9,+3)
Horse         (+2,+5)      (+2,+5)      (+2,+6)      (+2,+6)      (+2,+5)      (+2,+5)      (+2,+6)      (+5,+6)
Heart         (+5,+2)      (+4,+2)      (+1,+2)      (+2,+2)      (+2,+2)      (+3,+2)      (+3,+3)      (+2,+3)
Glass         (+12,+8)     (+13,+8)     (+11,+8)     (+10,+8)     (+9,+8)      (+7,+8)      (+10,+8)     (+8,+8)
Flare         (+6,+5)      (+6,+5)      (+5,+5)      (+5,+5)      (+5,+5)      (+5,+5)      (+5,+5)      (+5,+5)
Diabetes      (+3,+3)      (+5,+3)      (+4,+3)      (+4,+3)      (+3,+3)      (+3,+3)      (+4,+3)      (+3,+3)
Gene          (+19,+7)     (+16,+5)     (+8,+5)      (+15,+6)     (+1,+6)      (+7,+6)      (+6,+5)      (+9,+6)
Card          (-45,+0)     (-45,-0)     (-18,-0)     (-14,-0)     (-20,-0)     (-21,-0)     (-46,+1)     (-100,+0)

* Stepwise (fann.sourceforge.net)


All values in the table above are expressed in dB, a logarithmic way of comparing quality. The value on the left inside the parentheses is the gain in dB on the training set; the value on the right is the gain on the test set. The scale is logarithmic, so 3 dB means twice as good, 6 dB four times as good, and 9 dB eight times as good. Pairs where both values are positive (green in the original table) mean that OpenDDPT obtains better residuals on both the training set and the test set. Pairs with a negative training value but a positive test value (yellow) mean that OpenDDPT has a better test value (better network generalization) despite a worse training residual. Pairs where both values are negative (red) mean that OpenDDPT loses in both cases. Plots of residual versus time for each training session, together with numerical values for iterations and residuals, can be obtained by clicking on the cells of the benchmark table.
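Assuming the usual power-ratio convention (an assumption; the exact formula used by the benchmark scripts is not shown here), the gain in dB can be computed from the two residuals as in the short sketch below, where the hypothetical helper gain_db() returns a positive value whenever the OpenDDPT residual is the smaller one.

    // Sketch under the assumption gain_dB = 10*log10(residual_fann / residual_openddpt);
    // positive values mean the OpenDDPT residual is smaller (better).
    #include <cmath>
    #include <cstdio>

    double gain_db(double residual_fann, double residual_openddpt) {
        return 10.0 * std::log10(residual_fann / residual_openddpt);
    }

    int main() {
        // A residual half as large as the reference gives about +3 dB,
        // a quarter about +6 dB, an eighth about +9 dB.
        std::printf("%.1f dB\n", gain_db(1.0, 0.5));   // ~3.0
        std::printf("%.1f dB\n", gain_db(1.0, 0.25));  // ~6.0
        std::printf("%.1f dB\n", gain_db(1.0, 0.125)); // ~9.0
        return 0;
    }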



 