Publications:Training neural networks by stochastic optimisation

Title Training neural networks by stochastic optimisation
Author Antanas Verikas and Adas Gelzinis
Year 2000
PublicationType Journal Paper
Journal Neurocomputing
HostPublication
Conference
DOI http://dx.doi.org/10.1016/S0925-2312(99)00123-X
Diva url http://hh.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:286845
Abstract We present a stochastic learning algorithm for neural networks. The algorithm makes no assumptions about the transfer functions of individual neurons and does not depend on the functional form of the performance measure. It adapts weights using random steps of varying size, with the average step size decreasing during learning. Large steps enable the algorithm to jump over local maxima/minima, while small ones ensure convergence within a local region. We investigate the convergence properties of the proposed algorithm and test it on four supervised and unsupervised learning problems. The algorithm proved superior to several known algorithms when tested on both generated and real data.
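
Below is a minimal sketch of the idea described in the abstract: weights are adapted by random steps whose average size decreases during learning, without using gradients or assuming anything about the error surface. This is not the paper's exact algorithm; the one-hidden-layer tanh network, the exponential step-size distribution, the greedy acceptance rule, and all parameter values are illustrative assumptions.

<pre>
import numpy as np

def mse(w, x, y, shapes):
    """Mean squared error of a one-hidden-layer tanh network with flat weights w."""
    (n_in, n_hid), (_, n_out) = shapes
    w1 = w[: n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid :].reshape(n_hid, n_out)
    pred = np.tanh(x @ w1) @ w2
    return np.mean((pred - y) ** 2)

def stochastic_train(x, y, n_hid=8, n_steps=20000, step0=1.0, decay=0.9995, seed=0):
    """Derivative-free weight adaptation by random steps of decaying average size.

    Hypothetical sketch of the abstract's scheme, not the authors' method.
    """
    rng = np.random.default_rng(seed)
    n_in, n_out = x.shape[1], y.shape[1]
    shapes = ((n_in, n_hid), (n_hid, n_out))
    w = rng.normal(0.0, 0.5, size=n_in * n_hid + n_hid * n_out)
    best = mse(w, x, y, shapes)
    scale = step0
    for _ in range(n_steps):
        # Random step of varying size: the exponential draw occasionally yields
        # large jumps (helping escape local minima), while most steps stay small.
        step = rng.normal(size=w.shape) * rng.exponential(scale)
        cand = w + step
        err = mse(cand, x, y, shapes)
        if err < best:      # greedy acceptance: keep only improving steps
            w, best = cand, err
        scale *= decay      # average step size shrinks as learning proceeds
    return w, best

# Usage: fit y = sin(x) on a small sample.
if __name__ == "__main__":
    x = np.linspace(-2, 2, 64).reshape(-1, 1)
    y = np.sin(x)
    w, err = stochastic_train(x, y)
    print(f"final training MSE: {err:.4f}")
</pre>

Because only the error value is queried, the same loop works for any performance measure and any neuron transfer function, which is the property the abstract emphasises; the decaying scale plays the role of the shrinking average step size that trades global exploration for local convergence.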