ANNimation
Training of a neural network [VRML2]
Training of a neural network for the prediction of the sunspot benchmark series. The weights are optimized with a hybrid optimization scheme: first-order gradient descent (backpropagation) and second-order pseudo-Gauss-Newton (a diagonal approximation to the Levenberg-Marquardt Hessian), both with soft line search.
An artificial neural network working on the sunspot benchmark series tries to predict the sunspot activity of the 13th year from the 12 previous years; a neural network model has been the best for this prediction. The animation shows a two-layer feed-forward neural network with the 13th year as the output and the 12 previous years as the input; the years closest to the 13th year are to the right. The weights are initialized to very small values and steadily increase. Bluish, cold-colored weights are negative; reddish, hot-colored weights are positive. The weights are first optimized with the first-order scheme. Approximately in the middle of the animation the magnitude of the weights changes dramatically: this is when the more effective second-order optimization takes over.
Finn Årup Nielsen, fn@imm.dtu.dk, 1995
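The hybrid scheme described above can be sketched in Python. This is a minimal illustration under stated assumptions, not the code behind the animation: the real sunspot data is replaced by a synthetic oscillation with a roughly 11-year period, the hidden-layer size is a guess (the page does not state it), and the soft line search is approximated by simple step-halving.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the sunspot series (the real benchmark data is
# not bundled here): a noisy oscillation with a roughly 11-year period.
t = np.arange(300)
series = np.sin(2 * np.pi * t / 11) + 0.1 * rng.standard_normal(t.size)

LAGS = 12                              # predict year 13 from the 12 previous years
X = np.stack([series[i:i + LAGS] for i in range(series.size - LAGS)])
y = series[LAGS:]
N = len(y)

H = 5                                  # hidden units: a guess, not stated in the text
p = [0.01 * rng.standard_normal((LAGS, H)), np.zeros(H),
     0.01 * rng.standard_normal(H), np.zeros(1)]   # very small initial weights

def forward(p, X):
    h = np.tanh(X @ p[0] + p[1])       # two-layer feed-forward network
    return h, h @ p[2] + p[3]

def loss(p):
    return 0.5 * np.mean((forward(p, X)[1] - y) ** 2)

def gradients(p):
    h, yhat = forward(p, X)
    e = yhat - y
    back = np.outer(e, p[2]) * (1 - h ** 2)   # backpropagated hidden error
    return [X.T @ back / N, back.mean(0),
            h.T @ e / N, np.array([e.mean()])], h

def soft_line_search(p, g, denom):
    """Halve the step until the loss decreases (a crude 'soft line search')."""
    step, base = 1.0, loss(p)
    for _ in range(25):
        cand = [w - step * gw / d for w, gw, d in zip(p, g, denom)]
        if loss(cand) < base:
            return cand
        step /= 2
    return p

# Phase 1: first-order gradient descent (backpropagation) with line search.
for _ in range(300):
    g, _ = gradients(p)
    p = soft_line_search(p, g, [1.0, 1.0, 1.0, 1.0])
mse_gd = 2 * loss(p)

# Phase 2: pseudo-Gauss-Newton. Each gradient component is divided by the
# matching diagonal element of the Gauss-Newton (Levenberg-Marquardt)
# Hessian, mean_n (dyhat/dw_i)^2, plus a damping term mu.
mu = 1e-3
for _ in range(100):
    g, h = gradients(p)
    s = (1 - h ** 2) * p[2]            # dyhat/d(hidden pre-activation)
    diag = [((X[:, :, None] * s[:, None, :]) ** 2).mean(0) + mu,
            (s ** 2).mean(0) + mu,
            (h ** 2).mean(0) + mu,
            np.array([1.0 + mu])]
    p = soft_line_search(p, g, diag)
mse_gn = 2 * loss(p)
print(f"MSE after gradient descent: {mse_gd:.4f}; after pseudo-Gauss-Newton: {mse_gn:.4f}")
```

Because the line search never accepts a step that increases the loss, the second-order phase can only improve on (or match) the first-order result, mirroring the handover visible in the middle of the animation.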
Pruning a neural network [VRML2]
Pruning by Optimal Brain Damage on the sunspot series. To improve generalization performance, a fully connected, trained neural network is reduced in complexity, usually down to between 10 and 20 parameters for the sunspot series. Between the individual prunings the neural network is retrained a little with hybrid optimization.
Pruning is the removal of unnecessary parameters from the mathematical model (that is, the artificial neural network); the unnecessary parameters overfit rather than help the prediction. For statisticians: it is similar to the Wald test.
Finn Årup Nielsen, fn@imm.dtu.dk, 1995
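The Optimal Brain Damage procedure can be illustrated on a toy model. The sketch below is hypothetical and far simpler than the network in the animation: it uses a linear model (where the diagonal Hessian of the squared-error loss is exact, H_ii = mean(x_i^2)), ranks weights by the OBD saliency 1/2 * H_ii * w_i^2, removes the least salient weight, and retrains between prunings by plain least squares rather than the hybrid optimization mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model standing in for the network: 12 lag inputs, only a few
# of which matter, so OBD should prune the irrelevant weights first.
N, D = 200, 12
X = rng.standard_normal((N, D))
true_w = np.zeros(D)
true_w[[0, 3, 7]] = [1.5, -2.0, 1.0]       # only three inputs are informative
y = X @ true_w + 0.05 * rng.standard_normal(N)

def fit(X, y, mask):
    """Retrain between prunings: least squares on the surviving weights."""
    w = np.zeros(D)
    cols = np.flatnonzero(mask)
    w[cols] = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
    return w

mask = np.ones(D, dtype=bool)
w = fit(X, y, mask)

# Optimal Brain Damage: saliency_i = 1/2 * H_ii * w_i^2, with H the diagonal
# Hessian of the loss 1/2 * mean(e^2); for a linear model H_ii = mean(x_i^2).
H_diag = (X ** 2).mean(axis=0)
while mask.sum() > 3:
    saliency = 0.5 * H_diag * w ** 2
    saliency[~mask] = np.inf               # ignore already-pruned weights
    mask[np.argmin(saliency)] = False      # remove the least useful weight
    w = fit(X, y, mask)                    # brief retraining between prunings

print("surviving inputs:", np.flatnonzero(mask))
```

The surviving weights are exactly the three informative inputs; the near-zero weights on noise inputs have tiny saliencies and are removed first, which is the "removal of unnecessary parameters" described above.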
ANNimation - animation of an artificial neural network -
was made in connection with Benny Lautrup's
'lecture'
on Danish national television (DR1),
as part of the popular science series entitled
'Videnskaben eller Gud' ('Science or God').
Benny Lautrup spoke of many things, but the main point was his 'klamphuggeri' ('shoddy workmanship') theory:
if the artificial neural network does not know what it models,
does the biological neural network?
The two VRML2 models above were shown live in the taped broadcast on a 'High Impact'
workstation from Silicon Graphics. The VRML2 models are quite large
(417 ROUTEs and 208 Interpolators),
so smaller computers might have some difficulties with them.
There is a smaller pruning animation (it uses vrmlscript).
This toy model has scissors for the pruning of the weights.
There is also a large pruning animation with scissors.
The weights at the far right of this model are the threshold (bias) weights.
Finn Årup Nielsen, fn@imm.dtu.dk
HBP | THOR Center for Neuroinformatics, Human Brain Project Repository (this server)
THOR | THOR Center for Neuroinformatics
ISP | Intelligent Signal Processing
IMM | Informatics and Mathematical Modelling
DTU | Technical University of Denmark
© IMM DTU, 1996, 2003
http://hendrix.imm.dtu.dk/vrml/ANNimation.html
$Id: ANNimation.html,v 1.5 2003/11/20 11:21:33 fnielsen Exp $