Genetic learning of neural networks

Number: 381
Language: English
Department: Computer Science
Degree: 
Imprint: 
Author: Ogus Mohammed Hussein Aranay
Supervisor: Dr. Riyadh A. K. Mehdi
Year: 1999

A neural network has a parallel, distributed architecture with a large number of nodes and connection weights. The learning rule, which determines how connection weights are adapted to optimize network performance, is one of the most important attributes to specify for a neural network. The Backpropagation network is the most well known and widely used of the current types of neural network systems. It is a multilayer feedforward network trained with a powerful learning rule known as Backpropagation. Like other gradient-based supervised learning algorithms, Backpropagation is prone to local minima. Genetic algorithms are optimization and learning algorithms based on the mechanisms of genetic evolution. In this thesis, function approximation is used as the application: the number of input units is determined by the number of arguments the function takes, and a single output unit is needed. A genetic algorithm is used in conjunction with the Backpropagation algorithm: genetic operators are applied to the weights learned by the neural network so that the network can escape local minima it may encounter. Two combined genetic-neural learning algorithms are introduced and compared with the conventional Backpropagation algorithm. The results show that the combined genetic learning can escape local minima and also generalizes better than conventional Backpropagation learning. Statistical evaluations are performed to support these results.
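The following is a minimal sketch of the general idea described above, not the thesis's actual algorithms: a one-hidden-layer network is trained by Backpropagation on a function-approximation task (approximating sin(x), with one input unit and one output unit), and when the error stops improving, a genetic mutation operator perturbs the weight vector and the fittest candidate is kept. The network size, learning rate, mutation scale, and the use of mutation only (without crossover) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Function-approximation task: one input unit, one output unit.
X = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1)
Y = np.sin(X)

H = 8  # number of hidden units (assumed)

def init_weights():
    return {"W1": rng.normal(0, 0.5, (1, H)), "b1": np.zeros(H),
            "W2": rng.normal(0, 0.5, (H, 1)), "b2": np.zeros(1)}

def forward(w, X):
    h = np.tanh(X @ w["W1"] + w["b1"])
    return h, h @ w["W2"] + w["b2"]

def mse(w):
    _, out = forward(w, X)
    return float(np.mean((out - Y) ** 2))

def backprop_step(w, lr=0.05):
    # One epoch of gradient descent on the mean squared error.
    h, out = forward(w, X)
    d_out = 2 * (out - Y) / len(X)           # error signal at the output layer
    d_h = (d_out @ w["W2"].T) * (1 - h**2)   # backpropagated through tanh
    w["W2"] -= lr * h.T @ d_out
    w["b2"] -= lr * d_out.sum(0)
    w["W1"] -= lr * X.T @ d_h
    w["b1"] -= lr * d_h.sum(0)

def mutate(w, scale=0.1):
    # Genetic mutation operator: perturb every weight with small random noise.
    return {k: v + rng.normal(0, scale, v.shape) for k, v in w.items()}

w = init_weights()
prev_err = mse(w)
for epoch in range(2000):
    backprop_step(w)
    if epoch % 200 == 199:
        err = mse(w)
        if prev_err - err < 1e-4:
            # Progress has stalled (possible local minimum): generate mutated
            # candidates and keep the fittest, including the current weights.
            candidates = [mutate(w) for _ in range(20)] + [w]
            w = min(candidates, key=mse)
        prev_err = err

print("final MSE:", mse(w))
```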