Artificial neural networks have proven to be powerful and general techniques for machine learning. However, they have several well-known shortcomings; perhaps the most significant is the black box problem (i.e., they capture hidden relations between input and output without explicitly identifying the nature of that mapping or giving reasons for their outputs). Rule extraction methods address this problem by deriving a symbolic description from a trained artificial neural network. In effect, rule extraction provides an explanation for the behavior of the network. This project presents three different approaches for three different neural network architectures. The first approach extracts simple symbolic rules from a Multi-Layer Neural Network (MLNN) trained with the backpropagation learning algorithm, and it is applicable to both binary and continuous data sets. It consists of two main stages: a training and pruning stage and a rule extraction stage, each of which comprises a number of steps. The second approach uses a Self-Organizing Map (SOM) to extract fuzzy symbolic rules from continuous input data, while the third uses Adaptive Resonance Theory (ART1) to extract rules from binary input data only. This project also discusses the problem of network initialization and how to obtain a minimum network size. All methods have been empirically evaluated on three different data sets, and the results obtained give a good indication of the performance of these methods. The principal conclusion of this work is that different data set types may require different techniques for extracting rules.
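To illustrate the spirit of the first (MLNN) approach, the sketch below shows a toy version of the two-stage pipeline on a small binary data set: a network is trained with backpropagation, near-zero weights are pruned, and simple IF-THEN rules are then read off from the network's decisions over the binary input space. This is a minimal example under assumed choices (synthetic target concept, pruning threshold, learning rate), not the exact algorithm developed in this project.

```python
# Minimal sketch of "train & prune, then extract rules" for a binary data set.
# All names, thresholds, and the synthetic target concept are illustrative assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data set: target concept is (x1 AND x2) OR x3.
X = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(float).reshape(-1, 1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# --- Stage 1: training (backpropagation) and pruning ---
W1 = rng.normal(0.0, 0.5, (3, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros((1, 1))
lr = 0.5

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)            # forward pass, output layer
    d_out = (out - y) * out * (1 - out)   # backward pass (squared-error loss)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0, keepdims=True)

# Prune near-zero connections (0.1 is an assumed threshold).
W1[np.abs(W1) < 0.1] = 0.0
W2[np.abs(W2) < 0.1] = 0.0

# --- Stage 2: rule extraction (black-box enumeration, feasible for binary inputs) ---
def predict(x):
    return int((sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2) > 0.5).item())

rules = []
for x in X:
    if predict(x):
        conds = " AND ".join(f"x{i + 1}={int(v)}" for i, v in enumerate(x))
        rules.append(f"IF {conds} THEN class=1")

print("\n".join(rules))
```

For continuous data sets, this enumeration step would be replaced by the corresponding extraction stage of each method (e.g., fuzzy rules derived from SOM prototypes), since the input space can no longer be enumerated exhaustively.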