DESIGN AND DEVELOPMENT OF NEURAL NETWORK SIMULATOR

Neural Networks (NNs) are computational models with the capacity to learn, to generalize, and to organize data through parallel processing. Among all kinds of networks, the most widely used are multi-layer feed-forward neural networks, which can represent non-linear functional mappings between inputs and outputs and are hailed as “Universal Approximators”. These networks can be trained with a powerful and computationally efficient method called error backpropagation. This paper presents the development of a neural network simulator based on the backpropagation algorithm, implemented in Visual Basic 6.0. The methodology used in this development is System Development, which consists of five phases: Preliminary Study, Analysis, Design, Implementation, and Maintenance. Testing has been performed on the logic gate data AND, OR, and XOR. Results show that the neural network is 99% accurate.


Introduction
An Artificial Neural Network (ANN) is a computational paradigm that draws on mathematics, statistics, the biological sciences, and philosophy. ANNs have been shown to be successful as predictive tools (Wahab, et. al., 2010), and one of the most commonly used NN tools for prediction, classification, and forecasting is the supervised algorithm known as the backpropagation algorithm (Haykin, 1999). Some organizations have developed sophisticated systems that coordinate the training of hundreds or even thousands of NNs on a weekly basis to predict stock market index movements as well as individual stock price behavior (Abhishek, et. al., 2012). This paper presents the design and development of a NN simulator that uses the backpropagation algorithm in order to convey an understanding of how NNs work. Many courses on neural networks concentrate on the mathematical model of backpropagation without considering the practical development and visualization needed to reinforce the concept. Moreover, the backpropagation algorithm is a sophisticated algorithm, and it is hard to understand how it works. This study develops a simulator to help students comprehend the whole process by adjusting the values of the desired parameters. This paper is organized as follows: the next section discusses the biological background, showing how the artificial neuron is inspired by the biological neuron, followed by a section on the backpropagation algorithm, which describes in detail how the algorithm works. The subsequent section discusses the development of the simulator using the standard SDLC method, and the final sections discuss training and testing using sample data.

Biological Background
The Artificial Neural Network (ANN) (Fig. 2) is inspired by the biological nervous system, as illustrated in Fig. 1. The basic anatomical unit of the nervous system is a specialized cell called a 'neuron'. A neuron can be characterized as an independent electrical device that transmits and receives electrical signals.
There are at least 500 different types of biological neurons, but many neurons share a general structure similar to that described here (Ku Ruhana, Azuraliza and Norita, 1998). The following description of the function of a biological neuron is simplified, and a typical neuron is shown in Fig. 1.

Backpropagation Algorithm
This section discusses the steps of the backpropagation algorithm. The backpropagation learning algorithm is one of the best-known algorithms in NNs. It was popularized by Rumelhart, Hinton, and Williams (1986) in the 1980s as a synonym for the generalized delta rule. Backpropagation of errors, or the generalized delta rule, is a gradient-descent method for minimizing the total squared error of the output computed by the net. The introduction of the backpropagation algorithm overcame the drawback of earlier NN algorithms of the 1970s, in which the single-layer perceptron failed to solve even the simple XOR problem (Wan Hussain Wan Ishak, 2004). (See Fig. 2.)
Figure 3: Backpropagation Neural Network model

The rationale of the backpropagation algorithm is to train the net to achieve a balance between the ability to respond correctly to the input patterns used for training (memorization) and the ability to give reasonable (good) responses to input that is similar, but not identical, to that used in training (generalization) (Fausset, 1994). Gunaseeli and Karthikeeyan (2007) described the backpropagation algorithm as follows:
- a method for computing the gradient of the case-wise error function with respect to the weights of a feed-forward network;
- a training method that uses backpropagation to compute the gradient;
- a feed-forward network trained by backpropagation.
In the feed-forward phase, each input unit (X_i) receives an input signal and broadcasts this signal to the hidden units Z_1, …, Z_p. Each hidden unit (Z_j) computes its activation and sends its signal (z_j) to each output unit (Y_1, …, Y_k). Each output unit (Y_k) computes its activation (y_k) to form the response of the net for the given pattern.
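The feed-forward phase just described can be sketched in Python (the paper's simulator itself is written in Visual Basic 6.0; the function names and the binary sigmoid activation used here are illustrative assumptions, not the simulator's code):

```python
import math

def sigmoid(x):
    """Binary sigmoid activation, f(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(x, v, w, v0, w0):
    """One feed-forward pass through a single-hidden-layer network.

    x  : input signals x_1 .. x_n
    v  : input-to-hidden weights, v[i][j]
    w  : hidden-to-output weights, w[j][k]
    v0 : hidden-unit biases; w0 : output-unit biases
    Returns hidden activations z and output activations y.
    """
    n, p, m = len(x), len(v0), len(w0)
    # Each hidden unit Z_j sums its weighted inputs and applies the activation.
    z = [sigmoid(v0[j] + sum(x[i] * v[i][j] for i in range(n))) for j in range(p)]
    # Each output unit Y_k does the same with the hidden signals z_j.
    y = [sigmoid(w0[k] + sum(z[j] * w[j][k] for j in range(p))) for k in range(m)]
    return z, y
```

With all weights and biases at zero, every unit outputs sigmoid(0) = 0.5, which makes the data flow easy to trace by hand.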
The associated error for the pattern is determined from a comparison between each output activation y_k and its associated target value t_k. Based on this error, the factor δ_k (k = 1, …, m) is computed. During the backpropagation phase of learning, signals are sent in the reverse direction. δ_k is used to distribute the error from output unit Y_k back to all units in the previous layer (the hidden units that are connected to Y_k). The error information is then used to update the weights between the output and the hidden layer. In a similar manner, the factor δ_j (j = 1, …, p) propagates the error back to the input layer and updates the weights between the hidden and input layers. Taken as a whole, the adjustment to the weight w_jk is based on the factor δ_k and the activation z_j of the hidden unit Z_j. The adjustment to the weight v_ij is based on the factor δ_j and the activation x_i of the input unit. The training procedure for backpropagation is as follows:

Step 0: Initialize weights.
Step 1: While the stopping condition is false, do Steps 2-9.
Step 2: For each training pair, do Steps 3-8.
(Feed-forward phase)
Step 3: Each input unit X_i broadcasts its input signal x_i (i = 1, …, n) to all units in the hidden layer.
Step 4: Each hidden unit Z_j sums its weighted input signals, applies the activation function to compute z_j, and sends this signal to all units in the output layer.
Step 5: Each output unit Y_k sums its weighted input signals and applies the activation function to compute its output y_k.
(Backpropagation of error)
Step 6: Each output unit Y_k computes its error term δ_k from (t_k − y_k) and the derivative of the activation function, and computes its weight and bias correction terms.
Step 7: Each hidden unit Z_j sums its delta inputs from the output layer, computes its error term δ_j, and computes its weight and bias correction terms.
(Update weights and biases)
Step 8: Each output unit Y_k updates its bias and weights (j = 0, …, p), and each hidden unit Z_j updates its bias and weights (i = 0, …, n).
Step 9: Test the stopping condition.

NNs have been shown to be effective in many medical applications, such as the basic sciences (Abidi and Goh, 1998; Prank et al., 1998), clinical medicine (Bottaci and Drew, 1997; Pofahl et al., 1998), signal processing and interpretation (Lagerholm et al., 2000; Dybowski, 2000), and image processing (Poli and Valli, 1995; Ahmed and Farag, 1998), which discuss related research in these application domains.
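The training procedure above can be sketched as a complete loop in Python. This is an illustrative re-implementation, not the simulator's Visual Basic 6.0 code; the bipolar sigmoid activation, the learning rate, the weight-initialization range, and the fixed epoch count are assumptions made for the sketch:

```python
import math
import random

def train_backprop(samples, n_hidden=2, lr=0.5, epochs=10000, seed=1):
    """Train a single-hidden-layer net with plain backpropagation.

    samples : list of (inputs, targets) pairs, e.g. a bipolar truth table.
    Uses the bipolar sigmoid f(x) = 2/(1+e^-x) - 1, whose derivative is
    f'(x) = 0.5 * (1 + f(x)) * (1 - f(x)).
    Returns a predict(inputs) closure.
    """
    rng = random.Random(seed)
    n_in, n_out = len(samples[0][0]), len(samples[0][1])
    # Step 0: initialize weights (biases stored in the last row, index -1).
    v = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_in + 1)]
    w = [[rng.uniform(-0.5, 0.5) for _ in range(n_out)] for _ in range(n_hidden + 1)]
    f = lambda x: 2.0 / (1.0 + math.exp(-x)) - 1.0
    df = lambda fx: 0.5 * (1.0 + fx) * (1.0 - fx)   # derivative via activation value

    for _ in range(epochs):                         # Step 1: loop until done
        for x, t in samples:                        # Step 2: each training pair
            # Steps 3-5: feed-forward phase
            z = [f(v[-1][j] + sum(x[i] * v[i][j] for i in range(n_in)))
                 for j in range(n_hidden)]
            y = [f(w[-1][k] + sum(z[j] * w[j][k] for j in range(n_hidden)))
                 for k in range(n_out)]
            # Step 6: error term delta_k for each output unit
            dk = [(t[k] - y[k]) * df(y[k]) for k in range(n_out)]
            # Step 7: propagate the error back to each hidden unit (delta_j)
            dj = [df(z[j]) * sum(dk[k] * w[j][k] for k in range(n_out))
                  for j in range(n_hidden)]
            # Step 8: update weights and biases
            for k in range(n_out):
                for j in range(n_hidden):
                    w[j][k] += lr * dk[k] * z[j]
                w[-1][k] += lr * dk[k]
            for j in range(n_hidden):
                for i in range(n_in):
                    v[i][j] += lr * dj[j] * x[i]
                v[-1][j] += lr * dj[j]

    def predict(x):
        z = [f(v[-1][j] + sum(x[i] * v[i][j] for i in range(n_in)))
             for j in range(n_hidden)]
        return [f(w[-1][k] + sum(z[j] * w[j][k] for j in range(n_hidden)))
                for k in range(n_out)]
    return predict

# Bipolar AND truth table, as used later in the paper's experiments.
AND = [([-1, -1], [-1]), ([-1, 1], [-1]), ([1, -1], [-1]), ([1, 1], [1])]
net = train_backprop(AND)
```

After training, the sign of the network output matches the bipolar AND target for all four input pairs, mirroring the 100% training success reported later in the paper.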
Since the aim of this paper is to implement a backpropagation NN, three simple problems are taken as sample data: the logic gates AND, OR, and XOR. The truth tables of the logic gates are illustrated in Tables 1, 2, and 3.

Table 1: AND truth table

X1  X2  Y
0   0   0
0   1   0
1   0   0
1   1   1

Tables 1, 2, and 3 illustrate the sample data (binary inputs) that are used in the simulator. The simulator runs training and testing based on the backpropagation algorithm and then reports the results, indicated as Y.
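For reference, the three truth tables can be generated directly from Python's bitwise operators (the variable names here are illustrative, not part of the simulator):

```python
# The three sample data sets used by the simulator, generated from
# Python's bitwise operators on the binary inputs of Tables 1-3.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

AND = {(a, b): a & b for a, b in inputs}
OR  = {(a, b): a | b for a, b in inputs}
XOR = {(a, b): a ^ b for a, b in inputs}

# AND and OR are linearly separable; XOR is not, which is why the
# single-layer perceptron fails on it and a hidden layer is required.
```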

Development Of The Simulator
In order to develop the simulator, a system development methodology comprising five phases was used. As illustrated in Figure 6, the detailed Process 1.0 relates to parameter setting. The parameters involved are the activation function, input type, data allocation, stopping criteria, hidden units, weights, learning rate, momentum, and seed. These parameters affect the results of the training and testing processes.
The design phase involved the design of the user interface, which was built using Microsoft Visual Basic 6.0. A key interface-design issue is how to engage the user while at the same time making sure the user understands how NNs work. Figure 7 illustrates a screenshot of the simulator.
Figure 7: Graphical user interface (GUI) for the simulator

As shown in Figure 7, all the parameters can be altered on the same interface. This helps the user understand how the algorithm works. The bottom-right area of the GUI shows the output produced by the system. When the training process is executed, the file train.txt is created to store all training results, and the file weight.txt is created to store the weights from the simulator. After the training process is done, testing can be performed by clicking the View Test button, which executes the testing process.

Training And Testing Sample Data
This section discusses the results of training and testing. In this experiment, the AND logic gate with bipolar inputs and bipolar output was tested. The results are depicted in Figure 8.
From Figure 8, the total number of correct responses is 4 out of 4, equivalent to 100% training success. Based on the results, the output and the target are compared to analyze the differences. In this case, the difference (error) is very small, so this study infers that the training was successful. In relation to Table 1, the inputs and outputs were converted to bipolar form, and the outcomes are shown in Table 4. After the training process is complete, testing takes place to see whether the model works successfully. The testing results are provided in Figure 9, which shows that the percentage of correctness is again 100%: the difference between output and target is small, so this study concludes that the testing was also successful. By relating to Table 1, the inputs and outputs were converted to bipolar form, and the outcome can be seen in Table 5.
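The binary-to-bipolar conversion applied to Table 1 maps 0 to -1 and leaves 1 unchanged. A minimal sketch of this mapping (the helper name is an illustrative assumption):

```python
def to_bipolar(value):
    """Map a binary signal in {0, 1} to the bipolar form {-1, 1} via 2x - 1."""
    return 2 * value - 1

# The AND truth table of Table 1, converted to the bipolar form used in training.
binary_and = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
bipolar_and = [(tuple(to_bipolar(x) for x in xs), to_bipolar(t))
               for xs, t in binary_and]
```

Bipolar encoding is a common choice with the bipolar sigmoid activation, since zero-valued inputs would otherwise contribute nothing to the weight updates.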
Table 5: Comparison of the testing results and the actual target

Table 5 depicts the comparison between the target and the output produced by the simulator. The error is very small: every input yielded a result approaching 1. Besides the input, parameters such as the learning rate, momentum, and stopping criteria also enable the algorithm to learn and produce accurate results. The number of hidden units also helps the network learn through repeated weight updates; training continues until the error is less than 0.005, at which point the network stops the process.

Conclusion
In this study, we have verified that, using sample data representative of the desired task, NNs are able to approximate functions and behave like an associative memory. In addition, NNs are also capable of solving complex problems based on a large number of training data in a model-free estimator environment. This is their key advantage compared with traditional estimation approaches such as statistical methods. NNs estimate a function without a mathematical description of how the outputs functionally depend on the inputs, and they represent a good approach that is potentially robust and fault tolerant. In this simulator, we examined the properties of backpropagation NNs and the process of determining the appropriate network inputs and architecture, and built up the AND, OR, and XOR problems.

Future Work
The simulator can be further extended to support various kinds of data, with an automatic preprocessing mechanism included in the simulator. In addition, other neural network algorithms can be considered for inclusion in the simulator, to enhance its capability and extend the scope of the problems that the neural network can solve.

Figure 1: Biological neuron

The preliminary study involved collecting information on implementations of NN algorithms available on the World Wide Web. Since most NN program implementations focus on application development to solve a specific problem, the parameters used vary based on the problem the application needs to solve. As this study is intended to show how the NN algorithm works, very simple data, namely the logic gates, are used as input for the backpropagation algorithm. Based on the preliminary study, a context diagram was defined as in Fig. 4.

Figure 8: Training result for the AND logic gate

Figure 9: Testing result for the AND gate

Table 4: Comparison between the training results and the actual target

Table 4 shows that the training achieved 100% accuracy, which indicates that the backpropagation algorithm has the ability to learn from the data.