Introduction to Neural Networks: A Mechanism Taking Lessons From The Past

Neural networks are inspired by the human central nervous system. They are based on making mistakes and learning lessons from past errors. Once trained, they produce results very fast, like human reflexes, whereas the learning itself is a continuous process.

For instance, a kid who has never touched a hot surface might decide to touch it. However, he would not touch hot surfaces anymore once his hand has been burnt. Adapting this example to the neural network discipline makes the concept easier to grasp.

A basic neural network cell consists of inputs, weights, an activation function and an output. In this case, the touch decision is a boolean output, whereas the sense organs are the inputs of the human nervous system. For instance, the sense of sight could contribute to the network if smoke and a red glow are visible on the surface; that is an input. Weights specify the strength of an input on the output. For example, the palm of the hand is more sensitive than the back of the hand; in other words, the weight of the palm is greater than the weight of the back of the hand. Initially, the weight of detecting smoke and a red glow on a surface could be a low value for a kid. The weight of that input is updated when the kid's hand is burnt. Thus, the kid would not touch a hot surface anymore.
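To make the idea concrete, here is a minimal sketch of the inputs and weights of such a cell in Python. The input names and weight values are made up purely for illustration.

```python
# Hypothetical inputs for the hot-surface example: smoke seen, red glow seen
inputs  = [1.0, 1.0]
weights = [0.1, 0.2]   # low initial weights for an inexperienced kid

# Net input is the sum of input * weight products
net_input = sum(x * w for x, w in zip(inputs, weights))
print(net_input)  # 0.3
```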

The activation function deserves a few words. The net input (∑) is calculated as the sum of the products of the input and weight values, whereas the activation function calculates the net output. In other words, the activation function provides a threshold for reaction or pain. The withdrawal reflex is an example of this. A person who has touched a hot surface before withdraws his hand without thinking about it. This means the activation function produces a high value and the person decides to react. In contrast, the activation function produces a low value when he touches a cold surface, and then he would not react.
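As a sketch, a simple threshold (step) activation can turn the net input into a touch-or-withdraw decision; the threshold value of 0.5 below is an arbitrary assumption.

```python
def step_activation(net_input, threshold=0.5):
    """Fire (1, withdraw the hand) if the net input exceeds the threshold, otherwise 0 (touch)."""
    return 1 if net_input > threshold else 0

print(step_activation(0.3))  # 0 -> weak pain signal, the kid touches the surface
print(step_activation(1.0))  # 1 -> strong pain signal, withdrawal reflex
```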

Figure: Adapting Human Neuron to Artificial Neural Network Cell (Andrew L. Nelson)

Learning is applied similarly for neural networks. Firstly, a neural network model is created and the inputs of the network are defined. Deciding which inputs correlate with the output is part of the art. Secondly, random values are assigned to the weights. Thirdly, the attributes of historical instances are fed into the network and the output is calculated. The actual output value of each historical instance is already known. Then, the difference between the actual and the calculated output value is reflected onto the weights as an error. After that, the weights are updated (decreased or increased) based on the previous errors. Updating the weights is also called learning or training.
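A minimal sketch of this loop follows. It uses a perceptron-style weight update (new weight = old weight + learning rate × error × input); the training data, learning rate and initial weights are made-up values for illustration only.

```python
# Hypothetical historical instances: [smoke, red glow] -> hot (1) or not hot (0)
training_data = [
    ([1.0, 1.0], 1),
    ([0.0, 1.0], 0),
    ([1.0, 0.0], 0),
    ([0.0, 0.0], 0),
]

weights = [0.1, 0.2]      # randomly assigned initial weights
learning_rate = 0.3

def predict(inputs, weights, threshold=0.5):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net > threshold else 0

for epoch in range(10):
    for inputs, actual in training_data:
        calculated = predict(inputs, weights)
        error = actual - calculated                  # difference of actual and calculated output
        weights = [w + learning_rate * error * x     # reflect the error onto the weights
                   for w, x in zip(weights, inputs)]

print(weights)  # updated weights after training
```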

Figure: Neural Network Illustration from Hilary Mason's Keynote

A neural network system consists of multiple neural network cells. The output of one cell can be the input of another cell. Moreover, there are layers located between the inputs and the outputs; these are called hidden layers. Hidden layers make it possible to solve non-linear problems with this approach. Designing the network structure (deciding on the hidden layer depth and the number of units) changes depending on the problem; that is another part of the art.
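As a rough sketch of how cells are chained, the outputs of the hidden cells are fed as inputs to the next layer; the layer sizes and weight values below are arbitrary assumptions.

```python
def layer_output(inputs, layer_weights, threshold=0.5):
    """Each row of layer_weights belongs to one cell; its output feeds the next layer."""
    outputs = []
    for cell_weights in layer_weights:
        net = sum(x * w for x, w in zip(inputs, cell_weights))
        outputs.append(1 if net > threshold else 0)
    return outputs

inputs = [1.0, 0.0, 1.0]
hidden_weights = [[0.2, 0.4, 0.6],    # hidden cell 1
                  [0.5, 0.1, 0.3]]    # hidden cell 2
output_weights = [[0.7, 0.8]]         # single output cell

hidden = layer_output(inputs, hidden_weights)
final  = layer_output(hidden, output_weights)
print(final)  # network decision
```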

Figure: Complex Neural Network System

So, neural networks have the ability to learn, recall and forecast. They are mostly applied to regression and classification problems. Although the learning process has a high cost, they produce results very fast, just like human reflexes, once the learning process is complete and the weights have been updated, because the output calculation requires only addition and multiplication. That is why neural networks are one of the most common and also most powerful algorithms in machine learning studies.
