Evolution of Neural Networks

Today, AI is living its golden age, and neural networks are a major contributor to it. Neural networks change our lives without us even realizing it. They lie behind image, face and speech recognition, language translation, and even future prediction. However, they did not reach their present form in a day. Let's travel to the past and look at their earlier forms.

Brief history of neural networks


The perceptron idea was produced in the 1950s. It updates weights, then decides and reacts based on a threshold. In other words, learning was handled by this historical form of neural network for the first time. In those days, common logic functions such as AND, OR and NOT could be solved by the invention. Thus, people believed that they were living in AI's golden age. But that was not true.
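
The learning rule described above can be sketched in a few lines. This is a minimal illustration (the function names and learning-rate choice are my own, not from any historical source): weights move by the classic perceptron update w += lr * (target - output) * x until the threshold unit reproduces the AND truth table.

```python
# Minimal perceptron sketch trained on the AND function.
# Update rule: w += lr * (target - output) * x, plus the same update for the bias.

def step(z):
    """Threshold activation: fire (1) when the weighted sum reaches 0."""
    return 1 if z >= 0 else 0

def train_perceptron(samples, lr=1, epochs=10):
    w = [0, 0]  # weights for the two inputs
    b = 0       # bias plays the role of the threshold
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = step(w[0] * x1 + w[1] * x2 + b)
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# AND truth table: fires only when both inputs are 1
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_data]
print(predictions)  # [0, 0, 0, 1] — the perceptron has learned AND
```

Swapping `and_data` for the OR or NOT truth table works just as well, which is exactly why these functions made the perceptron look so promising at the time.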


In the 1970s, it was revealed that the perceptron cannot learn the XOR function. Moreover, it was proven that solving this logic function with a single perceptron is impossible. Thus, people realized that they were still living in AI's dark age.

Previous applications such as the AND, OR and NOT logic functions are linear problems, whereas the XOR function is a non-linear problem. In other words, its outputs cannot be separated by a single straight line in a two-dimensional graph.
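
One way to see this concretely is to search for a separating line by brute force. The sketch below (my own illustration, not a formal proof) tries every threshold unit step(w1*x1 + w2*x2 + b) over a small grid of weights: a solution turns up for AND, but none exists for XOR, no matter how fine the grid, because XOR is genuinely not linearly separable.

```python
# Brute-force search for a single threshold unit that reproduces a truth table.
# A hit for AND and a miss for XOR illustrates (not proves) linear separability.
import itertools

def step(z):
    return 1 if z >= 0 else 0

def separable(truth_table):
    grid = [k / 2 for k in range(-6, 7)]  # candidate weights/bias: -3.0 .. 3.0
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all(step(w1 * x1 + w2 * x2 + b) == y for (x1, x2), y in truth_table):
            return True  # found a separating line
    return False

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(separable(AND))  # True  (e.g. w1=1, w2=1, b=-1.5)
print(separable(XOR))  # False (no single line works)
```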


In the 1980s, it was understood that adding layers between the inputs and outputs of a perceptron makes it possible to handle non-linear problems. These layers are called hidden layers, and this type of perceptron is called a multilayer perceptron. In this way, the real golden age started. One of the creators of this idea is Geoffrey Hinton. We will come across him again later, so keep his name in mind!

Additionally, in theory, any function can be learned if the approach can solve the XOR logic function. Even though XOR was a challenge in the 1970s, today it is the ABC of machine learning studies.
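
The hidden-layer idea can be demonstrated with hand-picked weights (chosen here for illustration, not learned by training): one hidden unit computes OR, another computes NAND, and the output unit ANDs them together, which yields exactly XOR.

```python
# A minimal multilayer perceptron solving XOR with fixed, hand-picked weights.
# Hidden layer: OR and NAND of the inputs; output layer: AND of the hidden units.

def step(z):
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: OR(x1, x2)
    h2 = step(-x1 - x2 + 1.5)   # hidden unit 2: NAND(x1, x2)
    return step(h1 + h2 - 1.5)  # output unit: AND(h1, h2) == XOR(x1, x2)

outputs = [xor_net(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # [0, 1, 1, 0] — XOR, which no single perceptron can produce
```

Each individual unit still draws only a straight line; stacking them lets the network carve out the region between two lines, which is all XOR needs.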


Neural networks were ahead of their time! Even though humanity had the cure, it did not have enough computation power in those days. But humans are vanity-driven beings: they blamed neural networks instead of the computation power they had. This caused neural networks to hibernate through the 1990s, and researchers were channeled into inventing a new algorithm, the support vector machine (SVM).


In the 2000s, Geoffrey Hinton took the stage again! He transformed the concept of neural networks into deep learning, which includes many hidden layers. The rise in computation power allowed this idea to be adopted in academia and industry. But it still did not become widespread in personal use.


In the 2010s, graphics processing units (GPUs) became widespread in personal use, enabling big data to be processed in parallel. Moreover, cloud services made it possible to consume this power without investing in any hardware. That was the breaking point: it revealed that deep learning defeats the older learning algorithms.

Why deep learning? (from "What data scientists should know about deep learning", Andrew Ng)

Today, deep learning stands behind most of the challenging technologies such as speech recognition, image recognition and language translation. In particular, think about the developments in language translation in recent years.

So, neural networks passed through a period of stagnation, yet in the end they emerged as the winner among machine learning algorithms. And history writes Geoffrey Hinton's name in letters of gold: he has twice changed the destiny of the field. Recently, he announced capsule networks and triggered the third leap in the AI world. We are grateful to him.

Geoffrey Hinton, Godfather of Neural Networks and Deep Learning

