Tag Archives: backpropagation

Homer Simpson Guide to Backpropagation


Homer Simpson has a low IQ of 55

The backpropagation algorithm is based on complex mathematical calculations. That is why it is hard to understand, and it is why many people keep their distance from neural networks. Adapting the concept to a real-world scenario makes it easy enough for even Homer Simpson to figure out. In this post, we'll show how to explain backpropagation to beginners.

What if an approved loan application turns into an outstanding (or dead) loan? The bank loses money. So, how can the financial institution learn a lesson from this mistake?

Loan approval is a process. In other words, an application has to be examined by multiple authorized employees in turn. For instance, a customer applies to a branch agent, the agent delivers the application to the branch supervisor or branch manager, and once the branch manager approves it, head office employees examine it. To sum up, a loan application follows a path and passes through the hands of the employees in charge. Should these employees be responsible for the loss? According to backpropagation, the answer is yes.

The backpropagation algorithm proposes to reflect the lost amount back along the same path, but in reverse. That is why it is named back-propagation. Fine the head office employees first, then punish the branch manager, supervisor, and agent, in that order. Moreover, how much of the total loss should be reflected to the branch agent? The total loss should be divided among the responsible employees in proportion to their contributions to it. (Formally, that is the derivative of the total loss with respect to the employee, e.g. ∂TotalLoss / ∂BranchAgent.) In this way, these employees will be more careful next time. That is the principle of the backpropagation algorithm: the examination process improves over time.

As the phrase goes, backpropagation advises slapping everyone on the traversed path, backwards and in proportion to their contribution to the total error. I would like to thank Dr. Alper Ozpinar for this metaphor.
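The metaphor above can be sketched in a few lines of code: given a total loss and each employee's share of responsibility (standing in for the partial derivatives), distribute the penalty proportionally. The employee order and all numbers below are illustrative assumptions, not from the post.

```java
// Illustrative sketch of the metaphor: split a total loss among the
// employees on the approval path, in proportion to their contribution.
public class BlameBackprop {
    // contributions play the role of the derivatives dTotalLoss/dEmployee
    static double[] distributeLoss(double totalLoss, double[] contributions) {
        double sum = 0;
        for (double c : contributions) sum += c;
        double[] penalties = new double[contributions.length];
        for (int i = 0; i < contributions.length; i++) {
            // penalty_i = totalLoss * (contribution_i / total contribution)
            penalties[i] = totalLoss * contributions[i] / sum;
        }
        return penalties;
    }

    public static void main(String[] args) {
        // head office, branch manager, supervisor, agent (back to front)
        double[] contributions = {0.4, 0.3, 0.2, 0.1};
        double[] penalties = distributeLoss(10000.0, contributions);
        for (double p : penalties) System.out.println(p); // 4000, 3000, 2000, 1000
    }
}
```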


Batman backpropagates Robin

Step Function as a Neural Network Activation Function

Activation functions are the decision-making units of neural networks; they calculate the net output of a neural node. Herein, the Heaviside step function is one of the most common activation functions in neural networks. The function produces binary output, which is why it is also called the binary step function. It produces 1 (or true) when the input passes a threshold, and 0 (or false) when it does not. That is why step functions are very useful for binary classification studies.
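The function described above is a one-liner; a minimal sketch (the threshold value 0.5 below is an illustrative assumption):

```java
// Minimal sketch of the Heaviside (binary) step activation function.
public class StepActivation {
    // returns 1 when the input passes the threshold, 0 otherwise
    static int step(double input, double threshold) {
        return input >= threshold ? 1 : 0;
    }

    public static void main(String[] args) {
        System.out.println(step(0.7, 0.5)); // fires: prints 1
        System.out.println(step(0.3, 0.5)); // does not fire: prints 0
    }
}
```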


Heaviside Step Function Dance Move

Human reflexes act on the same principle. A person withdraws his hand when he touches a hot surface, because his sensory neurons detect the high temperature and fire. Passing the threshold triggers the response, and the withdrawal reflex is taken. You can think of the true output as causing the firing action.

Continue reading

Backpropagation Implementation: Neural Networks Learning From Theory To Action

We’ve focused on the math behind neural network learning and the proof of the backpropagation algorithm. Let’s face it: the mathematical background of the algorithm is complex. An implementation might make the discipline easier to figure out.

Now, it’s implementation time. We will transform the extracted formulas into code. I prefer to implement the core algorithm in Java. This post also serves as a tutorial for the neural network project that I’ve already shared on my GitHub profile. You might play around with the code before reading this post.


A non-linear sine wave is chosen as the dataset. The same dataset was used in the time-series post, so we’ll be able to compare the prospective forecasts of both the neural network and the time-series approaches. Basically, a random point on the wave is predicted based on the previous known points.
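Building such a dataset can be sketched as follows: each instance takes a window of consecutive sine values as inputs and the next value as the target. The window size and sampling step below are my assumptions, not taken from the project.

```java
// Sketch: build training instances from a sine wave, where the previous
// `window` points are the inputs and the following point is the target.
public class SineDataset {
    static double[][] buildInstances(int count, int window, double step) {
        double[][] instances = new double[count][window + 1];
        for (int i = 0; i < count; i++) {
            for (int j = 0; j <= window; j++) {
                // columns 0..window-1 are inputs; the last column is the target
                instances[i][j] = Math.sin((i + j) * step);
            }
        }
        return instances;
    }

    public static void main(String[] args) {
        double[][] data = buildInstances(100, 4, 0.1);
        // first instance: inputs sin(0.0)..sin(0.3), target sin(0.4)
        System.out.println(data[0][4]);
    }
}
```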

Continue reading

The Math Behind Neural Networks Learning with Backpropagation

Neural networks are one of the most powerful machine learning algorithms. However, their background might confuse brains because of the complex mathematical calculations involved. In this post, the math behind the neural network learning algorithm and the state of the art are covered.


Backpropagation is a very common algorithm for implementing neural network learning. The algorithm basically includes the following steps for all historical instances. Firstly, feed-forward propagation is applied (left to right) to compute the network output. That is the forecast value, whereas the actual value is already known. Secondly, the difference between the forecast and the actual value is calculated; it is called the error. Thirdly, the error is reflected back to all the weights, and the weights are updated based on the calculated error. Finally, these procedures are repeated for a chosen epoch count (e.g. epoch = 1000).
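The four steps above can be sketched for the simplest possible network, a single sigmoid neuron. This is a toy illustration under my own assumptions (learning rate, epoch count, and the two-instance dataset are all made up), not the project's actual code.

```java
// Toy sketch of the training loop: forward pass, error, weight update, epochs.
public class TinyBackprop {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // trains a single sigmoid neuron on a toy dataset; returns {w, b}
    static double[] train(int epochs, double lr) {
        double[] inputs  = {0, 1};  // historical instances
        double[] actuals = {0, 1};  // known actual values
        double w = 0.1, b = 0.0;
        for (int epoch = 0; epoch < epochs; epoch++) {           // step 4: repeat per epoch
            for (int i = 0; i < inputs.length; i++) {
                double forecast = sigmoid(w * inputs[i] + b);    // step 1: feed forward
                double error = forecast - actuals[i];            // step 2: forecast - actual
                double grad = error * forecast * (1 - forecast); // step 3: reflect error back
                w -= lr * grad * inputs[i];                      //         update the weight
                b -= lr * grad;                                  //         update the bias
            }
        }
        return new double[]{w, b};
    }

    public static void main(String[] args) {
        double[] p = train(1000, 0.5);
        System.out.println(sigmoid(p[0] * 0 + p[1])); // close to 0
        System.out.println(sigmoid(p[0] * 1 + p[1])); // close to 1
    }
}
```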

Continue reading