Category Archives: Machine Learning

Step Function as a Neural Network Activation Function

Activation functions are the decision-making units of neural networks; they calculate the net output of a neural node. Herein, the Heaviside step function is one of the most common activation functions in neural networks. The function produces binary output, which is why it is also called the binary step function. It produces 1 (or true) when the input passes a threshold and 0 (or false) when it does not. That is why it is very useful for binary classification studies.
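
For illustration only (this sketch is not from the original post, and the threshold value is an assumption), a binary step activation can be expressed in a few lines of Java:

// Minimal sketch of a Heaviside (binary) step activation.
// The threshold value passed in main is an assumed example, not taken from the post.
public class StepActivation {
    public static double activate(double netInput, double threshold) {
        return netInput >= threshold ? 1.0 : 0.0; // fires (true) only past the threshold
    }

    public static void main(String[] args) {
        System.out.println(activate(0.7, 0.5)); // 1.0 -> neuron fires
        System.out.println(activate(0.2, 0.5)); // 0.0 -> neuron stays silent
    }
}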

Heaviside Step Function Dance Move

Human reflexes act on the same principle. A person withdraws his hand when he touches a hot surface, because his sensory neurons detect the high temperature and fire. Passing the threshold triggers a response, and the withdrawal reflex is taken. You can think of the true output as the firing action.

Continue reading

Homer Sometimes Nods: Error Metrics in Machine Learning

Even the worthy Homer sometimes nods. The idiom means that even the most gifted person occasionally makes mistakes. We can adapt this sentence to the machine learning lifecycle: even the best ML models should make some mistakes (otherwise they are overfitting). The important thing is knowing how to measure errors. There are lots of metrics for measuring forecasts. In this post, we will cover evaluation metrics that are meaningful for ML studies.

Homer Simpson uses his catchphrase "D'oh!" when he has done something wrong

The sign of the difference between actual and predicted values should not be considered when calculating the total error of a system. Otherwise, a series containing equally large underestimations and overestimations might measure a very low total error. In fact, the total error should only be measured as low when both underestimations and overestimations are small. Discarding the sign removes this misleading effect, and squaring the differences is a way to discard it. This metric is called Mean Squared Error, or mostly MSE.
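
As a small illustration (the numbers below are made up for the example), MSE can be computed in Java as follows; notice how squaring removes the sign of each difference:

// Minimal sketch of Mean Squared Error (MSE); the sample arrays are illustrative only.
public class MeanSquaredError {
    public static double mse(double[] actual, double[] predicted) {
        double sum = 0;
        for (int i = 0; i < actual.length; i++) {
            double diff = actual[i] - predicted[i]; // sign disappears after squaring
            sum += diff * diff;
        }
        return sum / actual.length;
    }

    public static void main(String[] args) {
        double[] actual = {10, 20, 30};
        double[] predicted = {12, 18, 30}; // one overestimation, one underestimation
        System.out.println(mse(actual, predicted)); // (4 + 4 + 0) / 3 = 2.67 approximately
    }
}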

Continue reading

AI: a one-day wonder or an everlasting challenge

Debates between humans and computers start with the Mechanical Turk, a historical autonomous chess player constructed in the 18th century. However, it was a fake: the mechanism allowed a chess player to hide inside the machine, so the Turk operated while concealing the master playing chess. (Yes, just like Anthony Daniels and Kenny Baker hid inside C-3PO and R2-D2 in Star Wars.) So, there was no intelligence in this ancient example. Still, this fake machine shows that 18th-century people already expected an intelligent system to be involved in daily life.

IBM Deep Blue was the first chess-playing computer to win against a world champion. Garry Kasparov was defeated by Deep Blue in 1997. Interestingly, development of Deep Blue had begun in 1985 at Carnegie Mellon University (remember this university). In other words, success came after 12 years of work.

Continue reading

Adaptive Learning in Neural Networks

Gradient descent is one of the most powerful optimization methods. However, learning time is a challenge: the standard version of gradient descent learns slowly. That's why some modifications are included in gradient descent in real-world applications. These approaches are applied to converge faster. In the previous post, incorporating momentum was already mentioned as serving the same purpose. Now, we will focus on applying adaptive learning to learn faster.

Learning Should Adapt To The Environment Like Chameleons (Rango, 2011)

As you might remember, weights are updated by the following formula in backpropagation.

wi = wi − α · (∂Error / ∂wi)

Alpha refers to the learning rate in the formula. Applying an adaptive learning rate means increasing or decreasing alpha based on how the cost changes. The code block in the full post realizes this process.
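
Before reading the full post, here is a rough sketch of the idea (not the post's own code; the growth and decay factors are assumptions): increase alpha while the cost keeps falling, and cut it back sharply when the cost rises.

// Sketch of an adaptive learning rate rule; the factors 1.05 and 0.7 are assumed, not from the post.
public class AdaptiveLearningRate {
    public static double adapt(double alpha, double previousCost, double currentCost) {
        if (currentCost < previousCost) {
            return alpha * 1.05; // cost is falling: grow the learning rate a little
        } else {
            return alpha * 0.7;  // cost rose: shrink the learning rate aggressively
        }
    }

    public static void main(String[] args) {
        double alpha = 0.1;
        alpha = adapt(alpha, 0.52, 0.48); // cost decreased -> alpha grows slightly
        System.out.println(alpha);
        alpha = adapt(alpha, 0.48, 0.55); // cost increased -> alpha shrinks
        System.out.println(alpha);
    }
}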

Continue reading

Incorporating Momentum Into Neural Networks Learning

Newton’s cradle is the most popular example of conservation of momentum. A lifted and released sphere strikes the stationary spheres, and the force is transmitted through them, pushing the last sphere upward. This shows that the last ball receives the momentum of the first ball. We can apply a similar principle in neural networks to improve learning speed. The idea of including momentum in neural network learning is to incorporate the previous update into the current change.

Newton’s Cradle Demonstrates Conservation of Momentum

Gradient descent is guaranteed to reach a local minimum as the number of iterations approaches infinity. However, that is not applicable in practice: gradient descent iterations have to be terminated after a reasonable number of steps. Moreover, gradient descent converges slowly. Herein, momentum improves the performance of gradient descent considerably. Thus, the cost might converge faster, with fewer iterations, if momentum is included in the weight update formula.
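
As a hedged sketch (not the post's implementation; alpha and the momentum coefficient below are assumed values), the momentum term simply adds a fraction of the previous weight update to the current one:

// Sketch of a momentum-based weight update.
public class MomentumUpdate {
    static double alpha = 0.1;      // learning rate (assumed)
    static double momentum = 0.9;   // fraction of the previous update to carry over (assumed)
    static double previousUpdate = 0.0;

    public static double updateWeight(double weight, double gradient) {
        double update = -alpha * gradient + momentum * previousUpdate;
        previousUpdate = update; // remember this step for the next iteration
        return weight + update;
    }

    public static void main(String[] args) {
        double w = 0.5;
        w = updateWeight(w, 0.2); // first step: plain gradient descent
        w = updateWeight(w, 0.2); // second step: the previous update reinforces the move
        System.out.println(w);
    }
}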

Continue reading

Working on Trained Neural Networks in Weka

Applying neural networks can be divided into two phases: learning and forecasting. The learning phase has a high cost, whereas the forecasting phase produces results very quickly. The epoch value (aka training time), the network structure and the size of the historical data determine the cost of the learning phase. Normally, a larger epoch value produces better results. However, increasing the epoch value also makes training take longer. That's why picking a very large epoch value would not be practical for online transactions if learning had to be performed instantly.

However, we can run the learning and forecasting steps asynchronously. We can perform neural network learning as a batch application (e.g. a periodic day-end or month-end calculation). Thus, the epoch value can be set very high, and the weights of the neural network can be calculated under low system load (most probably in the late night hours). In this way, it does not matter how long neural network learning lasts, and we can still make forecasts for online transactions in milliseconds. You might imagine this approach as the human nervous system updating its own weights while sleeping.
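
A rough sketch of this batch-then-serve idea with Weka follows (the file names, epoch value and dataset are placeholders, not taken from the post): train and serialize the model offline, then load it later for near-instant forecasts.

import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.SerializationHelper;
import weka.core.converters.ConverterUtils.DataSource;

public class BatchTrainOnlineForecast {
    public static void main(String[] args) throws Exception {
        // Batch phase (e.g. a late-night job): train with a large epoch value and save the model.
        Instances train = new DataSource("training.arff").getDataSet(); // placeholder file name
        train.setClassIndex(train.numAttributes() - 1);

        MultilayerPerceptron mlp = new MultilayerPerceptron();
        mlp.setTrainingTime(10000); // epoch value can be large because this runs offline
        mlp.buildClassifier(train);
        SerializationHelper.write("mlp.model", mlp); // persist the learned weights

        // Online phase: load the trained network and forecast in milliseconds.
        MultilayerPerceptron trained =
                (MultilayerPerceptron) SerializationHelper.read("mlp.model");
        double forecast = trained.classifyInstance(train.instance(0)); // any new instance works here
        System.out.println(forecast);
    }
}

The serialized model file can then be handed to the online system, so the forecasting code never pays the training cost.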

Even Superheroes Need to Rest

Continue reading

Building Neural Networks with Weka In Java

Building neural network models and implementing learning involve a lot of math, and this can be boring. Herein, some tools help researchers to build networks easily. Thus, a researcher who knows the basic concepts of neural networks can build a model without applying any math formulas.

Weka is one of the most common tools for machine learning studies. It is a Java-based API developed at the University of Waikato, New Zealand. Weka is an acronym for Waikato Environment for Knowledge Analysis. Actually, the name of the tool is a funny word play, because the weka is a bird species endemic to New Zealand. Thus, the researchers introduced an endemic bird to the whole world.
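
As a taste of what the full post covers (the dataset name and network parameters below are assumptions, not the post's exact configuration), a multilayer perceptron can be built in Weka with just a few calls:

import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class BuildNetworkWithWeka {
    public static void main(String[] args) throws Exception {
        // Load a dataset; the file name is a placeholder.
        Instances data = new DataSource("dataset.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1); // last attribute is the class

        // Configure a multilayer perceptron without writing any math.
        MultilayerPerceptron mlp = new MultilayerPerceptron();
        mlp.setHiddenLayers("4");  // one hidden layer with 4 nodes (assumed structure)
        mlp.setLearningRate(0.3);  // alpha in the weight update formula
        mlp.setMomentum(0.2);      // momentum term discussed in the earlier post
        mlp.setTrainingTime(500);  // epoch value

        mlp.buildClassifier(data); // Weka handles backpropagation internally
        System.out.println(mlp);   // prints the learned weights
    }
}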

Continue reading