Gradient descent is one of the most powerful optimization methods. However, learning time is a challenge, too. The standard version of gradient…
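For reference, vanilla gradient descent nudges each weight $w$ against the gradient of the error $E$, with $\alpha$ as the learning rate:

```latex
w_{t+1} = w_t - \alpha \, \frac{\partial E}{\partial w_t}
```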
Incorporating Momentum Into Neural Networks Learning
Newton’s cradle is the most popular example of momentum conservation. A lifted and released sphere strikes the stationary spheres and…
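In learning terms, classical momentum carries a fraction $\mu$ of the previous weight change into the current step, on top of the usual gradient term:

```latex
\Delta w_t = -\alpha \, \frac{\partial E}{\partial w_t} + \mu \, \Delta w_{t-1},
\qquad
w_{t+1} = w_t + \Delta w_t
```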
Even Superheroes Need to Rest: Working on Trained Neural Networks in Weka
Applying neural networks can be divided into two phases: learning and forecasting. The learning phase has a high cost, whereas forecasting…
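That split is why trained models are worth persisting. A minimal sketch of the train-once, forecast-many idea with Weka's SerializationHelper; the file name and the commented-out training data are placeholders, not details from the post:

```java
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.SerializationHelper;

public class ReuseTrainedNetwork {
    public static void main(String[] args) throws Exception {
        // Learning phase (costly): train a network once, then persist it to disk.
        MultilayerPerceptron mlp = new MultilayerPerceptron();
        // mlp.buildClassifier(trainingData); // trainingData: a weka.core.Instances set
        SerializationHelper.write("trained.model", mlp);

        // Forecasting phase (cheap): reload the trained network and predict.
        MultilayerPerceptron restored =
                (MultilayerPerceptron) SerializationHelper.read("trained.model");
        // double prediction = restored.classifyInstance(newInstance);
    }
}
```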
Building Neural Networks with Weka in Java
Building neural network models and implementing learning involves lots of math, and this might be boring. Herein, some tools help researchers to…
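As a hedged illustration of how such a tool hides the math, here is a small Weka sketch in Java; the ARFF file name and the parameter values are assumptions for the example, not settings from the post:

```java
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class BuildNetwork {
    public static void main(String[] args) throws Exception {
        // Load a data set; the last attribute is assumed to be the class.
        Instances data = new DataSource("dataset.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Configure a multilayer perceptron; Weka runs backpropagation internally.
        MultilayerPerceptron mlp = new MultilayerPerceptron();
        mlp.setLearningRate(0.1);  // step size for gradient descent
        mlp.setMomentum(0.2);      // fraction of the previous weight change
        mlp.setTrainingTime(500);  // number of training epochs
        mlp.setHiddenLayers("3");  // one hidden layer with 3 nodes

        mlp.buildClassifier(data); // learning happens here, no manual math
        System.out.println(mlp);   // prints the learned weights
    }
}
```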
Hyperbolic Tangent as Neural Network Activation Function
In neural networks, the hyperbolic tangent function can be used as the activation function as an alternative to the sigmoid function. When you…
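For reference, the function and the derivative that backpropagation needs:

```latex
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}},
\qquad
\frac{d}{dx}\tanh(x) = 1 - \tanh^2(x)
```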
Backpropagation Implementation: Neural Networks Learning From Theory To Action
We’ve focused on the math behind neural network learning and the proof of the backpropagation algorithm. Let’s face it, the mathematical background…
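As a taste of that action, a minimal self-contained sketch of repeated backpropagation updates for a single sigmoid unit; the variable names and values are illustrative, not taken from the post:

```java
public class BackpropStep {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double input = 0.5, weight = 0.3, target = 1.0, learningRate = 0.1;

        for (int epoch = 0; epoch < 100; epoch++) {
            double output = sigmoid(input * weight);      // forward pass
            double error = target - output;               // prediction error
            double delta = error * output * (1 - output); // sigmoid derivative term
            weight += learningRate * delta * input;       // gradient step on the weight
        }
        System.out.println("learned weight: " + weight);
    }
}
```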
The Math Behind Neural Networks Learning with Backpropagation
Neural networks are one of the most powerful machine learning algorithms. However, their background might confuse brains because of complex…
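The core of that math is a single application of the chain rule: the error's sensitivity to a weight $w_{ij}$ factors through the neuron's output $o_j$ and its net input $net_j$:

```latex
\frac{\partial E}{\partial w_{ij}}
= \frac{\partial E}{\partial o_j}
\cdot \frac{\partial o_j}{\partial net_j}
\cdot \frac{\partial net_j}{\partial w_{ij}}
```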
Sigmoid Function as Neural Network Activation Function
The sigmoid function (aka logistic function) is mostly picked as the activation function in neural networks because its derivative is easy to demonstrate…
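A one-line demonstration of that convenience, since the derivative can be expressed through the function's own value:

```latex
\sigma(x) = \frac{1}{1 + e^{-x}},
\qquad
\sigma'(x) = \frac{e^{-x}}{(1 + e^{-x})^2} = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)
```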
Introduction to Neural Networks: A Mechanism Taking Lessons From The Past
Neural networks are inspired by the human central nervous system. They are based on making mistakes and learning lessons from past errors. They…
Exponential Smoothing: A Forecasting Approach Smoke Pleasure Triggered
Smoothing methods basically generalize a time series based on the seasonal effects and trends of previous observations. In this way, these methods…
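For reference, simple exponential smoothing blends the newest observation $x_t$ with the previous smoothed value $S_{t-1}$ through a smoothing factor $0 < \alpha < 1$:

```latex
S_t = \alpha \, x_t + (1 - \alpha) \, S_{t-1}
```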