Softplus as a Neural Network Activation Function

The activation unit calculates the net output of a cell in a neural network. The backpropagation algorithm multiplies by the derivative of the activation function, so the chosen activation function must be differentiable. For example, the step function is useless in backpropagation because its derivative is zero almost everywhere, so no gradient can flow through it. Strictly speaking this is not a must, but researchers tend to use activation functions with meaningful derivatives. That is why sigmoid and hyperbolic tangent are the most common activation functions in the literature. Softplus is a newer function than sigmoid and tanh; it was first introduced in 2001. It is an alternative to the traditional functions because it is differentiable and its derivative is easy to demonstrate. Besides, it has a surprising derivative!

[Figure: Softplus function dance move (imaginary)]

Softplus function: f(x) = ln(1 + e^x)

The function is illustrated below.

[Figure: Plot of the softplus function]

Outputs produced by the sigmoid and tanh functions have both upper and lower limits, whereas the softplus function produces outputs in the range (0, +∞). That is the essential difference.
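To make that difference concrete, here is a minimal sketch (using only Python's standard math module) comparing sigmoid, tanh, and softplus on growing inputs; the helper function names are my own, not part of any library:

```python
import math

def sigmoid(x):
    # Bounded in (0, 1): saturates for large |x|
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    # Bounded below by 0, unbounded above: grows roughly like x for large x
    return math.log(1.0 + math.exp(x))

for x in [0, 2, 5, 10]:
    # sigmoid and tanh flatten out toward 1, softplus keeps climbing
    print(x, round(sigmoid(x), 4), round(math.tanh(x), 4), round(softplus(x), 4))
```

For x = 10, sigmoid and tanh are already indistinguishable from 1, while softplus is about 10.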

Derivative

You might remember that the derivative of ln(x) is 1/x. Let's apply that rule, together with the chain rule, to the softplus function.

f'(x) = dy/dx = (ln(1 + e^x))' = (1 / (1 + e^x)) · (1 + e^x)' = (1 / (1 + e^x)) · e^x = e^x / (1 + e^x)

So we have calculated the derivative of the softplus function. However, we can transform this derivative into an alternative form. Let's express the denominator as a multiple of e^x.

dy/dx = e^x / (1 + e^x) = e^x / (e^x · (e^(-x) + 1))

Now the numerator and denominator both include e^x, so we can cancel it and simplify the fraction.

dy/dx = 1 / (1 + e^(-x))

So that is the derivative of the softplus function in a simpler form. You might notice that this derivative is exactly the sigmoid function. Softplus and sigmoid are like Russian dolls: one nested inside the other!
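The claim is easy to verify numerically: a central finite difference of softplus should agree with sigmoid at any point. This is a quick sanity check rather than a proof, and the function names below are my own:

```python
import math

def softplus(x):
    return math.log(1.0 + math.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

h = 1e-6  # small step for the finite-difference approximation
for x in [-3.0, -1.0, 0.0, 2.0, 4.0]:
    numeric = (softplus(x + h) - softplus(x - h)) / (2 * h)  # approximate derivative of softplus
    print(x, numeric, sigmoid(x))  # the two columns agree to several decimal places
```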

[Figure: The sigmoid function. Surprisingly, the derivative of softplus is sigmoid]

To sum up, the following equation and derivative belong to the softplus function. We can use softplus as an activation function in our neural network models.

f(x) = ln(1 + e^x)

dy/dx = 1 / (1 + e^(-x))
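As a final illustration, here is a toy forward pass for a single neuron that uses softplus as its activation, along with the derivative a backpropagation step would use. The weight, bias, and input values are arbitrary examples of my own choosing:

```python
import math

def softplus(x):
    return math.log(1.0 + math.exp(x))

def softplus_derivative(x):
    # The derivative of softplus is simply the sigmoid function
    return 1.0 / (1.0 + math.exp(-x))

# Toy neuron: two inputs with fixed weights and a bias (arbitrary example values)
weights = [0.4, -0.7]
bias = 0.1
inputs = [1.0, 2.0]

net = sum(w, ) if False else sum(w * i for w, i in zip(weights, inputs)) + bias  # net input: 0.4*1.0 + (-0.7)*2.0 + 0.1 = -0.9
out = softplus(net)                 # activation output
grad = softplus_derivative(net)     # factor the backpropagation step multiplies by

print(net, out, grad)
```

Note that `grad` costs almost nothing extra to compute once the sigmoid is available, which is one practical appeal of this activation pair.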
