The activation unit calculates the net output of a neural cell in a neural network. The backpropagation algorithm multiplies by the derivative of the activation function, so the chosen activation function has to be differentiable. For example, the step function is useless in backpropagation because its derivative is zero almost everywhere, so no gradient can flow through it. This is not a strict requirement, but researchers tend to use activation functions which have meaningful derivatives. That's why sigmoid and hyperbolic tangent are the most common activation functions in the literature. Softplus is a newer function than sigmoid and tanh; it was first introduced in 2001. Softplus is an alternative to the traditional functions because it is differentiable and its derivative is easy to demonstrate. **Besides, it has a surprising derivative!**

Softplus function: f(x) = ln(1+e^{x})
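A minimal sketch of the function in plain Python (the helper name `softplus` is my own):

```python
import math

def softplus(x):
    """Softplus: f(x) = ln(1 + e^x)."""
    return math.log(1.0 + math.exp(x))

print(softplus(0.0))  # ln(2) ≈ 0.6931
```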

And the function is illustrated below.

Outputs produced by the sigmoid and tanh functions have upper and lower limits, whereas the softplus function produces outputs in the range (0, +∞). That's the essential difference.
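The difference in output ranges can be seen by evaluating the three functions side by side (a small sketch, with hand-rolled `sigmoid` and `softplus` helpers):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    return math.log(1.0 + math.exp(x))

# sigmoid stays in (0, 1) and tanh in (-1, 1),
# while softplus keeps growing without an upper bound
for x in [-5.0, 0.0, 5.0]:
    print(x, sigmoid(x), math.tanh(x), softplus(x))
```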

## Derivative

You might remember that the derivative of ln(x) is 1/x. Let's apply this rule, together with the chain rule, to the softplus function.

f'(x) = dy/dx = (ln(1+e^{x}))' = (1/(1+e^{x})) · (1+e^{x})' = (1/(1+e^{x})) · e^{x} = e^{x} / (1+e^{x})
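We can sanity-check this result numerically with a central finite difference (the helper names below are my own):

```python
import math

def softplus(x):
    return math.log(1.0 + math.exp(x))

def softplus_derivative(x):
    # e^x / (1 + e^x), straight from the chain rule above
    return math.exp(x) / (1.0 + math.exp(x))

x, h = 1.3, 1e-6
numeric = (softplus(x + h) - softplus(x - h)) / (2.0 * h)
print(abs(numeric - softplus_derivative(x)))  # tiny; the forms agree
```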

So, we've calculated the derivative of the softplus function. However, we can transform this derivative into an alternative form. Let's factor e^{x} out of the denominator.

dy/dx = e^{x} / (1+e^{x}) = e^{x} / ( e^{x} · (e^{-x} + 1) )

Now the numerator and denominator both include e^{x}, so we can simplify the fraction.

**dy/dx = 1 / (1 + e ^{-x})**

So, that's the derivative of the softplus function in a simpler form. You might notice that this derivative is exactly the sigmoid function. Softplus and sigmoid are like Russian dolls: one is nested inside the other!
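The identity is easy to verify in code: both forms of the derivative give the same value at every point (helper names are my own):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softplus_derivative(x):
    # e^x / (1 + e^x), before simplification
    return math.exp(x) / (1.0 + math.exp(x))

# the two columns agree: the softplus derivative is the sigmoid
for x in [-3.0, -0.5, 0.0, 2.0, 7.0]:
    print(x, sigmoid(x), softplus_derivative(x))
```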

To sum up, the following equation and derivative belong to the softplus function. We can use softplus as an activation function in our neural network models.

f(x) = ln(1+e^{x})

dy/dx = 1 / (1 + e^{-x})