

Softplus function: f(x) = ln(1 + e^x)
The function is illustrated below.

Outputs produced by the sigmoid and tanh functions have upper and lower limits, whereas the softplus function produces outputs in the range (0, +∞). That's the essential difference.
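To make that range difference concrete, here is a minimal NumPy sketch (my own addition, not from the original post) that evaluates tanh, sigmoid and softplus at a few arbitrary sample points:

```python
import numpy as np

def softplus(x):
    """Softplus: f(x) = ln(1 + e^x); outputs lie in (0, +inf)."""
    return np.log(1 + np.exp(x))

def sigmoid(x):
    """Sigmoid: outputs lie in (0, 1)."""
    return 1 / (1 + np.exp(-x))

# tanh is bounded in (-1, 1), sigmoid in (0, 1), softplus is unbounded above
for x in [-5.0, 0.0, 5.0, 50.0]:
    print(f"x={x:5.1f}  tanh={np.tanh(x):+.4f}  "
          f"sigmoid={sigmoid(x):.4f}  softplus={softplus(x):.4f}")
```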
Derivative
You might remember that the derivative of ln(x) is 1/x. Let's apply this rule to the softplus function.
f'(x) = dy/dx = (ln(1 + e^x))' = (1 / (1 + e^x)) · (1 + e^x)' = (1 / (1 + e^x)) · e^x = e^x / (1 + e^x)
So, we've calculated the derivative of the softplus function. However, we can transform this derivative into an alternative form. Let's factor e^x out of the denominator.
dy/dx = e^x / (1 + e^x) = e^x / (e^x · (e^(-x) + 1))
Now the numerator and denominator both include e^x, so we can simplify the fraction.
dy/dx = 1 / (1 + e^(-x))
So, that's the derivative of the softplus function in a simpler form. You might notice that this derivative is equal to the sigmoid function. Softplus and sigmoid are like Russian dolls: one is placed inside the other!
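If you'd like a quick sanity check of this result, the sketch below (my own addition) compares a central-difference numerical derivative of softplus against the sigmoid function:

```python
import numpy as np

def softplus(x):
    return np.log(1 + np.exp(x))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# approximate the derivative of softplus with central differences
xs = np.linspace(-4, 4, 9)
h = 1e-6
numeric = (softplus(xs + h) - softplus(xs - h)) / (2 * h)

# the analytic derivative 1 / (1 + e^(-x)) is exactly the sigmoid
print(np.allclose(numeric, sigmoid(xs), atol=1e-5))  # True
```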

To sum up, the following equation and derivative belong to the softplus function. We can use softplus as an activation function in our neural network models.
f(x) = ln(1 + e^x)
dy/dx = 1 / (1 + e^(-x))
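As a rough illustration of that last point, here is a tiny NumPy forward pass that uses softplus as the hidden-layer activation. The layer sizes and random weights are hypothetical, chosen only for demonstration; this is a sketch, not a complete training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    return np.log(1 + np.exp(x))

# hypothetical shapes: 4 input features, 8 hidden units, 1 output
W1 = rng.normal(size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def forward(x):
    hidden = softplus(x @ W1 + b1)   # softplus as the hidden activation
    return hidden @ W2 + b2          # linear output layer

x = rng.normal(size=(3, 4))          # a batch of 3 samples
print(forward(x).shape)              # (3, 1)
```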
Proof of concept
Additionally, I've recorded a step-by-step derivative calculation video for the softplus function.
Let’s dance
These are the dance moves of the most common activation functions in deep learning. Make sure to turn the volume up 🙂
Support this blog if you like it!
Your graph of the softplus function is a little misleading; what it's actually showing is a scaled version, something like:
5y = ln(1 + e^(5x))
The function 5y = ln(1 + e^(5x)) is a differentiable function, but it is not softplus. Why do you think it is misleading?
Because softplus(0) = ln(1 + e^0) = ln(2) ≈ 0.693, but on the graph it's somewhere around 0.1–0.2, and softplus(1) ≈ 1.3, but on the graph it's much closer to 1 than 1.5.
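For reference, the two values quoted in this comment are easy to check directly; here is a quick check (my own addition):

```python
import math

# softplus(0) = ln(1 + e^0) = ln(2)
print(math.log(1 + math.exp(0)))  # 0.6931...

# softplus(1) = ln(1 + e)
print(math.log(1 + math.exp(1)))  # 1.3132...
```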