A Gentle Introduction to Cross-Entropy Loss Function

Neural networks produce multiple outputs in multiclass classification problems. However, they cannot produce exact class labels directly; they can only produce continuous scores. We need to apply some additional steps to transform these continuous results into exact classification results.

Applying the softmax function normalizes the outputs to the scale of [0, 1], and the sum of the outputs always equals 1 once softmax is applied. After that, applying one hot encoding transforms the outputs into binary form. That is why softmax and one hot encoding are applied, in that order, to the neural network's output layer. Finally, the class that gets the 1 becomes the predicted classification output, which ideally matches the true label. Herein, the cross entropy function measures how far the softmax probabilities are from the one hot encoded labels.

[Figure: applying one hot encoding to probabilities]
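As a minimal sketch of these two steps (using NumPy, with made-up scores; the numbers are only for illustration):

```python
import numpy as np

# hypothetical raw scores coming from the output layer (3 classes)
scores = np.array([2.0, 1.0, 0.1])

# softmax: normalizes the scores into [0, 1] and makes them sum to 1
exp_scores = np.exp(scores - scores.max())  # shift for numerical stability
probs = exp_scores / exp_scores.sum()

# one hot encoding: the largest probability becomes 1, the rest become 0
one_hot = np.zeros_like(probs)
one_hot[np.argmax(probs)] = 1.0

print(probs)    # approximately [0.659 0.242 0.099]
print(one_hot)  # [1. 0. 0.]
```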

Cross Entropy Error Function

We need to know the derivative of the loss function to back-propagate. If the loss function were MSE, its derivative would be easy (the difference between the expected and predicted output). However, things become more complex when the error function is cross entropy.

E = – ∑ [ci . log(pi) + (1 – ci) . log(1 – pi)]

c refers to the one hot encoded classes (or labels), whereas p refers to the softmax-applied probabilities. By the way, the base of the log is e in the equation above.

PS: some sources might define the function as E = – ∑ ci . log(pi).
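A short sketch of this error function, reusing the probabilities above and assuming the true label is the first class (both the two-term form and the simpler form from the PS are shown):

```python
import numpy as np

p = np.array([0.659, 0.242, 0.099])  # softmax probabilities (assumed)
c = np.array([1.0, 0.0, 0.0])        # one hot encoded true label (assumed)

# the two-term form above, summed over classes
E = -np.sum(c * np.log(p) + (1 - c) * np.log(1 - p))

# the simpler form mentioned in the PS
E_simple = -np.sum(c * np.log(p))

print(E)         # roughly 0.80
print(E_simple)  # roughly 0.42
```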

Derivative

Notice that softmax is applied to the neural network's scores first to calculate the probabilities. Then, cross entropy is applied to those softmax probabilities and the one hot encoded classes. That is why we need to calculate the derivative of the total error with respect to each score.

∂E/∂scorei

[Figure: backward error calculation]

We can apply the chain rule to calculate the derivative.

[Figure: chain rule]

∂E/∂scorei = (∂E/∂pi) . (∂pi/∂scorei)

Let’s calculate these derivatives separately.

∂E/∂pi = ∂(– ∑ [ci . log(pi) + (1 – ci) . log(1 – pi)])/∂pi

Let’s expand the sum term

– ∑ [ci . log(pi) + (1 – ci) . log(1 – pi)] = – [c1 . log(p1) + (1 – c1) . log(1 – p1)] – [c2 . log(p2) + (1 – c2) . log(1 – p2)] – …  – [ci . log(pi) + (1 – ci) . log(1 – pi)] – … – [cn . log(pn) + (1 – cn). log(1 – pn)]

Now, we can differentiate the expanded expression easily. Only the i-th term of the equation has a non-zero derivative with respect to pi.

∂E/∂pi = ∂(– [ci . log(pi) + (1 – ci) . log(1 – pi)])/∂pi = – ∂(ci . log(pi))/∂pi – ∂((1 – ci) . log(1 – pi))/∂pi

Notice that the derivative of ln(x) is equal to 1/x.

– ∂(ci . log(pi))/∂pi – ∂((1 – ci) . log(1 – pi))/∂pi = – ci/pi – [(1 – ci)/(1 – pi)] . ∂(1 – pi)/∂pi = – ci/pi – [(1 – ci)/(1 – pi)] . (–1) = – ci/pi + (1 – ci)/(1 – pi)

∂E/∂pi = – ci / pi + (1 – ci)/ (1 – pi)
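As a quick symbolic check of this step (a sketch using SymPy, not part of the original derivation):

```python
import sympy as sp

c, p = sp.symbols('c p', positive=True)

# the i-th term of the error: -(ci * log(pi) + (1 - ci) * log(1 - pi))
E_i = -(c * sp.log(p) + (1 - c) * sp.log(1 - p))

dE_dp = sp.diff(E_i, p)
print(sp.simplify(dE_dp))  # an expression equivalent to -c/p + (1 - c)/(1 - p)
```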

Now, it is time to calculate ∂pi/∂scorei. Luckily, we have already calculated the derivative of the softmax function in a previous post.

∂pi / ∂scorei = pi.(1 – pi)
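This diagonal term of the softmax derivative can also be checked numerically; here is a sketch with central finite differences (perturbing only scorei and watching pi):

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # assumed scores
i, eps = 0, 1e-6

# numerical estimate of dpi/dscorei
up, down = scores.copy(), scores.copy()
up[i] += eps
down[i] -= eps
numerical = (softmax(up)[i] - softmax(down)[i]) / (2 * eps)

p_i = softmax(scores)[i]
print(numerical, p_i * (1 - p_i))  # both approximately 0.225
```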

Now, we can combine these equations.

∂E/∂scorei = (∂E/∂pi) . (∂pi/∂scorei)

∂E/∂pi = – ci / pi + (1 – ci)/ (1 – pi)

∂pi / ∂scorei = pi.(1 – pi)

∂E/∂scorei = [– ci/pi + (1 – ci)/(1 – pi)] . pi . (1 – pi)

∂E/∂scorei = (– ci/pi) . pi . (1 – pi) + [(1 – ci) . pi . (1 – pi)]/(1 – pi)

∂E/∂scorei = – ci + ci . pi + pi – ci . pi = – ci + pi = pi – ci

∂E/∂scorei = pi – ci

As seen, the derivative of the cross entropy error function turns out to be surprisingly simple.
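As a sanity check (a sketch, not from the original post), the result ∂E/∂scorei = pi – ci can be verified numerically for the simpler form of the error from the PS, E = – ∑ ci . log(pi), using finite differences over the scores:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def cross_entropy(s, c):
    # the simpler form from the PS: E = -sum(ci * log(pi))
    return -np.sum(c * np.log(softmax(s)))

scores = np.array([2.0, 1.0, 0.1])  # assumed scores
c = np.array([1.0, 0.0, 0.0])       # assumed one hot label

analytic = softmax(scores) - c      # pi - ci

# numerical gradient via central finite differences
eps = 1e-6
numerical = np.zeros_like(scores)
for i in range(len(scores)):
    up, down = scores.copy(), scores.copy()
    up[i] += eps
    down[i] -= eps
    numerical[i] = (cross_entropy(up, c) - cross_entropy(down, c)) / (2 * eps)

print(analytic)
print(numerical)  # matches the analytic gradient up to numerical precision
```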
