Neural networks produce multiple outputs in multiclass classification problems. However, they cannot produce exact class labels directly; they can only produce continuous results. We need to apply some additional steps to transform these continuous results into exact classification results.

Applying the softmax function normalizes the outputs to the scale [0, 1]. Also, the sum of the outputs will always be equal to 1 when softmax is applied. After that, applying one hot encoding transforms the outputs into binary form. That's why softmax and one hot encoding are applied, respectively, to the neural network's output layer. Finally, the true labeled output is compared with the predicted classification output. Herein, the **cross entropy** function measures the gap between the probabilities and the one hot encoded labels.
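The pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not a full network; the scores below are hypothetical raw outputs:

```python
import numpy as np

def softmax(scores):
    # shift by the max score for numerical stability before exponentiating
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # hypothetical raw network outputs
probs = softmax(scores)              # values in [0, 1], summing to 1

one_hot = np.zeros_like(probs)
one_hot[np.argmax(probs)] = 1.0      # one hot encoding of the predicted class
print(probs, one_hot)
```

Here the predicted class is simply the index of the largest probability.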

## Cross Entropy Error Function

We need to know the derivative of the loss function to back-propagate. If the loss function were MSE, then its derivative would be easy (the difference between the expected and predicted outputs). However, things become more complex when the error function is cross entropy.

E = – ∑ [c_{i} . log(p_{i}) + (1 – c_{i}) . log(1 – p_{i})]

Here, c refers to the one hot encoded classes (or labels), whereas p refers to the softmax-applied probabilities. By the way, the base of the log is e in the equation above.

*PS: some sources might define the function as E = – ∑ c_{i} . log(p_{i}).*
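The error function can be computed directly from its definition. A minimal sketch, with a hypothetical label and probability vector (the clipping is a standard guard against log(0), not part of the formula itself):

```python
import numpy as np

def cross_entropy(c, p, eps=1e-12):
    # E = -sum( c*log(p) + (1-c)*log(1-p) ), log base e
    p = np.clip(p, eps, 1 - eps)     # avoid taking log of exactly 0 or 1
    return -np.sum(c * np.log(p) + (1 - c) * np.log(1 - p))

c = np.array([1.0, 0.0, 0.0])        # hypothetical one hot encoded label
p = np.array([0.7, 0.2, 0.1])        # hypothetical softmax probabilities
E = cross_entropy(c, p)
print(E)
```

Note that every class contributes to the sum: the true class through the log(p) term and the other classes through the log(1 – p) terms.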

## Derivative

Notice the order of operations: softmax is applied to the neural network's scores first to calculate the probabilities. Then, cross entropy is applied to the softmax probabilities and the one hot encoded classes. That's why we need to calculate the derivative of the total error with respect to each score.

∂E/∂score_{i}

We can apply chain rule to calculate the derivative.

∂E/∂score_{i} = (∂E/∂p_{i}) . (∂p_{i}/∂score_{i})

Let’s calculate these derivatives separately.

∂E/∂p_{i} = ∂(– ∑ [c_{i} . log(p_{i}) + (1 – c_{i}) . log(1 – p_{i})])/∂p_{i}

Let’s expand the sum term.

– ∑ [c_{i} . log(p_{i}) + (1 – c_{i}) . log(1 – p_{i})] = – [c_{1} . log(p_{1}) + (1 – c_{1}) . log(1 – p_{1})] – [c_{2} . log(p_{2}) + (1 – c_{2}) . log(1 – p_{2})] – … **– [c_{i} . log(p_{i}) + (1 – c_{i}) . log(1 – p_{i})]** – … – [c_{n} . log(p_{n}) + (1 – c_{n}) . log(1 – p_{n})]

Now, we can differentiate the expanded term easily. Only the bold part of the equation has a non-zero derivative with respect to p_{i}; all other terms are constant with respect to p_{i}.

∂E/∂p_{i} = ∂(– [c_{i} . log(p_{i}) + (1 – c_{i}) . log(1 – p_{i})])/∂p_{i} = – ∂(c_{i} . log(p_{i}))/∂p_{i} – ∂((1 – c_{i}) . log(1 – p_{i}))/∂p_{i}

Notice that derivative of ln(x) is equal to 1/x.

– ∂(c_{i} . log(p_{i}))/∂p_{i} – ∂((1 – c_{i}) . log(1 – p_{i}))/∂p_{i} = – c_{i}/p_{i} – [(1 – c_{i})/(1 – p_{i})] . ∂(1 – p_{i})/∂p_{i} = – c_{i}/p_{i} – [(1 – c_{i})/(1 – p_{i})] . (–1) = – c_{i}/p_{i} + (1 – c_{i})/(1 – p_{i})

∂E/∂p_{i} = – c_{i}/p_{i} + (1 – c_{i})/(1 – p_{i})

Now, it is time to calculate ∂p_{i}/∂score_{i}. Luckily, we’ve already calculated the derivative of the softmax function in a previous post.

∂p_{i}/∂score_{i} = p_{i} . (1 – p_{i})
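This diagonal term of the softmax derivative (the change in p_{i} when its own score_{i} moves) can also be verified numerically. A sketch with hypothetical scores:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - np.max(s))        # stable softmax
    return e / e.sum()

scores = np.array([1.0, 2.0, 0.5])   # hypothetical scores
p = softmax(scores)

# nudge score_i and watch how p_i responds
i, h = 0, 1e-6
bumped = scores.copy()
bumped[i] += h
numeric = (softmax(bumped)[i] - p[i]) / h

print(numeric, p[i] * (1 - p[i]))    # the two values agree
```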

Now, we can combine these equations.

∂E/∂score_{i} = (∂E/∂p_{i}) . (∂p_{i}/∂score_{i})

∂E/∂p_{i} = – c_{i} / p_{i} + (1 – c_{i})/ (1 – p_{i})

∂p_{i} / ∂score_{i} = p_{i}.(1 – p_{i})

∂E/∂score_{i} = [– c_{i}/p_{i} + (1 – c_{i})/(1 – p_{i})] . p_{i} . (1 – p_{i})

∂E/∂score_{i} = (– c_{i}/p_{i}) . p_{i} . (1 – p_{i}) + [(1 – c_{i}) . p_{i} . (1 – p_{i})]/(1 – p_{i})

∂E/∂score_{i} = – c_{i} + c_{i} . p_{i} + p_{i} – c_{i} . p_{i} = – c_{i} + p_{i} = p_{i} – c_{i}

∂E/∂score_{i} = p_{i} – c_{i}

As seen, the derivative of the cross entropy error function is surprisingly simple: just the predicted probability minus the label.
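Putting the two pieces of the chain rule together in code confirms the result term by term. Again a minimal sketch with hypothetical values:

```python
import numpy as np

c = np.array([1.0, 0.0, 0.0])        # hypothetical one hot label
p = np.array([0.6, 0.3, 0.1])        # hypothetical softmax probabilities

dE_dp = -c / p + (1 - c) / (1 - p)   # derivative of error w.r.t. probability
dp_dscore = p * (1 - p)              # diagonal softmax derivative
grad = dE_dp * dp_dscore             # chain rule

print(grad)                          # equals p - c, term by term
print(p - c)
```

This is why frameworks usually fuse softmax and cross entropy into a single layer: the combined backward pass collapses to the cheap subtraction p – c instead of two separate, more expensive derivatives.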
