Binary cross-entropy loss

From GISAXS
The binary cross-entropy loss is given by:
:<math>
L = - \frac{1}{m} \sum_{i=1}^{m} \left[ y_i \cdot \log{ (\hat{y_i}) } + (1-y_i) \cdot \log{ (1-\hat{y_i}) } \right]
</math>
for <math>m</math> training examples (indexed by <math>i</math>) where <math>y_i</math> is the class label (0 or 1) and <math>\hat{y_i}</math> is the prediction for that example (i.e. the predicted probability that it is a positive example). Thus <math>1-\hat{y_i}</math> is the probability that it is a negative example.
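As an illustration of the formula above (not part of the original article), here is a minimal NumPy sketch; the function name binary_cross_entropy and the clipping constant eps are choices made for this example, with the clipping serving only to keep the logarithm away from zero:

<syntaxhighlight lang="python">
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over m examples.

    y_true : array of class labels (0 or 1)
    y_pred : array of predicted probabilities of the positive class
    eps    : small constant (an implementation choice) keeping log() away from 0
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    # L = -(1/m) * sum_i [ y_i*log(y_hat_i) + (1 - y_i)*log(1 - y_hat_i) ]
    return -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
</syntaxhighlight>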
Writing out the bracketed term <math>y_i \cdot \log{ (\hat{y_i}) } + (1-y_i) \cdot \log{ (1-\hat{y_i}) }</math> for a confident prediction (<math>\hat{y_i}</math> close to 1 or 0) in each of the four possible outcomes:

====True Positive====
<math>y_i = 1</math>, <math>\hat{y_i} \approx 1</math>:
:<math>
\left[ 1 \cdot 0 + (0) \cdot -\infty \right] \approx 0
</math>

====False Negative====
<math>y_i = 1</math>, <math>\hat{y_i} \approx 0</math>:
:<math>
\left[ 1 \cdot -\infty + (0) \cdot 0 \right] \approx -\infty
</math>

====False Positive====
<math>y_i = 0</math>, <math>\hat{y_i} \approx 1</math>:
:<math>
\left[ 0 \cdot 0 + (1) \cdot -\infty \right] \approx -\infty
</math>

====True Negative====
<math>y_i = 0</math>, <math>\hat{y_i} \approx 0</math>:
:<math>
\left[ 0 \cdot -\infty + (1) \cdot 0 \right] \approx 0
</math>

Because of the leading minus sign in <math>L</math>, confident correct predictions (true positives and true negatives) contribute a loss near 0, while confident incorrect predictions (false negatives and false positives) contribute an arbitrarily large loss.
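As a numerical check (again an illustration, not from the original article), evaluating the binary_cross_entropy sketch above on near-certain predictions for a single example reproduces this behaviour; the probabilities 0.999 and 0.001 are arbitrary stand-ins for "confident":

<syntaxhighlight lang="python">
print(binary_cross_entropy([1], [0.999]))  # true positive:  ~0.001 (near-zero loss)
print(binary_cross_entropy([1], [0.001]))  # false negative: ~6.9   (large loss)
print(binary_cross_entropy([0], [0.999]))  # false positive: ~6.9   (large loss)
print(binary_cross_entropy([0], [0.001]))  # true negative:  ~0.001 (near-zero loss)
</syntaxhighlight>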
