Binary cross-entropy loss
The binary cross-entropy loss is given by:
:<math>
L = - \frac{1}{m} \sum_{i=1}^{m} \left[ y_i \cdot \log{ (\hat{y_i}) } + (1-y_i) \cdot \log{ (1-\hat{y_i}) } \right]
</math>
for <math>m</math> training examples (indexed by <math>i</math>) where <math>y_i</math> is the class label (0 or 1) and <math>\hat{y_i}</math> is the prediction for that example (i.e. the predicted probability that it is a positive example). Thus <math>1-\hat{y_i}</math> is the probability that it is a negative example.
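
For concreteness, here is a minimal NumPy sketch of this formula. The function name <code>binary_cross_entropy</code>, the <code>eps</code> clipping used to keep the logarithms finite, and the example values are illustrative assumptions rather than part of the definition.

<syntaxhighlight lang="python">
import numpy as np

def binary_cross_entropy(y, y_hat, eps=1e-12):
    """Mean binary cross-entropy over m examples.

    y     : array-like of true labels (0 or 1)
    y_hat : array-like of predicted probabilities of the positive class
    eps   : small constant (illustrative) that keeps log() finite
    """
    y = np.asarray(y, dtype=float)
    y_hat = np.clip(np.asarray(y_hat, dtype=float), eps, 1.0 - eps)
    # -(1/m) * sum_i [ y_i*log(y_hat_i) + (1 - y_i)*log(1 - y_hat_i) ]
    return -np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

# Two confident correct predictions and one poor one:
print(binary_cross_entropy([1, 0, 1], [0.95, 0.05, 0.10]))  # ~0.80
</syntaxhighlight>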
+ | |||
+ | |||
+ | ====True Positive==== | ||
+ | :<math> | ||
+ | \left[ 1 \cdot 0 + (0) \cdot -\infty \right] \approx 0 | ||
+ | </math> | ||
====False Negative====
For <math>y_i = 1</math> and <math>\hat{y_i} \to 0</math>:
:<math>
\left[ 1 \cdot (-\infty) + (0) \cdot 0 \right] \approx -\infty
</math>
====False Positive====
For <math>y_i = 0</math> and <math>\hat{y_i} \to 1</math>:
:<math>
\left[ 0 \cdot 0 + (1) \cdot (-\infty) \right] \approx -\infty
</math>
====True Negative====
For <math>y_i = 0</math> and <math>\hat{y_i} \to 0</math>:
:<math>
\left[ 0 \cdot (-\infty) + (1) \cdot 0 \right] \approx 0
</math>

Confident correct predictions (true positives and true negatives) therefore contribute a term near 0, while confident mistakes drive the bracketed term toward <math>-\infty</math>; with the leading minus sign, such mistakes push the loss <math>L</math> toward <math>+\infty</math>.
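
====Numerical check====
These limits can be verified numerically. In the sketch below, the probabilities 0.999 and 0.001 stand in for the limits <math>\hat{y_i} \to 1</math> and <math>\hat{y_i} \to 0</math>, and the helper <code>bce_term</code> is an illustrative name, not a standard function.

<syntaxhighlight lang="python">
import numpy as np

def bce_term(y, y_hat):
    """Bracketed per-example term: y*log(y_hat) + (1-y)*log(1-y_hat)."""
    return y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)

# (y, y_hat) pairs approximating the four confident outcomes
cases = {
    "True Positive":  (1, 0.999),   # y=1, y_hat -> 1
    "False Negative": (1, 0.001),   # y=1, y_hat -> 0
    "False Positive": (0, 0.999),   # y=0, y_hat -> 1
    "True Negative":  (0, 0.001),   # y=0, y_hat -> 0
}
for name, (y, y_hat) in cases.items():
    print(f"{name}: term ~ {bce_term(y, y_hat):.3f}")
# Correct outcomes give terms near 0; incorrect outcomes give large
# negative terms, so the negated loss becomes large.
</syntaxhighlight>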