Binary cross-entropy loss

From GISAXS
Revision as of 15:37, 3 February 2023

The binary cross-entropy loss is given by:

:<math>
L = - \frac{1}{m} \sum_{i=1}^{m} \left[ y_i \cdot \log{ (\hat{y_i}) } + (1-y_i) \cdot \log{ (1-\hat{y_i}) } \right]
</math>

for <math>m</math> training examples (indexed by <math>i</math>) where <math>y_i</math> is the class label (0 or 1) and <math>\hat{y_i}</math> is the prediction for that example (i.e. the predicted probability that it is a positive example). Thus <math>1-\hat{y_i}</math> is the probability that it is a negative example.
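The formula above can be sketched directly in Python; this is a minimal illustration (the function name and the `eps` clipping parameter are our own choices, not from the article — clipping avoids <math>\log(0)</math> when a prediction is exactly 0 or 1):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over m examples.

    y_true: class labels (0 or 1), indexed by i.
    y_pred: predicted probabilities that each example is positive.
    eps: clip predictions into [eps, 1-eps] so log() never sees 0.
    """
    m = len(y_true)
    total = 0.0
    for y, y_hat in zip(y_true, y_pred):
        y_hat = min(max(y_hat, eps), 1 - eps)
        # y * log(y_hat) rewards confident correct positives;
        # (1 - y) * log(1 - y_hat) rewards confident correct negatives.
        total += y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat)
    return -total / m
```

For example, `binary_cross_entropy([1, 0], [0.9, 0.1])` gives <math>-\tfrac{1}{2}(\log 0.9 + \log 0.9) \approx 0.105</math>: both predictions are confidently correct, so the loss is small, and it grows without bound as a prediction moves toward the wrong class.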