1. Binary Cross-Entropy Loss, Log Loss. This is the most common loss function for classification problems. It measures the performance of a classification model whose predicted output is a probability between 0 and 1, and the loss decreases as the predicted probability converges to the actual label. ... We derive the cross-entropy loss formula from the likelihood function by taking its negative logarithm.
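Here's a minimal NumPy sketch of binary cross-entropy; the function name, the example arrays, and the clipping epsilon are illustrative choices, not from the original text:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Average binary cross-entropy between 0/1 labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.1, 0.8, 0.4])    # predicted probabilities of class 1
print(binary_cross_entropy(y_true, y_pred))  # shrinks as predictions approach labels
```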
2. Hinge Loss. Hinge loss is an alternative to cross-entropy for classification problems, primarily developed for support vector machine (SVM) model evaluation. ... Hinge loss penalizes not only wrong predictions but also correct predictions that are not confident. It's used primarily with SVM classifiers, which expect class labels of -1 and 1, so make sure you change your malignant class labels from 0 to -1.
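A minimal sketch of hinge loss with -1/1 labels (the names and example scores are hypothetical):

```python
import numpy as np

def hinge_loss(y_true, scores):
    """Average hinge loss; y_true must use -1/1 labels, scores are raw model outputs."""
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

y_true = np.array([1, -1, 1, -1])
scores = np.array([2.3, -1.4, 0.3, 0.9])
print(hinge_loss(y_true, scores))
# The third prediction is correct but inside the margin (0.3 < 1), and the
# fourth is wrong, so both contribute to the loss.
```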
3. Mean Square Error, Quadratic Loss, L2 Loss. We define the MSE loss function as the average of the squared differences between the actual and predicted values. It's the most commonly used regression loss function. ... The corresponding cost function is the mean of these squared errors (MSE). Because squaring amplifies large errors, the MSE cost function is less robust to outliers. Therefore, you shouldn't use it if the data is prone to many outliers.
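A minimal NumPy sketch of MSE, with a hypothetical outlier added to show how squaring inflates the loss:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean of squared differences between actual and predicted values."""
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 4.1])
print(mse(y_true, y_pred))                           # small errors -> ~0.0175
print(mse(y_true, np.array([1.1, 1.9, 3.2, 14.0])))  # one outlier dominates -> ~25.02
```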
4. Mean Absolute Error, L1 Loss. We define the MAE loss function as the average of the absolute differences between the actual and predicted values. It's the second most commonly used regression loss function. It measures the average magnitude of the errors in a set of predictions, without considering their directions. ... The corresponding cost function is the mean of these absolute errors (MAE). The MAE loss function is more robust to outliers than the MSE loss function, so you should prefer it if the data is prone to many outliers.
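The same sketch with MAE; reusing the hypothetical outlier from the MSE example, the loss grows far less:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean of absolute differences between actual and predicted values."""
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 4.1])
print(mae(y_true, y_pred))                           # ~0.125
print(mae(y_true, np.array([1.1, 1.9, 3.2, 14.0])))  # outlier only raises it to ~2.6
```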