Experiment: Weighted Loss Update
Shakleen Ishfar edited this page May 7, 2024 · 1 revision
tl;dr: Tackling class imbalance with weighted loss updates.
The competition dataset has a severe class imbalance: scores 1 and 6 have very few samples, while the other scores have many. As a result, scores 1 and 6 are rarely predicted at all.
In this experiment, I train three models and evaluate the effect of weighted loss updates.
- Baseline (Green) [CV 0.686, LB: 0.738]: All scores have equal weight.
- Exp 1 (Blue) [CV 0.699, LB: ???]: All scores have an equal weight of 1, except scores 1 and 6, which have weights of 1.25 and 1.5 respectively.
- Exp 2 (Orange) [CV 0.72, LB: 0.756]: I set weights in the following manner:
  - Scores 2 and 3 have an equal weight of 0.25, as they are the most abundant.
  - Score 4 has a weight of 0.5.
  - Scores 1 and 5 have an equal weight of 1, as they are the second least abundant.
  - Finally, score 6 has the highest weight of 2, as it is the scarcest.
Orange achieves an overall better score in almost all folds. The notable exception is fold 3, where it initially had the better score but then dipped.
However, orange also shows a higher evaluation loss than the other two models.