book/3-classification.tex: 26 additions & 0 deletions
@@ -760,6 +760,32 @@ \subsection{D-squared Log Loss Score}
\section{P4-metric}
\subsection{P4-metric}

The P4-metric (also known as Symmetric F) is an evaluation metric for binary classifiers that addresses several limitations of the F1 score. Unlike metrics that focus on only some aspects of classification performance, P4 comprehensively considers all four key conditional probabilities in binary classification, making it a more balanced measure of classifier performance.

It is defined as the harmonic mean of the four conditional probabilities: precision, recall, specificity, and negative predictive value (NPV):
\begin{equation}
\mathrm{P4}
  = \frac{4}{\frac{1}{\mathrm{precision}} + \frac{1}{\mathrm{recall}} + \frac{1}{\mathrm{specificity}} + \frac{1}{\mathrm{NPV}}}
  = \frac{4 \cdot TP \cdot TN}{4 \cdot TP \cdot TN + (TP + TN) \cdot (FP + FN)}
\end{equation}

The metric ranges from 0 to 1, where 1 indicates perfect classification (all four probabilities equal 1) and 0 indicates complete failure (any of the probabilities equals 0).
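
For example, for an illustrative (made-up) confusion matrix with $TP = 40$, $TN = 45$, $FP = 5$, and $FN = 10$:
\begin{equation}
\mathrm{P4} = \frac{4 \cdot 40 \cdot 45}{4 \cdot 40 \cdot 45 + (40 + 45) \cdot (5 + 10)} = \frac{7200}{8475} \approx 0.85
\end{equation}
For comparison, the F1 score on the same counts is $2 \cdot 40 / (2 \cdot 40 + 5 + 10) = 80/95 \approx 0.84$; unlike F1, the P4 value would drop if the 45 true negatives were traded for false positives.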

\textbf{When to use P4-metric}

Use P4 in scenarios where both positive and negative predictions are equally important, or in cases where the F1 score's limitation of ignoring true negatives could be problematic.
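
A minimal Python sketch of the computation (illustrative only; the function name \texttt{p4\_score} and the example counts are not from the source):

\begin{verbatim}
def p4_score(tp, tn, fp, fn):
    # Closed form of the harmonic mean of precision,
    # recall, specificity, and negative predictive value.
    denom = 4 * tp * tn + (tp + tn) * (fp + fn)
    return (4 * tp * tn) / denom if denom else 0.0

# Made-up confusion-matrix counts:
print(p4_score(tp=40, tn=45, fp=5, fn=10))  # ~0.85
\end{verbatim}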

\coloredboxes{
\item Considers all four fundamental conditional probabilities of binary classification.
\item Symmetric with respect to the positive and negative classes: its value does not change when the class labels are swapped.
}
{
\item Less widely adopted than traditional metrics.
\item May be harder to interpret for non-technical stakeholders.
\item Goes to zero if any of the four key probabilities goes to zero, which can also be a strength.
}