F1 score


In statistical analysis of binary classification, the F1 score is a measure of a test's accuracy. It considers both the precision p and the recall r of the test to compute the score: p is the number of correct positive results divided by the number of all positive results returned by the classifier, and r is the number of correct positive results divided by the number of all relevant samples.
The F1 score is the harmonic mean of the precision and recall, where an F1 score reaches its best value at 1.
The F1 score is also known as the Sørensen–Dice coefficient or Dice similarity coefficient.
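As a minimal illustration of these definitions, the following plain-Python sketch (the function name f1_from_labels is illustrative, not from any library) counts true positives, false positives, and false negatives from binary labels and computes precision, recall, and F1:

    def f1_from_labels(y_true, y_pred):
        """Precision, recall, and F1 for binary labels (1 = positive class)."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        precision = tp / (tp + fp)   # correct positives / all positives returned
        recall = tp / (tp + fn)      # correct positives / all relevant samples
        f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
        return precision, recall, f1

    # Example: 4 relevant samples, 5 returned positives, 3 of them correct
    y_true = [1, 1, 1, 1, 0, 0, 0, 0]
    y_pred = [1, 1, 1, 0, 1, 1, 0, 0]
    print(f1_from_labels(y_true, y_pred))  # (0.6, 0.75, 0.666...)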

Etymology

The name F-measure is believed to derive from a different F function in Van Rijsbergen's book, when the measure was introduced to the Fourth Message Understanding Conference (MUC-4).

Definition

The traditional F-measure or balanced F-score is the harmonic mean of precision and recall:

    F_1 = \frac{2}{\mathrm{recall}^{-1} + \mathrm{precision}^{-1}} = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}

The general formula for positive real β, where β is chosen such that recall is considered β times as important as precision, is:

    F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}

In terms of type I errors (false positives, fp) and type II errors (false negatives, fn), with tp denoting true positives, the formula becomes:

    F_\beta = \frac{(1 + \beta^2) \cdot \mathrm{tp}}{(1 + \beta^2) \cdot \mathrm{tp} + \beta^2 \cdot \mathrm{fn} + \mathrm{fp}}

Two commonly used values for β are 2, giving the F_2 measure, which weighs recall higher than precision, and 0.5, giving the F_{0.5} measure, which weighs recall lower than precision.
The F-measure was derived so that F_β "measures the effectiveness of retrieval with respect to a user who attaches β times as much importance to recall as precision". It is based on Van Rijsbergen's effectiveness measure (with precision P and recall R):

    E = 1 - \left( \frac{\alpha}{P} + \frac{1 - \alpha}{R} \right)^{-1}

Their relationship is F_β = 1 − E, where α = 1/(1 + β²).
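As a sketch under the definitions above (plain Python, illustrative names), one can compute F_β directly from the confusion-matrix counts and check numerically that it equals 1 − E when α = 1/(1 + β²):

    def fbeta(tp, fp, fn, beta):
        # General F-beta score from confusion-matrix counts
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        b2 = beta ** 2
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    def effectiveness(tp, fp, fn, alpha):
        # Van Rijsbergen's effectiveness measure E
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 1 - 1 / (alpha / precision + (1 - alpha) / recall)

    tp, fp, fn = 30, 10, 20
    for beta in (0.5, 1, 2):
        alpha = 1 / (1 + beta ** 2)
        assert abs(fbeta(tp, fp, fn, beta) - (1 - effectiveness(tp, fp, fn, alpha))) < 1e-12
        print(beta, fbeta(tp, fp, fn, beta))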

Diagnostic testing

The F-score is related to diagnostic testing through the field of binary classification, where recall is often termed "sensitivity" and precision is termed the positive predictive value.

Applications

The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance. Earlier works focused primarily on the F1 score, but with the proliferation of large-scale search engines, performance goals changed to place more emphasis on either precision or recall, and so the more general F_β measure is seen in wide application.
The F-score is also used in machine learning. Note, however, that the F-measures do not take the true negatives into account, and that measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferable to assess the performance of a binary classifier.
The F-score has been widely used in the natural language processing literature, such as the evaluation of named entity recognition and word segmentation.
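In practice these scores are rarely computed by hand; for example, if scikit-learn is available, its metrics module exposes them directly (a usage sketch with made-up labels):

    from sklearn.metrics import f1_score, fbeta_score

    y_true = [0, 1, 1, 1, 0, 1, 0, 0]
    y_pred = [0, 1, 0, 1, 0, 1, 1, 0]

    print(f1_score(y_true, y_pred))               # balanced F1
    print(fbeta_score(y_true, y_pred, beta=2))    # weighs recall higher
    print(fbeta_score(y_true, y_pred, beta=0.5))  # weighs precision higher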

Criticism

David Hand and others criticize the widespread use of the F1 score since it gives equal importance to precision and recall. In practice, different types of misclassification incur different costs; in other words, the relative importance of precision and recall is an aspect of the problem.
According to Davide Chicco and Giuseppe Jurman, the F1 score is less truthful and informative than the Matthews correlation coefficient in binary classification evaluation.
David Powers has pointed out that F1 ignores the true negatives and is thus misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability: the classifier predicting the true class and the true class predicting the classifier prediction. He proposes separate multiclass measures, Informedness and Markedness, for the two directions, noting that their geometric mean is correlation.
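The effect of ignoring true negatives can be seen in a small sketch (plain Python, made-up counts): adding many correctly rejected negatives leaves F1 unchanged, while the Matthews correlation coefficient responds to them:

    from math import sqrt

    def f1(tp, fp, fn):
        return 2 * tp / (2 * tp + fp + fn)       # true negatives never appear

    def mcc(tp, tn, fp, fn):
        return (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

    tp, fp, fn = 90, 10, 10
    for tn in (10, 1000):
        print(tn, f1(tp, fp, fn), mcc(tp, tn, fp, fn))
    # F1 stays at 0.9 for both values of tn; MCC rises from 0.4 to about 0.89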

Difference from G-measure

While the F-measure is the harmonic mean of recall and precision, the G-measure is their geometric mean: G = \sqrt{\mathrm{precision} \cdot \mathrm{recall}}.

Extension to multi-class classification

The F-score is also used for evaluating classification problems with more than two classes. In this setup, the final score is obtained by micro-averaging or macro-averaging. For macro-averaging, two different formulas have been used in practice: the F-score of the class-wise precision and recall means, or the arithmetic mean of class-wise F-scores, where the latter exhibits more desirable properties.
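A sketch of these conventions in plain Python (illustrative per-class counts): variant 1 applies the F-score formula to the macro-averaged precision and recall, variant 2 takes the arithmetic mean of class-wise F-scores, and micro-averaging pools the counts before computing a single score:

    def prf(tp, fp, fn):
        p = tp / (tp + fp)
        r = tp / (tp + fn)
        return p, r, 2 * p * r / (p + r)

    # per-class (tp, fp, fn) counts for three classes
    counts = [(50, 5, 10), (20, 30, 5), (5, 2, 40)]
    per_class = [prf(*c) for c in counts]

    # Variant 1: F-score of the class-wise precision and recall means
    p_macro = sum(p for p, r, f in per_class) / len(per_class)
    r_macro = sum(r for p, r, f in per_class) / len(per_class)
    macro_1 = 2 * p_macro * r_macro / (p_macro + r_macro)

    # Variant 2: arithmetic mean of class-wise F-scores (macro F1)
    macro_2 = sum(f for p, r, f in per_class) / len(per_class)

    # Micro-averaging: pool counts over classes, then compute one F-score
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    micro = 2 * tp / (2 * tp + fp + fn)

    print(macro_1, macro_2, micro)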