
Computes macro-averaged precision, recall, and F1 score from the confusion matrix of true versus predicted class labels. Suitable for multi-class classification tasks.

Usage

Precision_Recall_macroF1(actual, predicted)

Arguments

actual

A vector of true class labels.

predicted

A vector of predicted class labels (must have the same length as actual).

Value

A named list containing:

  • Precision: Macro-averaged precision across all classes.

  • Recall: Macro-averaged recall across all classes.

  • f1_score: Macro-averaged F1 score.
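
The components can be accessed by name. A brief illustration (assuming the result is stored in a variable named res):

res <- Precision_Recall_macroF1(actual, predicted)
res$Precision  # macro-averaged precision
res$Recall     # macro-averaged recall
res$f1_score   # macro-averaged F1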

Details

Macro-averaging computes the unweighted mean of each metric (precision, recall, F1) over all classes, treating every class equally regardless of its support. Per-class values that are undefined because of division by zero (for example, when a class is never predicted) are treated as missing and excluded from the average.
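
The averaging described above can be sketched in a few lines of base R. This is a minimal illustration of the technique, not the package's internal code, and the helper name macro_prf_sketch is made up for this example:

macro_prf_sketch <- function(actual, predicted) {
  classes <- union(actual, predicted)
  # Confusion matrix: rows are actual classes, columns are predicted classes
  cm <- table(factor(actual, levels = classes),
              factor(predicted, levels = classes))
  tp <- diag(cm)
  precision <- tp / colSums(cm)  # NaN when a class is never predicted
  recall <- tp / rowSums(cm)     # NaN when a class has no true instances
  f1 <- 2 * precision * recall / (precision + recall)
  # na.rm = TRUE drops the undefined (NaN) per-class values before averaging
  list(Precision = mean(precision, na.rm = TRUE),
       Recall = mean(recall, na.rm = TRUE),
       f1_score = mean(f1, na.rm = TRUE))
}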

Author

Bin Duan

Examples

actual <- c("A", "B", "A", "C", "B", "C")
predicted <- c("A", "B", "C", "C", "B", "A")
Precision_Recall_macroF1(actual, predicted)
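
For these inputs, per-class precision, recall, and F1 are all 0.5 for classes A and C and 1 for class B, so under the macro-averaging described in Details each of the three returned metrics should equal 2/3 (about 0.667).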