Machine learning metrics – Cohen’s Kappa

Data scientists face a huge number of metrics when working with machine learning. We don't really need to know the fine details of every one of them to make our machine learning models shine.

We just need to understand the most important metrics well enough to judge whether our models are performing as well as they can on our data.

What is Cohen’s Kappa?

When two binary variables represent attempts by two individuals to measure the same thing, you can use Cohen's Kappa (often simply called Kappa) as a measure of agreement between the two individuals.

Kappa starts from the proportion of data values on the main diagonal of the agreement table (the observed agreement) and then adjusts it for the amount of agreement that could be expected due to chance alone: Kappa = (observed agreement - expected agreement) / (1 - expected agreement).
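
As a minimal sketch of that calculation (with made-up counts, not data from this article), the snippet below computes Kappa from a 2x2 table where rows are one rater's labels and columns are the other's.

```python
# Minimal sketch with hypothetical counts: Cohen's Kappa from a 2x2 agreement table.
import numpy as np

# Rows = rater A's labels, columns = rater B's labels (made-up counts).
table = np.array([[45, 5],
                  [10, 40]])

total = table.sum()
p_observed = np.trace(table) / total                 # agreement on the main diagonal
row_marginals = table.sum(axis=1) / total
col_marginals = table.sum(axis=0) / total
p_expected = np.sum(row_marginals * col_marginals)   # agreement expected by chance

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"Kappa = {kappa:.3f}")                        # -> Kappa = 0.700 for these counts
```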

How to interpret Cohen’s Kappa

Kappa is always less than or equal to 1. A value of 1 implies perfect agreement and values less than 1 imply less than perfect agreement.

In rare situations, Kappa can be negative. This is a sign that the two observers agreed less than would be expected just by chance.

It is rare that we get perfect agreement. Different people have different interpretations as to what is a good level of agreement.

Here is one possible interpretation of Kappa.

Poor agreement = Less than 0.20
Fair agreement = 0.20 to 0.40
Moderate agreement = 0.40 to 0.60
Good agreement = 0.60 to 0.80
Very good agreement = 0.80 to 1.00
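
For convenience, a small (hypothetical) helper like the one below turns a Kappa value into the label from this scale.

```python
# Hypothetical helper that maps a Kappa value to the interpretation scale above.
def interpret_kappa(kappa: float) -> str:
    if kappa < 0.20:
        return "Poor agreement"
    elif kappa < 0.40:
        return "Fair agreement"
    elif kappa < 0.60:
        return "Moderate agreement"
    elif kappa < 0.80:
        return "Good agreement"
    return "Very good agreement"

print(interpret_kappa(0.82))  # -> Very good agreement
```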

An example of how to read Cohen's Kappa

After building your predictive model you can find Cohen's Kappa alongside the Accuracy statistic in the confusion matrix produced by the Scorer node in KNIME.

[Image: predictive model summary from the KNIME Scorer node]

From here we see that the predictive model has an Accuracy of 92% and a Cohen's Kappa of 0.82. Read together, this tells us that the model is correct 92% of the time, and that the agreement between its predictions and the actual classes is very good even after accounting for the agreement expected by chance alone.
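
If you work outside KNIME, the same two numbers can be computed in Python with scikit-learn; the labels below are made up purely for illustration.

```python
# Sketch with made-up labels: Accuracy and Cohen's Kappa for a set of predictions.
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # actual classes (hypothetical)
y_pred = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # model predictions (hypothetical)

print(f"Accuracy:      {accuracy_score(y_true, y_pred):.2f}")    # -> 0.90
print(f"Cohen's Kappa: {cohen_kappa_score(y_true, y_pred):.2f}")  # -> 0.80
```

Note that Kappa comes out lower than the raw accuracy, because part of the agreement between the predictions and the actual classes would be expected by chance alone.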
