- The AUC can be intuitively motivated by the observation that if one ROC curve lies strictly above another, then the corresponding classifier performs better at all threshold levels
- The AUC is an aggregate or portmanteau measure equivalent to integrating over a range of possible values for the threshold
- The error rate is simply the weighted average of the two class-specific misclassification rates, with the weights given by the class proportions in the population (both quantities are sketched in code below)
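To make these two quantities concrete, here is a minimal Python sketch (my own illustration, not code from the paper; the function names and the NumPy dependency are assumptions): the AUC computed as the Mann-Whitney statistic, which is equivalent to aggregating performance over all thresholds, and the error rate at a single threshold as a class-proportion-weighted average of the per-class error rates.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen class-1 instance scores above a randomly chosen class-0 instance,
    which aggregates performance over the whole range of thresholds."""
    s1 = scores[labels == 1]
    s0 = scores[labels == 0]
    greater = (s1[:, None] > s0[None, :]).mean()
    ties = (s1[:, None] == s0[None, :]).mean()
    return greater + 0.5 * ties

def error_rate(scores, labels, threshold):
    """Error rate at one threshold: the per-class error rates weighted by the
    class proportions pi0 and pi1 in the sample."""
    pred = (scores >= threshold).astype(int)
    pi1 = (labels == 1).mean()
    pi0 = 1.0 - pi1
    fnr = (pred[labels == 1] == 0).mean()   # class-1 instances misclassified
    fpr = (pred[labels == 0] == 1).mean()   # class-0 instances misclassified
    return pi1 * fnr + pi0 * fpr
```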
To overcome the deficiency of the AUC (it implicitly weights misclassification costs with a distribution that depends on the classifier being evaluated), the H measure is proposed, using a fixed relative misclassification severity distribution
- The H measure requires that the same severity distribution w is used for all classifiers
In highly unbalanced situations, one regards it as more likely that misclassifications from the smaller class will be more serious than misclassifications from the larger class
- The ratio of the costs of the two types of misclassification error is given by r = c/(1-c), where r measures how much more severe misclassifying a class 0 instance is than misclassifying a class 1 instance
- The H measure should be presented with two forms of distribution: first a subjective distribution chosen for the problem at hand, and second a universal standard distribution; the original standard was Beta(2,2), which this paper argues should be replaced by Beta(π1+1, π0+1) so that the standard adapts to the class proportions (a sketch of the computation follows below)
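As a rough illustration of how these pieces fit together, the following Python sketch (my own reading of the construction, not the authors' reference implementation; the function name, the grid-based integration, and the SciPy dependency are all assumptions) computes an H-style measure: the minimum achievable expected loss is averaged over the severity distribution w(c) and normalised against the loss of the best trivial classifier.

```python
import numpy as np
from scipy.stats import beta

def h_measure(scores, labels, a=None, b=None, grid=1001):
    """H-style measure with a Beta(a, b) severity distribution over c, where
    c is the relative severity of misclassifying a class-0 instance
    (so r = c / (1 - c), as in the notes above)."""
    pi1 = (labels == 1).mean()
    pi0 = 1.0 - pi1
    # Universal standard recommended by the paper: Beta(pi1 + 1, pi0 + 1);
    # the earlier default was Beta(2, 2).
    if a is None or b is None:
        a, b = pi1 + 1.0, pi0 + 1.0

    c = np.linspace(0.0, 1.0, grid)
    w = beta.pdf(c, a, b)                    # fixed cost-weight distribution w(c)

    # Per-threshold error rates for each class (class 1 assumed to score higher);
    # np.inf covers the trivial "assign everything to class 0" rule.
    thresholds = np.append(np.unique(scores), np.inf)
    fnr = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    fpr = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])

    # Expected loss at each (c, threshold); pick the optimal threshold for each c.
    loss = c[:, None] * pi0 * fpr[None, :] + (1.0 - c)[:, None] * pi1 * fnr[None, :]
    min_loss = loss.min(axis=1)

    # Reference loss: the better of the two trivial classifiers at each c.
    ref_loss = np.minimum(c * pi0, (1.0 - c) * pi1)

    # The grid spacing cancels in the ratio, so plain weighted means suffice.
    L = (min_loss * w).mean()
    L_ref = (ref_loss * w).mean()
    return 1.0 - L / L_ref
```

Keeping the same w(c) for every classifier being compared is what makes the resulting scores commensurable, which is the constraint noted in the bullets above.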
A better Beta for the H measure of classification performance
Original by D.J. Hand, C. Anagnostopoulos, 2013, 12 pages