### Evaluate the Classifier

The ENVIEvaluateClassifier function evaluates the performance of a classifier. It accepts the following input arguments:

• Examples and corresponding truth class values that were not used to train the classifier. Use the examples object from ENVISplitExamples that was designated for evaluation. See Shuffle and Split the Examples for details.
• A classifier

ENVIEvaluateClassifier calculates predicted class values from the input examples. It then calculates a confusion matrix and accuracy metrics between the truth class values and the predicted class values.
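The bookkeeping behind a confusion matrix is generic. As an illustration only (plain Python, not ENVI code), a matrix whose rows are predicted classes and whose columns are truth classes, matching the row/column convention used later in this topic, can be built like this:

```python
# Illustrative sketch, not the ENVI implementation: tally truth vs.
# predicted class values into a confusion matrix.
def confusion_matrix(truth, predicted, nclasses):
    """Rows are predicted classes; columns are truth classes."""
    matrix = [[0] * nclasses for _ in range(nclasses)]
    for t, p in zip(truth, predicted):
        matrix[p][t] += 1
    return matrix

truth     = [0, 0, 1, 1, 2, 2, 2]
predicted = [0, 1, 1, 1, 2, 2, 0]
print(confusion_matrix(truth, predicted, 3))
# [[1, 0, 1], [1, 2, 0], [0, 0, 2]]
```

Each diagonal entry counts examples whose predicted class matches the truth class; off-diagonal entries count misclassifications.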

This code example continues from the Iterative Trainer section of the Define and Train the Classifier topic.

confusionMatrix = ENVIEvaluateClassifier(splitExamples[1], classifier)

The result is an ENVIConfusionMatrix object. See Confusion Matrix Example for details on interpreting a confusion matrix.

Print the confusion matrix:

Print, confusionMatrix.Confusion_Matrix

The following example shows how IDL prints the confusion matrix from the SVM classifier. Because of the random nature of shuffling the examples, the resulting confusion matrix and accuracy metrics will vary.

        2383           4           0           0           5
           0         320           1           0           3
           0           1         912           8           0
           0           0           0        1120           6
          14           0           0           1        2030

Print the column totals:

columnTotals = confusionMatrix.ColumnTotals()
FOR i=0, (outExamples.NCLASSES)-1 DO $
  Print, 'Ground truth total for ', $
  outExamples.CLASS_NAMES[i], ': ', $
  columnTotals[i]

Result:

Ground truth total for asphalt: 2397.00
Ground truth total for concrete: 325.000
Ground truth total for grass: 913.00
Ground truth total for tree: 1129.00
Ground truth total for building: 2044.00

Print the row totals:

rowTotals = confusionMatrix.RowTotals()
FOR i=0, (outExamples.NCLASSES)-1 DO $
  Print, 'Predicted total for ', $
  outExamples.CLASS_NAMES[i], ': ', $
  rowTotals[i]

Result:

Predicted total for asphalt: 2392.00
Predicted total for concrete: 324.000
Predicted total for grass: 921.00
Predicted total for tree: 1126.00
Predicted total for building: 2045.00
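As a cross-check, the column totals (ground-truth counts per class) and row totals (predicted counts per class) can be recomputed directly from the matrix shown earlier. A plain-Python sketch, for illustration only:

```python
# Confusion matrix printed above: rows are predicted classes, columns
# are ground-truth classes (asphalt, concrete, grass, tree, building).
matrix = [
    [2383,    4,    0,    0,    5],
    [   0,  320,    1,    0,    3],
    [   0,    1,  912,    8,    0],
    [   0,    0,    0, 1120,    6],
    [  14,    0,    0,    1, 2030],
]

column_totals = [sum(row[j] for row in matrix) for j in range(5)]  # ground truth
row_totals = [sum(row) for row in matrix]                          # predicted

print(column_totals)  # [2397, 325, 913, 1129, 2044]
print(row_totals)     # [2392, 324, 921, 1126, 2045]
```

These sums match the ColumnTotals and RowTotals output above.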

Accuracy metrics reveal how well the classifier performed. The following code prints accuracy metrics:

accuracy = confusionMatrix.Accuracy()
Print, 'Overall accuracy: ', accuracy

kappa = confusionMatrix.KappaCoefficient()
Print, 'Kappa coefficient: ', kappa

commissionError = confusionMatrix.CommissionError()
Print, 'Error of commission: ', commissionError

omissionError = confusionMatrix.OmissionError()
Print, 'Error of omission: ', omissionError

F1 = confusionMatrix.F1()
Print, 'F1 value: ', F1

precision = confusionMatrix.Precision()
Print, 'Precision: ', precision

producerAccuracy = confusionMatrix.ProducerAccuracy()
Print, 'Producer accuracy: ', producerAccuracy

recall = confusionMatrix.Recall()
Print, 'Recall: ', recall

userAccuracy = confusionMatrix.UserAccuracy()
Print, 'User accuracy: ', userAccuracy

Result:

Overall accuracy:      0.993684
Kappa coefficient:      0.991445
Error of commission:    0.00376254    0.0123457   0.00977200   0.00532860   0.00733495
Error of omission:    0.00584066    0.0153846   0.00109529   0.00797164   0.00684929
F1 value:      0.995197     0.986133     0.994547     0.993348     0.992908
Precision:      0.996237     0.987654     0.990228     0.994671     0.992665
Producer accuracy:      0.994159     0.984615     0.998905     0.992028     0.993151
Recall:      0.994159     0.984615     0.998905     0.992028     0.993151
User accuracy:      0.996237     0.987654     0.990228     0.994671     0.992665

The last seven accuracy metrics are arrays in which each value corresponds to a class (asphalt, concrete, and so on).
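The formulas behind these metrics are standard. The following plain-Python sketch (illustrative only, not ENVI code) reproduces the overall and per-class values above from the confusion matrix, taking rows as predicted classes and columns as ground-truth classes:

```python
# Illustrative sketch, not the ENVI implementation.
matrix = [
    [2383,    4,    0,    0,    5],
    [   0,  320,    1,    0,    3],
    [   0,    1,  912,    8,    0],
    [   0,    0,    0, 1120,    6],
    [  14,    0,    0,    1, 2030],
]
n = len(matrix)
total = sum(sum(row) for row in matrix)
diagonal = sum(matrix[i][i] for i in range(n))
row_totals = [sum(row) for row in matrix]                       # predicted counts
col_totals = [sum(row[j] for row in matrix) for j in range(n)]  # truth counts

# Overall accuracy: fraction of examples classified correctly.
accuracy = diagonal / total

# Kappa coefficient: agreement corrected for chance agreement.
chance = sum(row_totals[i] * col_totals[i] for i in range(n))
kappa = (total * diagonal - chance) / (total ** 2 - chance)

# Per-class metrics. User accuracy equals precision; producer accuracy
# equals recall.
precision = [matrix[i][i] / row_totals[i] for i in range(n)]
recall = [matrix[i][i] / col_totals[i] for i in range(n)]
f1 = [2 * p * r / (p + r) for p, r in zip(precision, recall)]
commission_error = [1 - p for p in precision]  # false positives / predicted
omission_error = [1 - r for r in recall]       # false negatives / truth

print(round(accuracy, 6))      # 0.993684
print(round(kappa, 6))         # 0.991445
print(round(precision[0], 6))  # asphalt: 0.996237
```

These values agree with the output shown above (to within single-precision rounding in the IDL output).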

## Next Steps

The next step is to classify the attribute image. See Run the Classifier.

© 2020 Harris Geospatial Solutions, Inc.