Let a be the number of interference samples that are correctly identified as interference targets, b be the number of interference samples that are incorrectly identified as ideal targets, c be the number of ideal target samples that are incorrectly identified as interference targets, and d be the number of ideal target samples that are correctly identified as ideal targets. Based on these four counts, the following classification performance evaluation indicators can be defined.
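For reference, the four counts can be arranged as a 2x2 contingency table, with rows giving the actual class and columns the predicted class:

                            predicted interference    predicted ideal target
    actual interference               a                         b
    actual ideal target               c                         d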
(1) The detection rate, also known as the true positive rate (TPR), refers to the proportion of ideal target samples that are correctly identified. TPR is also called sensitivity or recall.
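In terms of the counts defined above,

$$\mathrm{TPR} = \frac{d}{c + d}$$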
(2) The false alarm rate, also known as the false positive rate (FPR), refers to the proportion of interference samples that are incorrectly identified as ideal targets.
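That is,

$$\mathrm{FPR} = \frac{b}{a + b}$$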
(3) The missed detection rate, also known as the false negative rate (FNR), refers to the proportion of ideal target samples that are incorrectly identified as interference targets.
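That is,

$$\mathrm{FNR} = \frac{c}{c + d} = 1 - \mathrm{TPR}$$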
(4) The true negative rate (TNR) refers to the proportion of interference samples that are correctly identified. TNR is also called specificity.
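That is,

$$\mathrm{TNR} = \frac{a}{a + b} = 1 - \mathrm{FPR}$$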
(5) Accuracy (ACC) refers to the proportion of correctly classified samples in the total sample set.
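That is,

$$\mathrm{ACC} = \frac{a + d}{a + b + c + d}$$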
(6) Precision refers to the proportion of true ideal target samples among all samples identified as ideal targets.
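Writing P for the precision, in terms of the counts defined above,

$$P = \frac{d}{b + d}$$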
(7) The F score is the harmonic mean of precision and recall, and serves as a comprehensive indicator of the balance between the two. Because the harmonic mean of two numbers tends toward the smaller of the two, the F score lies in [0, 1], and the closer it is to 1 the better; a high F score ensures that both precision and recall are high, indicating relatively robust classification performance.
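With precision P and recall R (= TPR) as defined above,

$$F = \frac{2PR}{P + R}$$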
(8) The receiver operating characteristic (ROC) curve is obtained by adjusting the parameters of the classifier (typically its decision threshold) to produce a series of (FPR, TPR) points. The ROC curve plots the false alarm rate (FPR) on the horizontal axis (x-axis) and the detection rate (TPR) on the vertical axis (y-axis). The point (0, 1) corresponds to an ideal classifier that separates all ideal targets from all interference targets; at (1, 1), every sample is identified as an ideal target.
The ROC curve is commonly used to compare different recognition approaches side by side. In general, the larger the area under the ROC curve (AUC), the better the performance of the classifier, and the farther the ROC curve lies from the diagonal, the stronger the classifier's discriminative power.
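As a concrete illustration, the following is a minimal Python sketch (using NumPy, with made-up scores and labels rather than data from the original text) of how sweeping a decision threshold over classifier scores traces out the (FPR, TPR) points, and how the AUC can then be computed with the trapezoidal rule:

    import numpy as np

    def roc_points(scores, labels):
        # labels: 1 = ideal target (positive class), 0 = interference;
        # scores: higher means more likely to be an ideal target.
        # Sorting by descending score and lowering the threshold one sample
        # at a time traces the ROC staircase (score ties ignored for simplicity).
        order = np.argsort(-scores)
        labels = labels[order]
        tp = np.cumsum(labels)            # cumulative true positives (d)
        fp = np.cumsum(1 - labels)        # cumulative false positives (b)
        tpr = tp / labels.sum()           # detection rate d / (c + d)
        fpr = fp / (1 - labels).sum()     # false alarm rate b / (a + b)
        # prepend (0, 0), i.e. a threshold above every score
        return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

    def auc(fpr, tpr):
        # area under the ROC curve by the trapezoidal rule
        return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

    # hypothetical scores and ground-truth labels for six samples
    scores = np.array([0.95, 0.80, 0.70, 0.45, 0.30, 0.10])
    labels = np.array([1, 1, 0, 1, 0, 0])
    fpr, tpr = roc_points(scores, labels)
    print(auc(fpr, tpr))                  # about 0.889 for this toy example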
(9) The confusion matrix (CM) is an N×N matrix, where N is the number of sample classes. Each row of the matrix corresponds to an actual category and each column to a predicted category. The entry in row i and column j gives the number or proportion of class-i samples that are predicted as class j, so the diagonal entries give the number or proportion of correctly predicted samples, while the off-diagonal entries indicate the cases the model misclassified. Multiple evaluation indicators, including the detection rate and accuracy defined above, can be computed from the confusion matrix, and for multi-class problems the confusion matrix comprehensively describes the classification results.
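To make the row/column convention concrete, the following minimal Python sketch (with made-up labels for a hypothetical three-class problem) builds such a matrix and reads the accuracy off its diagonal:

    import numpy as np

    def confusion_matrix(actual, predicted, n_classes):
        # rows = actual class, columns = predicted class
        cm = np.zeros((n_classes, n_classes), dtype=int)
        for a, p in zip(actual, predicted):
            cm[a, p] += 1
        return cm

    # hypothetical ground-truth and predicted labels for three classes
    actual    = [0, 0, 1, 1, 2, 2, 2]
    predicted = [0, 1, 1, 1, 2, 0, 2]
    cm = confusion_matrix(actual, predicted, n_classes=3)
    print(cm)
    # correctly classified samples lie on the diagonal
    accuracy = np.trace(cm) / cm.sum()
    print(accuracy)                       # 5 of 7 samples correct, about 0.714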