14.3 Metrics and Performance Prediction

TABLE 14-1  Notional Confusion Matrix

                 ATR Target 1   ATR Target 2   ...   ATR Target N
True Target 1    c(1,1)         c(1,2)         ...   c(1,N)
True Target 2    c(2,1)         c(2,2)         ...   c(2,N)
...
True Target N    c(N,1)         c(N,2)         ...   c(N,N)

... sophisticated techniques for feature extraction and observation are implemented, while P_D gains are generally realized in Step 4 (test the feature set), as increasingly advanced tests are applied to pre-processed feature sets [1].

Confusion matrices essentially display snapshots of the same information in tabular form, as shown in Table 14-1. The ATR answers appear across the top row, and the true target list appears in the leftmost column. Hence, the (i, j) element of the confusion matrix corresponds to the probability that target i is labeled target j by the ATR algorithm. The more closely the confusion matrix resembles the identity matrix, the better the results [2].

If Monte Carlo trials of simulated or real data are available, then the confusion matrices can be found by computing the fraction of Monte Carlo trials in which target i is labeled as target j. Furthermore, as long as the feature sets are statistically well characterized, performance prediction tools can predict the confusion matrices, alleviating the need for extensive Monte Carlo analysis.

This is particularly helpful in the design process. If the off-diagonal terms in the predicted confusion matrix are sufficiently small for the application, the ATR scheme design is justified. If they are too large, then the ATR scheme design should be revisited. Perhaps the observations are not measured accurately enough to match the predictions, or the chosen feature set simply fails to separate the target set reliably. If other features are available, then their inclusion via multi-dimensional analysis may solve the problem; otherwise, the granularity of the ATR answer may need to be adjusted.

14.3.2 Performance Prediction

Two general performance prediction metrics are available, both of which flow from hypothesis testing. If the application is well suited to the Neyman-Pearson framework, then the Type II error bound (β) may be approximated from the relative entropy (i.e., the Kullback-Leibler distance) [110]. Similarly, if the application is well suited to the Bayesian context, then the probability-of-error bound (P_E) may be approximated from the Chernoff information [110].

14.3.2.1 Performance Prediction in the Neyman-Pearson Framework

The Neyman-Pearson framework applies most naturally to applications in which one type of error has more dire consequences than the other. Consider an ATR scheme that supports fire-control decisions, where the null hypothesis is that a given target is friendly and the alternate hypothesis is that it is an enemy meriting weapon fire. Only a small probability of friendly fire (i.e., firing upon a friendly target) may be acceptable. For a given null hypothesis and alternate hypothesis, the acceptance threshold is then set to keep the probability of a Type I error (α) - incorrectly classifying a friendly target as an enemy - within the desired bounds. Since the probabilities of Type I and Type II errors cannot be minimized simultaneously, the resulting Type II error must simply be tolerated.
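The Monte Carlo estimate of the confusion matrix described above amounts to tallying ATR declarations against the true target labels and normalizing each row. The following is a minimal sketch in Python; the function name, array names, and the three-target trial data are hypothetical illustrations, not drawn from the text.

```python
import numpy as np

def confusion_matrix(true_labels, atr_labels, num_targets):
    """Return C where C[i, j] is the fraction of Monte Carlo trials in which
    true target i was declared target j by the ATR algorithm."""
    counts = np.zeros((num_targets, num_targets))
    for t, a in zip(true_labels, atr_labels):
        counts[t, a] += 1
    # Normalize each row by the number of trials for that true target,
    # so each row sums to one (rows with no trials are left as zeros).
    row_totals = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_totals,
                     out=np.zeros_like(counts), where=row_totals > 0)

# Hypothetical three-target example with a handful of trials.
true_labels = np.array([0, 0, 1, 1, 2, 2, 2])
atr_labels  = np.array([0, 1, 1, 1, 2, 2, 0])
print(confusion_matrix(true_labels, atr_labels, num_targets=3))
```

The closer the printed matrix is to the identity matrix, the better the ATR scheme separates the target set on this trial data.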
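For the Neyman-Pearson case, the relative-entropy approximation cited above reflects the Chernoff-Stein result that, for a fixed Type I error, the best achievable Type II error over n independent observations decays roughly as exp(-n D(P0||P1)). A minimal sketch, assuming discrete feature distributions p0 and p1 under the null and alternate hypotheses (the distributions and observation count are illustrative placeholders):

```python
import numpy as np

def kl_divergence(p, q):
    """Relative entropy D(p || q) in nats for discrete distributions on the same support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                      # terms with p = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p0 = np.array([0.7, 0.2, 0.1])        # feature distribution, friendly target (null)
p1 = np.array([0.2, 0.3, 0.5])        # feature distribution, enemy target (alternate)

n = 20                                # number of independent observations
beta_approx = np.exp(-n * kl_divergence(p0, p1))
print(f"Approximate Type II error bound: {beta_approx:.2e}")
```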
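For the Bayesian case, the corresponding sketch uses the Chernoff information, whose exponent governs the achievable probability of error; the distributions and the observation count are again hypothetical placeholders, and the minimization over the exponent parameter is done on a simple grid for clarity.

```python
import numpy as np

def chernoff_information(p0, p1, grid=1001):
    """C(p0, p1) = -min over 0 <= lam <= 1 of log sum_k p0[k]**lam * p1[k]**(1 - lam)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    lams = np.linspace(0.0, 1.0, grid)
    log_sums = [np.log(np.sum(p0**lam * p1**(1.0 - lam))) for lam in lams]
    return float(-np.min(log_sums))

p0 = np.array([0.7, 0.2, 0.1])        # feature distribution, friendly target
p1 = np.array([0.2, 0.3, 0.5])        # feature distribution, enemy target

n = 20                                # number of independent observations
pe_approx = np.exp(-n * chernoff_information(p0, p1))
print(f"Approximate probability-of-error bound: {pe_approx:.2e}")
```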
