
Definition 3.9. The upper boundary between the set $W_k$ and the ideal point $(0,1)$ is called the receiver operating characteristic (roc) for the set of all tests $\mathcal{V}$.

Since the set $W_k$ changes with time, the roc is time-varying as well. Also, since $W_k$ is convex (Fact 3.6), the roc is concave. By Fact 3.8, there is an equivalent convex curve that separates $W_k$ from the point $(1,0)$. However, the term roc refers only to the upper boundary.
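As a quick numerical illustration (not taken from the text), the following Python sketch recovers the upper boundary of a finite sample of performance points $(P_{f,k}, P_{d,k})$; the sample values and the helper name `upper_boundary` are illustrative assumptions.

```python
# A minimal sketch (hypothetical numbers): the roc as the upper boundary of a
# finite sample of achievable performance points (P_f, P_d) drawn from W_k.
import numpy as np

def _turn(o, a, b):
    """z-component of (a - o) x (b - o); negative for a clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_boundary(points):
    """Upper convex hull (left to right) of a 2-D point cloud."""
    pts = sorted(set(map(tuple, points)))
    hull = []
    for p in pts:
        # Keep only clockwise turns, which trace the concave upper boundary.
        while len(hull) >= 2 and _turn(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return np.array(hull)

# Sample of achievable (P_f, P_d) pairs, including the trivial tests
# (0,0) (never declare a fault) and (1,1) (always declare a fault).
W_k = [(0.0, 0.0), (0.05, 0.4), (0.1, 0.6), (0.3, 0.8),
       (0.4, 0.5), (0.5, 0.85), (0.7, 0.9), (1.0, 1.0)]
print(upper_boundary(W_k))   # vertices of the piecewise-linear upper boundary
```

Since $W_k$ is convex (Fact 3.6), every point on this piecewise-linear boundary is itself achievable (e.g., by randomizing between the sampled tests), so the curve is an inner approximation of the true roc.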

Characterizing the ROC<br />

Although it may not be possible to compute the roc for the set of all tests $\mathcal{V}$, the set of tests whose performance points lie on the roc can be characterized theoretically. For any $\alpha \in (0,1]$, let $\mathcal{V}_\alpha$ be the set of tests for which $P_{f,k} \le \alpha$ at time $k$. The set of Neyman–Pearson tests is defined as
$$
\mathcal{V}_{\mathrm{np}} = \operatorname*{argmax}_{V \in \mathcal{V}_\alpha} P_{d,k}(V). \tag{3.15}
$$

In general, the set $\mathcal{V}_\alpha$ is too abstract to properly formulate and solve this constrained optimization problem. However, the following lemma shows that $\mathcal{V}_{\mathrm{np}}$ is nonempty and explicitly characterizes one element in $\mathcal{V}_{\mathrm{np}}$.

Lemma 3.10 (Neyman–Pearson [71]). The likelihood ratio test with $P_{f,k} = \alpha$ is in $\mathcal{V}_{\mathrm{np}}$.

Therefore, the roc is traced out by the set of likelihood ratio tests as $\alpha$ varies (see [61] for details).
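As a concrete illustration (not from the text), consider a scalar measurement that is standard normal under the fault-free hypothesis and mean-shifted under the fault hypothesis; these densities are assumptions chosen for the sketch. The likelihood ratio is monotone in the measurement, so each likelihood ratio test reduces to a threshold test, and choosing the threshold to meet the false-alarm constraint gives a point on the roc (Lemma 3.10).

```python
# A minimal sketch: y ~ N(0,1) under H0 (no fault), y ~ N(2,1) under H1 (fault).
from scipy.stats import norm

mu = 2.0          # assumed mean shift under the fault hypothesis
alpha = 0.05      # false-alarm constraint P_{f,k} <= alpha

# Choose the threshold so that P_f = alpha; by Lemma 3.10 this test is in V_np.
tau = norm.ppf(1.0 - alpha)          # P(y > tau | H0) = alpha
P_f = 1.0 - norm.cdf(tau)            # probability of false alarm
P_d = 1.0 - norm.cdf(tau - mu)       # probability of detection, P(y > tau | H1)
print(f"(P_f, P_d) = ({P_f:.3f}, {P_d:.3f})")

# Sweeping the threshold (equivalently, alpha) traces out the whole roc
# for this particular detection problem.
roc_points = [(1 - norm.cdf(norm.ppf(1 - a)),
               1 - norm.cdf(norm.ppf(1 - a) - mu))
              for a in (0.01, 0.05, 0.1, 0.2, 0.5)]
```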

In the optimization problem (3.15), the probability of a false alarm is constrained to be no greater than some $\alpha \in (0,1]$. However, we can also interpret the roc in terms of the vector optimization problem
$$
\max_{V \in \mathcal{V}} \; \bigl(-P_{f,k},\, P_{d,k}\bigr). \tag{3.16}
$$

Since the performance point $(P_{f,k}, P_{d,k})$ takes values in $[0,1]^2$, it is not immediately clear what it means for one point to be better than another. Clearly, the ideal point $(0,1)$ is the best, and points on the diagonal are of little use. The notion of Pareto optimality provides one way to compare values of the objective $(-P_{f,k}, P_{d,k})$. We say that a point $(P_{f,k}, P_{d,k}) = (\alpha,\beta)$ is Pareto optimal if no other test can simultaneously improve both $P_{f,k}$ and $P_{d,k}$. That is, for any other test with performance $(\alpha',\beta') \neq (\alpha,\beta)$, either $\alpha' > \alpha$ or $\beta' < \beta$. Hence, the roc can be defined as the set of Pareto optimal points for the vector optimization problem (3.16). One well-known method for generating the set of Pareto optimal points (i.e., the roc) is to solve the "scalarized" optimization problem
$$
\max_{V \in \mathcal{V}} \; -\gamma P_{f,k} + (1-\gamma) P_{d,k} \tag{3.17}
$$
for all $\gamma \in [0,1]$ [5, 106]. Since the roc is concave, a lower bound may be computed by solving (3.17) at a finite collection of points $0 < \gamma_0 < \gamma_1 < \cdots < \gamma_m < 1$ and linearly interpolating between the resulting performance points.
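The Python sketch below illustrates this scalarization numerically. It is not from the text: the abstract set of tests is replaced by a hypothetical finite list of candidate tests, each represented only by its performance pair $(P_{f,k}, P_{d,k})$.

```python
# A minimal sketch (hypothetical numbers): approximate the roc from below by
# solving the scalarized problem (3.17) on a grid of gamma values.
import numpy as np

candidates = np.array([  # assumed achievable (P_f, P_d) pairs, one per test
    [0.0, 0.0], [0.02, 0.35], [0.05, 0.55], [0.1, 0.7],
    [0.2, 0.82], [0.4, 0.92], [0.7, 0.98], [1.0, 1.0],
])

gammas = np.linspace(0.05, 0.95, 10)                # 0 < gamma_0 < ... < gamma_m < 1
scores = -np.outer(gammas, candidates[:, 0]) + \
         np.outer(1.0 - gammas, candidates[:, 1])   # objective of (3.17)
best = candidates[np.argmax(scores, axis=1)]        # maximizer for each gamma

# Sort by P_f and connect consecutive points with straight lines; concavity of
# the roc guarantees this piecewise-linear curve lies on or below the true roc.
lower_bound = np.unique(best[np.argsort(best[:, 0])], axis=0)
print(lower_bound)
```

Increasing the number of grid points $\gamma_i$ (or the number of candidate tests) tightens the piecewise-linear lower bound.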
