
D = the distribution from which the sample examples are drawn,

m = the number of examples in the training set,

H = the set of possible hypotheses,

and f = the function that is approximated by the hypothesis h.

We now define the error of a hypothesis h by

error(h) = P(h(x) ≠ f(x) | x ~ D),

where x ~ D means that x is drawn according to the distribution D. A hypothesis h is said to be approximately correct when

error(h) ≤ ε,

where ε is a small positive quantity.

An approximately correct hypothesis h thus lies within the ε-ball around f (Fig. 13.15). When h lies outside the ε-ball, we call it a bad hypothesis. [13]
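The definition of error(h) suggests a direct Monte-Carlo check: draw examples from D and count disagreements between h and f. The Python sketch below is only illustrative; h, f, and the sampler for D are hypothetical stand-ins, not part of the text.

    import random

    def estimated_error(h, f, sample_from_D, n=100000):
        # Monte-Carlo estimate of error(h) = P(h(x) != f(x)), x drawn from D.
        mismatches = 0
        for _ in range(n):
            x = sample_from_D()
            if h(x) != f(x):
                mismatches += 1
        return mismatches / n

    # Hypothetical example: D uniform on [0, 1), target f, hypothesis h.
    f = lambda x: x > 0.5
    h = lambda x: x > 0.55        # disagrees with f only on [0.5, 0.55)
    print(estimated_error(h, f, random.random))
    # about 0.05, so h is approximately correct for any eps >= 0.05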

Now, suppose a hypothesis h_b ∈ H_bad is consistent with the first m examples. Since a bad hypothesis has error greater than ε, the probability that it agrees with a single example is at most (1 − ε). If the m examples are drawn independently, the probability that all m of them are consistent with h_b is at most (1 − ε)^m. Now, for H_bad to contain a consistent hypothesis, at least one of the hypotheses in H_bad must be consistent, and by the union bound the probability of this event is at most the sum of the individual probabilities.
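As a quick numeric illustration of the decay of (1 − ε)^m (the values of ε and m below are assumptions chosen for the example):

    eps, m = 0.1, 50
    print((1 - eps) ** m)    # about 0.00515: a hypothesis with error > 0.1
                             # survives 50 independent examples with probability < 1%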

Fig. 13.15: The ε-ball around f within the hypothesis space H; the bad hypotheses H_bad lie outside the ball.

Thus P(a consistent h_b ∈ H_bad) ≤ |H_bad| (1 − ε)^m ≤ |H| (1 − ε)^m,

where |H_bad| and |H| denote the cardinality of H_bad and H respectively. If we put a small positive upper bound δ on the above quantity, we find

|H| (1 − ε)^m ≤ δ.

Using (1 − ε) ≤ e^(−ε), this inequality is satisfied whenever

m ≥ (1/ε) (ln |H| + ln (1/δ)),

so with at least this many training examples, any hypothesis consistent with all of them is approximately correct with probability at least (1 − δ).
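The bound on m is simple enough to compute directly; a minimal Python sketch of this arithmetic (the function name pac_sample_size and the example values of ε, δ, and |H| are our own, not from the text):

    from math import ceil, log

    def pac_sample_size(eps, delta, h_size):
        # Smallest m with m >= (1/eps) * (ln|H| + ln(1/delta)).
        return ceil((log(h_size) + log(1 / delta)) / eps)

    # Hypothetical numbers: |H| = 2**10 hypotheses, eps = delta = 0.05.
    print(pac_sample_size(0.05, 0.05, 2 ** 10))   # 199 examples suffice

Note that m grows only logarithmically with |H| and 1/δ, but linearly with 1/ε: demanding a more accurate hypothesis is far costlier in examples than demanding a more confident one.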

