
Now, we have
\[
\nu^y\, 2^{-m-H_\nu(y\mid x,m,\mu)} \stackrel{*}{<} 2^{-m},
\qquad
\nu^y\, 2^{-G_{\mu,\nu}(x,y)} \stackrel{*}{<} 2^{-H_\mu(x\mid\nu)}.
\]
Therefore $\mu^x\nu^y\, 2^{-G_{\mu,\nu}(x,y)} \stackrel{*}{<} 1$, implying $H_{\mu,\nu}(x,y) \stackrel{+}{<} G_{\mu,\nu}(x,y)$ by the minimality property of $H_{\mu,\nu}(x,y)$. This proves the $\stackrel{+}{<}$ half of our theorem.
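To make the displayed bounds easier to verify, here is a brief unpacking; it is a sketch that assumes only the test properties used in this section ($\nu^y 2^{-H_\nu(y\mid x,m,\mu)} \le 1$ and $\mu^x 2^{-H_\mu(x\mid\nu)} \le 1$) and that $G_{\mu,\nu}(x,y)$ was defined, before this excerpt, as $H_\mu(x\mid\nu) + H_\nu(y\mid x,\lceil H_\mu(x\mid\nu)\rceil,\mu)$:
\[
\nu^y\, 2^{-m-H_\nu(y\mid x,m,\mu)} = 2^{-m}\,\nu^y\, 2^{-H_\nu(y\mid x,m,\mu)} \le 2^{-m},
\]
and substituting $m = \lceil H_\mu(x\mid\nu)\rceil$ gives $\nu^y\, 2^{-G_{\mu,\nu}(x,y)} \stackrel{*}{<} 2^{-H_\mu(x\mid\nu)}$; integrating against $\mu^x$ then yields $\mu^x\nu^y\, 2^{-G_{\mu,\nu}(x,y)} \stackrel{*}{<} \mu^x\, 2^{-H_\mu(x\mid\nu)} \le 1$.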

To prove the inequality $\stackrel{+}{>}$, let
\[
f_\nu(x,y,\mu) = 2^{-H_{\mu,\nu}(x,y)},
\qquad
F_\nu(x,\mu) = \log \nu^y f_\nu(x,y,\mu).
\]
According to Proposition 6.5,
\[
H_\nu(y\mid x,\lfloor F\rfloor,\mu) \stackrel{+}{<} -\log f_\nu(x,y,\mu) + F_\nu(x,\mu),
\qquad
H_{\mu,\nu}(x,y) \stackrel{+}{>} -F + H_\nu(y\mid x,\lceil -F\rceil,\mu).
\]

Proposition 6.4 implies $-F_\nu(x,\mu) \stackrel{+}{>} H_\mu(x\mid\nu)$. From here, the monotonicity lemma 6.7 implies the $\stackrel{+}{>}$ half of the theorem.
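Spelling out how the $\stackrel{+}{>}$ half follows (a sketch; the precise statement of lemma 6.7 is not restated here): writing $F = F_\nu(x,\mu)$, the bound $-F \stackrel{+}{>} H_\mu(x\mid\nu)$ together with the monotonicity of $H_\nu(y\mid x,m,\mu)$ in the integer parameter $m$ allows the term $-F$ and the condition $\lceil -F\rceil$ to be replaced by $H_\mu(x\mid\nu)$ and $\lceil H_\mu(x\mid\nu)\rceil$ at the cost of an additive constant:
\[
H_{\mu,\nu}(x,y) \stackrel{+}{>} -F + H_\nu(y\mid x,\lceil -F\rceil,\mu) \stackrel{+}{>} H_\mu(x\mid\nu) + H_\nu(y\mid x,\lceil H_\mu(x\mid\nu)\rceil,\mu),
\]
which is the right-hand side of the addition theorem stated before this excerpt.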

6.3. Some special cases of the addition theorem; information. The function $H_\mu(\cdot)$ behaves quite differently for different kinds of measures $\mu$. Recall the following property of complexity:
\[
H(f(x)\mid y) \stackrel{+}{<} H(x\mid g(y)) \stackrel{+}{<} H(x) \qquad (6.7)
\]
for any computable functions $f, g$. This implies
\[
H(y) \stackrel{+}{<} H(x,y).
\]
In contrast, if $\mu$ is a probability measure then
\[
H_\nu(y) \stackrel{+}{>} H_{\mu,\nu}(x,y).
\]
This comes from the fact that $2^{-H_\nu(y)}$ is a test for $\mu\times\nu$.
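The last claim can be checked in one line (a sketch, using only that $\mu$ is a probability measure and that $2^{-H_\nu(y)}$ is a $\nu$-test):
\[
(\mu\times\nu)^{x,y}\, 2^{-H_\nu(y)} = \mu^x\bigl(\nu^y\, 2^{-H_\nu(y)}\bigr) \le \mu^x\, 1 = 1,
\]
so $2^{-H_\nu(y)}$, viewed as a function of the pair $(x,y)$, is indeed a test for $\mu\times\nu$, and the minimality property of $H_{\mu,\nu}$ gives $H_{\mu,\nu}(x,y) \stackrel{+}{<} H_\nu(y)$.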

Let us explore some of the consequences and meanings of the additivity property. As noted in (4.2), the subscript $\mu$ can always be added to the condition: $H_\mu(x) \stackrel{+}{=} H_\mu(x\mid\mu)$. Similarly, we have
\[
H_{\mu,\nu}(x,y) := H_{\mu\times\nu}(x,y) \stackrel{+}{=} H_{\mu\times\nu}(x,y\mid\mu\times\nu) \stackrel{+}{=} H_{\mu\times\nu}(x,y\mid\mu,\nu) =: H_{\mu,\nu}(x,y\mid\mu,\nu),
\]
where only the next-to-last relation requires a new (easy) consideration.
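Concerning that next-to-last relation: one direction is immediate, since $\mu\times\nu$ is computable from the pair $(\mu,\nu)$, so by the natural analogue of (6.7) the richer condition gives $H_{\mu\times\nu}(x,y\mid\mu,\nu) \stackrel{+}{<} H_{\mu\times\nu}(x,y\mid\mu\times\nu)$. For the other direction, one possible easy argument, sketched here under the added assumption that $\mu$ and $\nu$ are probability measures (and not necessarily the author's intended one), recovers the two conditions as marginals of the product measure:
\[
\mu(A) = (\mu\times\nu)(A\times Y), \qquad \nu(B) = (\mu\times\nu)(X\times B).
\]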

Let us assume that $X = Y = \Sigma^*$, the discrete space of all strings. With general $\mu, \nu$ such that $\mu(x), \nu(x) \ne 0$ for all $x$, using (4.6), the addition theorem specializes to the ordinary addition theorem, conditioned on $\mu, \nu$:
\[
H(x,y\mid\mu,\nu) \stackrel{+}{=} H(x\mid\mu,\nu) + H(y\mid x, H(x\mid\mu,\nu), \mu, \nu).
\]
In particular, whenever $\mu, \nu$ are computable, this is just the regular addition theorem.
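For reference, the regular addition theorem invoked here is the classical result for prefix complexity, which the display above reduces to once the computable conditions $\mu, \nu$ are dropped (conditioning on a computable object changes these quantities only by an additive constant):
\[
H(x,y) \stackrel{+}{=} H(x) + H(y\mid x, H(x)).
\]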

Just as we defined mutual information above as $I(x:y) = H(x) + H(y) - H(x,y)$, the new addition theorem suggests a more general definition
\[
I_{\mu,\nu}(x:y) = H_\mu(x\mid\nu) + H_\nu(y\mid\mu) - H_{\mu,\nu}(x,y).
\]
In the discrete case $X = Y = \Sigma^*$ with everywhere positive $\mu(x), \nu(x)$, this simplifies to
\[
I_{\mu,\nu}(x:y) = H(x\mid\mu,\nu) + H(y\mid\mu,\nu) - H(x,y\mid\mu,\nu).
\]
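As a worked special case (a sketch): if $\mu$ and $\nu$ are moreover computable, they can be dropped from the conditions at the cost of an additive constant, and the quantity reduces to the ordinary mutual information:
\[
I_{\mu,\nu}(x:y) \stackrel{+}{=} H(x) + H(y) - H(x,y) = I(x:y).
\]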
