Algorithmic Learning Theory - Marcus Hutter
Editors’ Introduction
John Case, Timo Kötzing and Todd Paddock study a setting of learning in the limit in which the time to produce the final hypothesis is derived from some ordinal which is updated step by step downwards, via some “feasible” functional, until it reaches zero. Their work first proposes a definition of feasible iteration of feasible learning functionals, and then studies learning hierarchies defined in terms of these notions; both collapse results and strict hierarchies are established under suitable conditions. The paper also gives upper and lower runtime bounds for learning hierarchies related to these definitions, expressed in terms of exponential polynomials.
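The ordinal-countdown idea can be made concrete with a toy sketch. This is our illustration, not the paper's feasible functionals: we learn the class COINIT of sets {n, n+1, n+2, ...} from positive data, coding ordinals below ω² as pairs (a, b) meaning ω·a + b, compared lexicographically. The budget starts at ω; the first datum forces a descent to a finite ordinal, after which each mind change strictly decreases it, so the counter provably reaches zero.

```python
# Toy sketch (an illustration of ordinal mind-change countdowns, not the
# paper's construction): learning COINIT = { {n, n+1, ...} : n >= 0 }
# from positive data.  Ordinals below omega^2 are coded as pairs (a, b)
# meaning omega*a + b; Python's tuple comparison is lexicographic, which
# matches the ordinal order on this coding.

class OrdinalBudgetLearner:
    """Conjectures {m, m+1, ...} where m is the least element seen so far.

    The mind-change budget starts at omega = (1, 0).  On the first datum x
    the learner descends below omega to the finite ordinal (0, x); every
    later mind change lowers the minimum, hence the budget, so only
    finitely many revisions are possible.
    """

    def __init__(self):
        self.budget = (1, 0)   # the ordinal omega
        self.minimum = None    # least element observed; None = no conjecture yet

    def observe(self, x):
        if self.minimum is None or x < self.minimum:
            new_budget = (0, x)            # strictly below the old budget
            assert new_budget < self.budget, "ordinal must strictly decrease"
            self.budget, self.minimum = new_budget, x
        return self.minimum                # conjecture: {minimum, minimum+1, ...}

learner = OrdinalBudgetLearner()
for datum in [7, 9, 3, 5, 3, 1]:
    learner.observe(datum)
print(learner.minimum, learner.budget)     # 1 (0, 1)
```

The descent from ω is the crux: before the first datum the learner cannot name a finite bound on its mind changes, but the first datum x licenses the commitment to at most x further revisions.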
John Case and Samuel Moelius III study iterative learning. This is a variant of the Gold-style learning model described above in which each of a learner’s output conjectures may depend only on the learner’s current conjecture and on the current input element. Case and Moelius analyze two extensions of this iterative model which incorporate parallelism in different ways. Roughly speaking, one of their results shows that running several distinct instantiations of a single learner in parallel can actually increase the power of iterative learners. This provides an interesting contrast with many standard settings where allowing parallelism only provides an efficiency improvement. Another result deals with a “collective” learner which is composed of a collection of communicating individual learners that run in parallel.
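The memory restriction that defines iterative learning can be seen in a minimal sketch (ours, not Case and Moelius's construction): the update function receives only the current conjecture and the current datum, never the history of past data. Here the learner identifies the class of sets {0, ..., n} from positive data, with the conjecture n standing for the guess {0, ..., n}.

```python
# Toy sketch of an iterative learner (our illustration, not the paper's):
# the update sees only (current conjecture, current datum) -- it has no
# other memory of the text seen so far.

def iterative_update(conjecture, datum):
    """Learn the class { {0, ..., n} : n >= 0 } from positive data.

    The integer conjecture n encodes the guess {0, ..., n}.  Because the
    maximum seen so far is recoverable from the conjecture alone, this
    class is learnable without any extra memory.
    """
    return datum if conjecture is None else max(conjecture, datum)

conjecture = None
for datum in [2, 0, 5, 3, 5]:
    conjecture = iterative_update(conjecture, datum)
print(conjecture)   # 5 -- converged to the target {0, ..., 5}
```

Classes whose correct conjecture depends on information not recoverable from the conjecture itself are where plain iterative learners fail; the paper's parallel extensions recover some of that lost power by running several such learners side by side.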
Sanjay Jain, Frank Stephan and Nan Ye study some basic questions about how hypothesis spaces connect to the class of languages being learned in Gold-style models. Building on work by Angluin, Lange and Zeugmann, their paper introduces a comprehensive unified approach to studying learning languages in the limit relative to different hypothesis spaces. Their work distinguishes between four different types of learning as they relate to hypothesis spaces, and gives results for vacillatory and behaviorally correct learning. They further show that every behaviorally correct learnable class has a prudent learner, i.e., a learner using a hypothesis space such that it learns every set in the hypothesis space.
Sanjay Jain and Frank Stephan study Gold-style learning of languages in some special numberings such as Friedberg numberings, in which each set has exactly one number. They show that while explanatorily learnable classes can all be learned in some Friedberg numberings, this is not the case for either behaviorally correct learning or finite learning. They also give results on how other properties of learners, such as consistency, conservativeness, prudence, iterativeness, and non U-shaped learning, relate to Friedberg numberings and other numberings.
Complexity aspects of learning. Connections between complexity and learning have been studied from a range of different angles. Work along these lines has been done in an effort to understand the computational complexity of various learning tasks; to measure the complexity of classes of functions using parameters such as the Vapnik-Chervonenkis dimension; to study functions of interest in learning theory from a complexity-theoretic perspective; and to understand connections between Kolmogorov-style complexity and learning. All four of these aspects were explored in research presented at ALT 2007.