i²Learning: Perpetual Learning through Bias Shifting

Du Zhang
Department of Computer Science
California State University
Sacramento, CA 95819-6021
zhangd@ecs.csus.edu

Abstract

How to develop an agent system that can engage in perpetual learning to incrementally improve its problem solving performance over time is a challenging research topic. In this paper, we describe a framework called i²Learning for such perpetual learning agents. The i²Learning framework has the following characteristics: (1) the learning episodes of the agent are triggered by inconsistencies it encounters during its problem-solving episodes; (2) the perpetual learning process is embodied in the continuous knowledge refinement and revision so as to overcome encountered inconsistencies; (3) each learning episode results in incremental improvement of the agent's performance; and (4) i²Learning is an overarching structure that accommodates the growth and expansion of various inconsistency-specific learning strategies. Using mutually exclusive inconsistency as an example, we demonstrate how i²Learning facilitates learning through bias shifting.

Keywords: inconsistency, i²Learning, inductive bias, bias shifting, perpetual learning agents.

1. Introduction

One of the challenges in developing an agent system that can engage in perpetual learning to incrementally improve its problem solving performance over time is deciding on what will trigger its perpetual learning episodes. In this paper, we describe a framework called i²Learning for such perpetual learning agents. i²Learning, which stands for inconsistency-induced learning, allows for inconsistencies to be utilized as stimuli to learning episodes. The proposed framework has the following characteristics: (1) the learning episodes of the agent are triggered by inconsistencies it encounters during its problem-solving episodes; (2) the perpetual learning process is embodied in the continuous knowledge refinement and revision so as to overcome encountered inconsistencies; (3) each learning episode results in incremental improvement of the agent's performance; and (4) i²Learning is an overarching structure that accommodates the growth and expansion of various inconsistency-specific learning strategies.

Inconsistencies are ubiquitous in the real world, manifesting themselves in a plethora of human behaviors and the computing systems we build [2,6,16-22]. Inconsistencies are phenomena reflecting various causes, ranging from data, information, knowledge, and meta-knowledge to expertise [22]. As such, inconsistencies can be utilized as important heuristics in an agent's pursuit of perpetual learning capability. When encountering an inconsistency or a conflicting circumstance during a problem-solving episode, an agent recognizes the nature of the inconsistency and overcomes it by refining or augmenting its knowledge such that its performance at tasks is improved. This continuous and alternating sequence of problem-solving episodes and i²Learning episodes underpins the agent's incremental performance improvement process.
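As a rough illustration of this alternating process, the control loop of such an agent can be pictured as in the following minimal sketch; the names used (PerpetualLearningAgent, solve, detect_inconsistency, revise) are illustrative assumptions, not an API defined in this paper.

# Hypothetical sketch of the i2Learning control loop; names are illustrative only.
class PerpetualLearningAgent:
    def __init__(self, knowledge_base):
        self.kb = knowledge_base

    def run(self, task_stream):
        """Alternate problem-solving episodes with i2Learning episodes."""
        for task in task_stream:
            result = self.solve(task)                      # problem-solving episode
            inconsistency = self.detect_inconsistency(task, result)
            if inconsistency is not None:
                # i2Learning episode: refine or augment the knowledge base so
                # that the encountered inconsistency is overcome.
                self.kb = self.revise(self.kb, inconsistency)

    def solve(self, task):
        raise NotImplementedError    # apply the current knowledge base to the task

    def detect_inconsistency(self, task, result):
        raise NotImplementedError    # return the detected inconsistency, or None

    def revise(self, kb, inconsistency):
        raise NotImplementedError    # inconsistency-specific learning strategy,
                                     # e.g., bias shifting for mutual exclusion

In this picture, characteristic (4) above corresponds to being able to plug in a different revision strategy for each type of inconsistency the agent may encounter.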

In this paper, our focus is on describing how the i²Learning framework facilitates the development of perpetual learning agents that incrementally improve their performance over time. Because of its generality and flexibility, the i²Learning framework accommodates different types of inconsistencies and allows various inconsistency-specific heuristics to be deployed in the learning process. Using mutually exclusive inconsistency as an example, we demonstrate how i²Learning facilitates learning through bias shifting.

The rest of the paper is organized as follows. Section 2 offers a brief review of related work. Section 3 describes i²Learning, the proposed framework for perpetual learning agents. In Section 4, we discuss i²Learning for a particular type of inconsistency, mutually exclusive inconsistency, and describe how an iterative deepening bias shifting process can be incorporated into the framework to accomplish the learning process. Finally, Section 5 concludes the paper with remarks on future work.

2. Related Work

The areas of work related to the results in this paper include: lifelong learning agent systems, learning through overcoming inconsistencies, and inconsistency-induced learning through bias shifting.
