
A never-ending language learner called NELL was described in [3]. NELL utilized semi-supervised learning methods and a collection of knowledge extraction methods to learn noun phrases from specified semantic categories and with specified semantic relations. NELL has four component learners: a pattern learner, a semi-structured extractor, a morphological classifier, and a rule learner. NELL also accommodates human interaction to approve or reject inference rules learned by the rule learner component.

The work in [1] reported an agent system called ALICE that conducted lifelong learning to build a set of concepts, facts, and generalizations with regard to a particular topic directly from a large volume of Web text. Equipped with a domain-specific corpus of texts, some background knowledge, and a control strategy, ALICE learns to update and refine a theory of the domain.

The results in [10] defined continual learning as a process in which learning occurs over time, and time is monotonic. A continual learner possesses the following properties: the agent is autonomous; learning is embodied in problem solving, is incremental, and occurs at multiple time steps; and there is no fixed training set. Knowledge an agent acquires now can be built upon and modified later. A continual learning agent system called CHILD was described in [10].

YAGO2 is a large and extendable knowledge base capable of unifying facts automatically extracted from Wikipedia Web documents to concepts in WordNet and GeoNames [7]. YAGO2 exhibits its continuous learning capability by allowing new facts to be incrementally added to an existing knowledge base. Knowledge gleaned by YAGO2 is of high quality in terms of coverage and accuracy.

The results in [4] deal with clustering with inconsistent advice. Advice such as must-link and cannot-link can be incorporated into clustering algorithms so as to produce more sensible groups of related entities. Advice can become inconsistent for a variety of reasons. Clustering in the presence of inconsistent advice amounts to finding minimum normalized cuts [4].
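
To make the notion of inconsistent advice concrete, the short sketch below (an illustration only, not the normalized-cut method of [4]; the function and variable names are hypothetical) merges items connected by must-link advice with a union-find structure and reports any cannot-link pair that the must-link advice forces into the same group:

```python
# Sketch: detecting inconsistent must-link / cannot-link advice.
# Illustrative only; this is not the normalized-cut formulation of [4].

def find(parent, x):
    """Return the representative of x, with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def inconsistent_advice(items, must_link, cannot_link):
    """Return the cannot-link pairs contradicted by transitive must-link advice."""
    parent = {x: x for x in items}
    for a, b in must_link:                      # union items that must co-cluster
        parent[find(parent, a)] = find(parent, b)
    return [(a, b) for a, b in cannot_link      # conflict: pair forced into one group
            if find(parent, a) == find(parent, b)]

# Example: must-link(A,B) and must-link(B,C) force A and C together,
# so cannot-link(A,C) is inconsistent advice.
print(inconsistent_advice(["A", "B", "C"],
                          [("A", "B"), ("B", "C")],
                          [("A", "C")]))        # -> [('A', 'C')]
```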

Bias shifting can be a useful technique in the area of transfer learning [9]. However, there are a number of important differences between the aforementioned work and the focus of the research in this paper. (1) i²Learning emphasizes the stimulus for perpetual learning, i.e., the learning episodes of the agent are triggered by inconsistencies it encounters during its problem-solving episodes. This is not necessarily the focus in related work. (2) i²Learning has a problem-solving slant, i.e., learning to incrementally improve performance for solving the problems at hand, whereas the related work in [1,3,7,10] is primarily geared toward the general task of knowledge acquisition, or building an ontology or a domain theory. (3) The learning episodes also differ: i²Learning adopts discrete learning episodes (as triggered by conflicting phenomena), whereas the learning episodes in the related work of [1,3,7,10] are largely continuous and not necessarily triggered by any events. (4) Inconsistencies are utilized as essential heuristics for perpetual i²Learning, whereas inconsistent advice in [4] is only used as a constraint for clustering. (5) Most of the related work (with the exception of CHILD) is Web-centric in the sense that learning is carried out with regard to Web texts. i²Learning, on the other hand, accommodates a broad range of heuristics in its learning process.

3. i²Learning: A Framework for Perpetual Learning Agents

A perpetual learning agent is one that engages in a continuous and alternating sequence of problem-solving episodes and learning episodes. In such an alternating sequence, learning takes place in response to a whole host of stimuli, including inconsistencies encountered in the agent's problem-solving episodes. Learning episodes result in the agent's knowledge being refined or augmented, which in turn incrementally improves its performance at the tasks at hand. We use learning burst and applying burst to refer to recurring learning episodes and knowledge application (problem-solving) episodes, respectively.

The proposed i²Learning framework focuses on a particular scenario for the aforementioned perpetual learning agents: one in which the learning episodes of the agent are triggered by inconsistencies it encounters during its problem-solving episodes, and the perpetual learning process is embodied in the continuous knowledge refinement and revision needed to overcome encountered inconsistencies.

A perpetual learning agent has the following components: (1) a knowledge base (KB) for persistent knowledge and beliefs, domain or ontological constraints, assumptions, and defaults; (2) a meta knowledge base (mKB) for the agent's meta-knowledge (knowledge on how to apply the domain knowledge in KB during problem solving); (3) a working memory (WM) that holds problem-specific facts and facts deduced with activated beliefs from KB; (4) a reasoning mechanism to facilitate the problem-solving process; (5) a component called CAL (Coordinator for Applying bursts and Learning bursts) that recognizes inconsistencies and initiates learning bursts; (6) a bias space containing candidate biases for the learning process; and (7) a learning module, i²Learning, that carries out inconsistency-induced learning to refine or augment KB, mKB, or WM (or any combination of the three) so as to overcome encountered inconsistencies. Figure 1 captures the structure of perpetual learning agents.
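
A minimal sketch of how these seven components might be organized is given below; all class, attribute, and method names are hypothetical, since the paper describes the architecture (Figure 1) but does not prescribe an implementation:

```python
# Sketch of the perpetual learning agent's structure (cf. Figure 1).
# All names are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class PerpetualLearningAgent:
    kb: set = field(default_factory=set)             # (1) persistent knowledge, constraints, defaults
    mkb: set = field(default_factory=set)            # (2) meta-knowledge: how to apply KB in problem solving
    wm: set = field(default_factory=set)             # (3) working memory: problem-specific and deduced facts
    bias_space: list = field(default_factory=list)   # (6) candidate biases for the learning process

    def reason(self, problem):
        """(4) Reasoning mechanism: derive facts into WM from KB and mKB (placeholder)."""
        raise NotImplementedError

    def cal_detect_inconsistency(self):
        """(5) CAL: return a conflicting circumstance found in WM, or None."""
        raise NotImplementedError

    def i2_learn(self, inconsistency):
        """(7) i2Learning module: refine or augment KB, mKB, or WM to resolve the conflict."""
        raise NotImplementedError
```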

When a conflicting situation arises in WM during its problem-solving process, the agent's CAL detects it, suspends the current problem-solving session, initiates the next learning burst by passing the specific inconsistent circumstance to the learning module, and waits for the result from the learning module. The learning module i²Learning in turn carries out the learning process by recognizing the type of inconsistency in the conflicting circumstance, and selecting the appropriate inconsistency-
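
A minimal sketch of this detect-suspend-learn cycle, written against the hypothetical agent interface sketched above (the method names are assumptions, not part of the framework's specification), is:

```python
# Sketch of the alternating applying-burst / learning-burst cycle driven by CAL.
# All names are hypothetical; the logic mirrors the cycle described in the text.

def run_perpetual_agent(agent, problems):
    """Alternate problem solving (applying bursts) with learning bursts."""
    for problem in problems:
        while True:
            agent.reason(problem)                      # deduce facts into WM
            conflict = agent.cal_detect_inconsistency()
            if conflict is None:
                break                                  # applying burst completes normally
            # CAL suspends problem solving and initiates a learning burst:
            # i2Learning refines or augments KB, mKB, or WM to resolve the
            # conflict, after which problem solving resumes.
            agent.i2_learn(conflict)
```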
