
specific learning method or heuristics to explain, resolve, or accommodate the conflicting circumstance. Some human interaction may be required at this stage.

Figure 1. Framework of i²Learning for Perpetual Learners.

The outcome of the learning process results in knowledge refinement or augmentation to KB, mKB, or WM, or all of them. Afterward, the i²Learning module notifies CAL of the learning result and passes any WM revisions to CAL. CAL in turn refreshes WM with any changes from the i²Learning module and restarts the problem-solving session that was suspended earlier. This signifies the end of the current learning burst, and the agent is ready to return to the problem-solving episode to pick up where it left off. The agent will continue to engage in problem solving until it detects the next inconsistent scenario. Each such iteration results in an incremental performance improvement for the agent. Figure 2 illustrates the continuous nature of such perpetual learning agents. The logic of the i²Learning module is given in Figure 3.
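A minimal Python sketch of this suspend-learn-resume cycle may make the handshake concrete. It assumes a simple dict-based WM and a conflict marker; the class and method names (run_session, learn, LearningResult) are illustrative assumptions, not part of the framework:

    from dataclasses import dataclass, field

    @dataclass
    class LearningResult:
        wm_revisions: dict = field(default_factory=dict)

    class I2Learning:
        def learn(self, wm):
            # Placeholder for Figure 3's logic: mark each conflicting
            # fact in WM for removal as the "learning" outcome.
            revisions = {k: None for k, v in wm.items() if v == "conflict"}
            return LearningResult(wm_revisions=revisions)

    class CAL:
        # CAL is the problem-solving component described in the text;
        # it suspends a session on inconsistency and resumes it afterward.
        def __init__(self, wm, learner):
            self.wm, self.learner = wm, learner

        def run_session(self, steps):
            for step in steps:
                if self.wm.get(step) == "conflict":        # inconsistency detected
                    result = self.learner.learn(self.wm)   # learning burst
                    for k, v in result.wm_revisions.items():
                        if v is None:
                            self.wm.pop(k, None)           # refresh WM with revisions
                print("applying:", step)                   # pick up where it left off

    agent = CAL({"step_a": "conflict"}, I2Learning())
    agent.run_session(["step_a", "step_b"])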

Figure 2. Spiral Model of Perpetual Learning Agents.

The types of learning that help improve the agent's performance are essentially embodied in the types of inconsistency handling. There have been various classifications of inconsistent data, information, knowledge, and meta-knowledge in different domains [16-22].

 1  Analyzing inconsistent scenario from WM;
 2  Identifying category c and morphology m for the inconsistency;
 3  Choosing appropriate learning method or heuristics;
 4  Retrieving knowledge and bias;
 5  Conducting i²Learning(c, m) {
 6    case:
 7      when (c=c_i ∧ m=m_ij): i²Learning(c_i, m_ij); break;
 8      when (c=c_k ∧ m=m_kl): i²Learning(c_k, m_kl); break;
 9      ......
10      else: default handling; break;
11  }
12  if (human interaction is needed) then {
13    Query expert;
14    Receive human response;
15  }
16  if (KB needs to be refined) then {
17    Refine KB;
18  }
19  if (mKB needs to be refined) then {
20    Refine mKB;
21  }
22  if (WM needs to be revised) then {
23    Revise WM and pass WM revisions to CAL;
24  }
25  Notify CAL to restart the current
26  problem-solving session

Figure 3. The Learning Module.
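The case dispatch on lines 5-11 can be read as a lookup keyed by the (category, morphology) pair. A Python sketch under that reading follows; the category and morphology labels and the handler bodies are hypothetical placeholders, not the paper's actual classifications:

    def learn_complementary(wm):
        print("applying a learning method for this (c, m) pair")

    def default_handling(wm):
        print("no specific method equipped; default handling")

    # Lines 5-11 of Figure 3 as a dispatch table: one entry per
    # (category c, morphology m) pair the module can handle.
    HANDLERS = {
        ("logical", "complementary"): learn_complementary,
        # ... further (c, m) pairs as the module is extended ...
    }

    def i2_learning(c, m, wm):
        HANDLERS.get((c, m), default_handling)(wm)

    i2_learning("logical", "complementary", {})   # dispatches to the matching handler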

The proposed framework is an overarching structure that accommodates growth and expansion in various types of inconsistency-specific learning. Depending on which types of inconsistency the learning module can recognize and what corresponding algorithms or heuristics it comes equipped with to handle the inconsistency at hand, its inconsistency-induced learning capacities can change dynamically. The inconsistent scenarios an agent encounters at different points in time may differ, and the learning strategies it adopts in the subsequent learning bursts can vary accordingly. Lines 5–11 in Figure 3 can embody a rich set of inconsistency-specific learning algorithms.
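Because the set of recognized (c, m) pairs is open-ended, new inconsistency-specific algorithms can be equipped without touching the dispatch logic. Continuing the sketch above with a hypothetical registration helper (the pair labels are again made up for illustration):

    def register(c, m):
        # Hypothetical helper: equips the module with a new learning
        # method for a (c, m) pair without touching the dispatch logic.
        def wrap(fn):
            HANDLERS[(c, m)] = fn
            return fn
        return wrap

    @register("numerical", "out-of-range")   # hypothetical new pair
    def learn_out_of_range(wm):
        print("handling out-of-range values")

Adding or removing entries this way is one concrete reading of how the module's learning capacities can change dynamically over time.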

There are two modes of i²Learning: problem-solving mode and speedup mode, each with a different pace or rate of learning. When inconsistencies are encountered in WM during problem-solving sessions, an agent works in problem-solving mode, and its pace of learning is driven by how frequently conflicting decisions or actions arise during knowledge application. On the other hand, inconsistencies can be intentionally injected into WM to induce learning bursts while an agent is not engaged in problem solving. This latter case can be regarded as inconsistency-induced speedup learning. The primary objective of the agents is problem solving; learning is just the means for agents to get progressively better at what they do.
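The injection-driven speedup mode admits a direct reading in code: feed prepared inconsistencies into WM one at a time while no session is running, letting each one trigger a learning burst. Continuing the earlier sketch (the injection list and conflict marker are assumptions):

    def speedup_learning(wm, injected_facts, learner):
        # Each intentional injection makes WM inconsistent and thus
        # induces one learning burst while the agent is otherwise idle.
        for fact in injected_facts:
            wm[fact] = "conflict"     # intentional injection into WM
            learner.learn(wm)         # inconsistency-induced speedup learning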
