
The experimental conditions are described as follows:

1. Experiments were performed on a 2.4 GHz Pentium IV PC with 1024 MB of RAM, running Linux.

2. All learning objects were processed by LOFinder one by one, in random order.

3. After a learning object had been processed by LOFinder, the newly inferred facts were kept in memory, and the running time was accumulated for each rule triggered during inference (a minimal sketch of this accumulation follows the list).
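As a rough illustration of condition 3, the following Python sketch shows one way running time could be accumulated per triggered rule while facts are kept in memory. The rule set and the `apply_rule` helper are hypothetical placeholders introduced here; they are not part of LOFinder's published interface.

```python
import time
from collections import defaultdict

def apply_rule(rule, learning_object, facts):
    """Placeholder: apply one inference rule and return newly derived facts.
    A real implementation would query the knowledge base."""
    return set()

def process_learning_objects(learning_objects, rules):
    facts = set()                      # inferred facts kept in memory
    rule_times = defaultdict(float)    # accumulated running time per rule

    for lo in learning_objects:        # objects processed one by one
        for rule in rules:
            start = time.perf_counter()
            new_facts = apply_rule(rule, lo, facts)
            rule_times[rule] += time.perf_counter() - start
            facts |= new_facts         # keep new inference facts in memory

    return facts, rule_times
```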

The total run time was 48.6 seconds. The search agent executed only one search of the LOM Base to extract the information needed to retrieve LOM-based learning objects; this search was completed in only 1.1 seconds. The remaining run time was spent inferring new learning object links, including ontology-based links and rule-based links. In total, 387 ontology-based links and 38 rule-based links were generated. The test results are summarized in Table 4.

Table 4. Test results

                  LOM-based links   Ontology-based links   Rule-based links
Number of links   217               387                    38
Time (ms)         1102              21235                  26331

Figure 10 shows the execution time for each rule. The experimental results show that more complicated rules require longer running times. The inference time for rule 5 increased because of the addition of a transitive property: the inference agent must execute a complicated recursive function to derive the transitive result. Compared to rules with unary predicates, rules with binary predicates, such as rules 4, 5, 6, 7, 8, and 9, have longer inference times. The last three rules have longer inference times due to their numerous clauses and binary predicates.

Figure 10. Time expended by each rule to infer relevant learning objects
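The transitive property mentioned above is the main reason rule 5 is costly: deriving all of its consequences requires recursively chasing chains of facts rather than matching a single statement. The following Python sketch illustrates such a recursive transitive derivation; the relation it closes over is an illustrative assumption, not necessarily one of LOFinder's actual LOM properties.

```python
# Minimal sketch of why a transitive property is costly to infer:
# all consequences must be derived by a recursive walk over the relation,
# not by a single pattern match.

def transitive_closure(direct_links):
    """direct_links: dict mapping a learning object to the set of objects
    it is directly related to (e.g., an assumed part-of relation)."""
    derived = {}

    def reachable(node, visited):
        # Recursively collect everything reachable from `node`.
        result = set()
        for nxt in direct_links.get(node, set()):
            if nxt not in visited:
                visited.add(nxt)
                result.add(nxt)
                result |= reachable(nxt, visited)
        return result

    for node in direct_links:
        derived[node] = reachable(node, set())
    return derived

# Example: A part-of B, B part-of C  =>  A part-of C is derived.
links = {"A": {"B"}, "B": {"C"}}
print(transitive_closure(links))  # {'A': {'B', 'C'}, 'B': {'C'}}
```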

Summary and Concluding Remarks

The LOM was developed based on the XML standard to facilitate the search, evaluation, sharing, and exchange of learning objects. The main weakness of LOM is its lack of the semantic metadata needed for reasoning and inference functions. This study therefore developed LOFinder, an intelligent LOM shell based on Semantic Web technologies, which enhances the semantics and knowledge representation of LOM. After introducing and defining the proposed multi-layered Semantic LOM Framework (MSLF), this study described how the intelligence, modularity, and transparency of LOFinder enhance the discovery of learning objects.

Cloud computing is a newly emerging computing paradigm that facilitates the rapid building of next-generation information systems via the Internet. One direction for future work is to extend LOFinder to support intelligent e-learning applications in a cloud computing environment. Another direction is to upgrade LOFinder into a general, reusable framework, i.e., to limit the components of LOFinder to a LOM Base, Knowledge Base, Search Agent, and Inference Agent, with no built-in domain knowledge in the Knowledge Base and no domain metadata in the LOM Base (an illustrative component sketch follows this paragraph). Furthermore, user-friendly interfaces are essential for enabling easy access to the LOM Base and Knowledge Base by domain experts.
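As a purely illustrative reading of this "general framework" direction, the following Python sketch outlines how the four components might be kept domain-neutral, with rules and metadata supplied at runtime rather than built in. All class and method names here are assumptions introduced for the sketch, not LOFinder's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Iterable

@dataclass
class LOMBase:
    """Stores LOM records; no domain-specific metadata is built in."""
    records: list = field(default_factory=list)

    def add(self, record: dict) -> None:
        self.records.append(record)

@dataclass
class KnowledgeBase:
    """Holds ontology axioms and rules supplied by domain experts at runtime."""
    rules: list = field(default_factory=list)

    def load_rules(self, rules: Iterable) -> None:
        self.rules.extend(rules)

class SearchAgent:
    """Retrieves LOM-based learning objects from the LOM Base."""
    def __init__(self, lom_base: LOMBase):
        self.lom_base = lom_base

    def search(self, query: dict) -> list:
        return [r for r in self.lom_base.records
                if all(r.get(k) == v for k, v in query.items())]

class InferenceAgent:
    """Derives ontology-based and rule-based links using the Knowledge Base."""
    def __init__(self, kb: KnowledgeBase):
        self.kb = kb

    def infer_links(self, learning_object: dict) -> list:
        links = []
        for rule in self.kb.rules:   # each rule is a callable in this sketch
            links.extend(rule(learning_object))
        return links
```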

