SEKE 2012 Proceedings - Knowledge Systems Institute

the ontology hierarchy; learning a new concept (in the learner agent); selecting positive and negative examples for a certain concept (in the teacher agents); searching the ontology hierarchy for the best matched concept to teach the learner agent; managing tie strengths between agents; and finding peers.
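One of the operations listed above, searching an ontology hierarchy for the concept that best matches a requested one, can be made concrete with a small sketch. The tree structure, keyword sets, and Jaccard-overlap scoring below are our own illustration under simplifying assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: find the best-matching concept in an ontology
# hierarchy by keyword overlap. Names and scoring are illustrative only.

class Node:
    def __init__(self, name, keywords, children=()):
        self.name = name
        self.keywords = set(keywords)  # terms describing the concept
        self.children = list(children)

def best_match(root, query_keywords):
    """Walk the whole hierarchy and return the node whose keyword set
    overlaps most with the query (Jaccard similarity), plus its score."""
    query = set(query_keywords)

    def score(node):
        union = node.keywords | query
        return len(node.keywords & query) / len(union) if union else 0.0

    best, best_score = root, score(root)
    stack = list(root.children)
    while stack:
        node = stack.pop()
        s = score(node)
        if s > best_score:
            best, best_score = node, s
        stack.extend(node.children)
    return best, best_score
```

In a real system the scoring would compare concept definitions or supporting documents rather than bare keyword sets, but the traversal pattern is the same.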

B. Ontology

There are several definitions of ontology. We will use Daconta's definition [2] as it is relevant to our work: "Ontology defines the common words and concepts (meanings) and their relationships used to describe and represent an area of knowledge, and so standardize the meanings."

If two agents use the same ontology, or are able to understand each other's ontology, communication between them is potentially possible. Normally, diverse agents use different ontologies; in this case they need a mechanism to understand each other. In this paper, we illustrate how a single learner agent can learn new concepts from different teacher agents. Those teacher agents do not need to agree on the same definition of the new concept; their understandings of this new concept may be close but slightly different from each other.
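The idea that teachers need not share one exact definition can be sketched as follows: each teacher describes the new concept with its own feature set, and the learner keeps only the features on which a majority of teachers agree. This majority-vote rule is our own simplification for illustration; the paper's actual learning mechanism is presented later.

```python
from collections import Counter

# Illustrative only: reconcile slightly different teacher definitions of
# one concept by majority vote over their feature sets.

def learn_from_teachers(teacher_definitions):
    """teacher_definitions: list of feature sets, one per teacher.
    Returns the features present in more than half of the definitions."""
    counts = Counter()
    for features in teacher_definitions:
        counts.update(features)
    threshold = len(teacher_definitions) / 2
    return {f for f, c in counts.items() if c > threshold}
```

For example, three teachers whose definitions of "bird" mostly agree but differ at the edges would leave the learner with the common core of features.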

C. Social networks

By social networks we do not mean Facebook, Twitter or other related web services. In this work, a social network is a set of actors (e.g. human, process, agent, document repository) and relationships between them. It can be represented as a set of nodes that have one or more types of relationships (ties) between them [3]. Using social networks gives us flexibility in dealing with the concepts in heterogeneous ontologies. It allows agents to understand the meaning of the same concept even though its definition might be slightly different in each agent's ontology.

The strength of a tie is affected by several factors. Granovetter [4] proposed four dimensions that may affect tie strength: the duration of the relationship; the intimacy between the two actors participating in the relationship; the intensity of their communication with each other; and the reciprocal services they provide to each other. In social networks of humans, other factors, such as socioeconomic status, educational level, political affiliation, race and gender, are also considered to affect the strength of ties [5]. Structural factors, such as network topology and information about social circles, may also affect tie strength [6]. Gilbert et al. [7] suggest quantitative measures (variables) for tie strength, including an intensity variable, the days passed since the last communication, and duration. Another variable that may affect the strength of a tie is neighborhood overlap [8], which refers to the number of common friends the two actors have. Petróczi et al. [9] introduced mutual confidence between the actors of social networks.

We proposed in [10] a method to calculate the strength of ties between agents in a social network using Hidden Markov Models (HMM) [11]. We showed that tie strength depends on several factors: a closeness factor, measuring how close two agents are to each other (i.e. the degree of similarity between the two ontologies used by the two agents participating in the relationship); a time-related factor, combining all time factors that affect the strength of the relationship (e.g. duration of the relationship, frequency of communication between the two agents, time passed since the last communication); and a mutual confidence factor, clarifying whether the relationship under measure is one-sided or mutual. We then built an HMM model to measure the strengths of ties between agents in a social network using those factors.
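To make the three factors concrete, the sketch below computes each one and combines them with a plain average. This is only an illustration: the paper's own method uses an HMM [10][11], not a weighted sum, and the decay constants, saturation points, and the Jaccard normalization of neighborhood overlap are arbitrary assumptions of ours.

```python
import math

# Illustrative sketch of the three tie-strength factors. The paper
# combines them with an HMM; the simple average here is ours alone.

def closeness(ontology_a, ontology_b):
    """Similarity between the two agents' concept sets (Jaccard)."""
    a, b = set(ontology_a), set(ontology_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def time_factor(duration_days, msgs_per_week, days_since_last):
    """Long-lived, frequent, recent contact pushes the value toward 1."""
    recency = math.exp(-days_since_last / 30)   # decays over ~a month
    frequency = min(msgs_per_week / 10, 1.0)    # saturates at 10/week
    longevity = min(duration_days / 365, 1.0)   # saturates at a year
    return (recency + frequency + longevity) / 3

def neighborhood_overlap(friends_a, friends_b):
    """Common friends of the two actors [8], normalized here as Jaccard."""
    u = friends_a | friends_b
    return len(friends_a & friends_b) / len(u) if u else 0.0

def tie_strength(onto_a, onto_b, duration, freq, last, mutual_confidence):
    """mutual_confidence in [0, 1]: 0 = one-sided, 1 = fully mutual."""
    return (closeness(onto_a, onto_b)
            + time_factor(duration, freq, last)
            + mutual_confidence) / 3
```

A pair of agents with identical ontologies, a year-long, active, recent relationship, and full mutual confidence would score the maximum of 1.0 under these assumptions.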

D. Literature review

In this section, we describe some of the previous work done in the area of concept learning.

In [12], Steels uses a distributed multi-agent system where no central control is required. He describes a "language game" in which each agent has to create its own ontology based on its experience of its environment. The agents communicate with each other to stabilize on a common ontology, or a shared set of lexicons, to be used afterwards in their interactions.

Palmisano [13] uses a machine-learning approach for creating a shared ontology among different agents that use different ontologies. The main purpose of Palmisano's work is to make each agent maintain its own ontology while keeping track of the meanings of concepts in the ontologies of the other agents it communicates with.

In his work [14], Afsharchi aims to enable an agent to learn new concepts from a group of agents and then represent each new concept in its own terminology, without the need for a shared ontology [15]. He treats a multi-agent system as a group of agents with different ontologies, any of which can learn a new concept by asking the other agents about this concept and then representing it using its own taxonomy.

III. CONCEPT LEARNING FRAMEWORK

In our framework, we assume a society of n agents Ag1, Ag2, …, Agn, in which each agent Agi controls a repository Ri (Figure 1). Each repository uses an ontology (Oi) that consists of a set of concepts and relationships. Each concept C in each repository possesses some supporting documents that represent instances of the concept.

[Figure 1. System Architecture: repositories R1, R2, …, Rn, each holding its own ontology and documents, controlled by MAS1, MAS2, …, MASn]
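The architecture of Figure 1 can be captured in a minimal data model: each agent controls one repository, whose ontology holds concepts with their supporting documents. The class and field names below are our own labels for the paper's entities, not code from the system.

```python
from dataclasses import dataclass, field

# Minimal data model for the architecture in Figure 1 (names are ours).

@dataclass
class Concept:
    name: str
    documents: list = field(default_factory=list)  # supporting instances

@dataclass
class Ontology:
    concepts: dict = field(default_factory=dict)   # name -> Concept
    relations: list = field(default_factory=list)  # (parent, child) pairs

@dataclass
class Repository:
    ontology: Ontology

@dataclass
class Agent:
    name: str
    repository: Repository  # each agent Ag_i controls one repository R_i

def build_society(n):
    """Create a society of n agents, each controlling its own repository."""
    return [Agent(f"Ag{i}", Repository(Ontology())) for i in range(1, n + 1)]
```

Populating an agent's ontology then amounts to adding `Concept` entries, each backed by its supporting documents.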

The goal of this paper is to advance the concept learning mechanism, based on semantic interoperation between the concept learning and semantic search modules of the MAS proposed in [16][17]. In this prototype system, each MAS controls a knowledge base with a certain ontology. The system is expected to have two main modules: a Concept Learning module [18] and a Semantic Search module [19]. In this paper we focus only on advancing the concept learning module by applying the social

