
Software Agent Testing - FBK | SE - Fondazione Bruno Kessler


Testing Software Agents & MAS
Cu D. Nguyen, PhD.
Software Engineering (SE) unit, Fondazione Bruno Kessler (FBK)
http://selab.fbk.eu/dnguyen/


Testing is critical

Software agents and multi-agent systems are enabling technologies for building today's complex systems, thanks to:
• adaptivity and autonomy properties
• their open and dynamic nature
As they are increasingly applied, testing to build confidence in their operation is crucial.

FACT: NASA satellites use autonomous agents to balance multiple demands, such as staying on course, keeping experiments running, and dealing with the unexpected, thereby avoiding waste. NASA's agents are designed to achieve the goals and intentions of the designers, not merely to respond to predefined events, so that they can react to unimagined events and still ensure that the spacecraft does not waste fuel while keeping to its mission.


Software agents and MAS

Software agents are programs that are situated in an environment and have their own control and goals. Properties:
• Autonomous
• Reactive: perceive the environment and respond to changes
• Proactive: goal-oriented, deliberative
• Social ability: collaborative or competitive

Multi-agent systems (MAS) are composed of:
• autonomous agents and their interactions
• the environment where the agents operate
• rules, norms, and constraints that restrict the behaviors of the agents

[Figure: agents A, B, and Z, hosted in environments 1..N across a distributed network (the Internet), together forming a MAS]


Challenges in testing agents & MAS

Traditional software:
• deterministic, observable
• inputs → state α → outputs

Agent:
• non-deterministic, due to self-* properties and the instant changes of the environment
• inputs → sensors → self-* behavior → outputs

MAS:
• distributed, asynchronous
• message passing
• cooperative, emergent behaviours


Testing phases

• Acceptance: ensure the system meets the stakeholder goals
• System: test the macroscopic properties and qualities of the system
• Integration: check the collective behaviors and the interaction of agents with the environment
• Agent: check the integration of agent components (goals, plans, beliefs, etc.) and the agent's goal fulfillment
• Unit: test agent units: blocks of code and agent components (plans, goals, etc.)


Testing BDI Agents

[Figure: an agent with self-* behavior between its inputs, sensors, and outputs]


Some facts

• Many BDI agent development languages exist: JADEX, Jason, JACK Intelligent Agents, AgentSpeak(RT)
• There is no "popular" de facto language yet
• They are often built on top of Java
• IDEs (integrated development environments) exist, some with testing facilities
• We will use JADEX as the reference language


BDI Architecture (recap)

• Beliefs: represent the informational state of the agent
• Desires: represent the motivational state of the agent
  ‣ operationalized as goals + [contextual conditions]
• Intentions: represent the deliberative state of the agent, i.e. what the agent has chosen to do
  ‣ operationalized as plans
• Events: internal/external triggers that an agent receives/perceives and reacts to


Testing agent beliefs

• Belief state is program state in the traditional testing sense.
  Example agent: { belief: Bank-Account-Balance; goal: Buy-A-Car }
  ‣ state 1: Bank-Account-Balance = $1,000,000
  ‣ state 2: Bank-Account-Balance = $100
• What to test: belief updates (read/write), as sketched below
  ‣ direct: injection, i.e. forcibly overwrite the agent's belief
  ‣ indirect: perform the belief update via plan execution
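A minimal sketch of the two belief-update styles, in plain Java. BeliefBase and BuyCarAgent are hypothetical stand-ins, not the JADEX API; run with java -ea to enable the assertions.

    import java.util.HashMap;
    import java.util.Map;

    class BeliefBase {
        private final Map<String, Object> facts = new HashMap<>();
        Object get(String name) { return facts.get(name); }
        void set(String name, Object value) { facts.put(name, value); } // direct injection
    }

    class BuyCarAgent {
        final BeliefBase beliefs = new BeliefBase();
        // A plan whose execution updates a belief indirectly.
        void executeWithdrawPlan(double amount) {
            double balance = (Double) beliefs.get("Bank-Account-Balance");
            beliefs.set("Bank-Account-Balance", balance - amount);
        }
    }

    public class BeliefTest {
        public static void main(String[] args) {
            BuyCarAgent agent = new BuyCarAgent();
            // Direct update: inject the belief state under test.
            agent.beliefs.set("Bank-Account-Balance", 1_000_000.0);
            assert (Double) agent.beliefs.get("Bank-Account-Balance") == 1_000_000.0;
            // Indirect update: drive the belief through plan execution.
            agent.executeWithdrawPlan(999_900.0);
            assert (Double) agent.beliefs.get("Bank-Account-Balance") == 100.0;
            System.out.println("belief-update tests passed");
        }
    }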


Testing agent goals

• Goals are driven by contextual conditions:
  ‣ conditions to activate
  ‣ conditions to hibernate/drop
  ‣ target/satisfaction conditions
• What to test (see the sketch after this list):
  ‣ goal triggering
  ‣ goal achievement
  ‣ goal interaction: one goal might trigger or inhibit other goals
  ‣ goal reasoning, to solve conflicts or to achieve higher-level goals
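A minimal sketch of a goal-triggering test, using the recharge goal from the cleaning-agent example later in these slides. MaintainRechargeGoal and the threshold value are hypothetical illustrations, not the JADEX API; the modeled semantics is the standard one for maintain goals, which activate when their maintain condition is violated.

    public class GoalTriggerTest {
        static final double MINIMUM_BATTERY_CHARGE = 0.2; // assumed threshold

        // A maintain goal activates when its maintain condition is violated,
        // and is achieved again when the target condition holds.
        static class MaintainRechargeGoal {
            boolean maintainCondition(double chargeState) {
                return chargeState > MINIMUM_BATTERY_CHARGE;
            }
            boolean targetCondition(double chargeState) {
                return chargeState >= 1.0;
            }
            boolean isTriggered(double chargeState) {
                return !maintainCondition(chargeState);
            }
        }

        public static void main(String[] args) {
            MaintainRechargeGoal goal = new MaintainRechargeGoal();
            assert !goal.isTriggered(0.8) : "no recharge needed at 80% charge";
            assert goal.isTriggered(0.1)  : "recharge goal must activate below threshold";
            assert goal.targetCondition(1.0) : "goal achieved at full charge";
            System.out.println("goal-triggering tests passed");
        }
    }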


Testing agent plans

• Plans are triggered by goals: an activated goal triggers a plan execution.
• Plan execution results in:
  ‣ interacting with the external world
  ‣ changing the external world
  ‣ changing agent beliefs
  ‣ triggering other goals
• What to test (a sketch follows):
  ‣ plan instantiation
  ‣ plan execution results
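A minimal sketch of checking plan execution results: after the plan runs, both the (simulated) external world and the belief base should reflect the expected changes. All names here are hypothetical illustrations.

    import java.util.ArrayList;
    import java.util.List;

    public class PlanExecutionTest {
        // A stand-in for the external world the plan acts on.
        static class World {
            final List<String> actionsPerformed = new ArrayList<>();
            void perform(String action) { actionsPerformed.add(action); }
        }

        // A plan that moves the agent and updates its position belief.
        static class MoveToPlan {
            void execute(World world, double[] positionBelief, double x, double y) {
                world.perform("move-to(" + x + "," + y + ")"); // world interaction
                positionBelief[0] = x;                          // belief update
                positionBelief[1] = y;
            }
        }

        public static void main(String[] args) {
            World world = new World();
            double[] position = {0.0, 0.0};
            new MoveToPlan().execute(world, position, 0.5, 0.5);
            // Check both effects of the plan: the action reached the world...
            assert world.actionsPerformed.contains("move-to(0.5,0.5)");
            // ...and the agent's belief about its own location was updated.
            assert position[0] == 0.5 && position[1] == 0.5;
            System.out.println("plan-execution tests passed");
        }
    }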


Testing events

• Events are the primary test inputs in agent testing.
• They can be:
  ‣ messages
  ‣ observations of the state of the environment
• What to test (sketched below):
  ‣ event filtering: which events should an agent receive?
  ‣ event handling: does the event trigger goals or update beliefs?
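A minimal sketch of event filtering and handling tests. The Event record and the handler logic are hypothetical illustrations of the two concerns.

    public class EventTest {
        record Event(String type, String payload) {}

        static class CleaningAgent {
            String lastObservedWaste = null;
            // Event filtering: the agent only accepts event types it understands.
            boolean accepts(Event e) {
                return e.type().equals("waste-seen") || e.type().equals("low-battery");
            }
            // Event handling: an accepted event updates beliefs (or triggers goals).
            void handle(Event e) {
                if (e.type().equals("waste-seen")) lastObservedWaste = e.payload();
            }
        }

        public static void main(String[] args) {
            CleaningAgent agent = new CleaningAgent();
            assert !agent.accepts(new Event("stock-price", "ACME=42")); // filtered out
            Event waste = new Event("waste-seen", "(0.5,0.5)");
            assert agent.accepts(waste);
            agent.handle(waste);
            assert "(0.5,0.5)".equals(agent.lastObservedWaste); // belief updated
            System.out.println("event tests passed");
        }
    }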


Example: testing a cleaning agent

• Environment:
  ‣ wastebins
  ‣ charging stations
  ‣ obstacles
  ‣ waste
• This agent has to keep the floor clean.


Example (contd.)

• Example beliefs: location facts holding values such as new Location(0.5, 0.5), e.g. for the agent's own position and its current target
• Test concerns:
  ‣ is my_location updated after every move?
  ‣ is the next target_location determined? how does it differ from current_location?
  ‣ …


Example (contd.)

• Example goal conditions (for maintaining the battery charge):
  ‣ maintain condition: $beliefbase.my_chargestate > MyConstants.MINIMUM_BATTERY_CHARGE
  ‣ target condition: $beliefbase.my_chargestate >= 1.0
• Test concerns:
  ‣ are the conditions correctly specified?
  ‣ is the goal activated when the maintain condition no longer holds?
  ‣ …


Oracles

• Different agent types demand different types of oracle:
  ‣ Reactive agents: oracles can be predetermined at test design time.
  ‣ Proactive (autonomous, evolving) agents: "evolving and flexible" oracles are needed, since it is hard to say whether a behavior is correct once the agent has evolved or learned over time.
• Some existing types of oracle:
  ‣ constraint/contract based
  ‣ ontology based
  ‣ stakeholder soft-goal based


Ontology-based oracles

• An interaction ontology defines the semantics of agent interactions.
• Messages that mismatch the ontology specification are faulty.

[Fig. 2: the book-trading interaction ontology, specified as a UML class diagram. Propose is an AgentAction with properties book: Book and price: float; Book is a Concept with properties title: String and author: String]

• Rule example: price between min 0 and max 2000.
• In the course of negotiation, a buyer initiates the interaction by sending a call for proposals for a given book (an instance of Book) to all the sellers…
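A minimal sketch of such an oracle: an incoming Propose message is checked against the concepts and rules of the book-trading ontology (the price range 0..2000 from the rule example above). The class names mirror the ontology informally; this is not a real ontology-framework API.

    public class OntologyOracle {
        record Book(String title, String author) {}
        record Propose(Book book, double price) {}

        // The oracle verdict: a message is faulty if it violates the ontology.
        static boolean isValid(Propose msg) {
            if (msg.book() == null || msg.book().title() == null) return false; // required concept
            return msg.price() >= 0 && msg.price() <= 2000;                     // rule: min 0, max 2000
        }

        public static void main(String[] args) {
            Book book = new Book("A", "Some Author");
            assert isValid(new Propose(book, 10.0));
            assert !isValid(new Propose(book, -5.0));   // violates min
            assert !isValid(new Propose(book, 9000.0)); // violates max
            assert !isValid(new Propose(null, 10.0));   // missing concept instance
            System.out.println("ontology-oracle checks passed");
        }
    }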


Requirement-based (stakeholder soft-goal based)

• Soft-goals capture quality requirements, e.g. performance and safety.
• Soft-goals can be represented as quality functions (metrics).
• In turn, the quality functions are used to assess the agents under test.

[Figure: example soft-goals Efficient, Robust, and Good looking, each with a quality function plotted over time]
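A minimal sketch of turning a soft-goal into a quality function. Here the soft-goal "safety" is, as an illustrative assumption, interpreted as "the agent's distance to any obstacle stays above a threshold over the whole run" (the same 0.5 threshold reappears in the fitness example later in the deck).

    public class QualityFunction {
        static final double SAFETY_THRESHOLD = 0.5;

        // Quality metric over an observed trace of obstacle distances:
        // the worst (smallest) margin seen during the run.
        static double safetyMargin(double[] distanceTrace) {
            double min = Double.POSITIVE_INFINITY;
            for (double d : distanceTrace) min = Math.min(min, d);
            return min - SAFETY_THRESHOLD; // negative => soft-goal violated
        }

        public static void main(String[] args) {
            double[] safeRun   = {2.0, 1.4, 0.9, 1.1};
            double[] unsafeRun = {2.0, 0.6, 0.3, 1.1};
            assert safetyMargin(safeRun) > 0;   // agent under test passes
            assert safetyMargin(unsafeRun) < 0; // violation detected by the oracle
            System.out.println("quality-function assessment done");
        }
    }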


Input space in testing agents

• Test inputs for an agent:
  ‣ Passive:
    - messages from other agents
    - control signals from users or controller agents
  ‣ Active:
    - information obtained from monitoring/sensing the environment
    - information obtained from querying third-party services
• Agents often operate in an open and dynamic environment:
  ‣ other agents and objects can be intelligent, leading to nondeterministic behaviors
  ‣ instant changes, e.g. in contextual information


Example of a dynamic environment

• Environment: wastebins, charging stations, obstacles, waste
• Obstacles can move.
• The locations of these objects change.
• New objects might come in.


Mock Agents

• Mock agents are sample implementations of an agent used for testing; they simulate only part of the real agent's functionality.
• During test execution, the agent under test interacts with mock agents instead of real agents.
• Example (sketched below): when testing the SaleAgent, we use a MockPaymentAgent instead of the real PaymentAgent, to avoid real payments.
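A minimal sketch of this slide's example: the SaleAgent is tested against a MockPaymentAgent, so no real payment is ever made. The interface and class names follow the slide's example but are otherwise hypothetical.

    public class MockAgentTest {
        interface PaymentAgent {
            boolean pay(String account, double amount);
        }

        // Mock: simulates only the functionality the test needs, and records calls.
        static class MockPaymentAgent implements PaymentAgent {
            int calls = 0;
            public boolean pay(String account, double amount) {
                calls++;
                return amount > 0; // canned behavior instead of a real transaction
            }
        }

        static class SaleAgent {
            private final PaymentAgent payment;
            SaleAgent(PaymentAgent payment) { this.payment = payment; }
            boolean sell(String buyerAccount, double price) {
                return payment.pay(buyerAccount, price);
            }
        }

        public static void main(String[] args) {
            MockPaymentAgent mock = new MockPaymentAgent();
            SaleAgent sale = new SaleAgent(mock); // mock substituted for the real agent
            assert sale.sell("buyer-42", 9.99);
            assert mock.calls == 1; // the interaction happened, with no real payment
            System.out.println("mock-agent test passed");
        }
    }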


Tester Agent

• A tester agent is a special agent that plays the role of a human tester. It:
  ‣ interacts with the agent under test, using the same language as the agent under test
  ‣ manipulates and generates test inputs
  ‣ monitors the behavior of the agent under test
  ‣ evaluates the agent under test, according to the human tester's requirements
• The tester agent sits on the opposite side, against the agent under test!
• Used in continuous testing (next part).


Continuous Testing of Autonomous Agents


Why?

• Autonomous agents evolve over time.
• A single test execution is not enough, because the next execution of the same test case can yield a different result:
  ‣ because of learning
  ‣ because of self-programming (e.g. genetic programming)


Continuous testing

• Consists of input generation, execution and monitoring, and output evaluation.
• Test cases are evolved and executed continuously and automatically.

[Figure: a loop in which generation & evolution produces inputs (seeded with initial random or existing test cases), test execution & monitoring feeds them to the self-* agent, and evaluation of the outputs yields the final results]
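A minimal sketch of that loop in code: generate (or evolve) test cases, execute them against the agent, evaluate the outputs, and repeat. Everything here is a hypothetical skeleton; test cases are plain doubles (e.g. one environment parameter), and executeAndEvaluate stands in for running the self-* agent.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.function.DoubleUnaryOperator;

    public class ContinuousTestingLoop {
        static final Random RNG = new Random(7);

        // Generation & evolution -> execution & monitoring -> evaluation, repeated.
        static double runLoop(List<Double> testCases, DoubleUnaryOperator executeAndEvaluate, int rounds) {
            double worstObserved = Double.POSITIVE_INFINITY;
            for (int round = 0; round < rounds; round++) {
                for (double tc : testCases)                   // execution & monitoring
                    worstObserved = Math.min(worstObserved, executeAndEvaluate.applyAsDouble(tc));
                List<Double> next = new ArrayList<>();        // generation & evolution
                for (double tc : testCases) next.add(tc + RNG.nextGaussian() * 0.1);
                testCases = next;
            }
            return worstObserved;                             // final evaluation result
        }

        public static void main(String[] args) {
            List<Double> initial = new ArrayList<>(List.of(0.1, 0.5, 0.9)); // random/existing seeds
            // Stand-in evaluation: lower score = more problematic behavior found.
            double worst = runLoop(initial, tc -> Math.abs(tc - 0.42), 100);
            System.out.printf("worst evaluation score after continuous testing: %.4f%n", worst);
        }
    }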


Test input generation

• Manual
• Random
  ‣ messages: a randomly selected interaction protocol + random message content
  ‣ environment settings: random values of artefacts' attributes
• Ontology based
  ‣ rules and concept definitions can be used to generate messages
• Evolutionary
  ‣ the quality of a test case is measured by a fitness function f(TC)
  ‣ f guides the meta-heuristic search to generate better test cases
  ‣ example: quality-function-based fitness


Random generation

• Messages: randomly select a standard interaction protocol and combine it with randomly generated data or domain-specific data.
• Environment settings: identify the attributes of the entities in the environment, then generate random values for these attributes (see the sketch below).
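A minimal sketch of random environment generation for the cleaning-agent example: each entity's position attribute receives a random value inside the arena. The entity list and the unit coordinate range are illustrative assumptions.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Random;

    public class RandomEnvironmentGenerator {
        public static void main(String[] args) {
            Random rng = new Random();
            String[] entities = {"wastebin", "charging-station", "obstacle", "waste"};
            Map<String, double[]> environment = new LinkedHashMap<>();
            for (String entity : entities) {
                // Random values for the entity's (x, y) attributes in a unit arena.
                environment.put(entity, new double[]{rng.nextDouble(), rng.nextDouble()});
            }
            environment.forEach((name, pos) ->
                System.out.printf("%s at (%.2f, %.2f)%n", name, pos[0], pos[1]));
        }
    }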


Ontology-based generation

• Information available inside the interaction ontology:
  ‣ concepts, relations, and the data types of properties, e.g. the action Propose is an AgentAction with two properties: book: Book and price: Double
  ‣ instances of concepts, user-defined or obtained from ontology alignment
• Use this data to generate messages automatically (sketched below).
• Example: given the instance Book(title:"A"), the tester agent and the agent under test exchange Propose(book:"A", price:10) and Propose(book:"A", price:9).
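A minimal sketch of generating test messages from ontology information: the known concept instances and the typed properties of Propose are combined into candidate messages. The representation is illustrative, not a real ontology API; the sampled price values are an assumption (boundary values of the 0..2000 rule plus the slide's examples).

    import java.util.ArrayList;
    import java.util.List;

    public class OntologyBasedGenerator {
        record Book(String title) {}
        record Propose(Book book, double price) {}

        // Instances of the Book concept (user-defined or from ontology alignment).
        static final List<Book> BOOK_INSTANCES = List.of(new Book("A"), new Book("B"));

        // Generate Propose messages: every known Book instance, combined with
        // price values drawn from the rule's range, including its boundaries.
        static List<Propose> generate() {
            double[] prices = {0.0, 9.0, 10.0, 2000.0};
            List<Propose> messages = new ArrayList<>();
            for (Book book : BOOK_INSTANCES)
                for (double price : prices)
                    messages.add(new Propose(book, price));
            return messages;
        }

        public static void main(String[] args) {
            generate().forEach(System.out::println); // candidate inputs for the tester agent
        }
    }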


Quality-function-based evolutionary generation

• Build the fitness of test cases from quality functions.
• Use this fitness measure to guide the evolution, using a genetic algorithm, e.g. GA, NSGA-II, etc.
• For example (see the sketch below):
  ‣ soft-goal: safety
  ‣ quality function: the closest distance of the agent to obstacles must be greater than 0.5 cm
  ‣ fitness f = d − 0.5; search for test cases that give f < 0

[Figure: obstacle-distance traces over time at generation i and generation i + K; as the search progresses, the traces dip ever closer to the 0.5 threshold]
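A minimal sketch of evolutionary test generation driven by the safety fitness f = d − 0.5 from this slide. As illustrative assumptions, a test case is just an obstacle position, and the "agent run" is simulated by a distance computation; a real setup would execute the agent in the generated environment.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Random;

    public class EvolutionarySafetyTestGen {
        static final Random RNG = new Random(1);
        static final double AGENT_PATH_X = 0.5; // assumed: the agent patrols the line x = 0.5

        // Simulated execution: closest distance d between agent path and obstacle.
        static double closestDistance(double obstacleX) {
            return Math.abs(obstacleX - AGENT_PATH_X);
        }

        static double fitness(double obstacleX) {
            return closestDistance(obstacleX) - 0.5; // f < 0 means safety violated
        }

        public static void main(String[] args) {
            // Initial random population of test cases (obstacle positions in [0, 10]).
            List<Double> pop = new ArrayList<>();
            for (int i = 0; i < 20; i++) pop.add(RNG.nextDouble() * 10);

            for (int gen = 0; gen < 50; gen++) {
                pop.sort(Comparator.comparingDouble(EvolutionarySafetyTestGen::fitness));
                // Selection: keep the 10 fittest (lowest f), then mutate them.
                List<Double> next = new ArrayList<>(pop.subList(0, 10));
                for (int i = 0; i < 10; i++)
                    next.add(next.get(i) + RNG.nextGaussian() * 0.2);
                pop = next;
            }
            double best = pop.stream()
                    .min(Comparator.comparingDouble(EvolutionarySafetyTestGen::fitness)).get();
            System.out.printf("most dangerous test case found: obstacle at x=%.3f, f=%.3f%n",
                    best, fitness(best));
        }
    }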


Example: evolutionary testing of the cleaner agent

• Test case (environment) encoding:
  ‣ the coordinates (x, y) of the wastebins, charging stations, obstacles, and wastes
  ‣ TCi = ⟨(x1, y1), (x2, y2), …⟩, i.e. the vector of all object coordinates
• Fitness functions (minimized by the search; sketched below):
  ‣ fpower = 1 / total power consumption: search for environments where the agent consumes more power
  ‣ fobs = 1 / number of obstacles encountered: search for environments where the agent encounters more obstacles
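A minimal sketch of the two fitness functions. The run statistics are a stand-in: in the real setting they come from executing the agent in the encoded environment.

    public class CleanerFitness {
        // Run statistics observed while executing one test case (one environment).
        record RunStats(double totalPowerConsumed, int obstaclesEncountered) {}

        // f_power = 1 / total power consumption: minimizing it rewards
        // environments that drain more power.
        static double fPower(RunStats s) {
            return 1.0 / s.totalPowerConsumed();
        }

        // f_obs = 1 / number of obstacles encountered: minimizing it rewards
        // environments where the agent hits more obstacles.
        static double fObs(RunStats s) {
            return 1.0 / Math.max(1, s.obstaclesEncountered());
        }

        public static void main(String[] args) {
            RunStats easyRun = new RunStats(50.0, 1);
            RunStats hardRun = new RunStats(200.0, 8);
            // The harder environment scores lower on both objectives, so a
            // multi-objective GA (e.g. NSGA-II) keeps evolving it.
            assert fPower(hardRun) < fPower(easyRun);
            assert fObs(hardRun) < fObs(easyRun);
            System.out.println("fitness comparison done");
        }
    }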


Example (contd)

• The genetic algorithm, driven by fpower and fobs, searches for test cases in which the agent has a higher chance of running out of battery and of hitting obstacles, i.e. test cases that violate the user's requirements.
• Results: the evolutionary test generation technique found test cases where:
  1) wastes are far away from the wastebins → more power consumed
  2) obstacles sit on the way to the wastes → easy to hit

[Figure legend: black circles: obstacles; red dots: wastes; squares: charging stations; red circles: wastebins]


Example (contd)

• More about the evolution of the environment: http://www.youtube.com/watch?v=xx3QG5OuBz0
• The search converges to the test cases where the two fitness functions are optimized.


Conclusions

• Testing software agents is important, yet still immature.
• Concerns in testing BDI agents:
  ‣ the BDI components: beliefs, goals, plans, events
  ‣ their integration
• Oracles:
  ‣ reactive agents: oracles can be specified at design time
  ‣ proactive agents: new types of oracles are needed, e.g. quality functions derived from soft-goals
• Many approaches exist to generate test inputs; evolutionary generation has proved effective.


Additional resources

• CD Nguyen (2009). Testing Techniques for Software Agents. PhD thesis, University of Trento, Fondazione Bruno Kessler. http://eprints-phd.biblio.unitn.it/68/
• CD Nguyen, Anna Perini, Paolo Tonella, Simon Miles, Mark Harman, and Michael Luck (2009). Evolutionary testing of autonomous software agents. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '09), Vol. 1. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 521-528.
• CD Nguyen, A Perini, C Bernon, J Pavón, J Thangarajah. Testing in multi-agent systems. Agent-Oriented Software Engineering X, 180-190.
• Zhiyong Zhang. Automated unit testing of agent systems. PhD thesis, Computer Science and Information Technology, RMIT University.
• Roberta de Souza Coelho, Uirá Kulesza, Arndt von Staa, Carlos José Pereira de Lucena. Unit Testing in Multi-agent Systems using Mock Agents and Aspects. In Proceedings of the 2006 international workshop on Software engineering for large-scale multi-agent systems (SELMAS '06). ACM, New York, NY, USA, 83-90. DOI=10.1145/1138063.1138079 http://doi.acm.org/10.1145/1138063.1138079
