Encyclopedia of Computer Science and Technology

Life could be studied to learn how people would be most likely to react to a real disease outbreak. And at Carnegie Mellon University, a National Science Foundation–funded project will be studying interactions in online venues as disparate as World of Warcraft and Wikipedia.

Further Reading

Dochartaigh, Niall O. The Internet Research Handbook: A Practical Guide for Students and Researchers in the Social Sciences. Thousand Oaks, Calif.: Sage Publications, 2002.

Gilbert, Nigel, and Klaus G. Troitzsch. Simulation for the Social Scientist. Philadelphia: Open University Press, 1999.

“The Impoverished Social Scientist’s Guide to Free Statistical Software and Resources.” Available online. URL: http://www.hmdc.harvard.edu/micah_altman/socsci.shtml. Accessed October 5, 2007.

Patterson, David A. “Using Spreadsheets for Data Collection, Statistical Analysis, and Graphical Representation.” Available online. URL: http://web.utk.edu/~dap/Random/Order/Start.htm. Accessed October 5, 2007.

Saam, Nicole J., and Bernd Schmidt. Cooperative Agents: Applications in the Social Science. Norwell, Mass.: Kluwer Academic Publishers, 2001.

Summary of Survey Analysis Software (Harvard). Available online. URL: http://www.hcp.med.harvard.edu/statistics/survey-soft/. Accessed October 5, 2007.

software agent

Most software is operated by users giving it commands to perform specific, short-duration tasks.
For example, a user might have a word processor change a word’s typestyle to bold, or reformat a page with narrower margins. On the other hand, a person might give a human assistant higher-level instructions for an ongoing activity: for example, “Start a clippings file on the new global trade treaty and how it affects our industry.”

In recent years, however, computer scientists and developers have created software that can follow instructions more like those given to the human assistant than those given to the word processor. These programs are variously called software agents, intelligent agents, or bots (short for “robots”). Some consumers have already used software agents to comb the Web for them, looking, for example, for the best online price for a certain model of digital camera. Agent programs can also assist with online auctions, travel planning and booking, and filtering e-mail to remove unwanted “spam” or to direct inquiries to appropriate sales or technical support personnel. (See also Maes, Pattie.)

Practical agents or bots can be quite effective, but they are relatively inflexible and able to cope only with narrowly defined tasks. A travel planning agent may be able to interface with online reservations systems and book airline tickets, for example. However, the agent is unlikely to be able to recognize that a recent upsurge in civil strife suggests that travel to that particular country is not advisable.

Researchers have, however, been working on a variety of more open-ended agents that, while not demonstrably “intelligent,” do appear to behave intelligently. The first program that was able to create a humanlike conversation was ELIZA.
Written in the mid-1960s by Joseph Weizenbaum, ELIZA simulated a conversation with a “nondirective” psychotherapist. More recently, Internet “chatterbots” such as one called Julia have been able to carry on apparently intelligent conversations in IRC (Internet Relay Chat) rooms, complete with flirting. Other “social bots” have served as players in online games (see chatterbots).

Chatterbots are effective because they can mirror human social conventions and because much of casual human conversation contains stereotyped phrases or clichés that can be easily imitated. Ideally, however, one would want bots to be able to combine the ability to carry out practical tasks with a more general intelligence and a more “sociable” interface. This requires that the bot have an extensive knowledge base (see knowledge representation) and a greater ability to understand human language (see linguistics and computing). Small strides have been made in providing online help systems that can deal with natural language questions, as well as interactively helping users step through a particular task.

Agents or bots have also suggested a new paradigm for organizing programs. Currently, the most widely accepted paradigm treats a program as a collection of objects with defined capabilities that respond to “messages” asking for services (see object-oriented programming). A move to “agent-oriented programming” would carry this evolution a step further. Such a program would not simply have objects that wait passively for requests.
Rather, it would have multiple agents that are given ongoing tasks, priorities, or goals. One approach is to allow the agents to negotiate with one another or to put tasks “up for bid,” letting agents that have the appropriate ability contract to perform the task. With each task having a certain amount of “money” (ultimately representing resources) available, the negotiation model would ideally result in the most efficient utilization of resources.

If Marvin Minsky’s (see Minsky, Marvin) “society of mind” theory is correct and the human brain actually contains many cooperating “agents,” then it is possible that systems of competing and/or cooperating agents might eventually allow for the emergence of a true artificial intelligence.

In the future, agents are likely to become more capable of understanding and carrying out high-level requests while enjoying a great deal of autonomy. Some possible application areas include data mining, marketing and survey research, intelligent Web searching, security, and intelligence gathering. However, autonomy may cause problems if agents get out of control or exhibit viruslike behavior.

Further Reading

Denison, D. C. “Guess Who’s Smarter.” Boston Globe, May 26, 2003, p. D1. Available online. URL: http://web.media.mit.edu/~lieber/Press/Globe-Common-Sense.html. Accessed August 21, 2007.

D’Inverno, Mark, and Michael Luck. Understanding Agent Systems. 2nd ed. New York: Springer, 2004.

Lieberman, H., et al. “Commonsense on the Go: Giving Mobile Applications an Understanding of Everyday Life.” Available online. URL: http://agents.media.mit.edu/projects/mobile/BT-Commonsense_on_the_Go.pdf. Accessed August 21, 2007.
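The “up for bid” negotiation model described in this entry can be sketched in a few lines of Python. All names here (Task, Agent, auction) are invented for illustration; real agent systems typically follow more elaborate protocols, such as the contract net, with multi-round bidding and awards.

```python
# A minimal sketch of agents bidding for tasks, as described above.
# Class and function names are hypothetical, chosen for illustration only.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    budget: float  # the "money" (resources) available for this task

@dataclass
class Agent:
    name: str
    skills: set
    cost: float  # what this agent charges to perform a task

    def bid(self, task: Task):
        """Offer a price if the agent is capable and within budget, else None."""
        if task.name in self.skills and self.cost <= task.budget:
            return self.cost
        return None

def auction(task: Task, agents):
    """Put the task up for bid and award it to the cheapest capable bidder."""
    bids = [(a.bid(task), a) for a in agents]
    bids = [(price, a) for price, a in bids if price is not None]
    if not bids:
        return None  # no agent could take the task within budget
    price, winner = min(bids, key=lambda b: b[0])
    return winner

agents = [
    Agent("booker", {"book-flight"}, cost=5.0),
    Agent("cheap-booker", {"book-flight"}, cost=3.0),
    Agent("mail-filter", {"filter-mail"}, cost=1.0),
]
winner = auction(Task("book-flight", budget=4.0), agents)
print(winner.name)  # cheap-booker: capable, and the lowest bid within budget
```

Awarding the task to the lowest bidder is what makes the budget behave like a resource: scarce “money” steers tasks toward the agents that can do them most cheaply, the efficient utilization the entry describes.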
