
New Scientist Magazine - No. 3011


TECHNOLOGY

Facebook’s exam for machines

Making your AI take a storybook quiz vastly improves on the Turing test, says Jacob Aron

JOHN is in the playground. Bob is in the office. Where is John?

If you know the answer, you’re either a human, or software taking its first steps towards full artificial intelligence. Researchers at Facebook’s AI lab in New York say an exam of simple questions like this could help in designing machines that think like people.

Computing pioneer Alan Turing famously set his own test for AI, in which a human tries to sort other humans from machines by conversing with both. However, this approach has a downside. “The Turing test requires us to teach the machine skills that are not actually useful for us,” says Matthew Richardson, an AI researcher at Microsoft. For example, to pass the test an AI must learn to lie about its true nature and pretend not to know facts a human wouldn’t.

These skills are no use to Facebook, which is looking for more sophisticated ways to filter your news feed. “People have a limited amount of time to spend on Facebook, so we have to curate that somehow,” says Yann LeCun, Facebook’s director of AI research. “For that you need to understand content and you need to understand people.”

AI plays 20 questions

In the longer term, Facebook also wants to create a digital assistant that can handle a real dialogue with humans, unlike the scripted conversations possible with the likes of Apple’s Siri.

Similar goals are driving AI researchers everywhere to develop more comprehensive exams to challenge their machines. Facebook itself has created 20 tasks, which get progressively harder – the example at the top of this article is of the easiest type. The team says any potential AI must pass all of them if it is ever to develop true intelligence (arxiv.org/abs/1502.05698).

Each task involves short descriptions followed by some questions, a bit like a reading comprehension quiz.
Harder examples include figuring out whether one object could fit inside another, or why a person might act a certain way. “We wanted tasks that any human who can read can answer,” says Facebook’s Jason Weston, who led the research.

Having a range of questions challenges the AI in different ways, meaning systems that have a single strength fall short. The Facebook team used its exam to test a number of learning algorithms, and found that none managed full marks. The best performance was by a variant of a neural network with access to an external memory, an approach that Google’s AI subsidiary DeepMind is also investigating. But even this fell down on tasks like counting objects in a question or spatial reasoning.

Richardson has also developed a test of AI reading comprehension, called MCTest. But the questions in MCTest are written by hand, whereas Facebook’s are automatically generated.

[Photo caption: Smarter every day]

The details for Facebook’s tasks are plucked from a simulation of a simple world, a little like an old-school text adventure, where characters move around and pick up objects. Weston says this is key to keeping questions fresh for repeated testing and learning.

But such testing has its problems, says Peter Clark of the Allen Institute for Artificial Intelligence in Seattle, because the AI doesn’t need to understand what real-world objects the words relate to. “You can substitute a dummy word like ‘foobar’ for ‘cake’ and still be able to answer the question,” he says. His own approach, Aristo, attempts to quiz AI with questions taken from school science exams.

Whatever the best approach, it’s clear that tech companies like Facebook and Microsoft are betting big on human-level AI. Should we be worried?
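The world-simulation idea described above – characters moving around a tiny text-adventure world, with questions generated from the event log – can be sketched in a few lines. This is an illustrative toy, not Facebook’s actual code; all names and parameters here are invented for the example.

```python
# Toy sketch of auto-generating "where is X?" questions from a simulated
# world, in the spirit of Facebook's tasks. Illustrative only.
import random

PEOPLE = ["John", "Bob", "Mary"]
PLACES = ["playground", "office", "kitchen"]

def generate_story(num_moves=4, seed=None):
    """Simulate random moves; return (story_sentences, question, answer)."""
    rng = random.Random(seed)
    location = {}                      # current place of each person
    story = []
    for _ in range(num_moves):
        person = rng.choice(PEOPLE)
        place = rng.choice(PLACES)
        location[person] = place       # each move updates the world state
        story.append(f"{person} is in the {place}.")
    person = rng.choice(sorted(location))
    return story, f"Where is {person}?", location[person]

story, question, answer = generate_story(seed=1)
print(" ".join(story))
print(question, "->", answer)
```

Because the stories come from a simulator rather than a fixed question bank, fresh examples can be produced endlessly – which, as Weston notes, matters for repeated testing and learning. It also illustrates Clark’s objection: swap every place name for a dummy word and the question is still answerable without any real-world understanding.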
Recently the likes of Stephen Hawking, Elon Musk and even Bill Gates have warned that AI researchers must tread carefully.

LeCun acknowledges people’s fears, but says that the research is still at an early stage, and is conducted in the open. “All machines are still very dumb and we are still very much in control,” he says. “It’s not like some company is going to come out with the solution to AI all of a sudden and we’re going to have super-intelligent machines running around the internet.” ■

New Scientist | 7 March 2015
