Motion Generation for Humanoid Robots with ...
Figure 9: The transitions between the segments that derive meta-level behavior for each reaching motion. Lines indicate transitions between actions.

6.2 Application

A human using a finger to point can produce a demonstration for ISAC, which attends to the objects in its workspace and generates reaching motions to those objects. The application begins with a speech cue from the human, which directs the robot's attention to an object. To direct ISAC's attention to the unknown position of the object, the human tells ISAC to find the new location of the object with a command such as "Reach-to". The Human Agent sends this intention text to the Self Agent (SA), activates the Human Finger Agent (HFA) inside the Human Agent, and parses the name of the object. The HFA finds the pointed finger to fixate on the object. Next, the Head Agent is activated to locate the pointed finger, and the camera-angle information is forwarded to the Sensory EgoSphere, which returns the coordinates of the object. Based on these intentions, the CEC uses procedures in LTM to retrieve the motion data that accomplish the robot's reaching intention. The desired motion data sequence is sent back to the CEC and then on to the Arm Agent to perform the reaching task.

Figure 10: Schematic representation of the system communication for the demonstration of reaching the new point.
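The sequence of hand-offs above amounts to a message pipeline. The sketch below restates it in Python to make the data flow explicit; every class, method, and parameter name here is a hypothetical illustration of the architecture, not ISAC's actual agent API.

```python
from dataclasses import dataclass

@dataclass
class Intention:
    command: str      # e.g. "Reach-to", parsed from the speech cue
    object_name: str  # name of the object to reach


def reach_to(speech_cue, human_agent, self_agent, head_agent,
             ego_sphere, cec, arm_agent):
    # 1. The Human Agent parses the speech cue into an intention
    #    and forwards the intention text to the Self Agent.
    intention = human_agent.parse(speech_cue)
    self_agent.receive(intention)

    # 2. The Human Finger Agent (inside the Human Agent) fixates on the
    #    pointed finger; the Head Agent locates it and reports camera angles.
    finger = human_agent.finger_agent.find_pointed_finger()
    camera_angles = head_agent.fixate(finger)

    # 3. The Sensory EgoSphere maps the camera angles to object coordinates.
    target = ego_sphere.lookup(camera_angles)

    # 4. The CEC retrieves the matching motion sequence from long-term
    #    memory (LTM), and the Arm Agent executes the reach.
    motion = cec.retrieve_from_ltm(intention, target)
    arm_agent.execute(motion)
```

In the running system these hand-offs are asynchronous messages between agents rather than direct calls; the linear function is only meant to expose the order of communication shown in Figure 10.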
7 Related Work

Autonomous control of humanoid robots has been a topic of significant research. Preprogramming and teleoperation remain common methods for most applications, such as lift trucks driven by robotic operators [5]. However, these approaches are quite tedious for complex tasks and environments. Huber and Grupen [6] presented a hybrid architecture for autonomous control given a manually constructed control basis. Consequently, their approach to control depends on the quality of the design and construction of the control basis, whereas the automated derivation we used [8] leverages the structure of human motion.

Other methods for automatically deriving behaviors from human motion are not always suitable for autonomous control. Bregler [1] presented the automatic derivation of groups of behaviors, in the form of movemes, from image sequences of a moving human. Complex motion can be described by sequencing the movemes generated for each limb, but indexing the movemes for coordinated motion generation is not obvious. Li et al. [19] and Kovar et al. [10] presented work on building linear dynamical systems and directed graphs from motion capture. For generating motion for control, however, these systems require a constrained optimization to be applied to structures that may not be parsimonious.

Human motion has been used as a basis for controlling humanoid robots. Ijspeert et al. [7] presented an approach for learning nonlinear dynamical systems with attractor properties from motion capture. Their approach is useful for perturbation-robust humanoid control, but it is restricted to deriving a single class of motion.

Brooks and Stein [2] developed an integrated physical system including vision, sound input and output, and skillful manipulation, all controlled by continuously operating parallel processes. The goal was to enable the resulting system to learn to "think" by building on its bodily experiences to accomplish progressively more abstract tasks [2]. ISAC's motion learning system is similar: in our work, vision, speech recognition, short-term memory, the Self Agent, and the behaviors stored in long-term memory all operate in parallel within a unified architecture to control the humanoid robot autonomously.

8 Conclusion

We have described an approach for generating new motions from derived behaviors. We derived behavior vocabularies using the spatio-temporal Isomap method, stored these behaviors in the robot's long-term memory, and used a search mechanism to generate autonomous control based on the robot's perceived
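To make the retrieval step concrete, here is a toy sketch of a search mechanism of the kind the conclusion describes: given a perceived target position, it selects the stored behavior whose end-effector trajectory terminates closest to the target. The nearest-endpoint criterion and all names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def select_behavior(target_xyz, vocabulary):
    """vocabulary maps a behavior name to a (T, 3) end-effector trajectory."""
    def endpoint_error(name):
        # Distance from the trajectory's final point to the perceived target.
        return np.linalg.norm(vocabulary[name][-1] - target_xyz)
    best = min(vocabulary, key=endpoint_error)
    return best, vocabulary[best]

# Toy usage: two stored reaching motions and a perceived target position.
vocabulary = {
    "reach_left": np.linspace([0.0, 0.0, 0.0], [0.4, 0.1, 0.2], 20),
    "reach_right": np.linspace([0.0, 0.0, 0.0], [0.4, -0.3, 0.2], 20),
}
name, trajectory = select_behavior(np.array([0.4, 0.1, 0.2]), vocabulary)
print(name)  # -> reach_left
```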
