Here - Agents Lab - University of Nottingham

– Memory, for maintaining the global memory and working memory for the behavioural layer. In global memory, a map of the environment and properties of objects or features extracted from sensor data are stored. We have used a 2D grid map for representing the environment. This map also keeps track of which grid cells are occupied and which are available for path planning. The working memory stores temporary sensor data (e.g., images, sonar values) that are used for updating the global memory.
– Information Processor, for image processing. This component provides support for object recognition, feature extraction and matching, as well as for information fusion. Information fusion is used to generate more reliable information from the data received from different sensors. We have used the OpenCV [20] library to implement algorithms and methods for processing images captured by the camera. Algorithms such as Harris-SIFT [21], RANSAC [22], and SVM [23] for object recognition and scene classification have been integrated in this module.
– Environment Configurator, for interpreting and classifying events that occur in the environment. For example, when the output of the left sonar exceeds a certain value, this module may send a corresponding event message that has been pre-configured. This is useful in case such readings have special meaning in a domain. In such cases, an event message may be used to indicate what is happening and which objects and features, such as human faces and pictures, have been detected.
– Navigation includes support for the Localization, Odometry, Path Planning and Mapping components, which aid robots in locating themselves and in planning an optimal path from the robot's current position to destination places in a dynamic, real-time environment.
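The occupancy bookkeeping and path planning described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and method names are invented, and a breadth-first search stands in for whatever planner the authors actually used.

```python
from collections import deque

class GridMap:
    """Minimal 2D occupancy grid: True = occupied, False = free."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.occupied = [[False] * width for _ in range(height)]

    def set_occupied(self, x, y, value=True):
        self.occupied[y][x] = value

    def is_free(self, x, y):
        return (0 <= x < self.width and 0 <= y < self.height
                and not self.occupied[y][x])

    def plan_path(self, start, goal):
        """Breadth-first search over free cells; returns a list of
        (x, y) cells from start to goal, or None if unreachable."""
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            current = frontier.popleft()
            if current == goal:
                path = []
                while current is not None:
                    path.append(current)
                    current = came_from[current]
                return path[::-1]
            x, y = current
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if self.is_free(*nxt) and nxt not in came_from:
                    came_from[nxt] = current
                    frontier.append(nxt)
        return None

# Example: a wall of occupied cells forces a detour.
grid = GridMap(5, 5)
for y in (0, 1, 2):
    grid.set_occupied(2, y)
path = grid.plan_path((0, 0), (4, 0))
```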
Once a robot receives a navigation message from the cognitive layer with an instruction to walk towards the destination in the 2D grid-map space, the Path Planning component calculates an optimal path. After each walking step, Odometry will try to keep track of the robot's current position, providing the real-time coordinates of the robot for planning the next walking step. Due to joint backlash and foot slippage, it is impossible to accurately estimate the robot's position by means of odometry alone. For this reason, the robot has been equipped with the capability to actively re-localize or correct its position by means of predefined landmarks. Additionally, navigation requires back-and-forth transformation between coordinate systems. The odometry sensors, for example, provide the robot's coordinates in a World Coordinate System (WCS) that needs to be mapped to the robot's position in the 2D Grid-map Coordinate System (GCS) for path planning. However, when executing walking steps, the position in GCS has to be transformed into the Local Coordinate System (LCS) that the robot uses.
– Communication, for delivering perception messages to the EIS component and receiving action messages from the EIS. The communication component is constructed based on a Server/Client structure and uses the TCP/IP protocol. The behaviour layer in a robot starts a unique server to which
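The coordinate transformations mentioned above (WCS to GCS for planning, WCS to LCS for step execution) can be illustrated with a small sketch. The cell size, grid origin, and pose convention below are assumptions for illustration; the paper does not specify them.

```python
import math

CELL_SIZE = 0.1              # assumed grid resolution: metres per cell
ORIGIN_X, ORIGIN_Y = 0.0, 0.0  # assumed WCS position of grid cell (0, 0)

def wcs_to_gcs(wx, wy):
    """World coordinates (metres) -> grid-map cell indices."""
    return (int((wx - ORIGIN_X) / CELL_SIZE),
            int((wy - ORIGIN_Y) / CELL_SIZE))

def gcs_to_wcs(gx, gy):
    """Grid cell indices -> WCS coordinates of the cell centre."""
    return (ORIGIN_X + (gx + 0.5) * CELL_SIZE,
            ORIGIN_Y + (gy + 0.5) * CELL_SIZE)

def wcs_to_lcs(wx, wy, robot_x, robot_y, robot_theta):
    """WCS point -> robot-local coordinates, given the robot's WCS pose
    (x, y, heading in radians): translate to the robot, rotate by -theta."""
    dx, dy = wx - robot_x, wy - robot_y
    cos_t, sin_t = math.cos(-robot_theta), math.sin(-robot_theta)
    return (dx * cos_t - dy * sin_t, dx * sin_t + dy * cos_t)
```

For example, a waypoint one metre east of a robot facing north (theta = pi/2) comes out one metre to the robot's right in the LCS, which is the form a walking step needs.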
