
Proceedings of the National Conference on
Trends and Advances in Mechanical Engineering,
YMCA University of Science & Technology, Faridabad, Haryana, Oct 19-20, 2012

NAVIGATION CONTROL AND LOCALIZATION OF MOBILE ROBOT

Meghana S 1 and Dr. D. N. Drakshayani 2

1,2 Department of Mechanical Engineering, Sir M Visvesvaraya Institute of Technology, Bengaluru

1 Corresponding author: megha87_s@yahoo.co.in

Abstract

This paper presents an experimental study of localization and navigation control based on a vision system. The system is designed for a robot navigating in an indoor environment with a single web camera. A colour image segmentation method based on the RGB (red, green, blue) colour information of the image is applied to recognize the position and orientation of a small mobile robot. First, the lower and upper threshold values of RGB are determined for the sample colours. The colour image segmentation and processing are performed using MATLAB. Localization and navigation control of the robot are efficiently and accurately established from the intensity of each extracted colour region. Experimental results of the colour image (threshold) segmentation method applied to the recognition of an object are studied. This work proposes navigation control of a mobile robot with inexpensive hardware.

1. Introduction

Autonomous mobile robots need the capability to explore, navigate, and localize themselves in dynamic or unknown environments in order to suit a wide range of industrial applications. In the past two decades, a number of different approaches have been proposed to develop flexible and efficient navigation systems for the manufacturing industry, based on sensor technologies such as odometry, inertial navigation, landmark navigation, laser scanners, sonar, and vision [1].

Localization and navigation are the fundamental problems of mobile robots. In the past, a variety of approaches to mobile robot localization have been developed. They differ mainly in the techniques used to represent the robot's belief about its current position and in the type of sensor information used for localization [2]. The basic requirements for the autonomous navigation of a mobile robot are environmental recognition, path planning, driving control, and location estimation/correction capabilities [3], [4]. The location estimation and correction capabilities are practically indispensable for an autonomous mobile robot to execute its given tasks efficiently. There are two general methods for estimating location: 1) using dead-reckoning sensors attached to the wheels and body to add the measured displacement to the initial position, and 2) using a camera, ultrasonic, laser, radar, and/or infrared sensors to recognize and locate beacons. The former scheme, though cheap and easy to implement, is subject to accumulated error that may result in inaccurate position data. The latter scheme has become very popular recently, as it can provide precise location data instantaneously. However, many factors are involved in obtaining accurate location information while the mobile robot is moving [5]. To get reliable and precise location data, sensor fusion techniques [6], [7] have also been developed.
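To make the dead-reckoning scheme concrete, the following MATLAB-style sketch shows the standard pose update for a differential-drive robot; the wheel radius, wheelbase, and encoder increments are illustrative assumptions, not parameters reported in this paper.

    % Dead-reckoning pose update for a differential-drive robot (illustrative).
    % Wheel radius, wheelbase, and encoder increments are assumed values.
    r  = 0.03;                    % wheel radius [m] (assumption)
    b  = 0.15;                    % distance between the two wheels [m] (assumption)
    x  = 0; y = 0; th = 0;        % initial pose: position [m] and heading [rad]
    dphiL = 0.20; dphiR = 0.25;   % wheel rotations in one step [rad] (example)

    dsL = r * dphiL;              % displacement of left wheel
    dsR = r * dphiR;              % displacement of right wheel
    ds  = (dsR + dsL) / 2;        % displacement of the robot centre
    dth = (dsR - dsL) / b;        % change in heading

    % Add the displacement to the previous position; these increments,
    % and hence their errors, accumulate over time.
    x  = x + ds * cos(th + dth/2);
    y  = y + ds * sin(th + dth/2);
    th = th + dth;

The accumulation in the last three lines is exactly why pure dead reckoning drifts: every step's error is added to the pose and never corrected.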

When a charge-coupled device (CCD) camera is utilized under good illumination conditions, certain patterns or shapes of objects are also effective for determining the location [8], [9]. Cameras are excellent sensors for robotic systems, but their extreme sensitivity to illumination variation restricts their capability: with a fixed aperture, even small changes in lighting produce inconsistent colour data. Lighting is therefore one of the most important considerations for any vision application, since varying the intensity of the light source alters the captured image data; without the right type of lighting, the application will not be successful [10]. Most researchers [11], [12] focus on the indoor navigation of a mobile robot in a well-structured environment, where beacons, doors, and corridor edges are utilized to estimate the current location of the mobile robot.

Hence, in this work a localization system based on infrared sensors and a vision system has been developed. It uses a relative positioning technique that promises better performance at low cost. Even though a vision sensor, or a combination of a vision sensor and infrared sensors, can provide plenty of information about the environment, extracting visual features for positioning is not easy. Localization is done by identifying colours whose intensity values are predefined for the given environment, as sketched below. During its operation, the robot uses its infrared sensors to scan for obstacles in its surroundings.
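A minimal MATLAB sketch of this colour-threshold segmentation step follows; the file name and the RGB threshold values are illustrative assumptions rather than the calibrated values used in the experiments, and the functions bwareaopen and regionprops require the Image Processing Toolbox.

    % Threshold segmentation of one sample colour (a red marker, for example).
    img = imread('frame.png');            % frame captured from the web camera
    Rch = img(:,:,1); Gch = img(:,:,2); Bch = img(:,:,3);

    % Keep pixels whose RGB values fall between the lower and upper thresholds
    mask = (Rch >= 150) & (Rch <= 255) & ...
           (Gch >=   0) & (Gch <=  80) & ...
           (Bch >=   0) & (Bch <=  80);

    mask  = bwareaopen(mask, 50);         % discard small noisy regions
    stats = regionprops(mask, 'Centroid', 'Area');

    if ~isempty(stats)
        [~, k] = max([stats.Area]);       % keep the largest colour region
        pos = stats(k).Centroid;          % image-plane position of the marker
        fprintf('Marker centroid at (%.1f, %.1f) px\n', pos(1), pos(2));
    end

With two distinct colour patches mounted on the robot, its orientation can be recovered from the angle of the line joining the two centroids; this is one common arrangement, stated here as an assumption since the details are given later in the paper.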
