Localization of Autonomous Robot in an Urban Area Based on SURF Feature Extraction of Images

Abu Sadat Mohammed Yasin, Md. Majharul Haque, Md. Nasim Adnan, Sonia Rahnuma, Anowar Hossain, Kallol Naha, Mohammod Akbar Kabir, Francesc Serratosa
Copyright: © 2020 |Pages: 28
DOI: 10.4018/IJTD.20201001.oa1

Abstract

Autonomous robots are now an internationally discussed topic for easing human life. Localization and movement are two rudimentary necessities of an autonomous robot before it can accomplish any job. Many researchers have therefore proposed localization methods that rely on external tools such as network connectivity and the global positioning system (GPS). However, if these tools become unavailable, either the robot's movement is paused or the robot is derailed from its actual mission. In these circumstances, the authors propose an approach to localize an autonomous robot in a specific area using a given set of images, without external help. An image database is prepared and kept in the internal memory of the robot so that image matching can be done quickly. The localization method is accomplished using three algorithms: (1) SURF, (2) ICP-BP, and (3) EMD. In the evaluation, SURF outperformed ICP-BP and EMD in terms of both accuracy and elapsed time. The authors believe that the proposed method will add value to methods that depend on external tools, even when those tools are unavailable.
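The core of the proposed approach is matching a query image captured by the robot against a stored image database. A minimal sketch of that matching step is shown below, assuming each image has already been reduced to a set of feature descriptors (e.g., SURF descriptors) and using a standard nearest-neighbor ratio test; the function names, the ratio threshold, and the toy descriptors are illustrative assumptions, not the paper's actual implementation.

```python
import math

def match_count(query_desc, db_desc, ratio=0.75):
    """Count query descriptors whose nearest database descriptor passes
    the ratio test (nearest distance < ratio * second-nearest distance).
    Descriptors are plain tuples of floats here for illustration."""
    count = 0
    for q in query_desc:
        dists = sorted(math.dist(q, d) for d in db_desc)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            count += 1
    return count

def localize(query_desc, database):
    """Return the label of the database image with the most matches.
    `database` maps a location label to that image's descriptor set."""
    return max(database, key=lambda loc: match_count(query_desc, database[loc]))

# Toy database of two "locations", each with two 2-D descriptors.
database = {
    "gate": [(0.0, 0.0), (10.0, 10.0)],
    "park": [(5.0, 5.0), (9.0, 1.0)],
}
query = [(0.1, 0.0), (10.0, 9.9)]
print(localize(query, database))  # → gate
```

In practice the descriptors would be 64- or 128-dimensional SURF vectors extracted per image, and the winning database image's known geographic position would serve as the robot's estimated location.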

Introduction

Geo-localization is the identification of the real-world geographic location of an object. It is closely related to geographic coordinate positioning systems and applies to objects such as a radar source, a mobile phone, an Internet-connected computer terminal, an autonomous robot, or any kind of automatically moving object. Internet- and computer-based geo-localization can be accomplished by associating a geographic location with the Internet Protocol (IP) address, MAC address, RFID, hardware-embedded article/production number, embedded software number, Wi-Fi positioning system, device fingerprint, canvas fingerprinting, the device's GPS coordinates, or some self-disclosing information (Holdener & Anthony, 2011; Haque et al., 2013).

Autonomous robots, just like humans, have the ability to make their own decisions and then act accordingly. A truly autonomous robot is one that can perceive its environment, make decisions based on what it perceives and/or has been programmed to recognize, and then actuate a movement or manipulation within that environment ("Autonomous Robots", 2020). Geo-localizing the current position of an autonomous robot is a significant issue because the robot needs to know its current location, within a reasonable time frame, before making any movement.

Many researchers have proposed different methods for the geo-localization of an autonomous robot using radio frequency (RF), GPS, the Internet, laser systems, ultrasonic sensors, landmarks, skylines, etc. A detailed overview of the relevant papers is given below.

In 1995, Magee and Aggarwal (1995) presented a computationally straightforward method for determining the location of a camera mounted on a robot. The positioning of a robot from sensor data was proposed by Burgard, Fox, and Thrun (1997). This approach provides logical criteria for (i) setting the robot's motion direction (exploration) and (ii) determining the pointing direction of sensors so as to localize the robot efficiently. A low-cost localization strategy was proposed that applies a Kalman filter to sensor data to estimate the position and orientation of the robot (Goel, Roumeliotis, & Sukhatme, 1999).

Han, Lee, and Hashimoto (2000) offered an approach that uses binocular stereo vision to control the position and orientation of a robot. This method works for a SCARA (Selective Compliance Assembly Robot Arm or Selective Compliance Articulated Robot Arm) robot manipulator. Another localization method, proposed by Yun, Lyu, and Lee (2006), utilizes information from an external monitoring camera in an indoor environment. Two methods for simultaneous localization and mapping, covering both outdoor and indoor environments, were described by Berrabah and Bedkowski (2008). The first method (Berrabah & Bedkowski, 2008) is a feature-based algorithm that combines geo-referenced images to localize the robot in a user-defined global coordinate frame. The second method (Berrabah & Bedkowski, 2008) works in an indoor environment, where the robot uses a laser range finder to build an occupancy grid map of its navigation area.

A localization method using the Matrix Pencil (MP) algorithm for hybrid detection of the Direction of Arrival (DOA) and Time of Arrival (TOA) was presented in (Trinh et al., 2012). Huang, Tsai, and Lin (2012) published two techniques for mobile robot localization in indoor environments. In the first, they (Huang et al., 2012) use images of markers attached to the ceiling at known positions to calculate the location and orientation of the robot. In the second (Huang et al., 2012), an RGB-D camera mounted on the robot is adopted to acquire color and depth images of the environment. A real-time 3D localization and mapping approach for the USAR (Urban Search and Rescue) robotic application was proposed by Bedkowski, Maslowski, and Cubber (2012).
