Computer Vision and Deep Learning in Intelligent Mobility System

Authors

  • Joseph De Guia, Madhavi Devaraj

Abstract

This paper presents state-of-the-art computer vision approaches to object detection for the sensor modalities used in autonomous driving, such as camera, LIDAR, and RADAR. These sensors generate rich, high-volume data that must be processed on the vehicle's compute platform before the controller can issue instructions and make decisions. The perception module collects data about the environment and the driving scenario so that the vehicle can localize itself and operate autonomously. The collected data must be processed, and its features identified, optimized, and learned in real time on the vehicle, to provide the controller with the signals it needs for longitudinal and lateral movement as well as for decelerating and stopping autonomously. Object detection is therefore a central perception task: identifying, classifying, and predicting the objects seen by the sensors. Because software in the loop, such as the perception system, supplies decisions to the vehicle, it should behave at least as reliably as a human driver in control of the vehicle. The autonomous vehicle is evolving into an intelligent transport system, and the auto-pilot is no longer an optional feature but a system that humans must be able to trust. This paper explores approaches and methods in which sensors and perception systems exploit deep learning to advance intelligent mobility systems.
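
As a concrete illustration of the camera-based object detection task described in the abstract, the sketch below runs a pre-trained detector on a single camera frame and prints the detected classes, scores, and bounding boxes. It is only a minimal example, not the method of the paper: the choice of torchvision's Faster R-CNN model, the file name camera_frame.jpg, and the 0.5 score threshold are assumptions made here for demonstration.

    # Minimal sketch: camera-frame object detection with a pre-trained model.
    # Assumptions: torchvision >= 0.13 and an input image "camera_frame.jpg";
    # neither the model nor the threshold comes from the paper itself.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Load a detector pre-trained on COCO and switch to inference mode.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    # One camera frame as a CHW float tensor scaled to [0, 1].
    frame = Image.open("camera_frame.jpg").convert("RGB")
    inputs = [to_tensor(frame)]

    with torch.no_grad():
        outputs = model(inputs)

    # Each output dict holds predicted boxes, class labels, and confidence scores.
    det = outputs[0]
    keep = det["scores"] > 0.5  # arbitrary confidence threshold for this sketch
    for box, label, score in zip(det["boxes"][keep], det["labels"][keep], det["scores"][keep]):
        print(f"class={label.item()}  score={score.item():.2f}  box={box.tolist()}")

In an on-vehicle perception module, detections like these would typically be fused with LIDAR and RADAR returns before being passed to the controller, rather than printed.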

Published

2020-01-31

Section

Articles