Machine Intelligence as a Foundation of Self-Driving Automotive (SDA) Systems

Goh Bian Chiat, Muneer Ahmad, N. Z. Jhanjhi, Yasir Malik
DOI: 10.4018/978-1-7998-9201-4.ch008

Abstract

Machine intelligence is the backbone of self-driving automotive (SDA) systems. At present, ResNet, DenseNet, and ShuffleNet V2 are excellent convolutional backbone choices, while object detection centers on the YOLO and Faster R-CNN designs. This study examines the distinctive features of these methods and discusses the suitability of each design for SDA technology. Real-time object detection is imperative in SDA technology, so for both CNN backbones and object detection algorithms, an architecture that balances speed and accuracy is important. The most favorable architectures in the scope of this case study are ShuffleNet V2 and YOLO, since both networks prioritize speed. The drawback of prioritizing speed is a slight loss of accuracy. One way to overcome this is to replace the neural network with a more accurate (albeit slower) model. The other is to use reinforcement learning to search for the best architecture, essentially using neural networks to design neural networks. Both approaches are resource-intensive in terms of capital, talent, and computational budget.

Introduction

Self-Driving Automotive (SDA), or Autonomous Vehicle, is a feature that allows a vehicle (mainly cars, trucks, and other vehicles with four or more wheels) to drive itself. According to SAE (Society of Automotive Engineers) International Standard J3016, there are six levels of autonomy, starting at Level 0, no automation, and going up to Level 5, full vehicle autonomy (Shuttleworth, 2019). Many automotive and software companies, such as Tesla, Waymo, Toyota, Honda, and BMW, are currently racing to develop their own SDA technology.

The rationale for automakers to pursue this technology is threefold: satisfaction, safety, and survival. The first two combine into a strong selling point and ultimately boost a company's image and profit. When a car can drive itself, the task of driving becomes optional: the driver chooses to drive rather than has to drive. This generates considerable satisfaction, especially for drivers who must commute daily through dense, stop-start traffic. On a long highway, the driver can sit back and enjoy the view or converse with passengers without the distraction of driving, making the trip far more enjoyable. Safety is a main consideration for automakers; functions such as Vehicle Stability Control and ABS already exist in modern cars. Because SDA technology can drive the car itself, it can also apply the brakes when an object suddenly appears in front of the car. This is possible by tuning the camera and improving image quality at night, as well as tuning the SDA chip for a high refresh rate. The average driver's reaction time to an unexpected event (such as an animal jumping in front of the car) is 2.3 s (McGehee, Mazzae, & Baldwin, 2000). If the SDA chip ingests images at 10 fps, requires 2 frames to validate that the captured object is not a lens flare, and takes 500 ms to apply the brakes, the total response time from image acquisition through processing to action is 700 ms. That is roughly a 3x improvement over the average human driver. Moreover, as hardware improves over time, customized hardware and software accelerators will only bring the response time down further.
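The response-time comparison above can be sketched as a back-of-envelope calculation. The figures (10 fps ingest rate, 2 validation frames, 500 ms actuation delay, 2.3 s human reaction time) come from the text; the function name and structure are illustrative only.

```python
FRAME_RATE_FPS = 10      # assumed camera ingest rate
VALIDATION_FRAMES = 2    # frames needed to rule out artifacts such as lens flare
ACTUATION_DELAY_S = 0.5  # assumed time for the SDA chip to command the brakes
HUMAN_REACTION_S = 2.3   # average reaction time (McGehee, Mazzae, & Baldwin, 2000)

def sda_response_time(fps, validation_frames, actuation_delay_s):
    """Total time from image acquisition to brake actuation, in seconds."""
    return validation_frames / fps + actuation_delay_s

sda_total = sda_response_time(FRAME_RATE_FPS, VALIDATION_FRAMES, ACTUATION_DELAY_S)
print(f"SDA response time:   {sda_total:.1f} s")   # 0.7 s
print(f"Human reaction time: {HUMAN_REACTION_S:.1f} s")
print(f"Improvement:         {HUMAN_REACTION_S / sda_total:.1f}x")
```

Note how the frame-validation term dominates at low frame rates: doubling the ingest rate to 20 fps shaves only 100 ms, so most of the remaining budget sits in the 500 ms actuation delay.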
Finally, a company that does not innovate according to market trends will see its sales dwindle and eventually be washed away by those who do; Nokia is one good example (Troianovski & Grundberg, 2012). Innovating along market trends, or creating a blue-ocean market, is one of the cores of the long-term benefit and survival of any company. When the market demands a function such as SDA, a company that refuses to innovate in this realm will not receive the market's support.

At this point, the majority of automotive manufacturers are still at SAE Level 2, which means the system provides support for both steering and throttle/braking functions (“The State of the Self-Driving Car Race 2020,” 2020). Support means the AI helps or assists the driver in steering and braking, such as Honda's Lane Keeping Assist. Waymo and Tesla are among the few companies truly focused on developing SDA. Their technology has reached SAE Level 3 and, in some areas, Level 4, meaning the car can drive itself without driver intervention on some roads, with the driver acting as supervisor only. Waymo has started its robotaxi service in some parts of the US, where cars ferry passengers without a driver present (Ohnsman, 2020).

Waymo and Tesla approach SDA with very different solutions. Waymo uses lidar as its primary sensor and navigates with high-definition maps, whereas Tesla Autopilot uses a combination of cameras, ultrasonic sensors, and radar, aided by a normal map for navigation. Lidar is hardware that uses photons to measure distance and internally build a 3-D representation. Tesla engineers argue that humans do not shoot lasers from their eyes; instead, humans rely primarily on vision to navigate and drive. Tesla's approach to reaching SAE Level 5 is to solve computer vision (Musk, Karpathy, & Bannon, 2019).

This case study focuses on Tesla's side of self-driving automotive technology (Autopilot®). More specifically, it discusses in depth the image recognition part of the SDA, which is the core technology in Autopilot®.
