Self-driving, or autonomous, cars have been a focus area for many tech giants. Have you ever wondered why, despite so much interest and research, they haven’t replaced human drivers already?
The research paper by Simone Mentasti, Matteo Matteucci, Stefano Arrigoni, and Federico Cheli, titled “Two algorithms for vehicular obstacle detection in sparse pointcloud”, discusses the challenges and constraints of existing solutions, related to the number of sensors a self-driving car carries and how the sensor data are retrieved. The researchers also propose two solutions to these problems.
How do self-driving cars work?
The vision of autonomous vehicles is typically provided by lidars, complemented by data from cameras and radars. Lidar stands for Light Detection and Ranging, a method for determining ranges by targeting an object with a laser and measuring the time the reflected light takes to return to the receiver.
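The time-of-flight principle behind lidar ranging can be sketched in a few lines. This is a generic illustration of the physics, not code from the paper; the function name and the example pulse time are assumptions.

```python
# Illustrative sketch: lidar range from laser time-of-flight.
C = 299_792_458.0  # speed of light in m/s

def range_from_tof(round_trip_seconds: float) -> float:
    """Distance to the target: the pulse travels out and back, so halve the path."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 0.667 microseconds corresponds to a target
# about 100 m away.
print(range_from_tof(667e-9))
```

A real lidar repeats this measurement across many laser beams (planes) and angles, producing the point cloud the article discusses.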
Associated Challenges
Lidars are laser sensors that were used in robotics well before autonomous vehicles, but those applications differed greatly from self-driving. An autonomous vehicle moves at high speed in a dynamic environment that requires precise mapping of the surroundings. In autonomous vehicles, the point-cloud information from lidar is usually combined with data from cameras and radars, and sensors with many planes are used to obtain a dense, well-defined representation. However, this approach has several limitations, the main ones being:
- High-resolution sensors are expensive.
- Their data require high-efficiency GPUs to be processed.
Although lidars with fewer planes are cheaper, the returned data are not dense enough to be processed with state-of-the-art deep learning approaches to retrieve 3D bounding boxes.
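The output both of the paper’s algorithms aim for is a 3D bounding box around each obstacle. As a minimal illustration of that representation (not the paper’s method, which also estimates heading), here is an axis-aligned bounding box computed from a small cluster of points; the cluster values are made up.

```python
# Illustrative sketch: an axis-aligned 3D bounding box from a point cluster.
from typing import List, Tuple

Point = Tuple[float, float, float]

def aabb(points: List[Point]) -> Tuple[Point, Point]:
    """Return the (min corner, max corner) of the cluster."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# A hypothetical cluster of lidar returns from one obstacle, in metres.
cluster = [(4.9, 1.0, 0.2), (5.3, 1.4, 0.8), (5.1, 1.2, 0.5)]
print(aabb(cluster))
```

With dense lidars, deep networks can regress such boxes directly; the paper’s point is that with only a few planes per obstacle there are too few points for that, motivating geometric approaches instead.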
Importance of this research
This paper addresses the scenario where lidar data are limited in resolution and obstacles are described by only a few planes. The researchers propose the following two solutions using fewer planes:
- 16-plane obstacle detection: this approach performs vertical plane fitting operations, working with 16-plane lidars.
- 8-plane obstacle detection: this approach performs most of its operations on a 2D occupancy grid and works with all types of sensors.
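The core idea of the second approach, projecting 3D points onto a discretized 2D grid, can be sketched as follows. This is a hedged illustration of the occupancy-grid concept; the cell size, ground threshold, and sample points are assumptions, not values from the paper.

```python
# Illustrative sketch of a 2D occupancy grid: project 3D lidar points onto
# discretized (x, y) cells, marking cells with non-ground points as occupied.
import math

CELL = 0.5       # grid resolution in metres (assumed)
GROUND_Z = 0.15  # points below this height are treated as ground (assumed)

def to_occupancy_grid(points):
    """Return the set of occupied (i, j) cells for points above the ground."""
    occupied = set()
    for x, y, z in points:
        if z > GROUND_Z:
            occupied.add((math.floor(x / CELL), math.floor(y / CELL)))
    return occupied

# Two returns from an obstacle and one ground return.
points = [(2.1, 0.3, 0.6), (2.2, 0.4, 0.7), (8.0, -1.0, 0.05)]
print(to_occupancy_grid(points))
```

Working on a 2D grid rather than the raw point cloud is what lets this approach run in real time and remain agnostic to the number of sensor planes, at the cost of the discretization error the authors mention.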
Results
Both algorithms have been validated on a dataset acquired on the Monza ENI circuit, and both compute values close to the ground truth. Although the grid-based solution is less accurate, it can still be employed as a source for the control algorithm.
Conclusion
The researchers have proposed two solutions for obstacle detection from a sparse point cloud. In the words of the researchers:
“Both solutions have been validated using a custom acquired dataset, with accurate ground truth, to compare the real obstacle position and heading with the one from the algorithms. Both solutions have proved their ability to compute 3D bounding boxes with low error. The second approach is slightly less accurate due to the grid discretization process, but the error values are acceptable for control. Moreover, the solution can run in real-time on a consumer laptop without a modern GPU. Future works will be centered on implementing a final block of the pipeline to perform classification on the retrieved bounding box, similarly to neural network-based approaches. A second improvement will focus on tracking the state of each obstacle, in such a way, it should be possible to mitigate the bounding box noise, and filter spikes in the heading estimation.”
Source: Simone Mentasti, Matteo Matteucci, Stefano Arrigoni, Federico Cheli, “Two algorithms for vehicular obstacle detection in sparse pointcloud”. Link: https://arxiv.org/pdf/2109.07288.pdf