Decision Making Insights Of Self-Driving Cars


Autonomous vehicles, built on highly advanced technologies, are a popular topic of discussion. Yet, for all their sophisticated intelligence, their performance on the road still depends largely on the decision-making skills of the humans around them.

The intelligence of an autonomous vehicle is built from nothing more than basic robotics. So, to understand what it takes to drive a vehicle well, it helps to look at exactly how self-driving cars make their decisions. Technologies like those described below are what give self-driving cars their strength.

Deep learning vision

The cameras and LiDARs at the front of the vehicle must be able to see the signboard that specifies the speed limit and the diversion ahead. Unlike humans, self-driving cars strictly obey traffic light rules when making decisions.

Planning

This is responsible for all decision-making based on complete information about the surroundings.

Localisation and mapping

Nowadays, apps like Google Maps facilitate navigation at a global level. This module helps the vehicle know its present location at a local level, with respect to its immediate surroundings.

Localisation

One can use GPS to pinpoint location, but its accuracy at present is only up to a few metres. If you are deciding whether to stop because of a traffic light in front, or considering overtaking the vehicle ahead, this kind of accuracy is not good enough. Centimetre-grade positional accuracy from a better localisation system therefore eases the decision-making process for a self-driving vehicle with regard to where it is currently located. The surrounding features can be identified using a feature-extraction algorithm.
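To make the accuracy argument concrete, here is a minimal 1-D sketch (illustrative, not from the talk; all numbers are assumed) of how a metre-accuracy GPS reading can be combined with a centimetre-accuracy landmark observation by inverse-variance weighting, as a Kalman-style update would do:

```python
# Fuse a coarse GPS fix with a precise landmark-based fix.
# Each measurement is weighted by the inverse of its variance,
# so the more accurate source dominates the fused estimate.

def fuse(gps_pos, gps_std, lm_pos, lm_std):
    """Return the fused position estimate and its standard deviation."""
    w_gps = 1.0 / gps_std**2
    w_lm = 1.0 / lm_std**2
    fused = (w_gps * gps_pos + w_lm * lm_pos) / (w_gps + w_lm)
    fused_std = (1.0 / (w_gps + w_lm)) ** 0.5
    return fused, fused_std

# GPS says 100.0 m along the road with ~3 m error; a surveyed landmark
# implies 101.2 m with ~5 cm error. The fused estimate sits almost
# exactly on the landmark fix, with centimetre-grade uncertainty.
pos, std = fuse(100.0, 3.0, 101.2, 0.05)
```

The same weighting idea extends to 2-D and 3-D positions; the point is simply that a single precise landmark observation can pull a metre-level GPS estimate down to centimetre-level uncertainty.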

The two images of the same place in Fig. 1 were taken from a stereo camera at different instants of time. In the second image (on the right-hand side), the trees and railing appear a little closer. By combining this apparent shift with the time elapsed between the frames, the vehicle can calculate how far it has moved, and hence its precise location.

Fig. 1: Two images of the same place taken from a stereo camera at different intervals of time
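The "trees appear a little closer" observation can be turned into numbers. Below is a toy sketch (with assumed camera parameters, not values from the talk) using the standard stereo relation, depth = focal length × baseline / disparity: when a static landmark's disparity grows between two frames, its depth has shrunk, and the difference is how far the vehicle moved towards it.

```python
# Assumed stereo rig parameters for illustration only.
FOCAL_PX = 700.0     # focal length in pixels
BASELINE_M = 0.12    # distance between the two camera lenses, metres

def depth_from_disparity(disparity_px):
    """Depth of a point from its left/right pixel disparity."""
    return FOCAL_PX * BASELINE_M / disparity_px

def forward_motion(disp_t0, disp_t1):
    """Distance travelled towards a static landmark between two frames."""
    return depth_from_disparity(disp_t0) - depth_from_disparity(disp_t1)

# The tree's disparity rises from 4 px to 5 px between frames,
# so it went from 21.0 m away to 16.8 m away: we moved 4.2 m.
moved = forward_motion(4.0, 5.0)
speed = moved / 0.5   # frames assumed 0.5 s apart -> speed in m/s
```

Integrating such per-frame motion estimates over time is exactly the "find out your precise location" step described above, and also shows why stereo noise matters: small disparity errors become large depth errors.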

However, stereo cameras are extremely noisy, so their depth accuracy is rather poor. LiDARs, by contrast, provide a static 3D view; multiple scans taken over time are stitched together into a final panoramic image (map).

The image in Fig. 2 is a visual representation of the IIIT Allahabad campus made by simultaneous localisation and mapping (SLAM) technology, and the image in Fig. 3 is the live visual representation displaying the associated surroundings. The technology continuously integrates new data into the map.

Fig. 2: Visual representation of the IIIT Allahabad campus made by simultaneous localisation and mapping (SLAM) technology
Fig. 3: Live visual representation displaying the associated surroundings

Observe the diagram in Fig. 4, starting from the left-hand side (first blue dot). This dot acts as the origin, after which you move on to the second, third, and fourth blue dots. Each blue dot represents a pose of the vehicle.

Fig. 4: Graph optimisation

The red dots represent the positions of characteristic landmarks near you as you move. A line connecting a blue dot to a red dot signifies that the landmark is visible from that pose.

Knowing a blue dot (a pose of the vehicle), the positions of the connected red dots (landmarks) can be calculated; conversely, knowing a red dot (the position of a landmark), the connected blue dots (poses of the vehicle) can be calculated. Taking the first blue dot as the origin, all blue dots (vehicle poses) and red dots (landmark positions) can therefore be computed together. This is called graph optimisation for simultaneous localisation and mapping.
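The idea of solving all poses and landmarks together can be sketched in one dimension. In this toy version (measurement values are invented for illustration), the blue dots are vehicle positions along a line, the red dot is one landmark, and every edge of the graph becomes one linear equation; pinning the first pose to the origin, a least-squares solve reconciles the slightly inconsistent measurements jointly:

```python
import numpy as np

# Unknowns: x = [p1, p2, l], with pose p0 pinned to 0 (the origin).
# Each row of A*x = b encodes one graph edge (a relative measurement).
A = np.array([
    [ 1.0,  0.0, 0.0],   # odometry: p1 - p0 = 1.0
    [-1.0,  1.0, 0.0],   # odometry: p2 - p1 = 1.0
    [ 0.0,  0.0, 1.0],   # landmark seen from p0: l - p0 = 2.05
    [-1.0,  0.0, 1.0],   # landmark seen from p1: l - p1 = 1.0
    [ 0.0, -1.0, 1.0],   # landmark seen from p2: l - p2 = 0.1
])
b = np.array([1.0, 1.0, 2.05, 1.0, 0.1])

# The measurements disagree slightly; least squares spreads the error
# over all poses and the landmark at once -- graph optimisation in miniature.
p1, p2, landmark = np.linalg.lstsq(A, b, rcond=None)[0]
```

Real SLAM back-ends solve the same kind of problem with 2-D or 3-D poses, rotations, and thousands of landmarks, using sparse nonlinear least squares rather than one dense linear solve.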

Layers of planning

Let us assume that, through these complex calculations, the self-driving car now knows where it is. A problem of decision making, known as planning, then arises. Fig. 5 represents planning as a series of layers, step by step.

Fig. 5: Layers of planning

Strategic

It refers to finalising your destination. Automated taxi services, such as OLA and Uber, fit this layer well, as they must travel to different destinations based on customer requirements.

Sub-strategic

It involves intelligent route planning, which tries to estimate where and when traffic congestion will occur.

Trajectory generation

It involves changing lanes under different scenarios. When two roads merge, one has to decide who has the right of way. At intersections, the controller must be sure whether to go or to stop. Other situations include parking, road blockages, and normal roads.

Control

This layer deals with operating the brake and throttle.

Trajectory planning

Fig. 6 shows vehicles travelling in both lanes of a road. With 3D vision, a particular vehicle can see the other vehicles around it. If vehicle B is moving too slowly, it is better to change lanes. A lane-changing trajectory looks like the red arrowed line. Using an adapted version of a pre-computed lane-changing trajectory, intelligent software controls the vehicle, adjusting as the surrounding vehicles move.

Fig. 6: Vehicles travelling in both lanes of a road
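One common way to pre-compute such a lane-change trajectory (an assumed technique for illustration; the talk does not specify the curve used) is a cubic "smoothstep" in the lateral direction, which starts and ends with zero lateral velocity and can be stretched to fit the gap actually available around vehicle B:

```python
LANE_WIDTH_M = 3.5   # assumed lane width

def lateral_offset(s, manoeuvre_length):
    """Lateral offset after driving s metres of a lane change.

    The cubic 3t^2 - 2t^3 rises smoothly from 0 to 1 with zero slope
    at both ends, so the car leaves and rejoins the lanes tangentially.
    """
    t = min(max(s / manoeuvre_length, 0.0), 1.0)
    return LANE_WIDTH_M * (3 * t**2 - 2 * t**3)

# Sample a red-arrow-style path for a 30 m manoeuvre:
# (distance along road, sideways offset) every 5 m.
path = [(s, lateral_offset(s, 30.0)) for s in range(0, 31, 5)]
```

Adapting the pre-computed curve then amounts to rescaling `manoeuvre_length` (and the driving speed along it) to the gap the surrounding traffic leaves open.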

When a static obstacle is present in front, humans can work out the distance and a way around it quite quickly. A self-driving car does the same by generating all possible options and finally selecting the most suitable path (shown in blue in Fig. 7). The algorithm is called the rapidly-exploring random tree (RRT); it generates all the possible options in the form of a tree.

Fig. 7: Rapidly-exploring random tree
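A minimal 2-D RRT can be written in a few lines (a sketch of the idea, not a production planner; the world, obstacle, and step size are assumed). The tree grows from the start by repeatedly sampling a random point, extending the nearest existing node a fixed step towards it, and keeping the new node only if it misses the obstacle:

```python
import math, random

random.seed(0)                               # deterministic for the demo
START, GOAL, STEP = (0.0, 0.0), (10.0, 10.0), 0.5
OBSTACLE, RADIUS = (5.0, 5.0), 1.5           # one circular obstacle

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def collides(p):
    # Node-only collision check; a real planner would also check
    # the segment between nodes.
    return dist(p, OBSTACLE) < RADIUS

def rrt(max_iters=5000):
    parent = {START: None}                   # tree stored as child -> parent
    for _ in range(max_iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(parent, key=lambda n: dist(n, sample))
        d = dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + STEP * (sample[0] - near[0]) / d,
               near[1] + STEP * (sample[1] - near[1]) / d)
        if collides(new):
            continue
        parent[new] = near
        if dist(new, GOAL) < STEP:           # close enough: trace the path
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt()
```

The returned `path` is one branch of the tree, the analogue of the blue path in Fig. 7; the rest of the `parent` dictionary is the tree of discarded options.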

In the case of dynamic obstacles, that is, vehicles that can move (Fig. 8), suppose a trajectory has been built. Unfortunately, the surrounding cars can also move, which may make that trajectory infeasible.

Fig. 8: Self-simulation in case of dynamic obstacles

To counter these uncertainties, the vehicle performs a self-simulation, using forward simulation of the various possibilities. In the process, a simulated future virtual avatar of each vehicle tries to avoid or repel the simulated avatars of the other vehicles. The path finally selected is the one with the least amount of avoidance/repulsion.
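The selection rule can be sketched as follows (a hedged toy model, with invented 1-D numbers and a repulsion cost chosen for illustration): each candidate trajectory is rolled forward in time alongside a simple prediction of the other vehicle, a repulsion cost accumulates whenever the simulated avatars get close, and the candidate with the least repulsion wins:

```python
def repulsion(d, safe=4.0):
    """Cost that spikes as the gap d shrinks below a safe distance."""
    return 0.0 if d >= safe else (safe - d) ** 2

def score(candidate, other_path, goal=12.0):
    """Accumulated repulsion plus a mild penalty for lost progress."""
    rep = sum(repulsion(abs(a - b)) for a, b in zip(candidate, other_path))
    return rep + 0.1 * (goal - candidate[-1])

# 1-D positions along the road over five time steps. The other car is
# predicted to drift from 8 m to 12 m ahead of our starting point.
other = [8.0, 9.0, 10.0, 11.0, 12.0]
candidates = {
    "accelerate": [0.0, 3.0, 6.0, 9.0, 12.0],   # ends on top of it
    "follow":     [0.0, 2.0, 4.0, 6.0, 8.0],    # keeps a 4 m gap
    "brake":      [0.0, 1.0, 2.0, 3.0, 4.0],    # overly conservative
}
best = min(candidates, key=lambda k: score(candidates[k], other))
```

Here "accelerate" accumulates heavy repulsion as its avatar closes on the other car's avatar, "brake" pays only the progress penalty, and "follow" wins by keeping the safe gap while still moving, mirroring the least-repulsion selection described above.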

At present, the self-driving cars described here never overtake. They are designed simply to follow other vehicles on the road, avoid static obstacles, and change lanes if necessary.

In conclusion, a self-driving car, or autonomous vehicle, is one that can understand what is going on around it, use that understanding to determine its current position, map its surroundings at both the global and local levels, make strategic decisions (overtake, change lanes, or avoid vehicles), and finally operate the brake and throttle mechanisms.

There are a lot of problems yet to be solved. Only when everybody concerned comes together on a common platform, can pressing issues be worked out.


This article is based on the talk ‘Autonomous Vehicles: Tech Making It Possible’ by Dr Rahul Kala, assistant professor, Centre Of Intelligent Robotics, IIIT-Allahabad, during the February edition of Tech World Congress and India Electronics Week 2021.




