The Latest Developments in Autonomous Driving
Earlier this year Ford Motor Company announced that it is partnering with the Massachusetts Institute of Technology (MIT) and Stanford University to research the technical challenges surrounding the future of autonomous driving. The teams will use Ford’s recently unveiled Fusion Hybrid research vehicle, which uses four lidar sensors to generate a real-time 3D map of the vehicle’s surroundings. The aim of the research is to develop a system that gives the vehicle a human level of intuition when assessing and adapting to visual cues in its immediate environment.
The research team at MIT will focus on developing sophisticated algorithms for systems that can predict the actions of other cars and pedestrians around the vehicle, so that it can navigate a safe path around any potential risks. The researchers at Stanford, meanwhile, will focus on sensors that can ‘see around’ obstructions to assist with driving maneuvers and improve safety. Ford says it expects fully automated driving by 2025, and will continue to invest in new technologies and partnerships as part of its ‘Blueprint for Mobility’.
It will be interesting to follow this research as it unfolds. Meanwhile, Google has released details of further development of its software, which has helped its fleet of self-driving cars navigate city streets full of typical obstructions such as pedestrians, road works, cyclists, delivery trucks, and railway crossings. The software has been developed to the point where it can now detect and recognize hundreds of different objects simultaneously and react accordingly.
Demonstrations have shown, for example, how the self-driving car can spot a cyclist with an outstretched arm and interpret this as a signal that the cyclist is about to turn. The car will yield to the cyclist as necessary to avoid a collision. Google has also shown how the car can navigate its way through temporary, unexpected construction zones, and how it can keep track of multiple traffic flows, cyclists and pedestrians at intersections to determine when it is safe to pull away.
Google says that as its engineers gain experience and knowledge they are able to build more complex software models of what to expect, and so can ‘teach’ the car how to react in a vast number of different circumstances. The self-driving cars have now clocked over 700,000 miles autonomously, and the recent software advances have allowed them to navigate thousands of situations on city streets that simply wouldn’t have been possible as little as two years ago.