The state of autonomy is not where anyone predicted it would be. About two years ago at CES, Nvidia CEO Jensen Huang discussed how the company's technology would enable Level 4 autonomy by 2020. As a reminder, the NHTSA defines Level 4 as a “vehicle that can itself perform all driving tasks and monitor the driving environment”.
Two years and countless setbacks later, Huang and the rest of the automotive industry have been forced to backtrack. For too long, an idealized vision of self-driving cars and robotaxis overshadowed what was always the real mission: creating safer roads by equipping vehicles with robust perception technology.
The Human Toll
The sad reality is that driving simply is not safe. According to a report from the World Health Organization, more than 1.35 million people die on the world's roads every year, about one person every 24 seconds. And more than half of all road traffic deaths are among vulnerable road users: pedestrians, cyclists, and motorcyclists.
However, we should point out that, globally, road deaths per capita are declining. Regulation and changing driver behavior have helped, and so has technology. Innovation in artificial intelligence and sensing is undoubtedly making cars smarter and roads safer. Whether it is cutting-edge driver assistance or increasing levels of autonomy, these systems are all designed to ease the burden of driving and ultimately reduce the number of road accidents.
But despite some promising numbers, the vast majority of car crashes are still caused by human error, and current Advanced Driver Assistance Systems (ADAS) are not yet bridging the gap. What, then, is the key to solving the fundamental risks inherent in driving?
The Technological Challenge
While improvements in cameras and sensors have enabled vehicle OEMs to improve safety, much more progress is needed. ADAS performance in difficult conditions, whether fog, rain, snow, or night, remains poor.
For instance, inclement weather increases the risk of a fatal crash by 34% – and cars today are just not equipped to tackle this problem. Recent research from the AAA also revealed major robustness and accuracy challenges – especially at night, when 77% of pedestrian-vehicle fatalities occur.
But what if we could massively improve vehicle safety by rethinking how vision systems are designed today? What if we could solve perception robustness and empower vehicles with superhuman cameras?
Computer Vision Breakthroughs to Address Robustness Gaps
Algolux recently won the top award at the 2020 Tech.AD Europe Conference. Algolux’s Eos Embedded Perception Software was recognized as the Most Innovative Use of Artificial Intelligence and Machine Learning in the Development of Autonomous Vehicles and Respective Technologies at the awards ceremony (read the Press Release).
Through deep research in AI, computer vision, and imaging, we came to understand the inherent accuracy and robustness limitations of today's perception approaches. Eos is built on a novel end-to-end deep learning architecture that trains imaging and computer vision jointly for each specific camera configuration. This delivers a significant improvement in robustness across all conditions, good and harsh, and allows the stack to be adapted to any camera lens/sensor combination in a matter of days.
This is a first for the industry and a necessary departure from even the state-of-the-art solutions used by the automotive industry today.
Eos has been benchmarked in both good and difficult imaging conditions, significantly outperforming state-of-the-art public and commercial computer vision models in OEM and Tier 1 testing. In the harsh low-light and poor-weather test cases specifically, Eos has shown accuracy improvements of two to three times over these alternatives.
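To make the end-to-end idea concrete: in a conventional stack, the image signal processor (ISP) is hand-tuned to produce pleasing images, and the vision model is trained separately on its output. In an end-to-end design, the ISP parameters and the vision model share one task loss, so training can tune the imaging stage for detection accuracy directly. The toy Python sketch below illustrates this principle only; it is not Algolux's Eos implementation, and all functions, parameters, and values in it are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def isp(raw, gain, gamma):
    """Toy ISP stage: per-pixel gain followed by gamma correction."""
    return np.clip(raw * gain, 0.0, 1.0) ** gamma

def detector(img, w):
    """Toy 'detector': a linear score over the processed image."""
    return float(img.ravel() @ w)

def loss(params, raw, w, target):
    """Task loss measured after the full ISP -> detector pipeline."""
    gain, gamma = params
    return (detector(isp(raw, gain, gamma), w) - target) ** 2

raw = rng.random((8, 8)) * 0.5          # simulated dark (low-light) raw frame
w = rng.standard_normal(64) * 0.1       # fixed toy detector weights
target = 1.0                            # desired detection score

# One finite-difference gradient step on the ISP parameters themselves,
# driven by the downstream task loss -- the step a fixed, hand-tuned ISP
# never takes because it is optimized for image quality, not detection.
params = np.array([1.0, 1.0])           # initial [gain, gamma]
eps, lr = 1e-4, 1e-2
grad = np.array([
    (loss(params + eps * np.eye(2)[i], raw, w, target)
     - loss(params - eps * np.eye(2)[i], raw, w, target)) / (2 * eps)
    for i in range(2)
])
params = params - lr * grad
print("loss before:", loss(np.array([1.0, 1.0]), raw, w, target))
print("loss after: ", loss(params, raw, w, target))
```

In a real system the same principle applies at scale: the ISP and the neural network are differentiated through together, so low-light or bad-weather frames are processed in whatever way maximizes perception accuracy rather than visual appeal.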
Solving ADAS Before Tackling Autonomy
Roughly 80 million new cars are produced each year, and that is where the critical opportunity to save lives on a global scale lies. We at Algolux are committed to empowering the automotive industry with the most robust perception in all conditions.