Many of you in the cooler climates are starting to enjoy the warmer weather of spring, with the ground thawing and new growth blooming. This is especially true for my colleagues in our Montreal and Munich offices. But a late-season sprinkle here at our Palo Alto, California office reminded me of a recent post I read that correlated increasing rain intensity with a significantly higher risk of a fatal motor vehicle accident:
* Light rain or drizzle increases the risk of a fatal car crash by 27% vs. nice weather
* Moderate rain increases the risk by 75%
* Heavy rain is nearly 2.5x more risky
Another interesting report, from the US Dept. of Transportation, broke down crash statistics by weather type over a 10-year period (see HERE). It reinforced the high incidence of accidents due to rain and wet pavement, but also covered other weather types such as icy and snowy/slushy pavement, and fog. One eye-opening statistic was that the rate of fatalities in fog was 1 fatality per 54 crashes, 4x to almost 7x higher than in the other bad-weather conditions.
While not surprising, it made me think about the impressive improvements in computer vision using today's state-of-the-art approaches, and, in contrast, the dismal performance of those systems in inclement weather and low-lighting scenarios. Today, these Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicle (AV) systems generally hand control back to the driver or disengage during these difficult conditions. Each sensor has strengths and weaknesses dictated by the hardware and the algorithms processing its output, and automotive and AV companies acknowledge these limitations by operating only in good conditions, such as daytime highway driving or sunny weather in Arizona.
But for perception systems to impact the statistics above, they need to work well not only in nice sunny weather (which can itself be difficult, for example in high-dynamic-range scenes), but also in the harshest scenarios… because we need to drive then too.
Many have been looking at this problem, and we think we've cracked it with our Eos perception software, an end-to-end learning approach that combines RAW sensor data and a new detector architecture with efficient training. It delivers strong detection results and can also be flexibly integrated into your perception stack. Here are a few good examples of tough conditions and the corresponding detection performance Eos achieved.
We’ve also compared Eos against leading public networks, such as YOLO v3, RetinaNet-50, and SSD MobileNet v2, on an evaluation dataset of harsh images such as those above. All the networks were trained on the same dataset to avoid bias. The following chart shows the significant accuracy improvement for the targeted automotive classes, achieved while also executing at a higher frame rate.
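As context for how such comparisons are typically scored (the chart's exact metric isn't specified here), detection benchmarks generally match each predicted box to a ground-truth box by intersection-over-union (IoU), counting a detection as correct when the overlap clears a threshold such as 0.5. A minimal sketch with hypothetical `(x1, y1, x2, y2)` box tuples, not Eos's actual evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def count_matches(detections, ground_truth, thresh=0.5):
    """Greedy count of detections that match a not-yet-claimed ground-truth box."""
    claimed, hits = set(), 0
    for det in detections:
        for i, gt in enumerate(ground_truth):
            if i not in claimed and iou(det, gt) >= thresh:
                claimed.add(i)
                hits += 1
                break
    return hits
```

Full benchmarks then aggregate such matches into per-class precision/recall curves and average precision, but the IoU match above is the core of the scoring.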
If you cannot modify your vision system architecture, the computer vision (CV) module of our Atlas camera tuning tool suite has been able to improve accuracy by over 10% by tuning the camera’s image signal processor (ISP) to optimize for vision rather than just visual image quality. This is an industry first, as ISPs and today’s manual visual tuning approaches cannot understand what a vision algorithm is ideally “looking” for.
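The idea of tuning ISP parameters against a vision metric rather than against human-judged image quality can be sketched as a black-box search. Everything below is illustrative: `detection_score` is a hypothetical stand-in for running the real ISP + detector pipeline on a validation set and measuring accuracy (e.g., mAP), and the parameter names and ranges are invented, not Atlas's actual interface or method.

```python
import random

def detection_score(params):
    # Hypothetical stand-in for the expensive real evaluation:
    # process validation images through the ISP with these settings,
    # run the detector, and return its accuracy. Here the "optimum"
    # is planted at gamma=2.2, denoise=0.3 purely for illustration.
    return 1.0 - (params["gamma"] - 2.2) ** 2 - (params["denoise"] - 0.3) ** 2

def tune_isp(n_trials=200, seed=0):
    """Random black-box search over ISP parameters, scored by detector
    accuracy instead of subjective visual quality."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "gamma": rng.uniform(1.0, 3.0),     # tone-curve exponent
            "denoise": rng.uniform(0.0, 1.0),   # denoiser strength
        }
        score = detection_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

In practice a real tuner would use a sample-efficient optimizer rather than random search, since each evaluation means reprocessing and re-scoring an entire dataset, but the loop structure is the same: the detector's accuracy, not a human's opinion of the image, drives the ISP settings.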
In future blogs, I’ll describe the Eos and Atlas approaches we use to achieve such robust performance. Also, if you’re at the AutoSens, Embedded Vision Summit, or CVPR conferences, come by our booth to say hi and see a demo.
As always, drive safely and I look forward to your questions and comments!
Dave Tokic, VP Marketing