Montreal, Canada – November 28, 2018 – Algolux, the leading provider of machine learning optimization platforms for autonomous vision, has won the top award at the 2018 Automotive Tech.AD Detroit Conference. At the awards ceremony, Algolux was recognized as the Most Innovative Autonomous Driving Solution for its CANA full-stack deep neural network (DNN).
Accurate perception of pedestrians, objects, and surroundings is fundamental for safe autonomous vehicle operation, but is made even more difficult by harsh imaging conditions, such as low light and adverse weather.
Algolux’s CANA full perception stack applies a novel end-to-end DNN that significantly improves accuracy over today’s state-of-the-art alternatives across these challenging scenarios. Furthermore, its modular architecture can be integrated with third-party perception systems and across a range of power profiles, from high-performance central computing platforms to sub-1W edge processors.
“Algolux was identified as a finalist from a large pool of nominees providing autonomous vehicle technologies and was presented with the first-place award at the 2018 Automotive Tech.AD Detroit Award ceremony,” said Sarah Farley, Director of Smart Mobility & Automotive at we.CONECT, organizer of the Automotive Tech.AD Conferences. “We congratulate Algolux on this acknowledgment by the autonomous vehicle community of the impact CANA can have in improving the accuracy of perception systems and furthering the safety of these vehicles.”
“We are honored to win the Automotive Tech.AD Award for the Most Innovative Autonomous Driving Solution for our CANA robust perception DNN stack. This acknowledgment from industry experts once again validates our novel application of artificial intelligence as a unique approach to the challenge of accurate perception under the most difficult imaging conditions,” said Allan Benchetrit, Algolux President and CEO. “The award highlights that Algolux’s AI technology for autonomous vision can tackle the mission-critical requirement of safe and robust perception for autonomous vehicles and ADAS.”