Algolux Launches Next Generation of Eos Embedded Perception Software for ADAS and Autonomous Vehicles

Expanded Eos portfolio leverages a new AI deep learning framework to massively improve system scalability, cut development costs, and deliver robust accuracy in all operating conditions

September 23, 2020 – Algolux, a leading provider of robust perception software solutions for safety-critical applications, today announced at the Embedded Vision Summit the next generation of its Eos Embedded Perception Software. The new release delivers best-in-class accuracy and scalability for teams developing vision-based Advanced Driver-Assistance Systems (ADAS), Autonomous Vehicles (AV), Smart City, and Transportation applications.

Combining breakthroughs in deep learning, computational imaging, and computer vision, the expanded Eos portfolio accelerates development and improves accuracy by up to 3x for robust operation in all conditions, especially in difficult edge cases. To achieve this, Algolux developed a reoptimized end-to-end architecture and a novel AI deep learning framework, which together reduce model training cost and time by orders of magnitude and remove sensor and processor lock-in, something not possible with today’s learning methods.

Low-light detection comparison: Tesla Model S Autopilot (camera/radar fusion + tracking) vs. Algolux Eos perception (camera-only)

Roadblocks to autonomy and industry growth

The automated driving and fleet management markets are expected to collectively reach over $145B by 2025 (1)(2) and rely on accurate perception technologies to achieve that growth. But current “good enough” vision systems and misleading autonomy claims have led to vision failures, unexpected disengagements, and crashes. Recent AAA studies show that ADAS features such as pedestrian detection are unreliable, especially at night and in bad weather, when most fatalities occur, and that systems designed to help drivers often do more to interfere with them (3)(4).

Today’s vision architectures and supervised learning methods have fundamental performance limitations and hamper time to market for new perception capabilities. They deliver inferior results due to biased datasets that do not cover all operational scenarios, such as low light and bad weather, and they are impractical due to poor scalability and high development cost.

Groundbreaking AI framework provides scalability and reduced design costs

Algolux has developed a novel AI deep learning framework to overcome these limitations. It significantly improves robustness and saves hundreds of thousands of dollars in training data capture, curation, and annotation costs per project, while quickly enabling support for new camera configurations.

The framework uniquely enables end-to-end learning of computer vision systems, effectively turning them into “computationally evolving” vision systems through computational co-design of imaging and perception, an industry first. The approach allows customers to easily adapt their existing datasets to new system requirements, enabling reuse and reducing effort and cost compared with existing training methodologies. Because the sensing and processing pipeline is included in the domain adaptation, typical edge cases are addressed in the camera design ahead of the downstream computer vision network. Unlike conventional sequential perception stacks, which tackle edge cases purely through supervised learning on large, biased datasets, the AI framework “learns” the camera itself jointly with the perception stack.
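For engineers wondering what “learning the camera jointly with the perception stack” can look like in practice, the minimal sketch below illustrates the generic idea behind computational co-design: a differentiable image-processing stage whose parameters receive gradients from the downstream perception loss, so the “camera” and the network are optimized together. Everything here (module names, parameters, toy data) is an illustrative assumption; Algolux has not published the Eos architecture, and this is not its implementation.

```python
# Generic sketch of computational co-design: a differentiable ISP stage
# trained end-to-end with a downstream perception network.
# All modules and parameters are hypothetical, not the Eos architecture.
import torch
import torch.nn as nn

class DifferentiableISP(nn.Module):
    """Toy image-processing stage with learnable parameters (white-balance
    gains, tone-mapping exponent), so the perception loss can tune them."""
    def __init__(self):
        super().__init__()
        self.wb_gains = nn.Parameter(torch.ones(3))    # per-channel gain
        self.gamma = nn.Parameter(torch.tensor(0.45))  # tone-curve exponent

    def forward(self, raw):                            # raw: (N, 3, H, W) in [0, 1]
        x = raw * self.wb_gains.view(1, 3, 1, 1)
        return x.clamp(min=1e-6) ** self.gamma

class TinyDetector(nn.Module):
    """Stand-in perception network: predicts a per-image class score."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.net(x)

isp, detector = DifferentiableISP(), TinyDetector()
# One optimizer spans BOTH stages: the task loss back-propagates through
# the detector *and* the ISP, jointly "learning the camera".
opt = torch.optim.Adam(list(isp.parameters()) + list(detector.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

raw_batch = torch.rand(4, 3, 64, 64) * 0.05  # simulated underexposed RAW frames
labels = torch.randint(0, 2, (4,))

for step in range(10):
    opt.zero_grad()
    logits = detector(isp(raw_batch))
    loss = loss_fn(logits, labels)
    loss.backward()                           # gradients reach the ISP parameters
    opt.step()
```

The design point is that a single optimizer spans both stages, so an edge case that hurts detection (for example, underexposed low-light frames) can be corrected upstream in the imaging parameters rather than solely by adding more labeled data.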

Expanded portfolio delivers industry-leading robustness across all perception tasks

Eos delivers a full set of highly robust perception components addressing individual NCAP requirements, L2+ ADAS, and higher levels of autonomy, from highway autopilot and autonomous valet (self-parking) to L4 autonomous vehicles, as well as Smart City applications such as video security and fleet management. Key vision features include object and vulnerable road user (VRU) detection and tracking, free space and lane detection, traffic light state and sign recognition, obstructed sensor detection, reflection removal, multi-sensor fusion, and more.

Next-Gen Eos Embedded Perception Software | Full Stack

The Eos end-to-end architecture combines image formation with vision tasks to address the inherent robustness limitations of today’s camera-based vision systems. Through joint design and training of the optics, image processing, and vision tasks, Eos delivers up to 3x improved accuracy across all conditions, especially in low light and harsh weather, as benchmarked by leading OEM and Tier 1 customers against state-of-the-art public and commercial alternatives. In addition, Eos has been optimized for efficient real-time performance across common target processors, giving customers their choice of compute platform.
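As a side note for readers running their own comparisons, robustness claims like “up to 3x across all conditions” are typically reported per capture condition rather than as one aggregate number. The short sketch below is a hypothetical per-condition accuracy tally; the helper and data are illustrative assumptions, unrelated to Algolux’s or its customers’ benchmarks.

```python
# Hypothetical per-condition accuracy tally for robustness benchmarking.
from collections import defaultdict

def accuracy_by_condition(results):
    """results: iterable of (condition, correct) pairs, correct being a bool."""
    hits, totals = defaultdict(int), defaultdict(int)
    for condition, correct in results:
        totals[condition] += 1
        hits[condition] += int(correct)
    return {c: hits[c] / totals[c] for c in totals}

# Illustrative numbers only -- not Algolux benchmark data.
sample = [("daylight", True), ("daylight", True), ("low_light", True),
          ("low_light", False), ("rain", True), ("rain", False)]
print(accuracy_by_condition(sample))  # {'daylight': 1.0, 'low_light': 0.5, 'rain': 0.5}
```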

Delivering a comprehensive portfolio of perception capabilities with next-generation robustness and scalability, Eos gives vision system teams the quickest path to safe and reliable driver assistance and autonomy.

If you’d like to learn more about Eos Embedded Perception, get in touch today.
