Ion Platform – End-to-End Design and Implementation of Autonomous Vision Systems

Improving safety has been a critical pillar in the development of autonomous vehicles and advanced driver-assistance systems (ADAS), and the positive impact of deploying robust and accurate systems is massive. The World Health Organization recently reported about 1.35 million road traffic deaths annually, more than half of them among the most vulnerable road users: pedestrians, cyclists, and motorcyclists. But despite incredible recent innovations, these systems still have significant limitations, especially in accuracy and robustness across all operating scenarios, as we've seen in recent high-profile incidents.

Algolux is thrilled to announce the Ion Platform (see the announcement here), the industry's first platform for end-to-end design and implementation of autonomous vision systems. We believe a new end-to-end design approach is needed to break down the technical and organizational silos that are key causes of these limitations.

Building autonomous vehicles and their perception systems is not easy. The driving scenarios the system needs to handle must be identified and analyzed, then the sensors and processing platforms chosen based on the perception algorithmic approaches and the system performance requirements. These are then integrated with localization, planning and control, and higher-level functions.

Deep expertise is needed in each of these domains. For example, the cameras used for computer vision are developed and manually tuned by imaging experts to provide the best visual image quality to the computer vision algorithms (and, in certain ADAS use cases, for display to the driver). Similarly, computer vision teams develop advanced deep learning or hand-crafted algorithms to detect and avoid pedestrians, bicycles, and vehicles. Today these systems are quite accurate in good imaging conditions.

But they are generally not robust and lose significant accuracy under difficult conditions such as low light, rain, snow, fog, or dust. Moreover, each team focuses on achieving the best possible performance within its own domain. The camera team applies and tunes processing meant to produce the best visual result, but computer vision algorithms actually need something different from what human vision prefers. The same is true when developing fusion systems that take in multi-sensor inputs from cameras, LiDAR, and other sensors.
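
To make that divergence concrete, here is a minimal, self-contained sketch (toy NumPy code with illustrative metrics, not any Algolux API): a single ISP denoising parameter is scored two ways, once by a stand-in visual-quality metric that rewards smooth, low-noise output, and once by a stand-in computer-vision metric that rewards preserved contrast at an object boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

def isp(raw, denoise_strength):
    """Toy ISP stage: horizontal box-blur denoise controlled by one parameter."""
    k = max(1, int(denoise_strength * 10) | 1)  # odd kernel width
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, raw)

def visual_quality(img):
    """Stand-in image-quality metric: higher when the image is smoother."""
    return -float(np.mean(np.abs(np.diff(img, axis=1))))

def detection_score(img, edge_mask):
    """Stand-in computer-vision metric: higher when edge contrast survives."""
    grad = np.abs(np.diff(img, axis=1))
    return float(np.mean(grad[edge_mask[:, :-1]]))

# Synthetic "scene": a vertical step edge (object boundary) plus sensor noise.
clean = np.tile(np.concatenate([np.zeros(32), np.ones(32)]), (16, 1))
raw = clean + rng.normal(0.0, 0.2, clean.shape)
edge_mask = np.zeros(clean.shape, dtype=bool)
edge_mask[:, 31:33] = True  # where the object boundary sits

for s in (0.0, 0.3, 0.9):
    img = isp(raw, s)
    print(f"denoise={s:.1f}  visual_quality={visual_quality(img):+.3f}  "
          f"detection_score={detection_score(img, edge_mask):+.3f}")
```

Stronger denoising improves the visual-quality score while eroding the edge signal a detector relies on, which is exactly the tension between tuning a camera for human viewing and tuning it for perception.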

We believe that end-to-end deep learning and optimization approaches can not only improve perception but also significantly impact the design process itself. Ion is a platform of machine learning software and automation methodologies that enables development teams to design fully optimized, robust perception systems, increasing performance, improving system effectiveness, and reducing risk.
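
As a rough illustration of what "end-to-end" means here, the sketch below (minimal PyTorch on synthetic data, an assumption for illustration rather than Algolux's actual stack) jointly optimizes the parameters of a tiny differentiable ISP and a small task network against the task loss, so the image processing learns to serve perception instead of a separate visual-quality target.

```python
import torch
import torch.nn as nn

class DifferentiableISP(nn.Module):
    """Toy ISP with learnable digital gain and gamma (stand-ins for a real pipeline)."""
    def __init__(self):
        super().__init__()
        self.log_gain = nn.Parameter(torch.zeros(1))
        self.log_gamma = nn.Parameter(torch.zeros(1))

    def forward(self, raw):
        x = raw * self.log_gain.exp()
        return x.clamp(1e-6, 1.0) ** self.log_gamma.exp()

torch.manual_seed(0)
isp = DifferentiableISP()
task_net = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 2))  # toy "perception" head
opt = torch.optim.Adam(list(isp.parameters()) + list(task_net.parameters()), lr=1e-2)

# Synthetic low-light batch: dark images; label 1 means a faint object is present.
raw = torch.rand(64, 1, 16, 16) * 0.05          # underexposed sensor data
labels = torch.randint(0, 2, (64,))
raw[labels == 1, :, 4:8, 4:8] += 0.05           # faint object signal

for _ in range(200):
    loss = nn.functional.cross_entropy(task_net(isp(raw)), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"learned gain={isp.log_gain.exp().item():.2f}, final task loss={loss.item():.3f}")
```

In a real system the gain/gamma stage would be a full differentiable (or proxied) ISP and the head a detection network, but the principle is the same: the gradient of the task loss, not a human viewing preference, decides how the raw sensor data is processed.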

The Ion Platform includes the Atlas camera tuning suite and the Eos embedded perception software. Atlas accelerates camera system tuning for better image quality and computer vision accuracy, shrinking today's manual tuning process from many months to weeks or even days. Eos is an end-to-end software stack that delivers robust and accurate perception across multiple sensors, with over 30% better accuracy in difficult conditions versus today's best public and commercial approaches. Eos can also reduce system costs by enabling the use of lower-cost components or even removing entire devices, such as the image signal processor (ISP) silicon.
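
For intuition on how automated tuning can compress months of expert parameter tweaking, here is a hedged sketch in plain Python (the parameter names, ranges, and score function are illustrative assumptions, not Atlas's real interface): treat the ISP as a black box and search its parameter space against a computer-vision score measured on a validation set.

```python
import random

# Hypothetical ISP parameters and ranges, for illustration only.
PARAM_SPACE = {
    "denoise": (0.0, 1.0),
    "sharpen": (0.0, 2.0),
    "gamma":   (0.5, 3.0),
}

def cv_score(params):
    """Placeholder for the real evaluation: run the ISP with these parameters
    over a validation set and return detector accuracy (e.g. mAP). Here it is
    a toy function with a single interior optimum."""
    return -((params["denoise"] - 0.2) ** 2
             + (params["sharpen"] - 1.1) ** 2
             + (params["gamma"] - 2.0) ** 2)

def random_search(n_trials=500, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_SPACE.items()}
        score = cv_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, score = random_search()
print({k: round(v, 2) for k, v in best.items()}, round(score, 3))
```

In practice a sample-efficient optimizer such as Bayesian optimization or CMA-ES would replace plain random search, since every evaluation means re-processing an image set and re-scoring the perception stack, but the shift is the same: an objective measured on computer-vision accuracy drives the tuning loop instead of subjective visual review.
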
You can learn more about Ion here, and please contact us if you would like to see it for yourself.