Building an autonomous vehicle from scratch is not easy. Teams typically start by analyzing the driving scenarios they want to support, then design the system: they select the sensors, identify drive-by-wire (DBW) components and controllers, make localization tradeoffs, design planning and control, and develop the DBW interface. They acquire the components, test them individually, and then assemble the full system on a vehicle. After testing the vehicle through its basic motions, they let it go on the track. Track operation usually takes many development cycles before the system handles within the target accuracy range. After several cycles the vehicle may be ready for open roads.

Street environments are a completely different level of complexity. Navigating a simple four-way stop can take many weeks of development cycles. Problems with the perception system can impact planning, and planning algorithm adjustments may be required to focus the perception system on the relevant areas. The perception system needs to be designed to meet the objectives of the entire system. If the sensors are not optimized for these objectives, teams often iterate or try new sensors, wasting valuable time. And if the perception system itself is not highly accurate and aligned with the system objectives, the safety driver will be kept busy disengaging to avert disasters.
We believe that deep learning can be used not only to improve perception of the environment but also in the design process itself. Ion is a platform and methodology that enables engineering teams to design optimized vision systems with deep learning. It allows automatic tuning of camera architectures and yields more robust perception systems, providing increased performance, improved system effectiveness, and reduced risk for vision systems. The Ion platform includes the Atlas camera tuning suite and the Eos embedded perception software. Atlas accelerates sensor-system tuning and enables better image quality and computer vision. Eos is an end-to-end software stack that supports high-accuracy perception across multiple sensors. With Ion, teams can design an end-to-end vision system integrating optics, sensors, imaging processors, and computer vision tasks for better optimization and performance: Atlas can tune a camera system for better computer vision, and Eos can enable better perception on that same system. Optimization and performance analysis can be performed on the entire vision system, which accelerates development and results in a system that is more accurate and robust.
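The core idea of tuning a camera pipeline against a perception objective, rather than a standalone image-quality metric, can be sketched in a few lines. Everything below is a hypothetical toy, a gamma-only ISP stage, a surrogate task loss, and a simple grid search; it is not Atlas's actual method, which this document does not describe:

```python
import numpy as np

def isp(raw, gamma):
    """Toy ISP stage: gamma correction of a normalized raw image."""
    return np.clip(raw, 0.0, 1.0) ** gamma

def perception_loss(image, target):
    """Surrogate task loss: distance between the processed image and
    what the (hypothetical) downstream detector performs best on."""
    return float(np.mean((image - target) ** 2))

def tune_gamma(raw, target, candidates):
    """Search the ISP parameter against the task loss, instead of
    hand-tuning it against a human-judged image-quality metric."""
    return min(candidates, key=lambda g: perception_loss(isp(raw, g), target))

rng = np.random.default_rng(0)
raw = rng.random((16, 16))
target = raw ** 0.45  # pretend the detector prefers gamma ~0.45
candidates = np.linspace(0.2, 2.0, 37)
best_gamma = tune_gamma(raw, target, candidates)
```

In a real system the scalar parameter would be replaced by the full set of optics, sensor, and ISP knobs, and the surrogate loss by the accuracy of the actual perception network, but the structure of the optimization loop is the same.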