Our mission is to enable autonomous vision – empowering cameras to see more clearly and perceive what cannot be sensed with today’s imaging and vision systems.
Algolux graduated from technology accelerator TandemLaunch.
Cameras are finding their way into an increasing number of industries and products, with new designs driven by novel sensors and complex imaging algorithms. Whether the cameras are in smartphones, vehicles, drones, or connected IoT products, integrating optics, sensors, image processors, and computer vision tasks is an unending challenge. Tuning each of these heterogeneous systems is a critical step toward ensuring system performance; however, it requires expertise that is hard to come by and may not be core to a vendor's resourcing strategy. As traditional imaging and computer vision systems converge, tuning for image quality is becoming more complex, costly, and resource-hungry. Furthermore, traditional image processors were not designed for challenging computer vision cases and therefore yield suboptimal results.
We address this opportunity with CRISP-ML, a desktop tool that uses machine learning to automatically optimize your full imaging and vision system. CRISP-ML combines large real-world computer vision training data sets with standards-based metrics and chart-driven Key Performance Indicators (KPIs) to holistically improve the performance of your vision systems, across combinations of components and operating conditions previously deemed infeasible. By exploiting our innovative machine learning approach, CRISP-ML automates tuning steps that are otherwise painful, costly, and time-consuming.
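To make the idea concrete (as a hedged illustration only, not Algolux's actual algorithm), automatic tuning of this kind can be framed as black-box optimization: search an image processor's parameter space for the setting that maximizes a chosen KPI. The two-parameter "ISP", the KPI, and the random-search optimizer below are all invented for the sketch; a production tool would use a far smarter optimizer and real image-quality metrics.

```python
import random

def toy_isp(raw, gain, gamma):
    """Hypothetical 2-parameter ISP: apply gain, clip, then gamma, on pixels in [0, 1]."""
    return [min(1.0, p * gain) ** gamma for p in raw]

def kpi(processed, reference):
    """Toy KPI: negative mean absolute error vs. a reference rendition (higher is better)."""
    return -sum(abs(p - r) for p, r in zip(processed, reference)) / len(reference)

def tune(raw, reference, iterations=2000, seed=0):
    """Random search over (gain, gamma) to maximize the KPI."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(iterations):
        params = (rng.uniform(0.5, 4.0), rng.uniform(0.3, 3.0))
        score = kpi(toy_isp(raw, *params), reference)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

raw = [0.1, 0.2, 0.3, 0.4]
# Target rendition produced by gain=2.0, gamma=0.8; the optimizer should recover a
# setting whose output closely matches it.
reference = [min(1.0, p * 2.0) ** 0.8 for p in raw]
(gain, gamma), score = tune(raw, reference)
```

The same loop structure applies whether the KPI is a chart-based metric or a score from a downstream detection task; only the evaluation function changes.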
Taking this one step further, we envision a world where traditional ISPs will be replaced by artificial intelligence. Using deep learning, we have conceived a unified neural architecture that integrates the image formation model (optics, sensor) with the high-level computer vision model. CANA, our Camera-Aware Neural Architecture, is a full-stack solution that provides significant improvement in imaging and perception, especially in the harshest conditions such as low light and adverse weather. Furthermore, CANA is modular in that it can be integrated with third-party perception systems and across a range of power profiles, from high-performance 250 W central computing platforms to sub-1 W edge processors.
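The kind of modularity described above can be pictured as a camera-aware front-end that exposes a standard feature interface, so different perception back-ends can be swapped in behind it. This is a sketch only; CANA's real interfaces are not described here, and every name and component below is a toy stand-in.

```python
from typing import Callable, List

Image = List[float]  # stand-in for an image or feature tensor

def camera_aware_frontend(raw: Image, noise_floor: float = 0.02) -> Image:
    """Hypothetical front-end: models a sensor property (here, just a noise
    floor) and emits features a downstream perception model can consume."""
    return [max(0.0, p - noise_floor) for p in raw]

def build_pipeline(perception: Callable[[Image], str]) -> Callable[[Image], str]:
    """Compose the shared front-end with any third-party perception back-end."""
    return lambda raw: perception(camera_aware_frontend(raw))

# Two interchangeable "third-party" back-ends (toy stand-ins).
detector_a = lambda feats: "object" if sum(feats) > 0.5 else "background"
detector_b = lambda feats: "bright" if max(feats) > 0.3 else "dark"

pipeline_a = build_pipeline(detector_a)
pipeline_b = build_pipeline(detector_b)
```

Because the front-end and back-end only meet at the feature interface, the same front-end can in principle be paired with models sized for very different compute budgets.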