3D Reconstruction, 3D Vision, Computer Vision, Event-Based Processing, Neuromorphic Computing, Research, Robotics

Neuromorphic Event-Based Generalized Time-Based Stereovision

Abstract

3D reconstruction from multiple viewpoints is an important problem in machine vision that allows recovering three-dimensional structure from multiple two-dimensional views of a given scene. Reconstruction from multiple views is conventionally achieved through a process of pixel luminance-based matching between the different views. Unlike conventional machine vision methods, which resolve matching ambiguities using only spatial constraints and luminance, this paper introduces a fully time-based solution to stereovision that exploits the high temporal resolution of neuromorphic asynchronous event-based cameras. These cameras output dynamic visual information and luminance encoded in time, which allows stereovision to be formulated as a pure event coincidence detection problem. We introduce a methodology for time-based stereovision in binocular and trinocular configurations, using a time-based event-matching criterion that, for the first time, combines space, time, luminance, and motion.
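To make the idea of event coincidence detection concrete, the following is a minimal sketch (not the paper's actual algorithm) of matching events from two rectified event cameras by temporal coincidence. All names (`Event`, `match_events`, `dt_max`) are hypothetical; a real criterion would also incorporate luminance and motion cues as the abstract describes.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row (rectified: epipolar lines are rows)
    t: float        # timestamp in microseconds
    polarity: int   # +1 (ON) or -1 (OFF)

def match_events(left, right, dt_max=500.0):
    """Match each left-camera event to the right-camera event on the
    same rectified row with the same polarity whose timestamp is
    closest, within a coincidence window of dt_max microseconds."""
    matches = []
    for el in left:
        best, best_dt = None, dt_max
        for er in right:
            if er.y == el.y and er.polarity == el.polarity:
                dt = abs(er.t - el.t)
                if dt <= best_dt:
                    best, best_dt = er, dt
        if best is not None:
            matches.append((el, best))
    return matches

# Example: one left event and two right-camera candidates on the same row;
# only the temporally coincident one (t = 1020 us) is matched.
left = [Event(x=10, y=5, t=1000.0, polarity=1)]
right = [Event(x=7, y=5, t=1020.0, polarity=1),
         Event(x=9, y=5, t=4000.0, polarity=1)]
pairs = match_events(left, right)
disparity = pairs[0][0].x - pairs[0][1].x  # horizontal disparity for depth
```

Because event timestamps have microsecond resolution, genuine stereo correspondences produce nearly simultaneous events, so a tight coincidence window already prunes most false matches before any spatial reasoning is applied.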




