While teleimmersion has great potential, existing algorithms for accurate stereo reconstruction require several seconds to several minutes to process a single image pair, far too slow for real-time use. Available FPGA and GPU implementations have inherent difficulty reconstructing homogeneous regions or regions with repeating patterns. Time-of-flight cameras suffer from low resolution, limited range, high noise, and albedo sensitivity. A practical real-time stereo reconstruction method is therefore needed for a system that enables geographically distributed users to interact with each other in a shared virtual space.
University of California investigators have responded to this challenge by developing INTI Multiview, a real-time stereo reconstruction system for 3D teleimmersion. With INTI Multiview, each user is represented by a 3D avatar generated in real time. INTI Multiview focuses primarily on integrating multiple stereo reconstructions from different views. Levels of accuracy comparable to other methods are achieved at much higher speed on a CPU by taking a hybrid approach: a local optimization technique (region matching) produces an initial result, which a global optimization approximation (anisotropic diffusion) then refines. The investigators have also implemented a novel multiscale representation that allows for highly accurate reconstruction of a scene. INTI Multiview has been successfully tested in many application areas, such as remote dance choreography, shared geoscientific and archeological applications, and training. This work extends naturally to other applications where real-time stereo data is necessary, e.g. full-body motion capture, surveillance and tracking, foreground/background segmentation, autonomous vehicle control, and markerless 3D reconstruction for human movement analysis (motion capture, visual feedback for gaming, rehabilitation, etc.).
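To make the hybrid approach concrete, the following is a minimal NumPy sketch of the two stages named above: a local block-matching pass that produces an initial disparity map, followed by Perona-Malik anisotropic diffusion as the refinement step. This is an illustrative toy, not INTI Multiview's actual implementation; all function names, window sizes, and parameters here are assumptions chosen for the example.

```python
import numpy as np

def block_match(left, right, max_disp=8, win=3):
    # Local optimization stage (assumed example): sum-of-absolute-differences
    # (SAD) block matching along the same scanline yields an initial
    # disparity estimate per pixel.
    h, w = left.shape
    pad = win // 2
    L = np.pad(left, pad, mode="edge")
    R = np.pad(right, pad, mode="edge")
    disp = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cost = np.abs(patch - R[y:y + win, x - d:x - d + win]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def anisotropic_diffusion(img, iters=10, kappa=2.0, lam=0.2):
    # Refinement stage: Perona-Malik diffusion smooths the disparity map
    # while the conductance term exp(-(grad/kappa)^2) preserves depth edges.
    out = img.astype(float).copy()
    for _ in range(iters):
        upd = np.zeros_like(out)
        for axis, shift in ((0, -1), (0, 1), (1, -1), (1, 1)):
            grad = np.roll(out, shift, axis=axis) - out
            upd += np.exp(-(grad / kappa) ** 2) * grad
        out += lam * upd
    return out

# Toy demo: a random-texture scene shifted by a known disparity of 3 px.
rng = np.random.default_rng(0)
left = rng.random((20, 30))
right = np.roll(left, -3, axis=1)  # left pixel x corresponds to right pixel x-3
disp = block_match(left, right)
smooth = anisotropic_diffusion(disp)
```

Away from the image borders, the block matcher recovers the known 3-pixel shift exactly, and the diffusion pass then regularizes the map without blurring genuine depth discontinuities, which is the intuition behind combining a fast local method with a global-optimization approximation.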