Autostereoscopic displays hold great promise as the future of 3D technology. These displays spatially multiplex many views onto a screen, providing a more immersive experience by enabling users to look around the virtual scene from different angles. A parallax barrier, which physically occludes certain pixels, or a lenticular sheet, which distributes light in different directions, is fixed to the screen, obviating the need for 3D glasses.
Multiview screens commonly display 8 images, and future screens will likely display even more. The most difficult task for multiview imagery is capturing content from so many viewpoints. Using many cameras is complex, expensive, requires bulky equipment, and is impossible in certain applications such as medicine. Capturing imagery from two cameras, however, is much simpler and can be done cheaply with a handheld device. If we can generate many views from a stereo image pair, then we can circumvent the hardware problem.
The lab has previously developed a view synthesis algorithm that is currently the top performer in its class. That method provides excellent results but is slow, owing to expensive graph-cuts optimization and mean-shift segmentation. Here we present an alternative designed for efficiency, which is critical for many synthesis applications.
The inputs to the algorithm are a stereo image pair and associated disparity maps. Our method consists of four main steps: generating an initial view, refining it, filling holes in the disparity map, and filling holes in the color image.
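The first step, generating an initial view, is commonly done by forward-warping one input image according to its disparity map; the pixels that receive no source sample become the holes addressed in the later steps. The sketch below is a minimal illustration of that idea, not the lab's actual implementation: the function name `synthesize_view`, the disparity sign convention, and the z-buffer occlusion handling are all assumptions for this example.

```python
import numpy as np

def synthesize_view(left, disp, alpha=0.5):
    """Forward-warp the left image to an intermediate viewpoint.

    left  : H x W x 3 color image
    disp  : H x W disparity map (pixels, left-to-right convention assumed)
    alpha : virtual camera position between left (0.0) and right (1.0)
    Returns the warped image and a boolean hole mask (True where no
    source pixel landed).
    """
    h, w = disp.shape
    out = np.zeros_like(left)
    depth = np.full((h, w), -np.inf)      # z-buffer: keep largest disparity (nearest)
    filled = np.zeros((h, w), dtype=bool)

    ys, xs = np.mgrid[0:h, 0:w]
    # Shift each pixel by a fraction of its disparity toward the other view.
    xt = np.round(xs - alpha * disp).astype(int)
    valid = (xt >= 0) & (xt < w)

    for y, x, d, xn in zip(ys[valid], xs[valid], disp[valid], xt[valid]):
        if d > depth[y, xn]:              # closer pixels overwrite farther ones
            depth[y, xn] = d
            out[y, xn] = left[y, x]
            filled[y, xn] = True

    return out, ~filled
```

The hole mask returned here is exactly what the subsequent refinement and hole-filling stages would operate on; a second warp from the right image can fill many of these holes before any inpainting is needed.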
This work has applications in free-viewpoint television, angular scalability for 3D video coding/decoding, and stereo-to-multiview conversion.
The general concepts of exploiting disparity information and inter-view correlation, and of enforcing color consistency, are maintained while speed is greatly increased through more efficient algorithms. Results are comparable to current state-of-the-art methods in terms of objective measures, while computation time is drastically reduced.
Related patents:
- United States of America, Issued Patent 10,148,930, 12/04/2018 (2011-347)
- United States of America, Issued Patent 9,401,041, 07/26/2016 (2011-347)