A core problem in computer vision and graphics is recovering a model of the attributes that create an image. This model comprises the intrinsic scene properties: the shapes, paints, and lights that together form the image. Conventional methods for recovering intrinsic scene properties rely on multiple observations of the same scene to over-constrain the problem; recovering these properties from a single image is vastly more difficult.
To address this challenge, researchers at UC Berkeley have developed algorithms that infer the most likely intrinsic scene properties of a single image. The researchers have also developed corresponding software that can re-render the scene with adjusted properties, such as a different viewpoint, paints, lights, or shapes.
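The Berkeley algorithm itself is not detailed here, but the core idea of splitting a single image into intrinsic components can be sketched with a classic Retinex-style heuristic: assume the image is the product of reflectance ("paint") and shading (shape plus light), attribute large log-gradients to reflectance edges and small ones to smooth shading, then reintegrate. This is a toy illustration, not the Berkeley method; the function name, threshold, and iteration count are all illustrative choices.

```python
import numpy as np

def retinex_decompose(image, grad_thresh=0.1, iters=300):
    """Toy Retinex-style intrinsic decomposition (illustrative only).

    Assumes image = reflectance * shading for a grayscale image, and
    that strong log-gradients belong to reflectance while weak ones
    belong to smooth shading. Returns (reflectance, shading).
    """
    log_i = np.log(np.clip(image.astype(float), 1e-6, None))

    # Forward differences of the log-image (last row/column stay zero).
    gx = np.zeros_like(log_i)
    gy = np.zeros_like(log_i)
    gx[:, :-1] = np.diff(log_i, axis=1)
    gy[:-1, :] = np.diff(log_i, axis=0)

    # Keep only strong gradients as reflectance gradients.
    rx = np.where(np.abs(gx) > grad_thresh, gx, 0.0)
    ry = np.where(np.abs(gy) > grad_thresh, gy, 0.0)

    # Divergence of the kept gradient field.
    div = rx + ry
    div[:, 1:] -= rx[:, :-1]
    div[1:, :] -= ry[:-1, :]

    # Reintegrate log-reflectance by Jacobi iterations on the Poisson
    # equation laplacian(log_r) = div, with replicated-edge boundaries.
    log_r = np.zeros_like(log_i)
    for _ in range(iters):
        p = np.pad(log_r, 1, mode="edge")
        log_r = (p[:-2, 1:-1] + p[2:, 1:-1]
                 + p[1:-1, :-2] + p[1:-1, 2:] - div) / 4.0

    # log_r is only determined up to a constant; match the image's mean.
    log_r += log_i.mean() - log_r.mean()
    reflectance = np.exp(log_r)
    shading = image / reflectance
    return reflectance, shading
```

Even this simple heuristic makes the trade-off visible: the threshold decides which image variation is explained as paint and which as light, which is exactly the kind of ambiguity a single-image method must resolve.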
Potential applications include computer vision and robotics tasks involving object recognition and segmentation, as well as visual effects and Photoshop-style image editing.
In comparison to existing approaches, the Berkeley solution is the first to recover all intrinsic scene properties at once, making it a practical solution. Moreover, the Berkeley solution can be enhanced with depth data from 3D scanners.