Robust Visual-Inertial Sensor Fusion For Navigation, Localization, Mapping, And 3D Reconstruction

Tech ID: 27401 / UC Case 2015-346-0

Summary

UCLA researchers in the Computer Science Department have invented a novel model for a visual-inertial system (VINS) for navigation, localization, mapping, and 3D reconstruction applications.

Background

Vision-augmented navigation, or VINS, is central to augmented and virtual reality, robotics, autonomous vehicles, and navigation on mobile phones. The future growth of these applications depends on reliable navigation in dynamic environments, so improving these systems is important. Current methods rely on low-level processing of visual data for 3D motion estimation. However, much of this processing is wasted: easily 60–90% of the sparse features selected and tracked across frames are inconsistent with a single rigid motion, due to illumination effects, occlusions, and independently moving objects. These effects are global to the scene, while low-level processing is local to the image, so significant improvements cannot realistically be expected from the vision front-end alone. Instead, it is critical for vision-based algorithms to leverage other sensory modalities, such as inertial measurement.
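The outlier statistic above can be made concrete with a toy experiment (illustrative only, not the patented method): when most tracked features disagree with any single rigid motion, a robust estimator such as RANSAC is needed to separate the consistent subset. The sketch below fits a 2D rigid motion (rotation plus translation) to feature correspondences where 70% are gross outliers; all names and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rigid_fit(p, q):
    """Least-squares 2D rigid motion q ~ R @ p + t (Kabsch/Procrustes)."""
    mp, mq = p.mean(0), q.mean(0)
    H = (p - mp).T @ (q - mq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, mq - R @ mp

def ransac_rigid(p, q, iters=200, tol=0.05):
    """Return a boolean mask of correspondences consistent with one rigid motion."""
    best_mask = None
    for _ in range(iters):
        idx = rng.choice(len(p), 2, replace=False)   # minimal 2-point sample
        R, t = rigid_fit(p[idx], q[idx])
        err = np.linalg.norm((p @ R.T + t) - q, axis=1)
        mask = err < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# 100 tracked features: 30 follow the camera's rigid motion, 70 do not
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -0.2])
p = rng.uniform(-1, 1, (100, 2))
q = p @ R_true.T + t_true
q[30:] += rng.uniform(-1, 1, (70, 2))            # 70% gross outliers
inliers = ransac_rigid(p, q)
```

Even with a 70% outlier rate, the robust fit recovers the small consistent subset; this is the regime a VINS front-end must tolerate.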

Innovation

Researchers led by Professor Stefano Soatto have developed a novel sensor fusion system that integrates inertial and visual measurements to estimate the platform's 3D position and orientation, along with a point-cloud model of the surrounding 3D world. The invention achieves better robustness and performance than other leading VINS schemes, such as Google Tango, at the same computational footprint. This technology addresses the problem of inferring the ego-motion of a sensor platform from visual and inertial measurements, with a focus on handling outliers.
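The fusion principle can be sketched with a minimal example (illustrative only, not the patented algorithm): inertial readings are integrated to predict motion, and vision-based position fixes correct the accumulated drift. The 1D Kalman filter below shows this predict/update structure; a real VINS operates on full 3D pose with feature tracks, but the skeleton is the same. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01
F = np.array([[1, dt], [0, 1]])     # state transition for [position, velocity]
B = np.array([0.5 * dt**2, dt])     # effect of acceleration input
Q = 1e-4 * np.eye(2)                # process noise (IMU integration drift)
R_vis = 0.01                        # vision measurement noise variance
H = np.array([[1.0, 0.0]])          # vision observes position only

x = np.zeros(2)                     # filter state estimate
P = np.eye(2)                       # state covariance
truth = np.zeros(2)
for k in range(1000):
    a_true = np.sin(0.01 * k)                         # true acceleration
    truth = F @ truth + B * a_true
    a_meas = a_true + 0.1 * rng.standard_normal()     # noisy accelerometer
    # predict: dead-reckon with the IMU (drifts without correction)
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q
    # update: occasional vision-based position fix cancels the drift
    if k % 10 == 0:
        z = truth[0] + np.sqrt(R_vis) * rng.standard_normal()
        S = H @ P @ H.T + R_vis                       # innovation covariance
        K = P @ H.T / S                               # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P

pos_err = abs(x[0] - truth[0])
```

Inertial data alone drifts quadratically in position, while vision alone is noisy and outlier-prone; fusing the two keeps the estimate bounded, which is the core appeal of a visual-inertial system.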

Applications

  • Augmented and virtual reality
  • Robotics
  • Autonomous vehicles and flying robots
  • Indoor localization in GPS-denied areas
  • Ego-motion estimation

Advantages

  • Uses integrated inertial and vision measurements
  • Improved robustness and performance
  • Focuses on handling outliers

Patent Status

Country                    Type                   Number       Dated       Case
United States Of America   Published Application  20190236399  08/01/2019  2015-346

Inventors

  • Soatto, Stefano

Keywords

Visual-inertial systems, VINS, vision-augmented navigation, vision-aided navigation, vision-based navigation, augmented reality, virtual reality, robotics, autonomous vehicles, autonomous flying robots, indoor localization, location recognition