Automated Activity Classification Of Video From Body Worn Cameras

Tech ID: 29299 / UC Case 2017-970-0

Summary

UCLA researchers in the Department of Mathematics have developed an approach to classify different ego-motion categories from body-worn video.

Background

Portable cameras record dynamic first-person video, and this footage carries information about the motion of the individual on whom the camera is mounted. Body-worn sensors have likewise proven effective for categorizing human actions and activities. The global displacement between successive frames offers a natural way to aggregate the wearer's overall motion while marginalizing out local outlier motion in the scene, as sketched below.
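
This listing does not include the inventors' implementation, so the following is only an illustrative sketch of the general idea. It estimates a whole-frame translation between consecutive frames via phase correlation, a standard technique assumed here in place of whatever parametric model the inventors actually use; the function `global_displacements` and the OpenCV-based pipeline are hypothetical.

```python
import cv2
import numpy as np

def global_displacements(video_path):
    """Yield the (dx, dy) global shift between consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        cap.release()
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Phase correlation estimates a single whole-frame translation,
        # aggregating global motion while largely ignoring small,
        # localized (outlier) motion in the scene.
        (dx, dy), _response = cv2.phaseCorrelate(prev_gray, gray)
        yield dx, dy
        prev_gray = gray
    cap.release()
```

Collecting these per-frame shifts over a clip produces a motion signal whose summary statistics can serve as a low-dimensional feature for classification, as in the sketch in the Innovation section below.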

Innovation

The inventors have developed an approach for classifying ego-motion categories from body-worn video. A parametric model of the global motion between frames yields a simple, low-dimensional representation of the camera wearer's movement, which can then be classified using novel graph-based semi-supervised and unsupervised learning algorithms. These algorithms are motivated by a PDE-based image segmentation method and achieve high accuracy and efficiency across different data sets. The invention also includes technology for quantifying the uncertainty of the resulting video classifications. A generic sketch of a pipeline in this style follows.
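
The patented graph algorithms themselves are not described in this listing, so the sketch below stands in with a generic graph-based semi-supervised baseline: scikit-learn's LabelSpreading over a k-NN graph, applied to a hypothetical 4-D summary of each clip's global-motion signal. The feature definition, the synthetic "walking"/"standing" clips, and all parameter choices are assumptions for illustration, not the inventors' method.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

def motion_features(displacements):
    """Summarize a clip's per-frame (dx, dy) global-motion signal in 4-D."""
    d = np.asarray(displacements)
    speed = np.linalg.norm(d, axis=1)
    return np.array([speed.mean(), speed.std(), d[:, 0].mean(), d[:, 1].mean()])

# Synthetic stand-in clips: "walking" drifts steadily, "standing" only jitters.
walk = [rng.normal([1.0, 0.0], 0.2, size=(60, 2)) for _ in range(20)]
stand = [rng.normal([0.0, 0.0], 0.2, size=(60, 2)) for _ in range(20)]
X = np.array([motion_features(d) for d in walk + stand])
y_true = np.array([0] * 20 + [1] * 20)

# Hide most labels (-1 marks unlabeled clips), keeping a seed per class.
y_partial = np.where(rng.random(40) < 0.8, -1, y_true)
y_partial[0], y_partial[20] = 0, 1

# Propagate the known labels over a k-NN similarity graph.
model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
print((model.transduction_ == y_true).mean())  # transductive accuracy
```

The design point this mirrors is that classification happens on a graph built over low-dimensional motion representations, so only a small fraction of clips needs hand labeling.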

Applications

  • Police video indexing and analysis
  • Classification of video for sports, activity, object, and gesture recognition

Advantages

  • High accuracy
  • High efficiency
  • Novel learning algorithm

Patent Status

Patent Pending

Inventors

  • Bertozzi, Andrea L.

Other Information

Keywords

Ego-motion, video, classification, learning algorithm, recognition
