UCLA researchers in the Department of Mathematics have developed an approach to classify different ego-motion categories from body-worn video.
Portable cameras record dynamic first-person video, and these recordings contain information about the motion of the individual on whom the camera is mounted. Work with body-worn sensors has also been shown to be effective for categorizing human actions and activities. The global displacement between successive frames offers a way to aggregate global motion while marginalizing out outlier local motion.
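A minimal sketch of the aggregation idea described above, assuming per-feature local displacement vectors between two frames (e.g., from sparse optical flow) are already available; the function name and data are illustrative, not from the invention itself. Taking a coordinate-wise median is one simple way to marginalize out outlier local motion:

```python
def global_displacement(local_vectors):
    """Aggregate local (dx, dy) displacement vectors into one global
    displacement by taking the coordinate-wise median, which suppresses
    outlier vectors that disagree with the dominant camera motion."""
    def median(values):
        s = sorted(values)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

    dxs = [v[0] for v in local_vectors]
    dys = [v[1] for v in local_vectors]
    return (median(dxs), median(dys))

# Most features shift right by about 2 px; one outlier moves the other way.
vectors = [(2.0, 0.1), (2.1, 0.0), (1.9, -0.1), (2.0, 0.0), (-8.0, 5.0)]
print(global_displacement(vectors))  # -> (2.0, 0.0)
```

The mean would be pulled toward the outlier; the median ignores it, which is the sense in which local outlier motion is marginalized out.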
The inventors' approach classifies ego-motion categories from body-worn video. A parametric model computes a simple global representation of motion, yielding a low-dimensional representation of ego-motion that can then be classified using novel graph-based semi-supervised and unsupervised learning algorithms. These algorithms are motivated by a PDE-based image segmentation method and achieve high accuracy and efficiency across a variety of data sets. The invention also includes technology that provides a measure of uncertainty for the video classification.
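To illustrate the graph-based semi-supervised setting, here is a simplified sketch: a few motion descriptors are labeled, the rest are inferred by propagating labels over a similarity graph. This is a generic majority-vote label propagation, not the invention's PDE-motivated algorithm; the graph, seed labels, and class names are hypothetical:

```python
def propagate_labels(adjacency, seeds, iterations=10):
    """adjacency: dict mapping each node to its list of neighbors.
    seeds: dict mapping a few labeled nodes to their class.
    Unlabeled nodes repeatedly adopt the majority class among their
    labeled neighbors; seed labels are kept fixed throughout."""
    current = dict(seeds)
    for _ in range(iterations):
        updated = dict(current)
        for node, neighbors in adjacency.items():
            if node in seeds:  # never overwrite a seed label
                continue
            votes = {}
            for nb in neighbors:
                lab = current.get(nb)
                if lab is not None:
                    votes[lab] = votes.get(lab, 0) + 1
            if votes:
                # majority vote; ties broken alphabetically for determinism
                updated[node] = max(sorted(votes), key=votes.get)
        current = updated
    return current

# Two clusters of frames, one labeled example per ego-motion class.
graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"],
}
seeds = {"a": "walk", "f": "bike"}
print(propagate_labels(graph, seeds))
```

Each cluster ends up carrying the label of its single seed, showing how a handful of labeled examples can classify a whole graph of unlabeled motion descriptors.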