Researchers led by Tzung Hsiai from the David Geffen School of Medicine at UCLA have developed a way to visualize moving objects using virtual reality.
Over the past decade, the integration of VR technology into surgical training has improved clinical and procedural outcomes. However, generating virtual environments for the cardiac and pulmonary systems has proved difficult because they require a 4D representation (3D + time) that captures cyclic deformation. Capturing a moving object in 4D requires imaging at a high frame rate (e.g., 10 Hz), followed by object detection (segmentation) to identify the moving object in each frame. The current gold standard remains segmentation by hand, which is time- and labor-intensive. It can also produce inaccurate and inconsistent animations because manual detection varies from frame to frame.
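To make the per-frame segmentation step concrete, here is a minimal, hypothetical sketch (not the inventors' method) that builds a synthetic 4D stack of ten volumes, segments the bright moving object in each frame by simple intensity thresholding, and tracks its centroid over time. All names and parameters are illustrative.

```python
import numpy as np

# Hypothetical sketch: segment a moving bright object in each frame of a
# synthetic 4D stack shaped (time, z, y, x). This stands in for the manual,
# frame-by-frame segmentation described above.
rng = np.random.default_rng(0)
n_frames, size = 10, 32  # e.g., 10 Hz capture over one second

frames = []
for t in range(n_frames):
    vol = rng.normal(0.0, 0.1, (size, size, size))  # background noise
    c = 10 + t                                      # object drifts 1 voxel/frame
    vol[c - 3:c + 3, 12:18, 12:18] += 1.0           # bright moving cube
    frames.append(vol)
stack = np.stack(frames)                            # shape (10, 32, 32, 32)

# Per-frame segmentation: one binary mask per time point.
masks = stack > 0.5

# Track the object's centroid over time to inspect temporal consistency.
centroids = [np.argwhere(m).mean(axis=0) for m in masks]
for t, c in enumerate(centroids):
    print(t, np.round(c, 1))
```

Because each frame is thresholded independently, noise can make the mask (and hence the reconstructed surface) jitter between frames, which is the same consistency problem manual segmentation suffers from.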
Their methodology employs light-sheet microscopy to gather sequential images with high spatial and temporal resolution. A specialized algorithm then identifies a moving region of interest in each frame, overcoming the frame-to-frame inconsistencies of manual identification. This allows the object to be reconstructed with smooth, natural movement in an immersive VR environment. The algorithm reconstructs objects reliably and accurately when compared against the current standard of manual identification, and the inventors used it to produce a VR experience of a beating heart with unprecedented spatial and temporal resolution.
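One common way to enforce temporal consistency (suggested by the "deformable image registration" keyword below, though the inventors' actual algorithm is not public here) is to register consecutive frames and carry a segmentation forward rather than re-segmenting independently. The sketch below is a simplified, hypothetical stand-in: it estimates a rigid integer shift between two volumes via phase correlation and propagates the first frame's mask.

```python
import numpy as np

def estimate_shift(a, b):
    """Integer translation taking volume a to volume b, via phase correlation."""
    cross = np.fft.fftn(b) * np.conj(np.fft.fftn(a))
    corr = np.fft.ifftn(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (account for FFT wrap-around).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

size = 32
a = np.zeros((size, size, size))
a[8:14, 12:18, 12:18] = 1.0                       # object in frame t
b = np.roll(a, shift=(3, 0, 0), axis=(0, 1, 2))   # same object, moved, frame t+1

dz, dy, dx = estimate_shift(a, b)
print(dz, dy, dx)                                  # recovered displacement

mask_t = a > 0.5                                   # segmentation of frame t
mask_t1 = np.roll(mask_t, (dz, dy, dx), axis=(0, 1, 2))  # propagated mask
```

A real deformable registration would estimate a dense displacement field instead of a single rigid shift, but the principle is the same: frame-to-frame motion is estimated once, so the segmentation deforms smoothly instead of being redrawn inconsistently in every frame.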
Keywords: virtual reality, 4D, segmentation, deformable image registration, reconstruction, 3D