The majority of state‐of‐the‐art lung segmentation algorithms in the literature do not simultaneously segment the lungs, lung lobes, and airway within a single algorithm. Additionally, automated algorithms typically perform the segmentation task on a series of 2D slices, which can reduce segmentation accuracy for anatomical structures (e.g., the lung lobes) that may require contextual information across all three spatial dimensions. Many existing algorithms also have not been validated on chest CTs across a wide variety of conditions to evaluate algorithm generalizability.
Currently, quantification of respiratory measurements requires a radiologist, trained analyst, or technician to recognize, identify, and manually annotate anatomical landmarks such as the lung lobes or airway in the chest. A fully‐automated deep learning system may eliminate the need for manual analysis, thereby improving efficiency and expanding applicability to a large number of CTs.
Researchers from UC San Diego have developed a system and method utilizing 3D spatial information to simultaneously segment the lungs, lung lobes, and airway. This approach has been tested across chest CTs acquired from a highly diverse set of patients and imaging centers. One practical application of this technology is large-scale application to chest CT images in screening studies for the detection, prognosis, and visualization of pulmonary disease. The software visualizations of pulmonary disease severity also facilitate understanding of the spatial distribution of disease.
This software automatically segments anatomical features in chest CT, such as the lungs, lung lobes, and airway. The underlying algorithm is based on a convolutional neural network (CNN) deep learning architecture for the identification of solid organs and anatomical landmarks.
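To illustrate the general idea of CNN-based volumetric labeling, the sketch below shows 3D multi-class segmentation inference in PyTorch. The network layout, number of classes, and patch size are illustrative assumptions for demonstration only; the actual UC San Diego architecture is not described in detail here.

```python
# Minimal sketch of 3D multi-class segmentation inference (assumed setup,
# not the UC San Diego implementation).
import torch
import torch.nn as nn


class Tiny3DSegNet(nn.Module):
    """Toy 3D CNN: one downsampling encoder block, one upsampling decoder block."""

    def __init__(self, num_classes: int = 7):  # e.g., background, 5 lobes, airway (assumed)
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv3d(16, num_classes, 1),  # per-voxel class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(x))


if __name__ == "__main__":
    model = Tiny3DSegNet().eval()
    # One CT sub-volume: batch x channel x depth x height x width (arbitrary demo size)
    ct_patch = torch.randn(1, 1, 64, 128, 128)
    with torch.no_grad():
        logits = model(ct_patch)        # shape (1, 7, 64, 128, 128)
        labels = logits.argmax(dim=1)   # per-voxel anatomical label map
    print(labels.shape, labels.unique())
```

Because the model operates on full 3D sub-volumes rather than independent 2D slices, contextual information along all three spatial axes is available when assigning each voxel a label, which is the motivation stated above.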
The researchers have demonstrated the ability of this system to automatically estimate clinically meaningful metrics that are in strong agreement with those generated by the third-party medical imaging analysis platforms Thirona, VIDA, and Slicer.
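As an example of the kind of clinically meaningful metric such a system can produce, the sketch below derives per-lobe volumes in millilitres from a predicted label map. The label-to-lobe mapping and voxel spacing are hypothetical placeholders, not values taken from the source or from any of the named platforms.

```python
# Hedged sketch: converting a predicted label map into per-lobe volumes (mL).
# The label scheme and spacing values below are assumptions for illustration.
import numpy as np

LOBE_LABELS = {1: "left upper", 2: "left lower", 3: "right upper",
               4: "right middle", 5: "right lower"}  # hypothetical label scheme


def lobe_volumes_ml(label_map: np.ndarray, spacing_mm: tuple) -> dict:
    """Count voxels per lobe label and convert to millilitres using voxel spacing."""
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0  # mm^3 per voxel -> mL
    return {name: int((label_map == lbl).sum()) * voxel_ml
            for lbl, name in LOBE_LABELS.items()}


if __name__ == "__main__":
    # Synthetic example: a small labelled volume with 1 x 1 x 1 mm voxels.
    demo = np.random.randint(0, 6, size=(64, 128, 128))
    print(lobe_volumes_ml(demo, spacing_mm=(1.0, 1.0, 1.0)))
```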
Our deep learning system is currently in the working prototype stage of development.
UC San Diego is seeking partners to commercialize this technology as the segmentation model and pulmonary disease visualization derived from this technology have the potential to be utilized by imaging analysis platforms used in CT imaging.
lung segmentation algorithms, pulmonary disease, 3D spatial information, CT imaging