Browse Category: Imaging > Software

Systems and Methods of Single-Cell Segmentation and Spatial Multiomics Analyses

Researchers at the University of California, Davis have developed a novel cell segmentation technology that enables accurate analysis of non-spherical cells and offers a comprehensive, high-throughput approach for analyzing transcriptomic and metabolomic data to study complex biological processes at the single-cell level.

Machine Learning And Attention For Intelligent Sensing

A revolutionary approach to sensor data processing that leverages bio-inspired computing for intelligent sensing.

An Efficient Deep Learning Model For Single-Cell Segmentation And Tracking In Time-Lapse Microscopy

Time-lapse microscopy allows for direct observation of cell biological processes at the single-cell level with high temporal resolution. Quantitative analysis of single-cell time-lapse microscopy requires automated segmentation and tracking of individual cells over several days. Precise segmentation and tracking remain challenging because cells change their shape, divide, and move unpredictably.

Researchers at UC Santa Cruz applied recent advances in deep learning to the analysis of cellular images. The result is a deep-learning-based model and user-friendly software, termed DeepSea, that automates both the segmentation and tracking of individual cells in time-lapse microscopy images.
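The abstract does not spell out how the tracking step works, but the general pattern behind tools of this kind is to segment each frame first and then link the resulting cell masks across frames. The sketch below is a minimal, hypothetical illustration of that linking step using mask overlap (IoU); the function names, the greedy matching rule, and the 0.3 threshold are assumptions for illustration only, not DeepSea's actual algorithm.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean cell masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def link_cells(prev_masks, curr_masks, min_iou=0.3):
    """Greedy frame-to-frame linking: each cell in the current frame is
    matched to the previous-frame cell whose mask overlaps it the most."""
    links = {}
    for j, curr in enumerate(curr_masks):
        scores = [iou(prev, curr) for prev in prev_masks]
        best = int(np.argmax(scores)) if scores else -1
        if best >= 0 and scores[best] >= min_iou:
            links[j] = best      # continues the track of previous cell `best`
        else:
            links[j] = None      # new track: division product or cell entering the frame
    return links
```

Cells whose best overlap falls below the threshold would start new tracks, for example daughter cells after a division or cells entering the field of view.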

Using Virtual Tile Routing For Navigating Complex Transit Hubs

Many people have come to appreciate GPS-based navigation applications for street-level directions, yet many of the same users find these applications frustrating when navigating established public transportation systems. Travelers often become confused when trying to understand and act on directions from existing applications that focus primarily on large-scale street navigation, especially users with visual or cognitive impairments. Several existing applications make no attempt to guide someone through, say, a metro, train, or bus station, and instead simply tell the user the label of the route the application intends them to take. Without any small-scale directions, many people struggle to figure out which platform or boarding zone they need for their preferred mode of transportation, or how to reach that platform or boarding zone in the first place. Transit hubs, plazas, malls, and similar spaces have long been a pain point for developers and users alike when it comes to navigation. Innovation in small-scale transit plaza navigation is long overdue, and the major players in navigation, despite their large market shares, have not attempted to address this longstanding problem. The only existing application to offer indoor navigation provides limited and inconsistent functionality, including only two-dimensional indoor mapping, because it relies on manually uploaded floor plans that are available only from partnering locations. Adoption has lagged because each location must draw out its floor plan in an antiquated image file format and submit it for approval. Solving this problem would relieve considerable stress for people navigating unfamiliar areas and save time that could make the difference between a missed train and a nearly missed one.

Learned Image Compression With Reduced Decoding Complexity

The Mandt lab introduces a novel approach to neural image compression, significantly reducing decoding complexity while maintaining competitive rate-distortion performance.

Advanced Human Pose Recognition Technology

This technology revolutionizes human pose recognition by overcoming dataset and environmental limitations.

Advanced Imaging by LASER-Trained Algorithms Used to Process Broad-Field Light Photography and Videography

Diagnosing retinal disease, which affects over 200 million people worldwide, requires expensive and complicated analysis of the structure and function of retinal tissue. Recently, UCI developed a training algorithm that, for the first time, can assess tissue health from images collected using more common and less expensive optics.

MR-Based Electrical Property Reconstruction Using Physics-Informed Neural Networks

Electrical properties (EP), such as permittivity and conductivity, dictate the interactions between electromagnetic waves and biological tissue. EP are biomarkers for pathology characterization, such as cancer. Imaging of EP helps monitor tissue health and can provide important information in therapeutic procedures. Magnetic resonance (MR)-based electrical properties tomography (MR-EPT) uses MR measurements, such as the magnetic transmit field B1+, to reconstruct EP. These reconstructions rely on calculating spatial derivatives of the measured B1+. However, the numerical approximation of derivatives amplifies noise, introducing errors and artifacts into the reconstructions. Recently, a supervised learning-based method (DL-EPT) was introduced to reconstruct robust EP maps from noisy measurements. Still, the pattern-matching nature of this method does not allow it to generalize to new samples, since the network is trained on a limited number of simulated data pairs, making it unrealistic for clinical applications. Thus, there is a need for a robust and realistic method for EP map reconstruction.
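The title indicates a physics-informed neural network (PINN) approach. As a rough illustration of how such a method can avoid explicit numerical differentiation of noisy B1+ maps, the sketch below fits a small network jointly to the measured B1+ and to the homogeneous Helmholtz equation on which standard MR-EPT is based; the network architecture, the 3 T operating frequency, and the unweighted sum of loss terms are illustrative assumptions, not the inventors' implementation.

```python
import torch
import torch.nn as nn

MU0, EPS0 = 4e-7 * torch.pi, 8.854e-12
OMEGA = 2 * torch.pi * 128e6           # Larmor frequency at 3 T (assumed)

net = nn.Sequential(                    # (x, y, z) -> (Re B1+, Im B1+, eps_r, sigma)
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 4),
)

def laplacian(u, xyz):
    """Sum of second derivatives of the scalar field u w.r.t. the coordinates."""
    grad = torch.autograd.grad(u.sum(), xyz, create_graph=True)[0]
    lap = 0.0
    for i in range(3):
        lap = lap + torch.autograd.grad(grad[:, i].sum(), xyz, create_graph=True)[0][:, i]
    return lap

def loss(xyz, b1_meas):
    """xyz: (N, 3) sample points; b1_meas: (N, 2) measured Re/Im of B1+."""
    xyz = xyz.requires_grad_(True)
    out = net(xyz)
    b1_re, b1_im = out[:, 0], out[:, 1]
    eps = EPS0 * torch.nn.functional.softplus(out[:, 2])   # keep permittivity > 0
    sigma = torch.nn.functional.softplus(out[:, 3])        # keep conductivity > 0
    # residual of  lap(B1+) + w^2 * mu0 * (eps - i*sigma/w) * B1+ = 0, split into Re/Im
    k_re, k_im = OMEGA**2 * MU0 * eps, -OMEGA * MU0 * sigma
    res_re = laplacian(b1_re, xyz) + k_re * b1_re - k_im * b1_im
    res_im = laplacian(b1_im, xyz) + k_re * b1_im + k_im * b1_re
    data_fit = ((b1_re - b1_meas[:, 0])**2 + (b1_im - b1_meas[:, 1])**2).mean()
    physics = (res_re**2 + res_im**2).mean()
    return data_fit + physics
```

Because the derivatives in the physics term are taken analytically through the network rather than numerically on the noisy measurements, the noise-amplification problem described above is sidestepped; in practice the data-fit and physics terms would be weighted and boundary regions handled more carefully.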

(SD2023-060) Penalized Reference Matching algorithm with Stimulated Raman Scattering (PRM-SRS) microscopy: Multi-Molecular Detection of Hyperspectral images

Lipids play crucial roles in many biological processes under physiological and pathological conditions. Mapping spatial distribution and examining metabolic dynamics of different lipids in cells and tissues in situ are critical for understanding aging and diseases. Commonly used imaging methods, including mass spectrometry-based technologies or labeled imaging techniques, tend to disrupt the native environment of cells/tissues and have limited spatial or spectral resolution, while traditional optical imaging techniques still lack the capacity to distinguish chemical differences between lipid subtypes.

Machine Learning-Based Monte Carlo Denoising

Brief description not available

Deep Learning Techniques For In Vivo Elasticity Imaging

Imaging the material property distribution of solids has a broad range of applications in materials science, biomechanical engineering, and clinical diagnosis. For example, as various diseases progress, the elasticity of human cells, tissues, and organs can change significantly. If these changes in elasticity can be measured accurately over time, early detection and diagnosis of different disease states can be achieved. Elasticity imaging is an emerging method for qualitatively imaging the elasticity distribution of an inhomogeneous body. A long-standing goal of this imaging is to provide an alternative to clinical palpation (e.g., manual breast examination) for reliable tumor diagnosis. The displacement distribution of a body under externally applied forces (or displacements) can be acquired by a variety of imaging techniques such as ultrasound, magnetic resonance, and digital image correlation. A strain distribution, determined by the gradient of the displacement distribution, can then be computed (or approximated) from the measured displacements. If the strain and stress distributions of a body are both known, the elasticity distribution can be computed using the constitutive elasticity equations. However, there is currently no technique that can measure the stress distribution of a body in vivo. Therefore, in elastography, the stress distribution is commonly assumed to be uniform, and a measured strain distribution is interpreted as a relative elasticity distribution. This approach has the advantage of being easy to implement, but the uniform-stress assumption is inaccurate for an inhomogeneous body: the stress field can be distorted significantly near a hole, an inclusion, or wherever the elasticity varies. Though strain-based elastography has been deployed on many commercial ultrasound diagnostic-imaging devices, the elasticity distribution predicted by this method is prone to inaccuracies.

To address these inaccuracies, researchers at UC Berkeley have developed a de novo imaging method that learns the elasticity of solids from measured strains. The approach uses deep neural networks supervised by the theory of elasticity and does not require labeled data for training. Results show that the Berkeley method can learn the hidden elasticity of solids accurately and is robust to noisy and missing measurements.
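As a rough sketch of what "supervised by the theory of elasticity rather than by labeled data" can look like, the example below lets a small network propose a Young's-modulus map and penalizes it only for violating static equilibrium of the stresses implied by the measured strains. The plane-stress model, fixed Poisson ratio, finite-difference residual, and network size are illustrative assumptions, not the Berkeley implementation.

```python
import torch
import torch.nn as nn

NU, DX = 0.45, 1.0                      # assumed Poisson ratio and grid spacing

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1), nn.Softplus())    # E(x, y) > 0

def ddx(f): return (f[:, 2:] - f[:, :-2]) / (2 * DX)    # central differences along x
def ddy(f): return (f[2:, :] - f[:-2, :]) / (2 * DX)    # central differences along y

def equilibrium_loss(xy, eps_xx, eps_yy, eps_xy):
    """xy: (H*W, 2) tensor of pixel coordinates; eps_*: (H, W) measured strain maps."""
    E = net(xy).reshape(eps_xx.shape)
    # plane-stress constitutive relations turn measured strains into stresses
    s_xx = E / (1 - NU**2) * (eps_xx + NU * eps_yy)
    s_yy = E / (1 - NU**2) * (eps_yy + NU * eps_xx)
    s_xy = E / (1 + NU) * eps_xy
    # residual of static equilibrium (div(sigma) = 0), evaluated on interior pixels
    r_x = ddx(s_xx)[1:-1, :] + ddy(s_xy)[:, 1:-1]
    r_y = ddx(s_xy)[1:-1, :] + ddy(s_yy)[:, 1:-1]
    return (r_x**2 + r_y**2).mean()
```

Note that equilibrium alone constrains elasticity only up to a multiplicative constant, which is consistent with the relative (rather than absolute) elasticity maps described above; a known modulus at one location or a normalization step fixes the scale.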

Automated Histological Image Processing tool for Identifying and Quantifying Tissue Calcification

Researchers at UCI have developed a method of identifying, quantifying, and visualizing calcified tissue. The image processing tool can automatically characterize calcium deposits in CT images and histological tissue, especially when calcium has accumulated in unusual places in the body.

Neuroscientific Method for Measuring Human Mental State

Many areas of intellectual property law involve subjective judgments regarding confusion or similarity. For example, in trademark or trade dress lawsuits, a key factor considered by the court is the degree of visual similarity between the trademarks or product designs under consideration. Such similarity judgments are nontrivial and may be complicated by cognitive factors such as categorization, memory, and reasoning that vary substantially across individuals. Currently, three forms of evidence are widely accepted: visual comparison by litigants, expert witness testimony, and consumer surveys. All three rely on subjective reports of human responders, whether litigants, expert witnesses, or consumer panels. Consequently, all three forms of evidence share the potential criticism that they are subject to overt (e.g., conflict of interest) or covert (e.g., inaccuracy of self-report) biases.

To address this situation, researchers at UC Berkeley developed a technology that directly measures the mental state of consumers as they attend to visual images of consumer products, without the need for self-report measures such as questionnaires or interviews. In so doing, this approach reduces the potential for biased reporting.

Automatic Dribbling Action Recognition in a Sports Game

Researchers led by Prof. Bir Bhanu at UCR have designed a patent-pending system that automates the classification and analysis of soccer dribbling styles using a dataset assembled from soccer videos from various sources.

Applying a Machine Learning Algorithm to Canine Radiographs for Automated Detection of Left Atrial Enlargement

Researchers at the University of California, Davis have developed a method of detecting canine left atrial enlargement as an early sign of mitral valve disease by applying machine learning techniques to thoracic radiograph images.

Design Of Task-Specific Optical Systems Using Broadband Diffractive Neural Networks

UCLA researchers in the Department of Electrical and Computer Engineering have developed an all-optical, 3D-printed diffractive neural network for deep learning applications.

A Fully‐automated Deep Learning System (software code) for the Detection, Prognosis, and Visualization of Pulmonary Disease.

The majority of state-of-the-art lung segmentation algorithms in the literature do not simultaneously segment the lungs, lung lobes, and airways within a single algorithm. Additionally, automated algorithms typically perform segmentation on a series of 2D slices, which can reduce segmentation accuracy for anatomical structures (e.g., lung lobes) that require contextual information across all three spatial dimensions. Many existing algorithms also have not been validated on chest CTs spanning a wide variety of conditions to evaluate generalizability. Currently, quantification of respiratory measurements requires a radiologist, trained analyst, or technician to recognize, identify, and manually annotate anatomical landmarks such as the lung lobes or airway in the chest. A fully automated deep learning system may eliminate the need for manual analysis, thereby improving efficiency and expanding applicability to large numbers of CTs.
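The disclosure does not describe the network itself, but the contrast it draws between slice-by-slice 2D processing and a single fully 3D algorithm can be made concrete with a toy multi-class 3D convolutional model that labels lungs, lobes, and airways in one pass over a CT volume; the layer sizes and class layout below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

N_CLASSES = 8   # e.g., background + five lobes + airway + other (assumed labeling)

model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(32, N_CLASSES, kernel_size=1),        # per-voxel class logits
)

ct = torch.randn(1, 1, 64, 128, 128)                 # (batch, channel, z, y, x) CT patch
labels = model(ct).argmax(dim=1)                      # (1, 64, 128, 128) voxel-wise labels
```

A production system would typically use a U-Net-style 3D encoder-decoder and process the volume in patches to fit in GPU memory, but the principle is the same: every voxel's label is predicted with access to context in all three spatial dimensions.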

Software-Automated Medical Imaging Software for Standardizing the Diagnosis of Sarcopenia

Sarcopenia is defined as an age-associated decline in, or loss of, lean skeletal muscle mass. The pathophysiology can be multifactorial, and the change in body composition may be difficult to detect due to obesity, changes in fat mass, or edema. Changes in weight, limb circumference, or waist circumference are not reliable indicators of muscle mass changes. Sarcopenia may also cause reduced strength, functional decline, and increased risk of falling. Sarcopenia is otherwise asymptomatic and is often unrecognized.

Software - Unified algorithm for data cleaning, source separation, and imaging of electroencephalographic signals: Recursive Sparse Bayesian Learning (RSBL)

Electroencephalographic source imaging (a.k.a. magnetic/electric or M/EEG source imaging, ESI, or brain electrical tomography) usually depends upon sophisticated signal processing algorithms for data cleaning, source separation and imaging. Typically, these problems are addressed separately using a variety of heuristics, making it difficult to systematize a methodology for extracting robust brain source images on a wide range of applications.

AI Enabled UAV Route-Planning Algorithm with Applications to Search and Surveillance

Portable UAVs such as quadcopters have made huge inroads over the last several years in various fields of aerial photography and surveillance. Drones can efficiently and cheaply hover over or follow a target of interest and capture unique perspectives of wildlife, real estate, sporting events, and operational environments such as law enforcement or the military. More challenging, however, is the application of UAVs to large-area search and surveillance. In these scenarios, a search pattern must be established that can cover many square miles, far too expansive for a UAV's typical battery to sustain. To make UAVs more broadly effective in large-area search and target identification, new path-planning algorithms are needed to efficiently eliminate areas of low probability while focusing on the search areas most likely to contain the subject of interest. Likewise, improved image classifiers are needed to help separate targets of interest from background terrain, thus expediting the search within the given battery limitations.
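To make the path-planning need concrete, the sketch below shows a deliberately simple greedy planner over a probability grid under a battery budget: it repeatedly flies to the neighboring cell with the most remaining probability mass and marks searched cells as cleared. This is only a baseline illustration of "eliminating low-probability areas"; the grid model, 4-connected moves, and greedy rule are assumptions, and the disclosed algorithm is not described in this listing.

```python
import numpy as np

def greedy_search_path(prob, start, budget):
    """Greedy coverage of a probability grid under a battery budget.

    prob:   2D array, prior probability that the target is in each cell
    start:  (row, col) launch cell
    budget: number of cell-to-cell moves the battery allows
    Returns the visited path; probability mass is 'consumed' as cells are searched.
    """
    prob = prob.astype(float).copy()
    r, c = start
    path = [(r, c)]
    prob[r, c] = 0.0                                   # cell searched, mass removed
    for _ in range(budget):
        # candidate moves: 4-connected neighbours inside the grid
        moves = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= r + dr < prob.shape[0] and 0 <= c + dc < prob.shape[1]]
        r, c = max(moves, key=lambda rc: prob[rc])      # fly toward the richest neighbour
        path.append((r, c))
        prob[r, c] = 0.0
    return path
```

A practical planner would add lookahead or global optimization so the aircraft is not trapped by locally exhausted regions, and would couple the path to the on-board classifier's detections to update the probability map during flight.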

Extended Depth-Of-Field In Holographic Image Reconstruction Using Deep Learning-Based Auto-Focusing And Phase-Recovery

UCLA researchers in the Department of Electrical Engineering have developed a novel deep learning-based algorithm that digitally reconstructs images from holography over an extended depth of field.

Real-time 3D Image Processing Platform for Visualizing Blood Flow Dynamics

Researchers at UCI have developed an image processing platform capable of visualizing 3D blood flow dynamics of the heart in real time. This technology is a promising tool for imaging areas of the heart that were previously difficult to visualize and for better understanding the dynamics of cardiac dysfunction.

A New Human-Monitor Interface For Interpreting Clinical Images

UCLA researchers in the Department of Radiological Sciences have invented a novel interactive tool that can rapidly focus and zoom on a large number of images using eye tracking technology.
