

SpeedyTrack: Microsecond Wide-field Single-molecule Tracking

Single-particle/single-molecule tracking (SPT) is a key tool for quantifying molecular motion in cells and in vitro. Wide-field SPT, in particular, can yield super-resolution maps of physicochemical parameters and molecular interactions at the nanoscale, especially when integrated with single-molecule localization microscopy techniques such as photoactivation and fluorophore exchange. However, wide-field SPT is often limited to the slow (<10 μm²/s) diffusion of molecules bound to membranes, chromosomes, or the small volume of bacteria, in part due to the ~10 ms frame time of common single-molecule cameras such as electron-multiplying charge-coupled devices (EM-CCDs); for unbound diffusion in mammalian cells and in solution, a molecule readily diffuses out of the <1 μm focal range of high-numerical-aperture objective lenses within 10 ms. While recent advances such as ultra-high-speed intensified CMOS cameras, feedback control that locks onto a molecule, trapping, and tandem excitation pulse schemes address the frame-rate issue, each also introduces drawbacks in light/signal efficiency, speed, uninterrupted diffusion paths, and/or trajectory resolution (e.g., number of time points).

UC Berkeley researchers have overcome these challenges with spatially-encoded dynamics tracking (SpeedyTrack), a strategy that enables direct microsecond wide-field single-molecule tracking/imaging on common microscopy setups. Wide-field tracking is achieved for freely diffusing molecules at temporal resolutions down to 50 microseconds for >30 time points, permitting trajectory analysis to quantify diffusion coefficients up to 1,000 μm²/s. Concurrent acquisition of single-molecule diffusion trajectories and Förster resonance energy transfer (FRET) time traces further elucidates conformational dynamics and binding states for diffusing molecules. Moreover, spatial and temporal information is deconvolved to map long, fast single-molecule trajectories at the super-resolution level, thus resolving the diffusion mode of a fluorescent protein in live cells with nanoscale resolution. Already substantially outperforming existing approaches, SpeedyTrack stands out further for its simplicity: it works directly off the built-in functionalities of EM-CCDs without modifications to existing optics or electronics.
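The trajectory analysis mentioned above is conventionally done by fitting the mean squared displacement (MSD) of a trajectory; for free 2D diffusion, MSD(τ) = 4Dτ. The sketch below is a minimal illustration of that standard estimator, not the SpeedyTrack algorithm itself; all names and parameter values are illustrative.

```python
import numpy as np

def diffusion_coefficient(xy, dt, max_lag=5):
    """Estimate D from a 2D trajectory via the relation MSD(tau) = 4*D*tau.

    xy: (N, 2) array of positions; dt: time step between points.
    """
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                    for lag in lags])
    tau = lags * dt
    # Least-squares line through the origin: D = sum(tau*msd) / (4*sum(tau^2))
    return np.sum(tau * msd) / (4.0 * np.sum(tau ** 2))

# Sanity check on a simulated free 2D random walk with known D
rng = np.random.default_rng(0)
D_true, dt, n = 100.0, 50e-6, 3000          # um^2/s, 50 us steps
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n, 2))
xy = np.cumsum(steps, axis=0)
D_est = diffusion_coefficient(xy, dt)
print(D_est)  # should recover roughly D_true
```

With 50 μs steps, a molecule at 100 μm²/s moves ~140 nm per frame, which is why microsecond-scale frame times are needed to keep such trajectories within the focal volume.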

Auto Single Respiratory Gate by Deep Data Driven Gating for PET

In PET imaging, patient motion, such as respiratory and cardiac motion, is a major source of blurring and motion artifacts. Researchers at the University of California, Davis have developed a technology designed to enhance PET imaging resolution without the need for external devices by effectively mitigating these artifacts.

Spectral Kernel Machines With Electrically Tunable Photodetectors

Spectral machine vision collects both the spectral and spatial dependence (x, y, λ) of incident light, which contains potentially useful information such as chemical composition or micro/nanoscale structure. However, analyzing the dense 3D hypercubes produced by hyperspectral and multispectral imaging creates a data bottleneck and demands tradeoffs among spatial/spectral information, frame rate, and power efficiency. Furthermore, real-time applications like precision agriculture, rescue operations, and battlefields present shifting, unpredictable environments that are challenging for spectroscopy. A spectral imaging detector that can analyze raw data and learn tasks in situ, rather than sending data out for post-processing, would overcome these challenges, but no intelligent device that can automatically learn complex spectral recognition tasks has been realized.

UC Berkeley researchers have met this opportunity by developing a novel photodetector capable of learning to perform machine learning analysis and delivering final answers directly in the readout photocurrent. The photodetector automatically learns from example objects to identify new samples. Devices have been experimentally built in both the visible and mid-infrared (MIR) bands to perform intelligent tasks ranging from semiconductor wafer metrology to chemometrics. Further calculations indicate 1,000x lower power consumption and 100x higher speed than existing solutions when implemented for hyperspectral imaging analysis, defining a new intelligent photodetection paradigm with intriguing possibilities.

Three-dimensional Acousto-optic Deflector-lens (3D AODL)

Optical tweezers generated with light-modulation devices are important for highly precise laser imaging and addressing systems, e.g., excitation and readout of single atoms, imaging of interactions between molecules, or precise spatial trapping and movement of particles. To generate dynamic optical tweezers adjustable at the microsecond scale, acousto-optic deflectors (AODs) are commonly used to modulate the spatial profile of laser light. Dynamic optical tweezers are increasingly relevant for emerging technologies such as neutral-atom quantum computers, and tightly focused laser spot arrays may enable advanced imaging and/or semiconductor processing applications. However, dynamic optical tweezer systems capable of rapid movement of one or multiple atoms along independent, arbitrary three-dimensional trajectories with minimal aberration have not yet been realized.

UC Berkeley researchers have developed a dynamic optical tweezer system that overcomes significant defects of existing art, such as limited 2D motion and optical aberration. Carefully designed waveform modulation of one or more acousto-optic deflector lenses (AODLs) enables atomic addressing and rapid tweezer motion while minimizing the significant optical aberrations present in prior methods. The invention is capable of microsecond-scale single- or multi-tweezer motion along arbitrary three-dimensional trajectories without the use of translation stages, and can flexibly address one atom, multiple atoms, or the entire array.

Imaging The Surfaces Of Optically Transparent Materials

A breakthrough imaging technique that provides high-resolution visualization of optically transparent materials at a low cost.

(SD2022-255) A robust approach to camera radar fusion

Researchers from UC San Diego have developed RadSegNet, a new approach for sequentially fusing information from radars and cameras. The key idea of sequential fusion is to fundamentally shift the center of focus in radar-camera fusion systems from cameras to radars. This shift enables the invention (RadSegNet) to achieve the all-weather perception benefits of radar sensing. Keeping radar as the primary modality ensures reliability in all situations, including occlusions, long range, and bad weather.

Unsupervised Positron Emission Tomography (PET) Image Denoising using Double Over-Parameterization

Researchers at the University of California, Davis, have developed a novel imaging system that improves the diagnostic accuracy of PET imaging. The system combines machine learning and computed tomography (CT) imaging to reduce noise and enhance resolution. This novel technique can integrate with commercial PET imaging systems, improving diagnostic accuracy and facilitating superior treatment of various diseases.

Headset with Incorporated Optical Coherence Tomography (OCT) and Fundus Imaging Capabilities

Researchers at the University of California, Davis, have developed a headset (e.g., virtual reality headset) in which two imaging modalities, optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO), are incorporated with automated eye tracking and optical adjustment capabilities providing a fully automated imaging system in which patients are unaware that images of the retina are being acquired. Imaging takes place while the patient watches a soothing or entertaining video.

Metasurface, Metalens, and Metalens Array with Controllable Angular Field-of-View

Researchers at the University of California, Davis have developed an optical lens module that uses a metalens or a metalens array having a controllable angular field-of-view.

High Resolution, Ultrafast, Radiation-Background Free PET

Researchers at the University of California, Davis, have developed a positron emission tomography (PET) medical imaging system that allows for higher 3D position resolution, eliminates radiation background, and holds a similar production cost to existing technologies.

A New Device for Tissue Imaging: Phasor-Based S-FLIM-SHG

An innovative microscope integrating HSI, FLIM, and SHG for advanced optical metabolic imaging.

Energy-Efficient Nonlinear Optical Micro-Device Arrays

Optical neural networks (ONNs) are a promising computational alternative for deep learning due to their inherent massive parallelism for linear operations. However, the development of energy-efficient and highly parallel optical nonlinearities, a critical component of ONNs, remains an outstanding challenge. To address this challenge, researchers at UC Berkeley and Berkeley National Lab developed a nonlinear optical micro-device array (NOMA) compatible with incoherent illumination by integrating a liquid crystal cell with silicon photodiodes at the single-pixel level. The researchers fabricated NOMA with over half a million pixels, each functioning as an optical analog of the rectified linear unit at ultralow switching energy down to 100 femtojoules per pixel, and demonstrated an optical multilayer neural network. This work holds promise for large-scale, low-power deep ONNs, computer vision, and real-time optical image processing.
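The role of the per-pixel nonlinearity can be illustrated with a toy model: each pixel behaves like a rectified linear unit on incident intensity, and linear (lens/mask) layers interleave with this pixel-wise nonlinearity to form a multilayer network. The sketch below is purely illustrative; the threshold value, layer sizes, and random weights are assumptions, not device specifications.

```python
import numpy as np

def optical_relu(intensity, threshold=0.2):
    """Idealized pixel transfer curve: zero transmission below the
    liquid-crystal switching threshold, linear above it (illustrative)."""
    return np.maximum(intensity - threshold, 0.0)

# Toy two-layer "optical" network under incoherent illumination:
# nonnegative inputs and weights, mirroring intensity-based optics.
rng = np.random.default_rng(1)
W1 = rng.uniform(0.0, 1.0, (16, 64))     # first linear layer (mask/lens)
W2 = rng.uniform(0.0, 1.0, (4, 16))      # readout layer
x = rng.uniform(0.0, 1.0, 64)            # input intensities
h = optical_relu(W1 @ x / 64)            # pixel-wise NOMA nonlinearity
y = W2 @ h                               # final readout
print(y.shape)
```

The point of the model is that without the nonlinear stage between the two linear layers, the whole network would collapse to a single linear operation; the per-pixel rectification is what makes depth meaningful.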

Frequency Programmable MRI Receive Coil

In magnetic resonance imaging (MRI) scanners, the detection of nuclear magnetic resonance (NMR) signals is achieved using radiofrequency (RF) coils. RF coils are often equivalently called “resonance coils” because their circuitry is engineered for resonance at the single frequency being received, for low-noise voltage gain and performance. However, such coils are therefore limited to a small bandwidth around the center frequency, restricting MRI systems to imaging one type of nucleus at a time (typically just hydrogen-1, or 1H), at one magnetic field strength.

To overcome this inherent restriction without sacrificing performance, UC Berkeley researchers have developed an MRI coil that can perform low-noise voltage gain at arbitrary relevant frequencies. These frequencies can be programmably chosen and can include magnetic resonance signals from any of various nuclei (e.g., 1H, 13C, 23Na, 31P), at any magnetic field strength (e.g., 50 mT, 1.5 T, 3 T). Multi-frequency resonance can be performed in a single system. The invention has further advantages in resilience due to its decoupled response relative to other coils and system elements.
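The frequencies in question follow the Larmor relation f = γ̄·B0, where γ̄ is the reduced gyromagnetic ratio of the nucleus (standard literature values in MHz/T). A quick calculation shows why a single narrowband resonance cannot cover them all:

```python
# Larmor frequencies f = gamma_bar * B0 for common MRI nuclei
GAMMA_BAR = {"1H": 42.577, "13C": 10.708, "23Na": 11.262, "31P": 17.235}

for B0 in (0.05, 1.5, 3.0):  # field strengths in tesla
    for nucleus, g in GAMMA_BAR.items():
        print(f"{nucleus:>4} at {B0:>4} T: {g * B0:8.3f} MHz")
```

The resulting frequencies span roughly 0.5 MHz (13C at 50 mT) to about 128 MHz (1H at 3 T), two to three orders of magnitude wider than the bandwidth of a conventional fixed-resonance coil.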

Self-Supervised Machine-Learning Adaptive Optics For Optical Microscopy

Image quality and sample structure information from an optical microscope are in large part determined by optical aberrations. Aberrations originating from the microscope optics themselves or from the sample can degrade the imaging performance of the system. Given the difficulty of finding and correcting all sources of aberration, a collection of methods termed adaptive optics is used to measure and correct optical aberrations and thereby recover imaging performance. However, state-of-the-art adaptive optics systems typically require complex hardware and software integration, which has impeded their wide adoption in microscopy. UC Berkeley researchers recently demonstrated that self-supervised machine learning (ML)-based adaptive optics can accurately estimate optical aberrations from a single 3D fluorescence image stack, without requiring external datasets for training. While demonstrated for widefield fluorescence microscopy, many other optical microscopy modalities present unique challenges.

In the present technology, UC Berkeley researchers have developed a novel self-supervised ML-based adaptive optics system for two-photon fluorescence microscopy, which should also be extensible to confocal and other modalities. The system can effectively image tissues and samples for cell biology applications. Importantly, the method can address common errors in optical conjugation/alignment in commercial microscopy systems that have yet to be systematically addressed. It can also integrate advanced computational techniques to recover sample structure.

Fluorescent Bis-Trifluoromethyl Carborhodamine Compounds

UCB researchers have developed a class of bright fluorescent rhodamine dyes with a novel structural modification that produces a deep red shift relative to the parent carborhodamine dye; the new dyes absorb and emit near-infrared light in the same region as the commercially successful silicon rhodamine dyes. Biological imaging with near-infrared light is advantageous for numerous biological and surgical applications. Furthermore, bis-trifluoromethyl carborhodamines offer improved properties desirable for biological imaging applications due to their unique physical and electronic properties.

Spatial Analysis of Multiplex Immunohistochemical Tissue Images

Researchers at the University of California, Davis have developed a semiautomated solution for identifying differences in tissue architectures or cell types as well as visualizing and analyzing cell densities and cell-cell associations in a tissue sample.

Spectral Domain Functional OCT and ODT

This technology revolves around Optical Coherence Tomography (OCT), a noninvasive imaging method that provides detailed cross-sectional images of tissue microstructure and blood flow. OCT utilizes either time domain (TDOCT) or Fourier domain (FDOCT) approaches, with FDOCT offering superior sensitivity and speed. Doppler OCT combines Doppler principles with OCT to visualize tissue structure and blood flow concurrently. Additionally, polarization-sensitive OCT detects tissue birefringence. Advanced methods aim to enhance the speed and sensitivity of Doppler OCT, crucial for various clinical applications such as ocular diseases and cancer diagnosis. Swept source FDOCT systems further improve imaging capabilities by increasing range and sensitivity. Overall, this technology represents significant advancements in biomedical imaging, offering insights into both structural and functional aspects of tissue physiology.

Imaging of cellular immune response in human skin

This patent application describes methods for non-invasive, label-free imaging of the cellular immune response in human skin using a nonlinear optical imaging system.

System And Method For Tomographic Fluorescence Imaging For Material Monitoring

Volumetric additive manufacturing and vat-polymerization 3D printing methods rapidly solidify freeform objects via photopolymerization, but problematically raise the local temperature along with the degree of conversion (DOC). The generated heat can critically affect the printing process, as it can auto-accelerate the polymerization reaction, trigger convection flows, and cause optical aberrations. Therefore, temperature measurement alongside conversion-state monitoring is crucial for devising mitigation strategies and implementing process control. Traditional infrared imaging suffers from multiple drawbacks, such as limited transmission of the measurement signal, material-dependent absorption, and high background signals emitted by other objects. Consequently, a viable temperature and DOC monitoring method for volumetric 3D printing doesn’t exist.

To address this opportunity, UC Berkeley researchers have developed a tomographic imaging technique that detects the spatiotemporal evolution of temperature and DOC during volumetric printing. The invention lays the foundation for volumetric measurement systems that uniquely resolve both temperature and DOC in volumetric printing. This novel Berkeley measurement system is envisaged as an integral tool for existing manufacturing technologies, such as computed axial lithography (CAL, Tech ID #28754), and as a new research tool for commercial biomanufacturing, general fluid dynamics, and more.

System And Method For Noise-Enabled Static Imaging Using Event Cameras

Dynamic vision sensors (DVS), also known as event cameras or neuromorphic sensors, offer extremely high temporal resolution and dynamic range compared to traditional sensors. However, DVS pixels only capture changes in intensity, which discards all static information. To overcome this issue, an additional photosensor array is needed, either (1) in a two-sensor system or (2) combined into a single sensor with two-pixel technologies (e.g., DAVIS346). In both cases, the resulting system is bulkier, more complex to design, and more expensive to manufacture. UC Berkeley researchers have developed an event-based imaging system that can capture static intensity, eliminating the need for such two-pixel technologies by extracting the underlying static intensity information directly from DVS pixels. The researchers have also demonstrated the feasibility of this approach through analysis of noise statistics in event cameras.
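The noise-statistics idea can be illustrated with a toy model: if background (noise) event rates in DVS pixels vary monotonically with scene intensity, then counting noise events per pixel over a time window yields a static image after calibration. The sketch below is illustrative only; the function names, the identity calibration, and the linear rate-to-intensity relation are assumptions, not the actual mapping used in the Berkeley system.

```python
import numpy as np

def static_intensity_from_noise(events, shape, t_window, calib=lambda r: r):
    """Count background events per pixel over t_window and map the per-pixel
    event rate to an intensity estimate via a monotonic calibration.
    The identity calibration here is a placeholder; a real sensor's
    rate-intensity curve must be measured."""
    counts = np.zeros(shape)
    for x, y, t, polarity in events:          # (col, row, timestamp, +/-1)
        counts[y, x] += 1
    return calib(counts / t_window)           # events/s -> intensity estimate

# Toy demo: assume brighter pixels emit noise events at a higher rate
rng = np.random.default_rng(2)
H, W, T = 4, 4, 1.0
truth = rng.uniform(10, 100, (H, W))          # "intensity" ~ noise rate
events = [(x, y, rng.uniform(0, T), 1)
          for y in range(H) for x in range(W)
          for _ in range(rng.poisson(truth[y, x] * T))]
est = static_intensity_from_noise(events, (H, W), T)
corr = np.corrcoef(truth.ravel(), est.ravel())[0, 1]
print(est.shape)
```

In this toy setup the estimate correlates strongly with the ground-truth intensities, which is the basic feasibility argument: the "noise" is not uninformative, since its statistics encode the static scene.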
