
Browse Category: Imaging > Software



Developing Physics-Based High-Resolution Head And Neck Biomechanical Models

UCLA researchers in the Department of Radiation Oncology at the David Geffen School of Medicine have developed a new computational method to model head and neck movements during medical imaging/treatment procedures.

Assessment Of Wound Status And Tissue Viability Via Analysis Of Spatially Resolved THz Reflectometry Maps

UCLA researchers in the Department of Bioengineering have developed an algorithm to assess burn wound severity and predict future outcomes using terahertz imaging.

Quantification Of Plant Chlorophyll Content Using Google Glass

UCLA researchers in the Department of Electrical Engineering have invented a novel device that can quantify chlorophyll concentration in plants using a custom-designed Google Glass app.

Pixel Super-Resolution Using Wavelength Scanning

UCLA researchers have developed a novel way to significantly improve the resolution of an undersampled or pixelated image.

Robust Visual-Inertial Sensor Fusion For Navigation, Localization, Mapping, And 3D Reconstruction

UCLA researchers in the Computer Science Department have invented a novel model for a visual-inertial navigation system (VINS) for localization, mapping, and 3D reconstruction applications.

DSP-SIFT: Domain-Size Pooling For Image Descriptors For Image Matching And Other Applications

UCLA researchers in the Computer Science Department have invented a novel modification to the scale-invariant feature transform (SIFT) algorithm that shows significant improvement for computer vision applications.

A Method Of Computational Image Analysis For Predicting Tissue Infarction After Acute Ischemic Stroke

UCLA researchers in the Departments of Radiological Sciences and Neurology have designed an algorithm to predict tissue infarctions using pre-therapy magnetic resonance (MR) perfusion-weighted images (pre-PWIs) acquired from patients with acute ischemic stroke. The predictions generated by the algorithm provide information that may assist in physicians’ treatment decisions.

MEMS-Based Mirror Array For Optical Beam Forming And Steering

Self-driving cars, drones, robots, and other autonomous systems rely on various sensors for obstacle detection and avoidance to navigate safely through their environments. One of the most common sensing methods is Light Detection and Ranging (LiDAR), which uses light in the form of a pulsed (or amplitude/frequency-modulated continuous-wave) laser to measure variable distances. These light pulses, combined with other data recorded by the sensing platform, generate precise, three-dimensional information about the shape of the surrounding environment and its surface characteristics.

While LiDAR is a well-established system used by many mobility companies, its large size and high cost per unit have prevented its adoption in many commercial applications. Solid-state LiDARs with non-mechanical scanning elements have therefore received increasing interest. In particular, the optical phased array (OPA) provides non-mechanical scanning in a compact form factor. More importantly, at reduced size OPAs enable sophisticated beamforming, such as simultaneous scanning, pointing, and tracking of multiple objects, or even direct line-of-sight communications. Unfortunately, large-scale OPAs have been found to have slow response times, making them impractical for commercial use.

Researchers at the University of California, Berkeley, have designed an optical phased array with a rapid response time. This novel technology uses arrays of micromirrors actuated by micro-electro-mechanical systems (MEMS). The OPA operates with a larger field of view, over a wide range of laser wavelengths, and without the need for high-voltage electronics. It is also far more compact than bulky mechanical LiDAR technologies.
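The abstract does not disclose the actuation details, but the underlying phased-array relation is standard: steering the beam to a far-field angle requires a linear phase ramp across the emitters. The sketch below is a generic illustration of that relation, not the Berkeley design; the function name and parameter values are assumptions.

```python
import numpy as np

def steering_phases(n_elements, pitch_um, wavelength_um, angle_deg):
    """Phase shift per emitter (radians, wrapped to [0, 2*pi)) that steers a
    uniform linear phased array to the given far-field angle: a linear ramp
    with slope 2*pi*d*sin(theta)/lambda across the array."""
    theta = np.deg2rad(angle_deg)
    n = np.arange(n_elements)
    phi = 2.0 * np.pi * pitch_um * n * np.sin(theta) / wavelength_um
    return np.mod(phi, 2.0 * np.pi)

# Illustrative numbers: 8 emitters, 2 um pitch, 1550 nm light, 10 degree steer.
phases = steering_phases(8, 2.0, 1.55, 10.0)
```

In a MEMS micromirror implementation each phase would be realized as a piston displacement of roughly phi*lambda/(4*pi) per mirror, since reflection doubles the optical path.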

System And Methods To Test Autonomous And Automated Cars By Using Virtual Reality And Augmented Reality

Rigorous testing and validation are essential for the deployment of autonomous and semi-autonomous vehicle technologies. The main objective of this process is to evaluate the performance and safety of the system under various operating conditions. For semi-autonomous systems that assist the driver, the process also aims to evaluate the response of the human operator to the active safety system and the resulting closed-loop behavior. Consequently, the testing process must satisfy the following requirements:

1. The traffic environment (including, but not limited to, other vehicles, pedestrians, and traffic elements such as stop signs, stoplights, and crosswalks) must be easily reconfigurable.
2. The human operator must be provided with realistic feedback about the motion of the vehicle being tested.

Today, this process is carried out via hardware-in-the-loop (HIL) simulations and testing on proving grounds. In HIL simulations, the dynamics and motion of the controlled vehicle are simulated in software, often in conjunction with a vehicle motion simulator. However, advanced simulators are expensive and unable to accurately capture the complexity of the physical vehicle. Proving-ground tests on an actual car at facilities such as MCity at the University of Michigan alleviate this issue, but the traffic environments at such facilities are inflexible and are difficult and expensive to modify. Moreover, emergency scenarios such as sudden braking at high speed are dangerous to test.

Researchers at the University of California, Berkeley have developed a novel and effective way of testing autonomous and semi-autonomous vehicles. The system consists of a real vehicle with a human driver, operating in a reconfigurable virtual environment. An immersive visualization of the virtual environment is created via Virtual Reality (VR) and Augmented Reality (AR). The system takes the position of the vehicle and the head pose of the driver as inputs, propagates the virtual environment forward in time using dynamical models, and updates the visualization in the VR/AR interface in real time. The actuation of the car can be modified by the driver or by software on the vehicle.
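The closed loop described above can be sketched in a few lines. The snippet below is a minimal illustration, not UC Berkeley's implementation: the vehicle pose and driver head pose are assumed to come from onboard sensors, virtual traffic is advanced by a simple constant-velocity kinematic model, and the returned scene would be handed to the VR/AR renderer. All function and field names are assumptions.

```python
import numpy as np

def step_virtual_vehicle(state, dt):
    """Propagate one simulated traffic vehicle with a constant-velocity
    kinematic model; state = [x, y, heading, speed]."""
    x, y, heading, speed = state
    return np.array([x + speed * np.cos(heading) * dt,
                     y + speed * np.sin(heading) * dt,
                     heading, speed])

def update_frame(ego_pose, head_pose, traffic_states, dt):
    """One iteration of the closed loop: advance every virtual agent in time
    and return the scene to render from the driver's current viewpoint."""
    traffic_states = [step_virtual_vehicle(s, dt) for s in traffic_states]
    return {"ego": ego_pose, "camera": head_pose, "traffic": traffic_states}
```

In the real system this loop would run at display rate, with `ego_pose` measured on the physical car rather than simulated, which is what lets a real vehicle drive through purely virtual traffic.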

Software for Optimal Presentation Of Imagery On Multi-Plane Displays

UC Berkeley researchers have developed software for displaying three-dimensional imagery of general scenes with nearly correct focus cues on multi-plane displays. These displays present an additive combination of images at a discrete set of optical distances, allowing the viewer to focus at different distances in the simulated scene. The software uses an optimization algorithm to compute the images shown on the presentation planes so that the retinal images formed when accommodating to different distances match the corresponding retinal images of the input scene as closely as possible. The researchers demonstrated the technique using imagery acquired from both synthetic and real-world scenes, analyzed the system's characteristics, including bounds on achievable resolution, and found that the software improves the practicality and realism of 3D displays by enabling realistic focus cues to be reproduced.
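The abstract does not specify the optimizer, but the stated objective is naturally posed as a least-squares problem. Below is a deliberately small 1-D, two-plane sketch of that formulation: Gaussian blur stands in for defocus, and the sizes, blur model, and variable names are all assumptions made for illustration, not the published method.

```python
import numpy as np

def gaussian_blur_matrix(n, sigma):
    """Dense matrix applying a 1-D Gaussian blur (a toy stand-in for defocus)."""
    if sigma == 0:
        return np.eye(n)
    x = np.arange(n)
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

n = 64
# Toy scene: a step edge that should look sharp at both accommodation distances.
target_near = np.where(np.arange(n) < n // 2, 1.0, 0.2)
target_far = target_near.copy()

# When the eye accommodates to one presentation plane, the other appears blurred;
# the stacked linear system maps the two plane images to the two retinal images.
A = np.block([[gaussian_blur_matrix(n, 0.0), gaussian_blur_matrix(n, 2.0)],
              [gaussian_blur_matrix(n, 2.0), gaussian_blur_matrix(n, 0.0)]])
b = np.concatenate([target_near, target_far])

# Least-squares plane images, clipped afterwards to the displayable range.
planes, *_ = np.linalg.lstsq(A, b, rcond=None)
plane_near, plane_far = np.clip(planes[:n], 0, 1), np.clip(planes[n:], 0, 1)
```

The real software solves the analogous problem over 2-D images, many planes, and a physically derived defocus model, but the structure (match simulated retinal images to the target scene in a least-squares sense) is the same.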

Software for auto-generation of text reports from radiology studies

Imaging machines used for radiology studies often export data (such as vascular velocities, bone densitometry, radiation dose, etc.) as characters stored in image format. Radiologists are expected to interpret this data and also record it in their text-based reports of the studies. This is usually accomplished by dictating the data into the text report or by manually retyping it; both methods are error-prone and time-intensive.
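A hypothetical sketch of the automation this software would provide: assuming an OCR stage (not shown) has already converted the exported measurement image into plain text, the parsing and templating step that replaces manual dictation might look like the following. The regular expression, function name, and sentence template are all illustrative assumptions.

```python
import re

# Matches lines of the form "NAME: VALUE UNIT", e.g. "PSV: 120 cm/s".
MEASUREMENT = re.compile(
    r"^\s*(?P<name>[A-Za-z ]+?)\s*[:=]\s*(?P<value>[\d.]+)\s*(?P<unit>\S+)\s*$")

def measurements_to_report(ocr_text):
    """Turn OCR'd name/value/unit lines into report sentences, skipping
    lines that do not parse as measurements."""
    sentences = []
    for line in ocr_text.splitlines():
        m = MEASUREMENT.match(line)
        if m:
            sentences.append(f"The {m['name'].strip()} is {m['value']} {m['unit']}.")
    return " ".join(sentences)
```

For example, `measurements_to_report("PSV: 120 cm/s\nEDV: 35 cm/s")` yields two ready-made report sentences instead of two dictation steps.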

Atom Probe Tomography Method and Algorithm

Most cluster analysis parameters in atom probe tomography (APT) are selected ad hoc. This can lead to data misinterpretation and misleading results by instrument technicians and researchers. Moreover, arbitrary cluster parameters can degrade data quality and integrity, creating inefficiencies for downstream data users. To address these problems, researchers at the University of California, Berkeley, have developed a framework and specific cluster analysis methods to efficiently extract knowledge from APT data. By using parameter selection protocols with theoretical justification, this technology enables a more optimized and robust multivariate statistical analysis from the start, improving the quality of analysis and outcomes for both upstream and downstream data users.
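The Berkeley protocols themselves are not disclosed in the abstract, but the general idea of replacing an ad hoc clustering radius with a data-driven one can be illustrated with the common k-nearest-neighbour distance heuristic. The function names and the percentile rule below are illustrative assumptions, not the patented method.

```python
import numpy as np

def knn_distance(points, k):
    """Distance from each point to its k-th nearest neighbour
    (column 0 of the sorted distance matrix is the point itself)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]

def suggest_eps(points, k=4, percentile=90):
    """Data-driven clustering radius: a high percentile of the k-NN distance
    curve, one standard alternative to picking the radius ad hoc."""
    return float(np.percentile(knn_distance(points, k), percentile))
```

A radius chosen this way sits above the typical intra-cluster spacing but below the gaps between clusters, which is the property an ad hoc choice cannot guarantee.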

Software for Differential Dynamic Microscopy (DDMCalc)

MATLAB code for performing differential dynamic microscopy (DDM).
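The MATLAB code itself is not shown here, but DDM's central computation is well documented in the literature: Fourier-transform differences between frames separated by a lag time, average the resulting power spectra over start times, and bin azimuthally in spatial frequency q. A minimal NumPy sketch of that pipeline follows; the function name and binning choices are assumptions, and the original package is MATLAB rather than Python.

```python
import numpy as np

def ddm_structure_function(frames, lags, n_bins=20):
    """Azimuthally averaged image structure function D(q, dt), the core
    quantity of differential dynamic microscopy, for a (time, y, x) stack."""
    _, ny, nx = frames.shape
    qy = np.fft.fftfreq(ny)[:, None]
    qx = np.fft.fftfreq(nx)[None, :]
    q = np.sqrt(qx ** 2 + qy ** 2).ravel()
    edges = np.linspace(0.0, q.max() + 1e-12, n_bins + 1)
    which = np.digitize(q, edges)                       # bin index in 1..n_bins
    counts = np.bincount(which, minlength=n_bins + 2)
    result = {}
    for lag in lags:
        diffs = frames[lag:] - frames[:-lag]            # all pairs at this lag
        power = np.abs(np.fft.fft2(diffs)) ** 2         # |FFT of differences|^2
        mean_power = power.mean(axis=0).ravel()         # average over start times
        sums = np.bincount(which, weights=mean_power, minlength=n_bins + 2)
        result[lag] = sums[1:n_bins + 1] / np.maximum(counts[1:n_bins + 1], 1)
    return result
```

Fitting the resulting D(q, dt) curves against a model of the intermediate scattering function is what then yields diffusion coefficients or other dynamics, which this sketch leaves out.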

Forest Convolutional Neural Networks

In machine learning, a convolutional neural network (CNN) is a type of feed-forward artificial neural network in which the individual neurons are tiled so that they respond to overlapping regions of the visual field. CNNs are widely used for image and video recognition and are a powerful tool for many vision problems. Compared to other image classification algorithms, convolutional neural networks require relatively little pre-processing: the network itself learns the filters that traditional algorithms hand-engineered. Despite major reductions in error, current CNN implementations still leave significant room for improvement due to the lack of transparency and flexibility in architecture design.
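As a concrete illustration of the "learned filter" point above, the basic operation of a convolutional layer is a sliding dot product between image patches and a small weight kernel. The sketch below applies a hand-engineered vertical-edge kernel; in a CNN, the same 3x3 weights would instead be learned from data.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D cross-correlation, the elementary operation a
    convolutional layer applies with each of its filters."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-engineered vertical-edge (Sobel) filter; a CNN would learn such
# weights rather than have them specified.
sobel_x = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
image = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))  # 4x4 image with a vertical edge
response = conv2d_valid(image, sobel_x)        # uniformly strong edge response
```

Stacking many such filters, interleaved with nonlinearities and pooling, is what turns this elementary operation into a full CNN.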

Patient-Specific CT Scan-Based Finite Element Modeling (FEM) Of Bone

This invention is software for calculating the maximum force a bone can support. The method provides an accurate assessment of how changes in a bone due to special circumstances, such as osteoporosis or a long-duration space flight, might increase a patient's risk of fracture.

A Method For Determining Characteristic Planes And Axes Of Bones And Other Body Parts, And Application To Registration Of Data Sets

The invention is a method for deriving an anatomical coordinate system for a body part (especially bone) to aid in its characterization. The method relies on 3-D digital images of an anatomical object, such as CT- or MR-scans, to objectively, precisely, and reliably identify its geometry in a computationally efficient manner. The invention is a great improvement over the current practice of subjective, user-dependent manual data entry and visualization of bones and organs. The applications for well-defined anatomical coordinate systems include robotic surgeries, models for bone density studies, and construction of statistical anatomical data sets.
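The abstract does not disclose the specific computation, but one common objective way to attach characteristic axes to a segmented 3-D data set, shown here purely as an illustrative sketch rather than the patented method, is a principal-component analysis of the object's points.

```python
import numpy as np

def principal_axes(points):
    """Centroid and orthonormal principal axes of a 3-D point cloud, ordered
    from the direction of greatest extent to the least; a reproducible,
    user-independent way to define an object-attached coordinate system."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]   # longest axis first
    return centroid, eigvecs[:, order].T
```

Because the axes depend only on the geometry, two scans of the same bone yield the same coordinate system (up to sign), which is what makes such a construction useful for registering data sets.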

Camera-Based Reader For Blurry And Low-Resolution 1D Barcodes

Virtually every packaged good is labeled with at least one form of barcode, generally following the EAN or UPC standard. The success of barcode technology for identification, tracking, and inventory derives from its ability to encode information compactly at very low cost. Barcode reading via dedicated scanners is a mature technology, and commercial laser-based hand-held scanners achieve robust readings. Recently, however, there has been growing interest in reading barcodes with a regular cellphone rather than with a dedicated device. Since cellphones are ubiquitous, this would enable a multitude of mobile applications; for example, a number of cellphone apps now provide access, via barcode reading, to the full characteristics and user reviews of a product found in a store. Unfortunately, cellphone cameras have low-grade lenses that produce blurred barcode images, and motion blur along with low ambient light makes barcode reading difficult in certain situations.

Facial Recognition & Vehicle Logo Super-Resolution System

Background: The video surveillance market is projected to grow 17% annually and reach $42B by 2020. Video surveillance is a popular tool for tracking and monitoring the movement of people and vehicles, both to provide protection and to uncover information for investigations. Current technologies are competent at capturing images, but not at high definition; a smarter, multidimensional security system is therefore needed.  Brief Description: UCR researchers have developed a novel method and system for unified face representation for individual recognition in surveillance videos, along with vehicle recognition. Facial images are extracted from a video, an emotion avatar image (EAI) is generated, and features are computed using the researchers' innovative algorithms. Low-resolution vehicle images can also be enhanced by their super-resolution algorithms to produce high-resolution images. Existing technologies can only handle frontal images, whereas this new technology can handle out-of-plane, rotated images.

Faces: Art, And Computerized Evaluation Systems-A Feasibility Study Of The Application Of Face Recognition Technology To Works Of Portrait Art

Background: Portraits are not just works of art; they often identify important people and the artistic styles of their era. Face recognition technologies for portraits do not currently exist, so many great pieces in museums remain unidentified, and curators spend enormous amounts of time, energy, and already limited resources trying to identify paintings. A computer program that helps answer these questions would be valuable not only for art identification but also for uncovering the historical stories behind unknown paintings.  Brief Description: UCR researchers have developed a novel computerized system for identifying artists and artistic styles. First, known portraits are fed into the algorithm to train the face recognition system. The Portrait Feature Space (PFS) then analyzes an unknown portrait and looks for a match in the system. The system learns artistic conventions, such as variation in brush strokes and facial proportion metrics, to compute a similarity score. Identity verification is a two-step process: style modeling first assigns the unknown portrait to a particular artist, followed by further authentication through analysis against known sitters.

Augmented Reality Using Projector-Camera Enabled Devices

The technology is a distributed architecture for a group of projector- and camera-enabled devices. It consists of a collaborative network of projector-camera devices that allows multiple such devices to create a seamless image display on any surface or geometry. The technology is compatible with mobile devices.

A Method for Automatic Segmentation and Quantitative Parameterization of a Tumor

Archives of glioblastoma (GBM) imaging and genomic data present an unprecedented potential for the clinical evaluation of tumor progression and the identification of novel imaging biomarkers. Reliable automatic segmentation of brain tumors will prove invaluable in this regard.

Video Frame Synchronization For A Federation Of Projectors Using Camera Feedback

The technology is a video frame synchronization technique for multiple-projector displays. It is based on camera feedback and works by adjusting frame display times between projectors, allowing collaborative displays to be built from resource-limited devices.

Distributed Scalable Interaction Paradigm for Multi-User Interaction Across Tiled Multi-Displays

The technology is a method for multiple users to interact simultaneously with tiled multi-displays. Users interact with a tiled display through a distributed registration technique. The method scales easily across different applications, modalities, and users, and user interactions can be hand-gesture-based or laser-based.

A Projector With Enhanced Resolution Via Optical Pixel Sharing

The technology is a device whose software and hardware create a display that is perceptually similar to a true high-resolution display at a lower cost. It provides targeted higher resolution at desired portions of the display through an "optical pixel sharing" hardware unit, and it lets users vary pixel density spatially.
