
Browse Category: Computer > Software



Machine Learning-Based Monte Carlo Denoising

Brief description not available

Adapting Existing Computer Networks to a Quantum-Based Internet Future

Researchers at the University of California, Davis have developed an approach for integrating quantum computers into the existing internet backbone.

Low-Cost, Multi-Wavelength, Camera System that Incorporates Artificial Intelligence for Precision Positioning

Researchers at the University of California, Davis have developed a system consisting of cameras and multi-wavelength lasers that is capable of precisely locating and inspecting items.

Programmable System that Mixes Large Numbers of Small Volume, High-Viscosity, Fluid Samples Simultaneously

Researchers at the University of California, Davis have developed a programmable machine that shakes and repeatedly inverts large numbers of small containers, such as vials and flasks, in order to mix high-viscosity fluids.

In-Sensor Hardware-Software Co-Design Methodology for Hall Effect Sensors to Prevent and Contain EMI Spoofing Attacks

Researchers at UCI have developed a novel hardware-software co-design methodology for protecting the Hall sensors found in autonomous vehicles, smart grids, industrial plants, and similar systems against spoofing attacks. There are currently no comprehensive measures in place to protect Hall sensors.

Dynamic Target Ranging With Multi-Tone Continuous Wave Lidar Using Phase Algorithm

Researchers at the University of California, Irvine have developed a novel algorithm designed to be integrated with current multi-tone continuous wave (MTCW) lidar technology in order to enhance the capability of lidar to acquire the range (distance) of fast-moving targets while simultaneously performing velocimetry measurements.
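The abstract above does not disclose the algorithm itself, but the underlying phase-based ranging relation that continuous-wave lidar builds on is standard. The sketch below illustrates only that textbook relation (the tone frequency and phase values are made-up examples, not parameters from the UC Irvine work):

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def range_from_tone_phase(phase_rad, tone_hz):
    """Round-trip phase accumulated by an intensity-modulation tone gives range:
    R = c * phi / (4 * pi * f_tone)."""
    return C * phase_rad / (4.0 * np.pi * tone_hz)

# Example: a 100 MHz tone returning with ~0.419 rad of phase -> about 0.1 m
r = range_from_tone_phase(0.419, 100e6)
```

In a multi-tone system, several such tones are measured at once; combining their phases extends the unambiguous range, and Doppler shifts of the tones yield target velocity.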

Integrated Virtual Reality and Audiovisual Display Support System for Patients in a Prone Position

Researchers at the University of California, Davis have developed an integrated virtual reality and audiovisual support system that increases the comfort of patients who are undergoing diagnostic tests or medical procedures in the prone and other positions.

Smart Suction Cup for Adaptive Gripping and Haptic Exploration

Vacuum grippers are widely used in industry to handle objects via suction pressure. Unicontact suction cups are commonly used for gripping because they are simple to operate and can handle a variety of items, including those that are delicate, large, or inaccessible to jaw grippers. However, suction cup grippers face challenges such as planning a contact location and inertial force-induced grasping failure. To address these challenges, UC Berkeley researchers developed a tactile sensing technology for smart suction cups. This Berkeley sensing technology can detect suction contact and prevent suction cup grasp failures. It can perform tactile sensing of object properties, such as roughness or porosity, that might lead to grasping failures, before those failures happen. If a grasp failure does happen, the technology gains additional information about why and how the failure occurred, to prevent similar failures in future attempts. Sensing occurs quickly, so robot behavior can remain fast while performance, efficiency, and reliability increase. Compared with other robotic grasping sensing technologies, this smart suction cup technology is affordable, resilient, and easy to service. The cup is manufactured using the same process as other suction cups, and the electronics are simple, located away from the point of contact, and protected from damage or hazardous exposure.

Risk Assessment Tool for Bovine Respiratory Disease in Dairy Calves

Researchers at the University of California, Davis have developed a system to assess, estimate and devise a comprehensive control and prevention plan for bovine respiratory disease in pre-weaned dairy calves.

(SD2020-340) Algorithm-Hardware Co-Optimization For Efficient High-Dimensional Computing

With the emergence of the Internet of Things (IoT), many applications run machine learning algorithms to perform cognitive tasks. These learning algorithms have proven effective for many tasks, e.g., object tracking, speech recognition, and image classification. However, since sensory and embedded devices generate massive data streams, this poses huge technical challenges due to limited device resources. For example, although Deep Neural Networks (DNNs) such as AlexNet and GoogleNet have provided high classification accuracy for complex image classification tasks, their high computational complexity and memory requirements hinder their usability in a broad variety of real-life (embedded) applications where device resources and power budgets are limited. Furthermore, in IoT systems, sending all the data to a powerful computing environment, e.g., the cloud, cannot guarantee scalability and real-time response, and is often undesirable due to privacy and security concerns. Thus, alternative computing methods are needed that can process large amounts of data at least partly on the less powerful IoT devices. Brain-inspired Hyperdimensional (HD) computing has been proposed as such an alternative, processing cognitive tasks in a more lightweight way. HD computing is based on the fact that brains compute with patterns of neural activity that are not readily associated with numbers. Recent research has instead utilized high-dimensional vectors (e.g., more than a thousand dimensions), called hypervectors, to represent neural activities, and has shown successful progress on many cognitive tasks such as activity recognition, object recognition, language recognition, and bio-signal classification.
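The hypervector idea can be illustrated with a minimal, generic HD-computing sketch. The dimensionality, the toy symbol alphabet, and the permutation-based sequence encoding below are common textbook choices, not the specific co-optimized method this listing describes:

```python
import numpy as np

D = 10_000  # hypervector dimensionality; high D makes random vectors near-orthogonal
rng = np.random.default_rng(0)

# Item memory: a fixed random bipolar hypervector per input symbol (toy alphabet)
items = {s: rng.choice((-1, 1), size=D) for s in "abc"}

def encode(seq):
    """Encode a sequence by bundling (element-wise adding) its item vectors,
    using np.roll as a permutation that encodes each symbol's position."""
    return sum(np.roll(items[s], i) for i, s in enumerate(seq))

def similarity(x, y):
    """Cosine similarity between two hypervectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

a, b = encode("abc"), encode("cab")
# a is maximally similar to itself and nearly orthogonal to a different sequence
```

Classification in HD computing typically bundles many encoded training samples into one hypervector per class, then labels a query by its most similar class hypervector.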

(SD2019-340) Collaborative High-Dimensional Computing

Internet of Things (IoT) applications often analyze collected data using machine learning algorithms. As the amount of data keeps increasing, many applications send the data to powerful systems, e.g., data centers, to run the learning algorithms. On the one hand, sending the original data is undesirable due to privacy and security concerns. On the other hand, many machine learning models require unencrypted (plaintext) data, e.g., original images, to train models and perform inference. When offloading these computation tasks, sensitive information may be exposed to an untrustworthy cloud system that is susceptible to internal and external attacks. In many IoT systems, the learning procedure should be performed with data that is held by a large number of user devices at the edge of the Internet. These users may be unwilling to share their original data with the cloud or other users if security concerns cannot be addressed.

Deep Learning Techniques For In Vivo Elasticity Imaging

Imaging the material property distribution of solids has a broad range of applications in materials science, biomechanical engineering, and clinical diagnosis. For example, as various diseases progress, the elasticity of human cells, tissues, and organs can change significantly. If these changes in elasticity can be measured accurately over time, early detection and diagnosis of different disease states can be achieved. Elasticity imaging is an emerging method to qualitatively image the elasticity distribution of an inhomogeneous body. A long-standing goal of this imaging is to provide alternative methods of clinical palpation (e.g. manual breast examination) for reliable tumor diagnosis. The displacement distribution of a body under externally applied forces (or displacements) can be acquired by a variety of imaging techniques such as ultrasound, magnetic resonance, and digital image correlation. A strain distribution, determined by the gradient of a displacement distribution, can be computed (or approximated) from measured displacements. If the strain and stress distributions of a body are both known, the elasticity distribution can be computed using the constitutive elasticity equations. However, there is currently no technique that can measure the stress distribution of a body in vivo. Therefore, in elastography, the stress distribution of a body is commonly assumed to be uniform and a measured strain distribution can be interpreted as a relative elasticity distribution. This approach has the advantage of being easy to implement. The uniform stress assumption in this approach, however, is inaccurate for an inhomogeneous body. The stress field of a body can be distorted significantly near a hole, inclusion, or wherever the elasticity varies. 
Though strain-based elastography has been deployed on many commercial ultrasound diagnostic-imaging devices, the elasticity distribution predicted by this method is prone to inaccuracies. To address these inaccuracies, researchers at UC Berkeley have developed a de novo imaging method to learn the elasticity of solids from measured strains. The approach uses deep neural networks supervised by the theory of elasticity and does not require labeled data for the training process. Results show that the Berkeley method can learn the hidden elasticity of solids accurately and is robust to noisy and missing measurements.
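The strain-from-displacement step described above (strain as the gradient of the measured displacement field) is standard and can be sketched with finite differences. This is a generic illustration on a synthetic field, not the Berkeley deep-learning method:

```python
import numpy as np

def strain_from_displacement(ux, uy, dx=1.0, dy=1.0):
    """Small-strain tensor components from a sampled 2-D displacement field:
    eps_ij = 0.5 * (du_i/dx_j + du_j/dx_i), gradients by finite differences."""
    dux_dy, dux_dx = np.gradient(ux, dy, dx)  # rows vary in y, columns in x
    duy_dy, duy_dx = np.gradient(uy, dy, dx)
    return dux_dx, duy_dy, 0.5 * (dux_dy + duy_dx)

# Synthetic example: a uniform 1% stretch along x gives eps_xx = 0.01 everywhere
yy, xx = np.meshgrid(np.arange(20.0), np.arange(20.0), indexing="ij")
eps_xx, eps_yy, eps_xy = strain_from_displacement(0.01 * xx, np.zeros_like(xx))
```

Recovering elasticity from such strains is the hard inverse step, since the in-vivo stress field is unknown; that is the part the supervised-by-physics network addresses.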

Systems and Methods for Sound-Enhanced Meeting Platforms

Computer-based, internet-connected audio/video meeting platforms have become pervasive worldwide, especially since the 2020 emergence of the COVID-19 pandemic lockdown. These meeting platforms include Cisco Webex, Google Meet, GoTo, Microsoft Teams, and Zoom. However, those popular platforms are optimized for meetings in which all the participants are attending online, individually. Accordingly, those platforms have shortcomings when used for hybrid meetings in which some participants attend together in person and others attend online. The existing platforms are also problematic for large meetings in big rooms (e.g., classrooms) in which most or all of the participants are in person. To address those suboptimal meeting-platform situations, researchers at UC Berkeley conceived systems, methods, algorithms, and other software for a meeting platform that is optimized for hybrid meetings and large in-person meetings. The Berkeley meeting platform offers a user experience that is familiar to users of the conventional meeting platforms, and it does not require any specialized participant hardware or specialized physical room infrastructure (beyond standard internet connectivity).

Software Defined Pulse Processing (SDPP) for Radiation Detection

Radiation detectors are typically instrumented with low-noise preamplifiers that generate voltage pulses in response to energy deposits from particles (x-rays, gamma-rays, neutrons, protons, muons, etc.). This preamplifier signal must be further processed to improve the signal-to-noise ratio and then estimate various properties of the pulse, such as its amplitude, timing, and shape. Historically, this “pulse processing” was carried out with complex, purpose-built analog electronics. With the advent of digital computing and fast analog-to-digital converters, this type of processing can be carried out in the digital domain. There are a number of commercial products that perform “hardware” digital pulse processing. The common element among these offerings is that the pulse processing algorithms are implemented in hardware (typically an FPGA or high-performance DSP chip). However, this hardware approach is expensive, and it is hard to tailor for a specific detector and application. To address these issues, researchers at UC Berkeley developed a solution that performs the pulse processing in software on a general-purpose computer, using digital signal processing techniques. The only required hardware is a general-purpose, high-speed analog-to-digital converter capable of streaming the digitized detector preamplifier signal into computer memory without gaps. The Berkeley approach is agnostic to the hardware and is implemented in such a way as to accommodate various hardware front-ends. For example, a Berkeley implementation uses the PicoScope 3000 and 5000 series USB3 oscilloscopes as the hardware front-end. That setup has been used to process the signal from a number of semiconductor and scintillator detectors, with results that are comparable to analog and hardware digital pulse processors. In comparison to current hardware solutions, this new software solution is much less expensive and much more easily configurable.
More specifically, the properties of the digital pulse shaping filter, trigger criteria, methods for estimating the pulse parameters, and formatting/filtering of the output data can be adjusted and tuned by writing simple C/C++ code.
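To make the software-pulse-processing pipeline concrete, here is a minimal generic sketch of the three stages named above (shaping filter, trigger criterion, pulse-parameter estimation). It uses a simple boxcar filter and leading-edge trigger on synthetic data; the actual Berkeley implementation is C/C++ and its filters and criteria are configurable:

```python
import numpy as np

def shape(signal, k):
    """Boxcar (moving-average) shaping filter to improve signal-to-noise ratio."""
    return np.convolve(signal, np.ones(k) / k, mode="same")

def find_pulses(shaped, threshold):
    """Leading-edge trigger: sample indices where the shaped signal crosses
    the threshold in the upward direction."""
    above = shaped > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def amplitudes(shaped, triggers, window):
    """Pulse-height estimate: peak of the shaped signal after each trigger."""
    return [shaped[t:t + window].max() for t in triggers]

# Synthetic digitized stream: two rectangular pulses of height 1.0 and 0.5
sig = np.zeros(500)
sig[100:140] = 1.0
sig[300:340] = 0.5
shaped = shape(sig, 5)
trig = find_pulses(shaped, 0.25)
amps = amplitudes(shaped, trig, 60)
```

In a software processor, each of these stages is an ordinary function, which is why the filter length, threshold, and estimation window can be retuned per detector without touching hardware.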

Real-Time Imaging in Low Light Conditions

Prof. Luat Vuong and colleagues from the University of California, Riverside have developed a method for imaging in low-light and low signal-to-noise conditions. This technology works by using a dense neural network to reconstruct an object from intensity-only data; it efficiently solves the inverse mapping problem without performing iterations on each image and without deep multilayer learning schemes. The network operates without learned stereotypes and offers low computational complexity, low reconstruction latency, decreased power consumption, and robust resistance to disturbances compared to current imaging technologies. Fig. 1: Theoretical/simulation accuracy for multi-vortex arrays (3, 5, and 7 vortices) using the dense single-layer neural network, compared with a convolutional neural network and a single-layer network using conventional imaging; the SNR is provided for the conventional imaging scheme.

Neuroscientific Method for Measuring Human Mental State

Many areas of intellectual property law involve subjective judgments regarding confusion or similarity. For example, in trademark or trade dress lawsuits, a key factor considered by the court is the degree of visual similarity between the trademarks or product designs under consideration. Such similarity judgments are nontrivial and may be complicated by cognitive factors such as categorization, memory, and reasoning that vary substantially across individuals. Currently, three forms of evidence are widely accepted: visual comparison by litigants, expert witness testimonies, and consumer surveys. All three rely on subjective reports of human responders, whether litigants, expert witnesses, or consumer panels. Consequently, all three forms of evidence potentially share the criticism that they are subject to overt (e.g., conflict of interest) or covert (e.g., inaccuracy of self-report) biases. To address this situation, researchers at UC Berkeley developed a technology that directly measures the mental state of consumers when they attend to visual images of consumer products, without the need for self-report measures such as questionnaires or interviews. In so doing, this approach reduces the potential for biased reporting.

(2019-275) Mixed-Signal Acceleration Of Deep Neural Networks

Deep Neural Networks (DNNs) are revolutionizing a wide range of services and applications such as language translation, transportation, intelligent search, e-commerce, and medical diagnosis. These benefits are predicated upon hardware platforms delivering performance and energy efficiency. With the diminishing benefits from general-purpose processors, there is an explosion of digital accelerators for DNNs. Mixed-signal acceleration is also gaining traction. Albeit low-power, mixed-signal circuitry suffers from a limited range of information encoding, is susceptible to noise, imposes Analog-to-Digital (A/D) and Digital-to-Analog (D/A) conversion overheads, and lacks fine-grained control mechanisms. Realizing the full potential of mixed-signal technology requires a balanced design that brings mathematics, architecture, and circuits together.

Contextual Augmentation Using Scene Graphs

Spatial computing experiences are constrained by the real-world surroundings of the user. In such experiences, augmenting existing scenes with virtual objects requires a contextual approach in which geometrical conflicts are avoided and functional, plausible relationships to other objects in the target environment are maintained. Yet, due to the complexity and diversity of user environments, automatically calculating ideal positions for virtual content that are adaptive to the context of the scene is a challenging task. UC researchers have developed a framework, SceneGen, which augments scenes with virtual objects using an explicit generative model that learns topological relationships from priors extracted from real-world and/or synthetic 3D datasets. Primarily designed for spatial computing applications, SceneGen extracts features from rooms into a novel spatial representation that encapsulates the positional and orientational relationships of a scene, capturing pairwise topology between objects, object groups, and the room. The AR application iteratively augments objects by sampling positions and orientations across a room to create a probabilistic heat map of where each object can be placed. By placing objects in poses where the spatial relationships are likely, the system can augment scenes realistically.

Autonomous Comfort Systems Via An Infrared-Fused, Vision-Driven Robotic System

Robotic comfort systems have been developed that use fans to deliver heated or cooled air to building occupants to provide greater levels of personal comfort. However, current robotic systems rely on surveys asking individuals about their comfort state through a web interface or app. This reliance on user feedback becomes impractical due to survey fatigue on the part of the user. Researchers at the University of California, Berkeley have developed a system that uses a visible-light camera located on the nozzle of a robotic fan to detect human facial features (e.g., eyes, nose, and lips). Images from a co-located thermal camera are then registered onto the visible-light image, and the temperatures of different facial features are captured and used to infer the comfort state of the individual. Accordingly, the fan/heater system blows air with a specific velocity and temperature toward the occupant via closed-loop feedback control. Since the system can track a person in an environment, it addresses issues with prior data collection systems that required occupants to be positioned in a specific location.

Vehicle Make and Model Identification

Prof. Bir Bhanu and his colleagues from the University of California, Riverside have developed a method for analyzing real-time video feed of vehicles from a rear-view perspective to identify the make and model of a vehicle. This method works by using a software system for detecting the Regions-of-Interest (ROIs) of moving vehicles and moving shadows, computing structural and other features, and using a vehicle make and model database for vehicle identification. The system performs calculations based on factors found in all vehicles, so it is reliable regardless of vehicle color and type. The system is compatible with low-resolution video feed, so it is able to analyze video feed in real time. Thus, this technology holds potential for innovating fields like vehicle surveillance, vehicle security, class-based vehicle tolling, and traffic monitoring, where reliable real-time video analysis is needed. Figure 1: Example of the direct rear view of moving vehicles.

(SD2020-332) F5‐HD: Fast Flexible FPGA‐based Framework for Refreshing Hyperdimensional Computing

Hyperdimensional (HD) computing is a novel computational paradigm that emulates brain functionality in performing cognitive tasks. The underlying computation of HD involves a substantial number of element-wise operations (e.g., additions and multiplications) on ultra-wide hypervectors, in granularities as small as a single bit, which can be effectively parallelized and pipelined. In addition, though different HD applications might vary in the number of input features and output classes (labels), they generally follow the same computation flow. Such characteristics of HD computing uniquely match the intrinsic capabilities of FPGAs, making these devices a fitting solution for accelerating such applications.

Mutation Organization Software for Adaptive Laboratory Evolution (ALE) Experimentation

Adaptive Laboratory Evolution (ALE) is a tool for the study of microbial adaptation. The typical execution of an ALE experiment involves cultivating a population of microorganisms in defined conditions (i.e., in a laboratory) for a period of time that enables the selection of improved phenotypes. Standard model organisms, such as Escherichia coli, have proven well suited for ALE studies due to their ease of cultivation and storage, fast reproduction, well-known genomes, and clear traceability of mutational events. With the advent of accessible whole-genome resequencing, associations can be made between selected phenotypes and genotypic mutations. A review of ALE methods lists 34 separate ALE studies to date. Each study reports on novel combinations of selection conditions and the resulting microbial adaptive strategies. Large-scale analysis of ALE results from such consolidation efforts could be a powerful tool for identifying and understanding novel adaptive mutations.

Vehicle Logo Identification in Real-Time

Brief description not available

Generating Visual Analytics and Player Statistics for Soccer

Prof. Bhanu and his colleagues from the University of California, Riverside have developed a system to automate the process of player talent identification by performing visual analytics and generating statistics at the match, team, and player level for soccer from video, using computer vision and machine learning techniques. This work uses a database of 49,952 images annotated into two classes: players with the ball and players without the ball. The system can identify which players are controlling the ball. Compared to other state-of-the-art approaches, this technology has demonstrated an accuracy of 86.59% in identifying players controlling the ball and an accuracy of 84.73% in generating the match analytics and player statistics. Figure 1: Visualization of features learned by the system. Figure 2: Visualization of grayscale features learned by the system.

Machine Learning Program that Diagnoses Hypoadrenocorticism in Dogs Using Standard Blood Test Results

Researchers at the University of California, Davis have developed a program based on machine learning algorithms to aid in diagnosing hypoadrenocorticism.
