Browse Category: Computer > Software

Risk Assessment Tool for Bovine Respiratory Disease in Dairy Calves

Researchers at the University of California, Davis have developed a system to assess and estimate the risk of bovine respiratory disease in pre-weaned dairy calves and to devise a comprehensive control and prevention plan.

Algorithm-Hardware Co-Optimization For Efficient High-Dimensional Computing

With the emergence of the Internet of Things (IoT), many applications run machine learning algorithms to perform cognitive tasks. These learning algorithms have proven effective for many tasks, e.g., object tracking, speech recognition, and image classification. However, the massive data streams generated by sensors and embedded devices pose significant technical challenges because device resources are limited. For example, although Deep Neural Networks (DNNs) such as AlexNet and GoogleNet provide high accuracy on complex image classification tasks, their computational complexity and memory requirements hinder their use in the broad variety of real-life (embedded) applications where device resources and power budgets are limited. Furthermore, in IoT systems, sending all the data to a powerful computing environment, e.g., the cloud, cannot guarantee scalability and real-time response, and is often undesirable due to privacy and security concerns. Alternative computing methods are therefore needed that can process this data, at least partly, on the less powerful IoT devices themselves.

Brain-inspired Hyperdimensional (HD) computing has been proposed as such an alternative, processing cognitive tasks in a more lightweight way. HD computing builds on the observation that brains compute with patterns of neural activity that are not readily associated with numbers. Recent research has instead used high-dimensional vectors (e.g., more than a thousand dimensions), called hypervectors, to represent neural activity, and has shown success on many cognitive tasks such as activity recognition, object recognition, language recognition, and bio-signal classification.
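As a loose illustration of how classification with hypervectors works, here is a minimal Python sketch assuming bipolar hypervectors, random item memories, element-wise multiplication for binding, and dot-product similarity for inference; the dimensionality, feature quantization, and toy data are invented for illustration and are not part of the UC design.

```python
# A minimal sketch of hyperdimensional (HD) classification with bipolar
# {-1, +1} hypervectors; illustrative only.
import numpy as np

D = 10_000                      # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

# Item memory: one random hypervector per feature id and per quantized level.
feature_hvs = {i: random_hv() for i in range(4)}
level_hvs = {v: random_hv() for v in range(10)}

def encode(sample):
    """Bind each feature id to its level, then bundle (sum) and binarize."""
    bundled = np.zeros(D)
    for i, v in enumerate(sample):
        bundled += feature_hvs[i] * level_hvs[v]   # binding = element-wise product
    return np.sign(bundled)

def train(samples, labels):
    """Class hypervector = bundle of its training encodings."""
    classes = {}
    for s, y in zip(samples, labels):
        classes[y] = classes.get(y, np.zeros(D)) + encode(s)
    return {y: np.sign(hv) for y, hv in classes.items()}

def predict(classes, sample):
    """Nearest class by dot-product similarity."""
    q = encode(sample)
    return max(classes, key=lambda y: np.dot(classes[y], q))

model = train([[1, 2, 3, 4], [9, 8, 7, 6]], ["walk", "run"])
print(predict(model, [1, 2, 3, 5]))   # -> "walk"
```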

Automated Emotion Recognition using Facial Features

Researchers at the University of California, Davis have developed a deep learning-based technology that analyzes facial images to distinguish seven human emotions.

Collaborative High-Dimensional Computing

Internet of Things (IoT) applications often analyze collected data using machine learning algorithms. As the amount of data keeps increasing, many applications send the data to powerful systems, e.g., data centers, to run the learning algorithms. On the one hand, sending the original data is undesirable due to privacy and security concerns. On the other hand, many machine learning models require unencrypted (plaintext) data, e.g., original images, to train models and perform inference. When offloading these computation tasks, sensitive information may be exposed to an untrustworthy cloud system that is susceptible to internal and external attacks. In many IoT systems, the learning procedure should therefore be performed on data held by a large number of user devices at the edge of the Internet. These users may be unwilling to share their original data with the cloud and other users if security concerns cannot be addressed.

Software Defined Pulse Processing (SDPP) for Radiation Detection

Radiation detectors are typically instrumented with low-noise preamplifiers that generate voltage pulses in response to energy deposits from particles (x-rays, gamma-rays, neutrons, protons, muons, etc.). This preamplifier signal must be further processed to improve the signal-to-noise ratio and then estimate various properties of the pulse, such as its amplitude, timing, and shape. Historically, this “pulse processing” was carried out with complex, purpose-built analog electronics. With the advent of digital computing and fast analog-to-digital converters, this type of processing can be carried out in the digital domain. There are a number of commercial products that perform “hardware” digital pulse processing. The common element among these offerings is that the pulse processing algorithms are implemented in hardware (typically an FPGA or a high-performance DSP chip). However, this hardware approach is expensive, and it is hard to tailor to a specific detector and application.

To address these issues, researchers at UC Berkeley developed a solution that performs the pulse processing in software on a general-purpose computer, using digital signal processing techniques. The only required hardware is a general-purpose, high-speed analog-to-digital converter capable of streaming the digitized detector preamplifier signal into computer memory without gaps. The Berkeley approach is hardware-agnostic and is implemented in such a way as to accommodate various hardware front-ends. For example, one Berkeley implementation uses the PicoScope 3000 and 5000 series USB3 oscilloscopes as the hardware front-end. That setup has been used to process the signal from a number of semiconductor and scintillator detectors, with results comparable to analog and hardware digital pulse processors. In comparison to current hardware solutions, this software solution is much less expensive and much more easily configurable. More specifically, the properties of the digital pulse shaping filter, the trigger criteria, the methods for estimating pulse parameters, and the formatting/filtering of the output data can all be adjusted and tuned by writing simple C/C++ code.
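To make the shaping and triggering steps concrete, here is a minimal Python sketch of one common digital shaper, a recursive trapezoidal filter, applied to a synthetic exponentially decaying preamplifier trace. The filter constants, noise level, and leading-edge trigger are illustrative assumptions; the Berkeley implementation itself is written in C/C++ against a streaming front-end.

```python
# A minimal sketch of software pulse processing on a synthetic trace.
# Shaping constants, noise, and thresholds are invented for illustration.
import numpy as np

def trapezoidal_filter(v, k, l, tau):
    """Recursive trapezoidal shaper: k = rise samples, l = k + flat-top
    samples, tau = preamp decay constant in samples (pole-zero factor)."""
    v = np.asarray(v, dtype=float)
    d = v - np.roll(v, k) - np.roll(v, l) + np.roll(v, k + l)
    d[: k + l] = 0.0                      # discard np.roll wrap-around
    p = np.cumsum(d)
    s = np.cumsum(p + tau * d)
    return s / (tau * k)                  # flat top ~ pulse amplitude

# Synthetic preamp trace: noisy decaying step at sample 500.
n, tau, rng = 4000, 2000.0, np.random.default_rng(1)
t = np.arange(n)
trace = np.where(t >= 500, np.exp(-(t - 500) / tau), 0.0)
trace += rng.normal(0, 0.02, n)

shaped = trapezoidal_filter(trace, k=100, l=150, tau=tau)

# Leading-edge trigger, then amplitude from the trapezoid's flat top.
trig = int(np.argmax(shaped > 0.3))
amplitude = shaped[trig : trig + 200].max()
print(f"trigger at sample {trig}, amplitude ~ {amplitude:.3f}")
```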

Real-Time Imaging in Low Light Conditions

Prof. Luat Vuong and colleagues from the University of California, Riverside have developed a method for imaging in low-light and low signal-to-noise conditions. The technology uses a dense neural network to reconstruct an object from intensity-only data, efficiently solving the inverse mapping problem without per-image iterative optimization and without deep, multilayer learning schemes. The network operates without learned stereotypes and offers low computational complexity, low reconstruction latency, decreased power consumption, and robust resistance to disturbances compared to current imaging technologies.

Fig. 1: Theoretical/simulation accuracy for multi-vortex arrays (3, 5, and 7, respectively) using the dense single-layer neural net, compared to a convolutional NN and a single-layer NN using conventional imaging. The SNR is given for the conventional imaging scheme.
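As a rough illustration of the single-dense-layer idea, the sketch below fits a linear readout in closed form (ridge regression) to invert a stand-in intensity-only forward model; the forward model, sizes, and data are invented and are not UCR's vortex-imaging setup.

```python
# A minimal sketch: one dense layer, fit in closed form, learns an inverse
# mapping from intensity-only measurements back to object parameters.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_obj, n_train = 1024, 10, 2000

# Stand-in forward model: intensity = (A x + b)^2, invented for illustration.
A = rng.normal(size=(n_pix, n_obj))
b = 5.0 * rng.normal(size=n_pix)
def forward(x):
    return (A @ x + b) ** 2

X = 0.1 * rng.normal(size=(n_train, n_obj))        # small random "objects"
I = np.stack([forward(x) for x in X])              # measured intensities

# Single dense layer by ridge regression: no per-image iterations, no deep net.
lam = 1e-3
W = np.linalg.solve(I.T @ I + lam * np.eye(n_pix), I.T @ X)

x_true = 0.1 * rng.normal(size=n_obj)
x_hat = forward(x_true) @ W                        # one matrix product
print("correlation:", np.corrcoef(x_true, x_hat)[0, 1])
```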

Search And Recommendation Process For Identifying Useful Boundaries In Virtual Communication Settings

Advances in Augmented and Virtual Reality (AR/VR) headsets and displays have introduced alternative systems of immersive, context-aware communication platforms. However, one key factor that can cause a major bottleneck in future AR/VR communication is the limited space surrounding the user in the real world. In Augmented Reality, unlimited spatial data can be imported into the user's current surroundings. Many of these virtual objects carry no spatial limitations of their own and are restricted only by the constraints of the user's real-world surroundings. They can be visualized, augmented, and placed anywhere necessary in the space, as long as they remain within the user's environmental boundaries.

However, this one-way spatial limitation between virtual and real objects does not always apply in communication applications where two or more users, each with discrete spatial constraints, interact with one another in a spatial setting. All parties of a tele-conference (or other communication method) hold unique spatial limitations (room size, furniture arrangement, etc.), and consequently their virtual doubles, or avatars, may not be able to maintain the same spatial relationship and arrangement between the real-world spaces and their corresponding boundaries for all parties. This results in misaligned head and body gestures, spatial sound errors, and other micro-expression errors due to the incorrect positioning of each member of the virtual call.

UC researchers have developed a search and recommendation process that identifies the mutually accessible boundaries of all parties in a communication setting (AR conference calls, virtual calls, tele-immersion, etc.) and gives each user the exact location to position themselves, and where to move surrounding objects, so that all parties of the call can hold a similar spatial relationship to each other with minimum effort. Such a process allows all members of the virtual call to augment the other members into their own spaces while considering the spatial limitations of every participant in the virtual/augmented reality call. The process promotes remote communication at all consumer levels, in both commercial and personal settings. It would also benefit remote workplace procedures, allowing workers and employees to communicate efficiently together without needing access to large commercial spaces. Preserving micro-gestures and expressions is another feature of this process, maintaining the different attributes of social interaction and effective communication.
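As a toy illustration of finding a mutually accessible region, the Python sketch below intersects each participant's free-space footprint, modeled here as an axis-aligned rectangle in a shared frame, and recommends a common stand point; the rectangle model and numbers are invented simplifications of the UC search process.

```python
# A minimal sketch: intersect every participant's free-space rectangle
# (mapped into a shared, room-aligned frame) to find a mutual boundary.
from dataclasses import dataclass

@dataclass
class Rect:
    xmin: float; ymin: float; xmax: float; ymax: float

def intersect(rects):
    """Axis-aligned intersection of all free-space rectangles, or None."""
    xmin = max(r.xmin for r in rects)
    ymin = max(r.ymin for r in rects)
    xmax = min(r.xmax for r in rects)
    ymax = min(r.ymax for r in rects)
    if xmin >= xmax or ymin >= ymax:
        return None
    return Rect(xmin, ymin, xmax, ymax)

# Free space for three call participants, each mapped to the shared frame.
rooms = [Rect(0, 0, 4.0, 3.0), Rect(0.5, 0.2, 3.5, 3.2), Rect(0.2, 0.5, 4.2, 2.8)]
shared = intersect(rooms)
if shared:
    cx = (shared.xmin + shared.xmax) / 2
    cy = (shared.ymin + shared.ymax) / 2
    print(f"recommended stand point for every avatar: ({cx:.2f}, {cy:.2f})")
else:
    print("no mutual boundary; recommend moving objects to enlarge a room")
```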

Neuroscientific Method for Measuring Human Mental State

Many areas of intellectual property law involve subjective judgments regarding confusion or similarity. For example, in trademark or trade dress lawsuits a key factor considered by the court is the degree of visual similarity between the trademarks or product designs under consideration. Such similarity judgments are nontrivial and may be complicated by cognitive factors such as categorization, memory, and reasoning that vary substantially across individuals. Currently, three forms of evidence are widely accepted: visual comparison by litigants, expert witness testimony, and consumer surveys. All three rely on the subjective reports of human responders, whether litigants, expert witnesses, or consumer panels. Consequently, all three forms of evidence potentially share the criticism that they are subject to overt (e.g., conflict of interest) or covert (e.g., inaccuracy of self-report) biases.

To address this situation, researchers at UC Berkeley developed a technology that directly measures the mental state of consumers as they attend to visual images of consumer products, without the need for self-report measures such as questionnaires or interviews. In so doing, this approach reduces the potential for biased reporting.

Mixed-Signal Acceleration Of Deep Neural Networks

Deep Neural Networks (DNNs) are revolutionizing a wide range of services and applications such as language translation, transportation, intelligent search, e-commerce, and medical diagnosis. These benefits are predicated on hardware platforms delivering performance and energy efficiency. With the diminishing returns from general-purpose processors, there has been an explosion of digital accelerators for DNNs, and mixed-signal acceleration is also gaining traction. Albeit low-power, mixed-signal circuitry suffers from a limited range of information encoding, is susceptible to noise, imposes analog-to-digital (A/D) and digital-to-analog (D/A) conversion overheads, and lacks fine-grained control mechanisms. Realizing the full potential of mixed-signal technology requires a balanced design that brings mathematics, architecture, and circuits together.

Contextual Augmentation Using Scene Graphs

Spatial computing experiences are constrained by the real-world surroundings of the user. In such experiences, augmenting existing scenes with virtual objects requires a contextual approach, in which geometrical conflicts are avoided and functional, plausible relationships to other objects in the target environment are maintained. Yet, due to the complexity and diversity of user environments, automatically calculating ideal positions for virtual content that adapt to the context of the scene is a challenging task.

UC researchers have developed a framework, SceneGen, which augments scenes with virtual objects using an explicit generative model to learn topological relationships from priors extracted from real-world and/or synthetic 3D datasets. Primarily designed for spatial computing applications, SceneGen extracts features from rooms into a novel spatial representation that encapsulates the positional and orientational relationships of a scene, capturing pairwise topology between objects, object groups, and the room. The AR application iteratively augments objects by sampling positions and orientations across a room to create a probabilistic heat map of where each object can be placed. By placing objects in poses where the spatial relationships are likely, the framework augments scenes realistically.
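To make the heat-map step concrete, here is a minimal Python sketch that scores candidate positions on a room grid with a toy pairwise prior (a Gaussian around a preferred distance to existing objects); the prior, grid, and parameters are invented stand-ins for SceneGen's learned topological model.

```python
# A minimal sketch of placement sampling: score candidate poses across a
# room to build a heat map of plausible positions for a virtual object.
import numpy as np

room_w, room_d, step = 5.0, 4.0, 0.1
existing = [(1.0, 1.0), (4.0, 3.0)]         # (x, y) of objects already in scene
preferred_dist, sigma = 1.5, 0.4             # illustrative prior parameters

xs = np.arange(0, room_w, step)
ys = np.arange(0, room_d, step)
heat = np.zeros((len(ys), len(xs)))

for j, y in enumerate(ys):
    for i, x in enumerate(xs):
        # Likelihood that the new object sits at the preferred distance
        # from every existing object (product of Gaussian priors).
        score = 1.0
        for ox, oy in existing:
            d = np.hypot(x - ox, y - oy)
            score *= np.exp(-((d - preferred_dist) ** 2) / (2 * sigma**2))
        heat[j, i] = score

j, i = np.unravel_index(np.argmax(heat), heat.shape)
print(f"best placement: ({xs[i]:.1f}, {ys[j]:.1f})")
```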

Autonomous Comfort Systems Via An Infrared-Fused Vision-Driven Robotic System

Robotic comfort systems have been developed that use fans to deliver heated or cooled air to building occupants to provide greater levels of personal comfort. However, current robotic systems rely on surveys asking individuals about their comfort state through a web interface or app. This reliance on user feedback becomes impractical due to survey fatigue on the part of the user. Researchers at the University of California, Berkeley have developed a system that uses a visible-light camera located on the nozzle of a robotic fan to detect human facial features (e.g., eyes, nose, and lips). Images from a co-located thermal camera are then registered onto the visible-light image, and the temperatures of different facial features are captured and used to infer the comfort state of the individual. Accordingly, the fan/heater system blows air with a specific velocity and temperature toward the occupant via closed-loop feedback control. Since the system can track a person in an environment, it addresses issues with prior data-collection systems that required occupants to be positioned in a specific location.
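As a rough sketch of the closed-loop idea, the Python snippet below maps facial temperatures to a comfort error and adjusts fan output proportionally; the nose-minus-forehead comfort proxy, setpoints, and gains are invented placeholders, not the Berkeley model.

```python
# A minimal sketch of closed-loop comfort control from facial temperatures.
# All constants and the comfort heuristic are illustrative assumptions.
def comfort_error(nose_temp_c: float, forehead_temp_c: float) -> float:
    """Positive -> occupant likely warm, negative -> likely cool.
    Uses a nose-minus-forehead temperature offset as a comfort proxy."""
    neutral_offset = -1.0            # assumed neutral nose-forehead offset, C
    return (nose_temp_c - forehead_temp_c) - neutral_offset

def control_step(nose_c: float, forehead_c: float, kp: float = 0.5):
    """Proportional controller: map comfort error to fan speed and air temp."""
    e = comfort_error(nose_c, forehead_c)
    fan_speed = min(max(kp * e, 0.0), 1.0)   # 0..1, blow harder when warm
    air_temp = 22.0 - 4.0 * e                # deliver cooler air when warm
    return fan_speed, air_temp

print(control_step(nose_c=34.8, forehead_c=34.6))   # warm occupant -> cooling
```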

Vehicle Make and Model Identification

Prof. Bir Bhanu and his colleagues from the University of California, Riverside have developed a method for analyzing real-time video feeds of vehicles from a rear-view perspective to identify the make and model of a vehicle. The method uses a software system that detects the Regions-of-Interest (ROIs) of moving vehicles and moving shadows, computes structural and other features, and matches against a vehicle make and model database for identification. The system performs calculations based on factors found in all vehicles, so it is reliable regardless of vehicle color and type. The system is compatible with low-resolution video feeds, so it can analyze video in real time. This technology therefore holds potential for fields like vehicle surveillance, vehicle security, class-based vehicle tolling, and traffic monitoring, where reliable real-time video analysis is needed.

Figure 1: Example of the direct rear view of moving vehicles.
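To illustrate just the moving-vehicle ROI step, here is a minimal OpenCV sketch using background subtraction with shadow detection; the file name, thresholds, and area filter are invented, and the feature extraction and database matching of the UCR system are only indicated by a comment.

```python
# A minimal sketch of extracting moving-vehicle ROIs with background
# subtraction; this is the ROI step only, not the full make/model pipeline.
import cv2

cap = cv2.VideoCapture("rear_view.mp4")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # MOG2 marks shadow pixels as 127; keep only confident foreground (255).
    _, fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 2000:            # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            roi = frame[y:y + h, x:x + w]        # candidate vehicle ROI
            # ...structural feature extraction and database matching here...
cap.release()
```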

F5-HD: Fast Flexible FPGA-based Framework for Refreshing Hyperdimensional Computing

Hyperdimensional (HD) computing is a novel computational paradigm that emulates brain functionality in performing cognitive tasks. The underlying computation of HD involves a substantial number of element-wise operations (e.g., additions and multiplications) on ultra-wide hypervectors, at granularities as small as a single bit, which can be effectively parallelized and pipelined. In addition, though different HD applications may vary in their number of input features and output classes (labels), they generally follow the same computation flow. These characteristics of HD computing uniquely match the intrinsic capabilities of FPGAs, making these devices a natural solution for accelerating such applications.
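The bit-level operations in question are easy to see in miniature; the sketch below implements XOR binding, bitwise-majority bundling, and Hamming-distance similarity on binary hypervectors represented as Python integers. Dimensions and data are illustrative, and this shows only the operations F5-HD maps onto FPGA logic, not the framework itself.

```python
# A minimal sketch of bit-level HD operations on binary hypervectors,
# using Python ints as bit vectors; dimensions are illustrative.
import random

D = 1024
random.seed(0)

def rand_hv() -> int:
    return random.getrandbits(D)

def bind(a: int, b: int) -> int:
    return a ^ b                      # XOR binding: one logic op per bit

def bundle(hvs):
    """Bitwise majority vote across hypervectors."""
    out = 0
    for bit in range(D):
        ones = sum((hv >> bit) & 1 for hv in hvs)
        if ones * 2 > len(hvs):
            out |= 1 << bit
    return out

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")      # similarity = D - Hamming distance

x, y, z = rand_hv(), rand_hv(), rand_hv()
cls = bundle([x, y, z])
assert hamming(cls, x) < hamming(cls, rand_hv())   # bundle stays near members
```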

Mutation Organization Software for Adaptive Laboratory Evolution (ALE) Experimentation

Adaptive Laboratory Evolution (ALE) is a tool for the study of microbial adaptation. The typical execution of an ALE experiment involves cultivating a population of microorganisms in defined conditions (i.e., in a laboratory) for a period of time that enables the selection of improved phenotypes. Standard model organisms, such as Escherichia coli, have proven well suited for ALE studies due to their ease of cultivation and storage, fast reproduction, well known genomes, and clear traceability of mutational events. With the advent of accessible whole genome resequencing, associations can be made between selected phenotypes and genotypic mutations.   A review of ALE methods lists 34 separate ALE studies to date. Each study reports on novel combinations of selection conditions and the resulting microbial adaptive strategies. Large scale analysis of ALE results from such consolidation efforts could be a powerful tool for identifying and understanding novel adaptive mutations. 

Vehicle Logo Identification in Real-Time

Brief description not available

Generating Visual Analytics and Player Statistics for Soccer

Prof. Bhanu and his colleagues from the University of California, Riverside have developed a system that automates player talent identification by performing visual analytics and generating statistics at the match, team, and player level for soccer from video, using computer vision and machine learning techniques. The work uses a database of 49,952 images annotated into two classes: players with the ball and players without the ball. The system can identify which players are controlling the ball. Compared to other state-of-the-art approaches, the technology has demonstrated an accuracy of 86.59% in identifying players controlling the ball and an accuracy of 84.73% in generating the match analytics and player statistics.

Figure 1: Visualization of features learned by the system.
Figure 2: Visualization of gray-scale features learned by the system.
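As a schematic of the two-class "player with ball / player without ball" idea, here is a minimal PyTorch sketch of a small binary classifier over player crops; the architecture, input size, and label convention are invented stand-ins for the actual trained network.

```python
# A minimal sketch of a binary "with ball / without ball" classifier.
# The tiny CNN and 64x64 crop size are illustrative assumptions.
import torch
import torch.nn as nn

class BallPossessionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 2)   # with / without ball
        )

    def forward(self, x):             # x: (N, 3, 64, 64) player crops
        return self.head(self.features(x))

model = BallPossessionNet()
crops = torch.randn(8, 3, 64, 64)     # batch of player bounding-box crops
pred = model(crops).argmax(dim=1)     # assumed: 1 = controlling the ball
print(pred.shape)                     # torch.Size([8])
```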

Machine Learning Program that Diagnoses Hypoadrenocorticism in Dogs Using Standard Blood Test Results

Researchers at the University of California, Davis have developed a program based on machine learning algorithms to aid in diagnosing hypoadrenocorticism.

Design Of Task-Specific Optical Systems Using Broadband Diffractive Neural Networks

UCLA researchers in the Department of Electrical and Computer Engineering have developed an all-optical, 3D-printed diffractive neural network for deep learning applications.

IgEvolution: A Novel Tool for Clonal Analysis of Antibody Repertoires

Constructing antibody repertoires is an important error-correcting step in analyzing immunosequencing datasets and a prerequisite for reconstructing the evolutionary (clonal) development of antibodies. However, state-of-the-art repertoire construction tools typically miss low-abundance antibodies, which often represent internal nodes in clonal trees and are crucially important for clonal tree reconstruction. The existing repertoire reconstruction algorithms are therefore not well suited to follow-up clonal tree reconstruction.

Predictive Controller that Optimizes Energy and Water Used to Cool Livestock

Researchers at the University of California, Davis have developed a controller that uses environmental data to optimize the operation of livestock cooling equipment.

Deep Learning of Biomimetic Sensorimotor Control for Biomechanical Human Animation

UCLA researchers from the Department of Computer Science have developed a computer simulation model and associated software system for biomimetic human sensorimotor control.

Software for Automated Microfluidic Chip Design

Professor Brisk’s research group at the University of California, Riverside has developed software to design and analyze an entire microfluidic chip. The software uses Microfluidic Design Automation (MDA) to synthesize and physically lay out devices, an approach similar to Electronic Design Automation (EDA) in the semiconductor industry. The software automatically creates a chip architecture that is converted to MHDL, a human-readable microfluidic hardware design language, enabling manual refinement. When the chip designer is satisfied with the architecture, the software physically lays out the different layers of the chip. The output is an AutoCAD DXF (or other vector graphics) file that can be transferred to a foundry for fabrication. Fig. 1 shows a microfluidic device layout designed and laid out by the UCR software.
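For a sense of the final output stage, here is a minimal Python sketch that writes one chip layer to a DXF file with the ezdxf library; the serpentine channel geometry is an invented example, not output of the UCR MDA flow.

```python
# A minimal sketch of emitting a fabrication-ready DXF for one chip layer.
# The channel geometry and layer name are illustrative assumptions.
import ezdxf

doc = ezdxf.new("R2010")
msp = doc.modelspace()
doc.layers.add("FLOW_LAYER")

# A simple serpentine flow channel as closed polylines (units: mm).
channel_width = 0.2
for i in range(3):
    y = i * 1.0
    msp.add_lwpolyline(
        [(0, y), (10, y), (10, y + channel_width), (0, y + channel_width)],
        close=True,
        dxfattribs={"layer": "FLOW_LAYER"},
    )

doc.saveas("chip_flow_layer.dxf")   # hand this file to the foundry
```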

Automatic Identification of Ophthalmic Medication for The Visually Impaired

Researchers at UCI are developing technology that allows visually impaired patients to use their smartphones to take pictures of their eye medication or eye drop bottles. The technology recognizes the medication and audibly confirms it, along with instructions for its use.

Fast Deep Neural Network (DNN) Training/Execution on Hardware Platforms

With the growing range of applications for Deep Neural Networks (DNNs), the demand for higher accuracy has directly impacted the depth of state-of-the-art models. Although deeper networks are shown to achieve higher accuracy, they suffer from drastically longer training times and slow convergence, along with high computational complexity.
