Browse Category: Computer > Software

Real-Time Imaging in Low Light Conditions

Prof. Luat Vuong and colleagues from the University of California, Riverside have developed a method for imaging in low-light, low signal-to-noise conditions. The technology uses a dense neural network to reconstruct an object from intensity-only data, efficiently solving the inverse mapping problem without per-image iterations and without deep learning schemes. The network operates without learned stereotypes and offers low computational complexity, low reconstruction latency, reduced power consumption, and robust resistance to disturbances compared with current imaging technologies.

Fig 1: Theoretical/simulation accuracy for multi-vortex arrays (3, 5, and 7 vortices) using the dense single-layer neural network, compared with a convolutional NN and a single-layer NN using conventional imaging; the SNR is provided for the conventional imaging scheme.
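
The following is a minimal sketch of the single-dense-layer idea described above, not the UCR implementation: the layer is fit once on training pairs, so reconstructing a new image is a single matrix multiply with no per-image iterations. The forward model, sizes, and least-squares fit are illustrative assumptions.

```python
# Minimal sketch (not the UCR implementation): a single dense layer that learns
# the inverse mapping from intensity-only measurements to object pixels.
# Training fits the layer once; inference is one matrix multiply per image,
# so no per-image iterative reconstruction is needed.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_measurements, n_train = 64, 256, 2000

# Hypothetical forward model: object -> intensity-only measurements |Ax|^2.
A = rng.normal(size=(n_measurements, n_pixels)) + 1j * rng.normal(size=(n_measurements, n_pixels))
objects = rng.random((n_train, n_pixels))
intensities = np.abs(objects @ A.T) ** 2            # intensity-only training data

# "Train" the dense layer: least-squares fit of weights W (with a bias column).
X = np.hstack([intensities, np.ones((n_train, 1))])
W, *_ = np.linalg.lstsq(X, objects, rcond=None)

# Inference: a single matrix multiply for a new measurement (no iterations).
new_intensity = np.abs(rng.random(n_pixels) @ A.T) ** 2
reconstruction = np.append(new_intensity, 1.0) @ W
print(reconstruction.shape)                          # (64,)
```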

Search And Recommendation Process For Identifying Useful Boundaries In Virtual Communication Settings

Advances in Augmented and Virtual Reality (AR/VR) headsets and displays have introduced alternative systems for immersive, context-aware communication platforms. However, one key factor that can create a major bottleneck for future AR/VR communication is the limited space surrounding the user in the real world. In Augmented Reality, unlimited spatial data can be imported into the user's current surroundings. Many of these virtual objects carry no spatial limitations of their own and are restricted only by the constraints of the user's real-world surroundings: they can be visualized, augmented, and placed anywhere in the space, as long as they remain within the user's environmental boundaries.

However, this one-way spatial limitation between virtual and real objects does not always apply in communication applications where two or more users, each with their own discrete spatial constraints, interact with one another in a spatial setting. All parties of a teleconference (or other communication method) have unique spatial limitations (room size, furniture arrangement, etc.), and consequently their virtual doubles, or avatars, may not be able to maintain the same spatial relationship and arrangement between the real-world spaces and their corresponding boundaries for all parties. The result is misalignment of head and body gestures, spatial sound errors, and other micro-expression errors caused by the incorrect positioning of each member of the virtual call.

UC researchers have developed a search and recommendation process that identifies mutually accessible boundaries for all parties of a communication setting (AR conference calls, virtual calls, tele-immersion, etc.) and tells each user exactly where to position themselves and where to move surrounding objects so that all parties of the call can hold a similar spatial relationship to one another with minimal effort. This process allows every member of the virtual call to augment the other members into their own space while respecting the spatial limitations of all participants in the virtual/augmented reality call.

The process promotes remote communication at all consumer levels, in both commercial and personal settings. It would also benefit remote workplace procedures, allowing workers and employees to communicate efficiently together without access to large commercial spaces. Preserving micro-gestures and expressions is another feature of this process, maintaining the attributes of social interaction and effective communication.
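
As a rough illustration of the boundary search described above (this is a simplified sketch with assumed data structures, not the UC process itself), the shared usable region can be thought of as the intersection of every participant's free space expressed relative to their own anchor point:

```python
# Simplified sketch: find a boundary accessible in every participant's real-world
# space by intersecting the free-space footprints of all rooms, each expressed
# relative to that user's anchor point. Data structures here are assumptions.
from dataclasses import dataclass

@dataclass
class FreeRect:
    # Free space around the user's anchor, in metres, in each direction.
    left: float
    right: float
    forward: float
    backward: float

def shared_boundary(rooms):
    """Largest rectangle (relative to each anchor) that is free in every room."""
    return FreeRect(
        left=min(r.left for r in rooms),
        right=min(r.right for r in rooms),
        forward=min(r.forward for r in rooms),
        backward=min(r.backward for r in rooms),
    )

rooms = [
    FreeRect(1.2, 0.8, 2.0, 0.5),   # user A's living room
    FreeRect(0.9, 1.5, 1.1, 0.7),   # user B's office
    FreeRect(2.0, 2.0, 2.0, 2.0),   # user C's open space
]
print(shared_boundary(rooms))
# Avatars placed inside this shared rectangle, with each user standing at their
# own anchor, keep a consistent spatial relationship for all participants.
```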

System For Determining Trademark Similarity

Many areas of intellectual property law involve subjective judgments regarding confusion or similarity. For example, in trademark or trade dress lawsuits a key factor considered by the court is the degree of visual similarity between the trademarks or product designs under consideration. Such similarity judgments are nontrivial and may be complicated by cognitive factors such as categorization, memory, and reasoning that vary substantially across individuals. Currently, three forms of evidence are widely accepted: visual comparison by litigants, expert witness testimony, and consumer surveys. All three rely on the subjective reports of human respondents, whether litigants, expert witnesses, or consumer panels. Consequently, all three forms of evidence are open to the criticism that they are subject to overt (e.g., conflict of interest) or covert (e.g., inaccuracy of self-report) biases. To address this situation, researchers at UC Berkeley developed a technology that directly measures the mental state of consumers as they attend to visual images of consumer products, without the need for self-report measures such as questionnaires or interviews. In so doing, this approach reduces the potential for biased reporting.

Mixed-Signal Acceleration Of Deep Neural Networks

Deep Neural Networks (DNNs) are revolutionizing a wide range of services and applications such as language translation, transportation, intelligent search, e-commerce, and medical diagnosis. These benefits are predicated on hardware platforms delivering performance and energy efficiency. With the diminishing returns from general-purpose processors, there has been an explosion of digital accelerators for DNNs. Mixed-signal acceleration is also gaining traction. Although low power, mixed-signal circuitry suffers from a limited range of information encoding, is susceptible to noise, imposes Analog-to-Digital (A/D) and Digital-to-Analog (D/A) conversion overheads, and lacks fine-grained control mechanisms. Realizing the full potential of mixed-signal technology requires a balanced design that brings mathematics, architecture, and circuits together.

Contextual Augmentation Using Scene Graphs

Spatial computing experiences are constrained by the real-world surroundings of the user. In such experiences, augmenting an existing scene with virtual objects requires a contextual approach in which geometrical conflicts are avoided and functional, plausible relationships to other objects in the target environment are maintained. Yet, due to the complexity and diversity of user environments, automatically calculating ideal positions for virtual content that adapt to the context of the scene is a challenging task. UC researchers have developed a framework, SceneGen, which augments scenes with virtual objects using an explicit generative model that learns topological relationships from priors extracted from real-world and/or synthetic 3D datasets. Primarily designed for spatial computing applications, SceneGen extracts features from rooms into a novel spatial representation that encapsulates the positional and orientational relationships of a scene, capturing pairwise topology between objects, object groups, and the room. The AR application iteratively augments objects by sampling positions and orientations across a room to create a probabilistic heat map of where each object can be placed. By placing objects in poses where the spatial relationships are likely, the framework is able to augment scenes realistically.
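
The placement loop can be pictured with a short sketch (illustrative only; the scoring function below is a hypothetical stand-in for SceneGen's learned spatial prior):

```python
# Sample candidate poses across the room, score each with a likelihood of its
# spatial relationships to existing objects, and keep the most probable pose.
import numpy as np

def relationship_likelihood(pos, theta, scene_objects):
    """Hypothetical stand-in for the learned topological prior
    (orientation theta is unused in this toy scorer)."""
    d = min(np.linalg.norm(pos - o) for o in scene_objects)
    return np.exp(-((d - 1.0) ** 2))        # e.g. prefer ~1 m from nearest object

room_w, room_d = 5.0, 4.0
scene_objects = [np.array([1.0, 1.0]), np.array([4.0, 3.0])]

xs = np.linspace(0, room_w, 50)
ys = np.linspace(0, room_d, 40)
orientations = np.linspace(0, 2 * np.pi, 8, endpoint=False)

heatmap = np.zeros((len(ys), len(xs)))      # probabilistic heat map of placements
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        heatmap[i, j] = max(
            relationship_likelihood(np.array([x, y]), th, scene_objects)
            for th in orientations
        )

best = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print("suggested position:", xs[best[1]], ys[best[0]])
```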

Wireless and Programmable Recording and Stimulation of Deep Brain Activity in Freely Moving Humans Immersed in Virtual, Augmented or Real-World Environments

UCLA researchers in the Department of Psychiatry and Biobehavioral Sciences have designed a lightweight, highly mobile deep brain activity measuring platform that elucidates neural mechanisms underlying neuropsychiatric disorders.

Autonomous Comfort Systems Via An Infrared-Fused, Vision-Driven Robotic System

Robotic comfort systems have been developed that use fans to deliver heated or cooled air to building occupants to provide greater levels of personal comfort. However, current robotic systems rely on surveys asking individuals about their comfort state through a web interface or app, and this reliance on user feedback becomes impractical due to survey fatigue. Researchers at the University of California, Berkeley have developed a system that uses a visible-light camera located on the nozzle of a robotic fan to detect human facial features (e.g., eyes, nose, and lips). Images from a co-located thermal camera are then registered onto the visible-light image, and the temperatures of different facial features are captured and used to infer the comfort state of the individual. Accordingly, the fan/heater system blows air of a specific velocity and temperature toward the occupant via closed-loop feedback control. Since the system can track a person in an environment, it addresses issues with prior data collection systems that required occupants to be positioned in a specific location.
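
A hedged sketch of the closed-loop idea follows; the helper functions and the comfort set-point are hypothetical stand-ins for the Berkeley system's vision and thermal pipeline, not its actual code:

```python
import random

NEUTRAL_NOSE_TEMP_C = 33.0   # assumed comfort set-point for nose-tip temperature

def detect_facial_features(rgb_frame):
    """Stand-in for visible-light facial-feature detection."""
    return {"nose": (120, 80), "cheek": (100, 95)}

def registered_thermal_temp(thermal_frame, pixel):
    """Stand-in for reading a registered thermal pixel, in deg C."""
    return 33.0 + random.uniform(-1.5, 1.5)

def control_step(rgb_frame, thermal_frame, fan):
    features = detect_facial_features(rgb_frame)
    nose_temp = registered_thermal_temp(thermal_frame, features["nose"])
    error = nose_temp - NEUTRAL_NOSE_TEMP_C        # > 0: occupant is likely warm
    fan["speed"] = max(0.0, min(1.0, fan["speed"] + 0.2 * error))
    fan["heater_on"] = error < -0.5
    return fan

fan_state = {"speed": 0.3, "heater_on": False}
for _ in range(5):                                 # one iteration per camera frame
    fan_state = control_step(None, None, fan_state)
    print(fan_state)
```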

Vehicle Make and Model Identification

Prof. Bir Bhanu and his colleagues from the University of California, Riverside have developed a method for analyzing real-time video of vehicles from a rear-view perspective to identify the make and model of a vehicle. The method uses a software system that detects the Regions-of-Interest (ROIs) of moving vehicles and moving shadows, computes structural and other features, and matches them against a vehicle make and model database for identification. The system performs calculations based on factors found in all vehicles, so it is reliable regardless of vehicle color and type. It is also compatible with low-resolution video, allowing it to analyze video feeds in real time. This technology therefore holds potential for fields such as vehicle surveillance, vehicle security, class-based vehicle tolling, and traffic monitoring, where reliable real-time video analysis is needed.

Figure 1: Example of the direct rear view of moving vehicles.
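
A simplified pipeline sketch follows (OpenCV is assumed to be available; the features, thresholds, and two-entry database are illustrative, not the UCR system's):

```python
# Background subtraction to find moving-vehicle ROIs, a simple structural
# feature vector, and nearest-neighbour matching against a make/model database.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def vehicle_rois(frame):
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop grey shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

def structural_features(frame, roi):
    x, y, w, h = roi
    patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    patch = cv2.resize(patch, (32, 32))
    return cv2.HuMoments(cv2.moments(patch)).flatten()   # colour-independent shape features

# Hypothetical database: make/model label -> reference feature vector.
database = {"sedan_A": np.zeros(7), "suv_B": np.ones(7)}

def identify(frame):
    labels = []
    for roi in vehicle_rois(frame):
        f = structural_features(frame, roi)
        labels.append(min(database, key=lambda k: np.linalg.norm(database[k] - f)))
    return labels

frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)  # stand-in rear-view frame
print(identify(frame))
```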

F5‐HD: Fast Flexible FPGA‐based Framework for Refreshing Hyperdimensional Computing

Hyperdimensional (HD) computing is a novel computational paradigm that emulates brain functionality in performing cognitive tasks. The underlying computation of HD involves a substantial number of element-wise operations (e.g., additions and multiplications) on ultra-wide hypervectors, at granularities as small as a single bit, which can be effectively parallelized and pipelined. In addition, although different HD applications may vary in the number of input features and output classes (labels), they generally follow the same computation flow. These characteristics of HD computing match the intrinsic capabilities of FPGAs exceptionally well, making these devices a unique solution for accelerating such applications.
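
A minimal software sketch of the HD classification flow that such an accelerator targets (binding, bundling, and similarity search are all element-wise and highly parallel); this is an illustration, not the F5-HD FPGA implementation, and the encoding choices are assumptions:

```python
import numpy as np

D = 10_000                                   # hypervector dimensionality
rng = np.random.default_rng(0)

def random_bipolar_hv():
    return 2 * rng.integers(0, 2, D, dtype=np.int8) - 1   # entries in {-1, +1}

level_hvs = {v: random_bipolar_hv() for v in range(16)}   # quantized feature values
id_hvs = [random_bipolar_hv() for _ in range(8)]          # one per input feature

def encode(features):
    """Bind each feature's level HV with its position HV, then bundle (sum + sign)."""
    bound = [level_hvs[v] * id_hvs[i] for i, v in enumerate(features)]
    return np.sign(np.sum(bound, axis=0))

# Train: bundle encoded samples into one prototype hypervector per class.
classes = {c: np.zeros(D) for c in ("A", "B")}
train = [([1, 3, 5, 7, 9, 11, 13, 15], "A"), ([0, 2, 4, 6, 8, 10, 12, 14], "B")]
for feats, label in train:
    classes[label] += encode(feats)

def classify(features):
    q = encode(features)
    return max(classes, key=lambda c: np.dot(classes[c], q))  # similarity search

print(classify([1, 3, 5, 7, 9, 11, 13, 15]))   # expected: A
```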

Multi-Omics CoAnalysis (MOCA) Software

Researchers at the University of California, Riverside have developed a software program named Multi-Omics CoAnalysis (MOCA), which is an integrative, interactive, and informative (i3) workbench. Using MOCA, researchers can statistically analyze and interactively visualize experimental data and generate the corresponding correlative omics data. Data can be presented in various formats including box plots, line plots, heat maps, volcano plots, principal component analysis, coefficient distribution plots, and network plots with an adjacency matrix. The graphical user interface (GUI) of MOCA delivers intuitive and interactive data visualizations and enables access to many types of metadata and experimental data in a user-friendly manner.

Fig 1: MOCA-generated image of a metabolic network in the MEP pathway
Fig 2: MOCA-generated pattern plot produced using machine learning
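
As an illustration of the kind of correlative co-analysis described above (this is not MOCA's API or GUI; the data below are synthetic and pandas/matplotlib are assumed to be available):

```python
# Correlate a metabolomics table with a transcriptomics table across shared
# samples and display the result as a heat map.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
samples = [f"s{i}" for i in range(12)]
metabolites = pd.DataFrame(rng.normal(size=(12, 5)), index=samples,
                           columns=[f"met_{j}" for j in range(5)])
transcripts = pd.DataFrame(rng.normal(size=(12, 8)), index=samples,
                           columns=[f"gene_{j}" for j in range(8)])

# Pairwise Pearson correlation between every metabolite and every transcript.
corr = pd.DataFrame(
    np.corrcoef(metabolites.T, transcripts.T)[:5, 5:],
    index=metabolites.columns, columns=transcripts.columns,
)

plt.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
plt.xticks(range(corr.shape[1]), corr.columns, rotation=90)
plt.yticks(range(corr.shape[0]), corr.index)
plt.colorbar(label="Pearson r")
plt.tight_layout()
plt.show()
```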

Mutation Organization Software for Adaptive Laboratory Evolution (ALE) Experimentation

Adaptive Laboratory Evolution (ALE) is a tool for the study of microbial adaptation. The typical execution of an ALE experiment involves cultivating a population of microorganisms under defined conditions (i.e., in a laboratory) for a period of time that enables the selection of improved phenotypes. Standard model organisms, such as Escherichia coli, have proven well suited for ALE studies due to their ease of cultivation and storage, fast reproduction, well-known genomes, and clear traceability of mutational events. With the advent of accessible whole-genome resequencing, associations can be made between selected phenotypes and genotypic mutations. A review of ALE methods lists 34 separate ALE studies to date, each reporting on novel combinations of selection conditions and the resulting microbial adaptive strategies. Large-scale analysis of ALE results from such consolidation efforts could be a powerful tool for identifying and understanding novel adaptive mutations.

Vehicle Logo Identification in Real-Time

Brief description not available

Generating Visual Analytics and Player Statistics for Soccer

Prof. Bhanu and his colleagues from the University of California, Riverside have developed a system that automates player talent identification by performing visual analytics and generating statistics at the match, team, and player levels for soccer from video, using computer vision and machine learning techniques. The work uses a database of 49,952 images annotated into two classes: players with the ball and players without the ball. The system can identify which players are controlling the ball. Compared to other state-of-the-art approaches, this technology has demonstrated an accuracy of 86.59% in identifying players controlling the ball and an accuracy of 84.73% in generating the match analytics and player statistics.

Figure 1: Visualization of features learned by the system
Figure 2: Visualization of gray-scale features learned by the system
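
The two-class formulation can be sketched with a small PyTorch model operating on player crops (illustrative only; the architecture, crop size, and label convention are assumptions, not the UCR model that achieved the accuracies above):

```python
import torch
import torch.nn as nn

class BallPossessionNet(nn.Module):
    """Binary classifier: player with the ball vs. player without the ball."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):                 # x: (N, 3, 64, 64) player crops
        return self.classifier(self.features(x).flatten(1))

model = BallPossessionNet()
crops = torch.randn(8, 3, 64, 64)         # a batch of player crops from the video
logits = model(crops)
print(logits.argmax(dim=1))               # assumed labels: 0 = without ball, 1 = with ball
```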

Machine Learning Program that Diagnoses Hypoadrenocorticism in Dogs Using Standard Blood Test Results

Researchers at the University of California, Davis have developed a program based on machine learning algorithms to aid in diagnosing hypoadrenocorticism.

BioScript: A Programming Language for Microfluidic Devices

Prof. Philip Brisk and his colleagues from the University of California, Riverside have developed a new programming language and tool for designing microfluidic (MF) devices. The new language, BioScript, offers a user-friendly syntax that reads like a cookbook recipe, optimizing human readability. BioScript's type system ensures that each fluid is never consumed more than once and that unsafe combinations of chemicals are never mixed on the chip. This result establishes the feasibility of high-level programming language and compiler design for programmable chemistry, and opens up future avenues for research in microfluidic systems.

Fig 2: A Laboratory-on-a-Chip (LoC) system
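
The safety discipline mentioned above can be illustrated with a short sketch; this is not BioScript's actual syntax or compiler, only a Python analogue of the single-use (linear) fluid rule and the unsafe-mixture check, with an assumed example policy:

```python
UNSAFE_PAIRS = {frozenset({"bleach", "ammonia"})}       # assumed safety policy

class Fluid:
    def __init__(self, name):
        self.name, self.consumed = name, False

def mix(a, b):
    for f in (a, b):
        if f.consumed:
            raise RuntimeError(f"fluid '{f.name}' was already consumed")
    if frozenset({a.name, b.name}) in UNSAFE_PAIRS:
        raise RuntimeError(f"unsafe combination: {a.name} + {b.name}")
    a.consumed = b.consumed = True                       # each fluid used at most once
    return Fluid(f"mix({a.name},{b.name})")

sample = Fluid("sample")
reagent = Fluid("reagent")
result = mix(sample, reagent)            # ok
try:
    mix(sample, Fluid("buffer"))         # rejected: 'sample' already consumed
except RuntimeError as e:
    print(e)
```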

Design Of Task-Specific Optical Systems Using Broadband Diffractive Neural Networks

UCLA researchers in the Department of Electrical and Computer Engineering have developed a diffractive neural network that can process an all-optical, 3D printed neural network for deep learning applications.

Computational Image Analysis of Guided Acoustic Waves Enables Rheological Assessment of Sub-Nanoliter Volumes

UCLA researchers in the Department of Electrical and Computer Engineering have developed an image analysis platform to measure the viscosity of nanoliter volume liquids.

IgEvolution: A Novel Tool for Clonal Analysis of Antibody Repertoires

Constructing antibody repertoires is an important error-correcting step in analyzing immunosequencing datasets and a prerequisite for reconstructing the evolutionary (clonal) development of antibodies. However, state-of-the-art repertoire construction tools typically miss low-abundance antibodies, which often represent internal nodes in clonal trees and are crucially important for clonal tree reconstruction. As a result, existing repertoire construction algorithms are not well suited to the follow-up task of clonal tree reconstruction.

Predictive Controller that Optimizes Energy and Water Used to Cool Livestock

Researchers at the University of California, Davis have developed a controller that applies environmental data to optimizing operations of livestock cooling equipment.

Low Complexity Maximum-Likelihood Decoding of Cyclic Codes

UCLA researchers in the Department of Electrical and Computer Engineering have developed a low complexity decoding algorithm of cyclic codes with better performance and lower latency than current approaches.

Method of Reducing Placebo/Nocebo Effects Associated with the Tapering of Medication and Storing Drug Tablet Fragments

UCLA researchers in the Department of Medicine have developed drug tapering schedule software to reduce factors that may impede patients’ discontinuation of a drug.

Deep Learning of Biomimetic Sensorimotor Control for Biomechanical Human Animation

UCLA researchers from the Department of Computer Science have developed a computer simulation model and associated software system for biomimetic human sensorimotor control.

Software for Automated Microfluidic Chip Design

Professor Brisk's research group at the University of California, Riverside, has developed software to design and analyze an entire microfluidic chip. The software uses Microfluidic Design Automation (MDA) to synthesize and physically lay out devices, an approach similar to Electronic Design Automation (EDA) in the semiconductor industry. The software automatically creates a chip architecture that is converted to MHDL, a human-readable microfluidic hardware design language, enabling manual refinement. When the chip designer is satisfied with the architecture, the software physically lays out the different layers of the chip. The output is an AutoCAD DXF (or other vector graphics) file that can be transferred to a foundry for fabrication.

Fig. 1 shows a microfluidic device layout designed and laid out by the UCR software.
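
As a small sketch of the final output step (not the UCR MDA tool; the geometry is hypothetical and the third-party ezdxf package is assumed), a flow-layer outline can be written to a DXF file suitable for handing to a foundry:

```python
import ezdxf

doc = ezdxf.new()
msp = doc.modelspace()

# Toy serpentine-channel outline on the flow layer, in millimetres
# (first point repeated to close the polyline).
channel_outline = [
    (0, 0), (10, 0), (10, 2), (2, 2), (2, 4),
    (10, 4), (10, 6), (0, 6), (0, 0),
]
msp.add_lwpolyline(channel_outline)

doc.saveas("flow_layer.dxf")     # vector-graphics output for fabrication
print("wrote flow_layer.dxf")
```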
