Browse Category: Computer > Software

Livesynthesis: Towards An Interactive Synthesis Flow

In digital circuit design, synthesis is a tedious and time-consuming task. Designers wait several hours for relatively small design changes to yield synthesis results.

DESIGN WORKFLOW IMPROVEMENTS USING STRUCTURAL MATCHING FOR FAST RE-SYNTHESIS OF ELECTRONIC CIRCUITS

Electronic circuits are growing in complexity every year. However, existing workflows that optimize the design and placement of circuit components are laborious and time-consuming. Incremental design changes that target device optimization can take many hours to render. Streamlined design workflows that are both fast and able to optimize performance are needed to keep pace with these device improvements. A UC Santa Cruz researcher has developed a new technique, SMatch, to shorten design workflow times with minimal quality-of-results (QoR) impact.

Corf: Coalescing Operand Register File For Graphical Processing Units

Modern Graphical Processing Units (GPUs) consist of several Streaming Multiprocessors (SMs), each with its own Register File (RF) and a number of integer, floating-point, and specialized computational cores. A GPU program is decomposed into one or more cooperative thread arrays that are scheduled to the SMs. GPUs invest in large RFs to enable fine-grained and fast switching between executing groups of threads. This makes RFs the most power-hungry components of the GPU, and the RF organization substantially affects the GPU's overall performance and energy efficiency.

Anticipatory Lane Change Warning Using Dsrc

Brief description not available

Machine Learning-Based Monte Carlo Denoising

Brief description not available

Blockchain Protocols for Advancements in Throughput, Fault-Tolerance, and Scalability

Researchers at the University of California, Davis have developed several blockchain paradigms that provide new approaches and expand on existing protocols to improve performance in large-scale blockchain implementations.

Adapting Existing Computer Networks to a Quantum-Based Internet Future

Researchers at the University of California, Davis have developed an approach for integrating quantum computers into the existing internet backbone.

Low-Cost, Multi-Wavelength, Camera System that Incorporates Artificial Intelligence for Precision Positioning

Researchers at the University of California, Davis have developed a system consisting of cameras and multi-wavelength lasers that is capable of precisely locating and inspecting items.

Programmable System that Mixes Large Numbers of Small Volume, High-Viscosity, Fluid Samples Simultaneously

Researchers at the University of California, Davis have developed a programmable machine that shakes and repeatedly inverts large numbers of small containers, such as vials and flasks, in order to mix high-viscosity fluids.

In-Sensor Hardware-Software Co-Design Methodology of the Hall Effect Sensors to Prevent and Contain the EMI Spoofing Attacks

Researchers at UCI have developed a novel hardware-software co-design methodology for protecting the Hall sensors found in autonomous vehicles, smart grids, industrial plants, and similar systems against spoofing attacks. There are currently no comprehensive measures in place to protect Hall sensors.

Dynamic Target Ranging With Multi-Tone Continuous Wave Lidar Using Phase Algorithm

Researchers at the University of California, Irvine have developed a novel algorithm designed to be integrated with current multi-tone continuous wave (MTCW) lidar technology in order to enhance lidar's ability to acquire the range (distance) of fast-moving targets while simultaneously performing velocimetry measurements.

Integrated Virtual Reality and Audiovisual Display Support System for Patients in a Prone Position

Researchers at the University of California, Davis have developed an integrated virtual reality and audiovisual support system that increases the comfort of patients who are undergoing diagnostic tests or medical procedures in the prone and other positions.

Smart Suction Cup for Adaptive Gripping and Haptic Exploration

Vacuum grippers are widely used in industry to handle objects via suction pressure. Unicontact suction cups are commonly used for gripping because they are simple to operate and can handle a variety of items, including those that are delicate, large, or inaccessible to jaw grippers. However, suction cup grippers face challenges such as planning a contact location and inertial force-induced grasping failure.

To address these challenges, UC Berkeley researchers developed a tactile sensing technology for smart suction cups. This Berkeley sensing technology can detect suction contact and prevent suction cup grasp failures. Before a failure happens, it can sense object properties, such as roughness or porosity, that might lead to grasping failures. If a grasp failure does happen, the technology gains additional information about why and how the failure occurred, helping prevent similar failures in future attempts. Sensing occurs quickly, so robot behavior can remain fast while gaining performance, efficiency, and reliability. Compared with other robotic grasping sensing technologies, this smart suction cup technology is affordable, resilient, and easy to service. The cup is manufactured using the same process as other suction cups, and the electronics are simple, located away from the point of contact, and protected from damage or hazardous exposure.

Risk Assessment Tool for Bovine Respiratory Disease in Dairy Calves

Researchers at the University of California, Davis have developed a system to assess, estimate and devise a comprehensive control and prevention plan for bovine respiratory disease in pre-weaned dairy calves.

(SD2020-340) Algorithm-Hardware Co-Optimization For Efficient High-Dimensional Computing

With the emergence of the Internet of Things (IoT), many applications run machine learning algorithms to perform cognitive tasks. These learning algorithms have proven effective for many tasks, e.g., object tracking, speech recognition, and image classification. However, since sensory and embedded devices generate massive data streams, this poses huge technical challenges due to limited device resources. For example, although Deep Neural Networks (DNNs) such as AlexNet and GoogleNet have provided high classification accuracy for complex image classification tasks, their high computational complexity and memory requirements hinder usability for a broad variety of real-life (embedded) applications where device resources and power budgets are limited. Furthermore, in IoT systems, sending all the data to a powerful computing environment, e.g., the cloud, cannot guarantee scalability and real-time response, and is often undesirable due to privacy and security concerns. Thus, alternative computing methods are needed that can process the large amount of data at least partly on the less-powerful IoT devices. Brain-inspired Hyperdimensional (HD) computing has been proposed as such an alternative, processing cognitive tasks in a more lightweight way. HD computing is based on the observation that brains compute with patterns of neural activity that are not readily associated with numbers. Recent research has instead used high-dimensional vectors (e.g., more than a thousand dimensions), called hypervectors, to represent neural activities, and has shown successful progress on many cognitive tasks such as activity recognition, object recognition, language recognition, and bio-signal classification.
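The core idea behind the hypervector representation described above can be illustrated with a minimal sketch. The code below is not the UC San Diego method; it is a generic, simplified HD-computing example (all names and the toy data are illustrative): random bipolar hypervectors serve as quasi-orthogonal symbols, class prototypes are built by element-wise majority (bundling), and a query is classified by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative; HD systems typically use thousands)

def random_hv():
    # A random bipolar (+1/-1) hypervector; two such vectors are nearly orthogonal.
    return rng.choice([-1, 1], size=D)

def bundle(hvs):
    # Bundling: element-wise majority vote superimposes samples into a class prototype.
    return np.sign(np.sum(hvs, axis=0))

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def noisy(hv, flips=1000):
    # Simulate a noisy observation by flipping a random 10% of components.
    out = hv.copy()
    idx = rng.choice(D, size=flips, replace=False)
    out[idx] *= -1
    return out

# Toy example: two classes, each a prototype bundled from five noisy samples.
base_a, base_b = random_hv(), random_hv()
proto_a = bundle([noisy(base_a) for _ in range(5)])
proto_b = bundle([noisy(base_b) for _ in range(5)])

# A noisy query drawn from class A remains much closer to prototype A.
query = noisy(base_a)
print(cosine(query, proto_a), cosine(query, proto_b))
```

Because similarity degrades gracefully with component flips, classification stays robust under noise, which is one reason HD computing suits resource-limited devices.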

(SD2019-340) Collaborative High-Dimensional Computing

Internet of Things (IoT) applications often analyze collected data using machine learning algorithms. As the amount of data keeps increasing, many applications send the data to powerful systems, e.g., data centers, to run the learning algorithms. On the one hand, sending the original data is undesirable due to privacy and security concerns. On the other hand, many machine learning models require unencrypted (plaintext) data, e.g., original images, to train models and perform inference. When offloading these computation tasks, sensitive information may be exposed to an untrustworthy cloud system that is susceptible to internal and external attacks. In many IoT systems, the learning procedure should be performed on data held by a large number of user devices at the edge of the Internet. These users may be unwilling to share the original data with the cloud and other users if security concerns cannot be addressed.

Deep Learning Techniques For In Vivo Elasticity Imaging

Imaging the material property distribution of solids has a broad range of applications in materials science, biomechanical engineering, and clinical diagnosis. For example, as various diseases progress, the elasticity of human cells, tissues, and organs can change significantly. If these changes in elasticity can be measured accurately over time, early detection and diagnosis of different disease states can be achieved. Elasticity imaging is an emerging method to qualitatively image the elasticity distribution of an inhomogeneous body. A long-standing goal of this imaging is to provide alternative methods of clinical palpation (e.g. manual breast examination) for reliable tumor diagnosis. The displacement distribution of a body under externally applied forces (or displacements) can be acquired by a variety of imaging techniques such as ultrasound, magnetic resonance, and digital image correlation. A strain distribution, determined by the gradient of a displacement distribution, can be computed (or approximated) from measured displacements. If the strain and stress distributions of a body are both known, the elasticity distribution can be computed using the constitutive elasticity equations. However, there is currently no technique that can measure the stress distribution of a body in vivo. Therefore, in elastography, the stress distribution of a body is commonly assumed to be uniform and a measured strain distribution can be interpreted as a relative elasticity distribution. This approach has the advantage of being easy to implement. The uniform stress assumption in this approach, however, is inaccurate for an inhomogeneous body. The stress field of a body can be distorted significantly near a hole, inclusion, or wherever the elasticity varies. 
Though strain-based elastography has been deployed on many commercial ultrasound diagnostic-imaging devices, the elasticity distribution predicted by this method is prone to inaccuracies. To address these inaccuracies, researchers at UC Berkeley have developed a de novo imaging method to learn the elasticity of solids from measured strains. The approach uses deep neural networks supervised by the theory of elasticity and does not require labeled data for the training process. Results show that the Berkeley method can learn the hidden elasticity of solids accurately and is robust to noisy and missing measurements.
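The first step described above, computing a strain distribution as the gradient of a measured displacement distribution, can be sketched numerically. The example below is not the Berkeley method; it is a generic small-strain computation on a synthetic 2D displacement field (the field and grid are assumptions for illustration), using the standard relation eps_ij = (du_i/dx_j + du_j/dx_i) / 2.

```python
import numpy as np

# Hypothetical displacement field on a regular 2D grid (synthetic, for illustration):
# a uniform 1% stretch in x and a 0.3% contraction in y.
n = 50
h = 1.0 / (n - 1)  # grid spacing
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
ux = 0.01 * x
uy = -0.003 * y

# Small-strain tensor from the displacement gradient: eps_ij = (du_i/dx_j + du_j/dx_i) / 2
dux_dx, dux_dy = np.gradient(ux, h)
duy_dx, duy_dy = np.gradient(uy, h)
eps_xx = dux_dx
eps_yy = duy_dy
eps_xy = 0.5 * (dux_dy + duy_dx)

print(eps_xx.mean(), eps_yy.mean(), eps_xy.mean())
```

Under the uniform-stress assumption the text criticizes, a measured strain map like this would be read directly as a relative (inverse) stiffness map; the Berkeley work instead recovers elasticity with physics-supervised neural networks, avoiding that assumption.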
