Using AI to Find Evidence-Based Actions to Achieve Modelable Goals
Researchers at the University of California, Davis have developed an AI-powered framework that bridges the gap between predictive feature analysis and actionable interventions by extracting evidence-based recommendations from scientific literature.
Gamified Speech Therapy System and Methods
Historically, speech therapy apps have relied primarily on cloud-based speech recognition systems like those behind digital assistants (Cortana, Siri, Google Assistant), which are designed to make a best guess at intended speech rather than to critically evaluate articulation errors. For children with cleft palate specifically, a condition affecting roughly 1 in 700 babies globally, speech therapy is essential follow-up care after reconstructive surgery. Approximately 25% of children with clefts use compensatory articulation errors, and once these patterns become habituated during ages 3-5, they become particularly resistant to change in therapy. Traditional approaches to mobile speech therapy apps have included storybook-style narratives, which proved expensive with low replayability and engagement, as well as fast-paced arcade-style games that failed to maintain user interest. Common speech therapy applications require a facilitator to evaluate speech performance and typically depend on continuous internet connectivity, creating barriers for users in areas with poor network coverage or those concerned about data privacy and roaming costs. The shift toward gamified therapy solutions has shown that game elements can serve as powerful motivators for otherwise tedious activities. Offline, on-device speech recognition systems face inherent accuracy limitations compared to cloud-based solutions, and they require substantial processing power and memory that can impact device performance and battery life, particularly on older mobile devices. Automatic speech recognition (ASR) models struggle significantly with children's speech due to non-fluent pronunciation and variability in speech patterns, with phoneme error rates reaching almost 12% and consonant recognition errors affecting the reliability of speech disorder detection.
The challenge becomes even more pronounced for populations with speech impairments, as conventional ASR systems are optimized for typical adult speech rather than the atypical articulation patterns of cleft palate speech or developmental disabilities. Moreover, maintaining user engagement over extended therapy periods is difficult, and many apps fail to provide sufficient motivation for the daily practice that is essential for speech improvement.
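The phoneme error rate figure quoted above is conventionally computed as the Levenshtein edit distance between the recognized and reference phoneme sequences, normalized by the reference length. A minimal sketch (the phonemes in the example are illustrative, not drawn from the system described):

```python
def phoneme_error_rate(reference, hypothesis):
    """Levenshtein edit distance between two phoneme sequences
    (substitutions + insertions + deletions), normalized by
    the reference length."""
    m, n = len(reference), len(hypothesis)
    # prev[j] = edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution / match
        prev = curr
    return prev[n] / max(m, 1)

# e.g. a child saying "cat" /k ae t/ recognized as /t ae t/: one substitution
per = phoneme_error_rate(["k", "ae", "t"], ["t", "ae", "t"])  # 1/3
```

A corpus-level rate of "almost 12%" corresponds to roughly one phoneme in eight being substituted, inserted, or deleted relative to the reference transcription.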
Methods and Systems for Annotating Floorplans
Traditional approaches to indoor mapping relied heavily on manual floor plan tracing or rule-based computer vision algorithms, which proved fragile when confronted with the wide variety of graphical representations used in architectural drawings. While Computer-Aided Design (CAD) floor plans in formats like DWG or DWF exist for most modern buildings, these detailed technical drawings are typically proprietary and inaccessible to the public. Mappers often work with low-quality images (JPEG or PDF format) of floor plans, necessitating manual digitization processes. RGB-D cameras, which capture both color and depth information, emerged as promising tools for 3D indoor scanning, though they face limitations including restricted range (typically less than 5 meters), sensitivity to lighting conditions, noisy point clouds at object edges, and computational demands for real-time processing. Automatic floor plan vectorization algorithms remain highly sensitive to image quality and graphical symbol variations, often requiring substantial manual editing even with state-of-the-art deep learning approaches.
Synthesis Flow Framework for IC Design
Digital integrated circuit design has evolved significantly over the past several decades, with synthesis becoming increasingly automated and sophisticated. The traditional synthesis flow emerged in the 1980s, when commercial logic synthesis packages from companies like Cadence and Synopsys revolutionized chip design by automatically converting hardware description languages (HDLs) into gate-level netlists. Electronic design automation (EDA) tools evolved from simple netlist extraction to complex optimization processes, progressing through gate-level optimization, register-transfer-level synthesis, and eventually algorithmic synthesis. However, as designs have grown exponentially in complexity, synthesis runtimes have become a major bottleneck: full synthesis can take three or more days for complex designs, preventing rapid iteration and forcing designers to carefully consider when to submit jobs and then wait for delayed feedback. The traditional register-transfer level (RTL) design flow suffers from further critical limitations, including the inability for RTL engineers to identify and resolve top-level timing issues early in the design process, routing congestion problems that cannot be detected until placement is completed, and insufficient feedback on power consumption during early architectural phases. Additionally, even small design changes trigger full re-synthesis of large blocks, wasting computational resources on unchanged portions of the design, while inter-module optimization requirements often degrade quality of results (QoR) when designs are artificially partitioned.
CRISPRware
Clustered regularly interspaced short palindromic repeats (CRISPR) screening is a cornerstone of functional genomics, enabling genome-wide knockout studies to identify genes involved in specific cellular processes or disease pathways. The success of CRISPR screens depends critically on the design of effective guide RNA (gRNA) libraries that maximize on-target activity while minimizing off-target effects. Current CRISPR screening lacks tools that can natively integrate next-generation sequencing (NGS) data for context-specific gRNA design, despite the wealth of genomic and transcriptomic information available from modern sequencing approaches. Traditional gRNA design tools have relied on static libraries with limited genome annotations and outdated scoring methods, lacking the flexibility to incorporate context-specific genomic information. Off-target effects are also a concern, with CRISPR-Cas9 systems tolerating up to three mismatches between single guide RNA (sgRNA) and genomic DNA, potentially leading to unintended mutations that could disrupt essential genes and compromise genomic integrity. Additionally, standard CRISPR library preparation methods can introduce bias through PCR amplification and cloning steps, resulting in non-uniform gRNA representation.
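The up-to-three-mismatch tolerance noted above is the basis of a simple off-target scan: slide the 20-nt spacer across a candidate sequence and flag windows within that Hamming distance that sit immediately 5' of an NGG PAM. A deliberately simplified sketch (forward strand only, no bulges, none of the position-weighted scoring real design tools apply):

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))


def off_target_sites(spacer, genome, max_mismatches=3, pam="GG"):
    """Flag candidate Cas9 off-target sites: windows of len(spacer) within
    `max_mismatches` of the spacer, followed by an NGG PAM.
    Returns a list of (position, mismatch count) pairs."""
    k = len(spacer)
    hits = []
    for i in range(len(genome) - k - 2):
        # NGG PAM immediately 3' of the protospacer (N = any base)
        if genome[i + k + 1:i + k + 3] != pam:
            continue
        mm = hamming(spacer, genome[i:i + k])
        if mm <= max_mismatches:
            hits.append((i, mm))
    return hits
```

Any site returned with a nonzero mismatch count is a potential unintended cleavage target under the tolerance described above.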
StoryAI: An AI-Based Narrative Storytelling App
An innovative AI tool that improves English communication skills among K-12 students through personalized storytelling.
Decoder-Only Transformer Methods for Indoor Localization
WiFi-based indoor positioning has been a widely researched area for the past five years, with systems traditionally relying on signal telemetry data including Received Signal Strength Indicator (RSSI), Channel State Information (CSI), and Fine Timing Measurement (FTM). However, adoption in practice has remained limited due to environmental challenges including signal fading, multipath effects, and interference that significantly impact positioning accuracy. Existing machine learning approaches typically require extensive manual feature engineering, preprocessing steps like filtering and data scaling, and struggle with missing or incomplete telemetry data while lacking flexibility across heterogeneous environments. Furthermore, there is currently no unified model capable of handling variations in telemetry data formats from different WiFi device vendors, use-case requirements, and environmental conditions, forcing practitioners to develop separate models for each specific deployment scenario.
World Model Based Distributed Learning for AI Agents in Autonomous Vehicles
Researchers at the University of California, Davis have developed an approach to enhance autonomous vehicle path prediction through efficient information sharing and distributed learning among AI agents.
A Context-Aware Selective Sensor Fusion Method For Multi-Sensory Computing Systems
HydraFusion is a modular, selective sensor fusion framework designed to enhance performance and efficiency in multi-sensory computing systems across diverse contexts.
A Method For Safely Scheduling Computing Task Offloads For Autonomous Vehicles
EnergyShield is a pioneering framework designed to optimize energy consumption through safe, intelligent offloading of deep neural network computations for autonomous vehicles.
Patient Pressure Injury Prevention Methods and Software
Pressure injuries (commonly called bedsores or pressure ulcers) represent one of the most persistent and costly challenges in healthcare, affecting over 2.5 million US patients and costing almost $27B in 2019. Hospital-acquired pressure injury events occur in about 3% of general patient populations and about 6% of patients in intensive care units (ICUs). Current prevention strategies still rely on the Braden Scale risk assessment tool as the gold standard. Developed in the 1980s, it stratifies patients into risk categories based on factors such as sensory perception, moisture, mobility, and friction. The Braden score directly informs turning frequency under the standard protocol. Unfortunately, medical staff adherence to turning protocols remains low at roughly 50% nationally, creating a gap between prescribed care and actual implementation. Existing sensing technologies for pressure injury assessment have limitations, including discontinuous monitoring that requires manual interpretation and a lack of objective mobility metrics, and they fail to account for the complex interplay between pressure distribution, patient movement patterns, and individual risk factors. The Braden-scoring approach is particularly problematic because it does not account for the presence of existing pressure injuries or patient-specific factors, and it has been shown to have inadequate validity for ICU patients. Additionally, current pressure mapping systems are typically large, expensive, and require specialized training, limiting their practical deployment in routine clinical care.
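For context, the Braden Scale sums six subscales into a total of 6 to 23, with lower totals indicating higher risk. The sketch below uses commonly published cut-points; the subscale names follow the factors listed above, and the stratification is illustrative, not clinical guidance:

```python
# Illustrative Braden-style stratification. Subscale ranges and risk
# cut-points follow commonly published Braden Scale conventions; this
# is an illustration, not a clinical decision tool.
SUBSCALES = {
    "sensory_perception": (1, 4),
    "moisture":           (1, 4),
    "activity":           (1, 4),
    "mobility":           (1, 4),
    "nutrition":          (1, 4),
    "friction_shear":     (1, 3),
}


def braden_total(scores: dict) -> int:
    """Sum the six subscales after validating each against its range."""
    for name, (lo, hi) in SUBSCALES.items():
        if not lo <= scores[name] <= hi:
            raise ValueError(f"{name} must be in [{lo}, {hi}]")
    return sum(scores[name] for name in SUBSCALES)


def risk_category(total: int) -> str:
    """Map a total score (6-23, lower = higher risk) to a risk band."""
    if total >= 19:
        return "not at risk"
    if total >= 15:
        return "mild risk"
    if total >= 13:
        return "moderate risk"
    if total >= 10:
        return "high risk"
    return "very high risk"
```

As the section notes, this static stratification cannot reflect existing injuries or continuous movement data, which is precisely the gap the sensing approach targets.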
Large Language Models For Verifiable Programming Of Plcs In Industrial Control Systems
A user-guided iterative pipeline that significantly improves the reliability and quality of code generated by Large Language Models (LLMs) for industrial control systems (ICS).
A Design Automation Methodology Based On Graph Neural Networks To Model Integrated Circuits And Mitigate Hardware Security Threats
An innovative design automation methodology leveraging graph neural networks to enhance integrated circuit security by mitigating hardware threats and protecting intellectual property.
Methods For Spatio-Temporal Scene-Graph Embedding For Autonomous Vehicle Applications
A revolutionary approach to enhancing the safety and efficiency of autonomous vehicles through advanced scene-graph embeddings.
Deep Learning System To Improve Diagnostic Accuracy For Real-Time Quantitative Polymerase Chain Reaction Data
The rapid and accurate analysis of real-time quantitative polymerase chain reaction (qPCR) data is critical for precise disease diagnostics, genetic research, and pathogen detection. However, manual interpretation is prone to human error, and current automated systems often struggle with noise and variability, leading to misdiagnosis or inaccurate results. Researchers at UC Berkeley have developed a Deep Learning System for Enhanced qPCR Data Analysis that addresses these challenges. The system utilizes an advanced deep learning model to analyze raw qPCR data in real-time, significantly improving diagnostic accuracy by identifying subtle patterns and anomalies that are difficult for human experts or conventional software to detect. This innovative approach leads to more reliable and faster results compared to traditional methods.
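As a point of reference for the conventional analysis the deep learning system is contrasted with, the quantification cycle (Ct) is classically estimated by finding where the amplification curve crosses a fluorescence threshold, interpolating between the bracketing cycles. A minimal sketch (cycles indexed from zero; real pipelines also baseline-correct and smooth the curve, which this omits):

```python
def quantification_cycle(fluorescence, threshold):
    """Estimate Ct from a qPCR amplification curve: the (fractional)
    cycle at which fluorescence first crosses the threshold, linearly
    interpolated between the bracketing readings. Cycles are the
    0-based positions in the list. Returns None if never crossed."""
    for cycle in range(1, len(fluorescence)):
        lo, hi = fluorescence[cycle - 1], fluorescence[cycle]
        if lo < threshold <= hi:
            # fractional cycle via linear interpolation
            return (cycle - 1) + (threshold - lo) / (hi - lo)
    return None

# exponential-phase readings doubling each cycle
ct = quantification_cycle([0, 1, 2, 4, 8, 16], threshold=3)  # 2.5
```

Noise and baseline drift make this simple crossing rule unreliable in practice, which motivates the learned approach described above.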
Platooning System and Methods
Vehicle platooning technology is an evolving segment within the broader movement toward more intelligent transportation, specifically relating to autonomous vehicles. Early concepts date back to the 1970s with projects like the Electronic Route Guidance System developed by the U.S. Federal Highway Administration, which used a destination-oriented approach with roadside units to decode vehicle inputs and provide routing instructions. Subsequent initiatives such as the California Partners for Advanced Transportation Technology program demonstrated vehicles traveling in close formation guided by magnets embedded in roadways. The landscape has since evolved from individual vehicle automation concepts to more sophisticated vehicle-to-vehicle (V2V) communication schemes that enable coordinated movement. More recent industry developments have been driven by advances in 5G technology, V2V communication protocols, and enhanced safety requirements. Current systems face control stability challenges, particularly as platoon size increases, with research showing that stabilizability degrades and can be lost entirely in infinite vehicle formations. Moreover, issues with V2V communication reliability persist, including frequent intermittent connectivity and wireless interference, limiting wider adoption. Additional challenges include the fundamental trade-off between fuel efficiency and safety margins: shorter inter-vehicle distances improve aerodynamic benefits but increase collision risk.
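The spacing trade-off above can be made concrete with a toy follower control law: accelerate in proportion to the spacing error and the relative speed to the vehicle ahead. The gains and scenario are illustrative; real cooperative adaptive cruise control designs must additionally guarantee string stability as the platoon grows:

```python
def follower_accel(gap, desired_gap, v_self, v_leader,
                   kp=0.5, kd=0.8, a_max=3.0):
    """Toy platoon follower: PD-style acceleration from the spacing
    error and relative speed, clamped to a physical limit.
    Gains are illustrative, not tuned for any real vehicle."""
    a = kp * (gap - desired_gap) + kd * (v_leader - v_self)
    return max(-a_max, min(a_max, a))


def simulate(steps=3000, dt=0.05, desired_gap=10.0):
    """Leader cruises at 20 m/s; a single follower starts 100 m back
    at 15 m/s and closes to the desired gap. Returns (gap, speed)."""
    x_lead, v_lead = 100.0, 20.0
    x_f, v_f = 0.0, 15.0
    for _ in range(steps):
        a = follower_accel(x_lead - x_f, desired_gap, v_f, v_lead)
        x_lead += v_lead * dt
        v_f += a * dt
        x_f += v_f * dt
    return x_lead - x_f, v_f
```

With these gains the closed loop is a damped second-order system, so the follower settles at the desired gap and matches the leader's speed; choosing `desired_gap` smaller improves aerodynamics at the cost of collision margin, which is exactly the trade-off described above.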
Communication-Efficient Federated Learning
A groundbreaking algorithm that significantly reduces communication time and message size in distributed machine learning, ensuring fast and reliable model convergence.
3D Cardiac Strain Analysis
An advanced geometric method for comprehensive 3D cardiac strain analysis, enhancing diagnosis and monitoring of myocardial diseases.
Enhancing Software Reverse Engineering with Graph Neural Networks
CFG2VEC is a novel Hierarchical Graph Neural Network approach designed to significantly improve the analysis of vulnerable binaries in software reverse engineering.
Brain-to-Text Communication Neuroprosthesis
Researchers at the University of California, Davis have developed a Brain-Computer Interface (BCI) technology that enables individuals with paralysis to communicate and control devices through multimodal speech and gesture neural activity decoding.
Software to Diagnose Sensory Issues in Fragile X Syndrome and Autism
Professor Anubhuti Goel and colleagues from the University of California, Riverside have developed a novel diagnostic tool and software program that provides a quick, objective measure of sensory issues for individuals with autism spectrum disorder and Fragile X syndrome. The tool uses a software application to administer a game; based on the individual's score at the end of the game, a diagnosis of sensory issues may be made. This technology is advantageous because it may provide an easily accessible, low-cost, and safe diagnostic tool for Fragile X syndrome and autism that can be developed into a telehealth diagnostic tool.
AI-Powered Early Warning System for Honeybee Colony Health
Brief description not available
Machine Learning Framework for Inferring Latent Mental States from Digital Activity (MILA)
The DALMSI framework is a novel method for inferring a user's latent mental states from their digital activity. Researchers at UC Berkeley developed this technology to address the limitations of traditional, intrusive methods like surveys or physical sensors. The system works by receiving and segmenting a stream of digital interaction data, and then uses neural encoding to transform these segments into representations, which a machine learning model maps to specific internal states like cognitive load or emotional state. This offers a non-intrusive, real-time, and scalable solution for understanding user experience without requiring a user's conscious effort or special hardware.
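The three stages described (segment, encode, classify) can be mirrored in a deliberately simplified sketch, where hand-rolled features and a rule-based classifier stand in for the neural encoder and trained model of the actual system; the event types and state labels are hypothetical:

```python
# Entirely illustrative pipeline mirroring the described stages:
# segment a stream of interaction events, encode each segment as a
# feature vector, and map it to a coarse internal-state label.

def segment(events, window=5):
    """Split a list of (timestamp, event_type) pairs into fixed-size windows."""
    return [events[i:i + window] for i in range(0, len(events), window)]


def encode(seg):
    """Encode a segment as (event rate, fraction of correction events).
    A stand-in for the neural encoding step."""
    if len(seg) < 2:
        return (0.0, 0.0)
    duration = seg[-1][0] - seg[0][0] or 1e-9
    rate = len(seg) / duration
    corrections = sum(1 for _, e in seg if e in ("undo", "delete")) / len(seg)
    return (rate, corrections)


def classify(features):
    """Stand-in for the trained model: frequent corrections are read
    as a proxy for elevated cognitive load."""
    rate, corrections = features
    return "high cognitive load" if corrections > 0.3 else "baseline"


def infer_states(events, window=5):
    """Full pipeline: stream -> segments -> features -> state labels."""
    return [classify(encode(s)) for s in segment(events, window)]
```

The point of the sketch is the data flow, not the features: the described system replaces both the featurization and the threshold rule with learned components operating on richer interaction data.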
Articulatory Feedback For Phonetic Error-Based Pronunciation Training
A verbatim phoneme recognition framework that transcribes what a person actually says, including accents and dysfluencies, to provide precise feedback for pronunciation training.
X-ray-induced Acoustic Computed Tomography (XACT) for In Vivo Dosimetry
This technology leverages X-ray-induced acoustic phenomena for real-time, in-line verification of photon beam location and dose during cancer radiotherapy.