Host-Based Intrusion Detection Systems Powered By Large Language Models
SHIELD leverages a customized large language model pipeline to detect and investigate sophisticated cyber threats with high accuracy and interpretability.
An Architecture For Adaptive Split Computing In Vision-Language Models
An intent-aware, dual-stream AI architecture that adapts compute allocation and inference depth on embedded platforms, balancing rapid triage and detailed analysis for real-time visual understanding.
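As an illustration only, the sketch below shows one way such intent-aware routing could look: a lightweight triage head answers low-urgency frames on device, while frames whose intent score crosses a threshold are sent through a deeper analysis path. The networks, dimensions, and threshold are placeholders, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class DualStreamRouter(nn.Module):
    """Routes each frame either to a lightweight triage head or to a deeper
    analysis stream, based on a scalar intent/urgency score (toy example)."""
    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.intent_head = nn.Linear(feat_dim, 1)                    # triage score
        self.fast_head = nn.Linear(feat_dim, num_classes)            # shallow exit
        self.deep_head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                       nn.Linear(256, num_classes))  # detailed path

    def forward(self, x, threshold=0.5):
        feats = self.backbone(x.flatten(1))
        intent = torch.sigmoid(self.intent_head(feats))
        # Low intent score: answer with the cheap head; high score: spend more compute.
        use_deep = intent.squeeze(-1) > threshold
        out = self.fast_head(feats)
        if use_deep.any():
            out[use_deep] = self.deep_head(feats[use_deep])
        return out, intent

model = DualStreamRouter()
frames = torch.randn(4, 3, 32, 32)        # stand-in camera frames
logits, intent = model(frames)
print(intent.squeeze(-1), logits.shape)
```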
Optimized Sensitivity-Based Current Profiles for Battery Parameter Identification
Researchers at the University of California, Davis have developed a method to design optimized current profiles for lithium-ion batteries using analytic sensitivity functions. By leveraging a reduced electrochemical model, the approach enables fast and accurate identification of key parameters, improving battery management systems and reducing testing time.
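As a rough illustration of the sensitivity-based idea, the sketch below scores candidate current profiles by how much voltage sensitivity they excite with respect to a target parameter, using a first-order RC equivalent-circuit cell as a stand-in for the reduced electrochemical model; all parameter values, profiles, and names are illustrative, not the disclosed method.

```python
import numpy as np

def simulate_voltage(current, dt, R0=0.01, R1=0.02, C1=2000.0, ocv=3.7):
    """Simulate terminal voltage of a first-order RC equivalent-circuit cell."""
    v1 = 0.0
    voltage = np.empty_like(current)
    for k, i_k in enumerate(current):
        # Euler update of the RC branch voltage, then the terminal voltage.
        v1 += dt * (-v1 / (R1 * C1) + i_k / C1)
        voltage[k] = ocv - i_k * R0 - v1
    return voltage

def sensitivity(current, dt, param, delta=1e-4, **params):
    """Finite-difference sensitivity of the voltage trace w.r.t. one parameter."""
    hi = dict(params); hi[param] = params[param] * (1 + delta)
    lo = dict(params); lo[param] = params[param] * (1 - delta)
    dv = simulate_voltage(current, dt, **hi) - simulate_voltage(current, dt, **lo)
    return dv / (2 * delta * params[param])

# Compare candidate excitation profiles by how much information they carry about
# R1, using the sum of squared sensitivities as a scalar Fisher-information proxy.
dt, t = 1.0, np.arange(0, 600)
base = dict(R0=0.01, R1=0.02, C1=2000.0)
candidates = {f"{f*1e3:.1f} mHz sine": 1.0 * np.sin(2 * np.pi * f * t)
              for f in (0.0005, 0.002, 0.01)}
scores = {name: float(np.sum(sensitivity(i, dt, "R1", **base) ** 2))
          for name, i in candidates.items()}
print(max(scores, key=scores.get), scores)
```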
Signal Space Based Navigation
Researchers at the University of California, Davis have developed a navigation system that constructs a sensing map from wireless signal observations and pedestrian dead reckoning (PDR) data to enable accurate indoor navigation without relying on traditional geographic localization maps.
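A minimal sketch of the concept follows, assuming Wi-Fi RSSI fingerprints and a simple step-and-heading PDR model: positions integrated from PDR are paired with the signal vectors observed there, and a query is then localized by nearest neighbor in signal space rather than on a geographic map. The data and function names are illustrative, not the system's actual interface.

```python
import numpy as np

def pdr_positions(step_lengths, headings, start=(0.0, 0.0)):
    """Integrate step-and-heading pedestrian dead reckoning into 2-D positions."""
    pos = np.array(start, dtype=float)
    track = [pos.copy()]
    for L, theta in zip(step_lengths, headings):
        pos = pos + L * np.array([np.cos(theta), np.sin(theta)])
        track.append(pos.copy())
    return np.array(track)

def build_signal_map(track, rssi_observations):
    """Pair each PDR position with the RSSI vector observed there (the sensing map)."""
    return list(zip(track, rssi_observations))

def locate(signal_map, query_rssi):
    """Localize a query by nearest neighbour in signal space, not geographic space."""
    dists = [np.linalg.norm(np.asarray(r) - query_rssi) for _, r in signal_map]
    return signal_map[int(np.argmin(dists))][0]

# Toy walk: 5 steps of 0.7 m heading east, with made-up 3-AP RSSI vectors (dBm).
track = pdr_positions([0.7] * 5, [0.0] * 5)
rssi = [[-40, -70, -80], [-45, -65, -78], [-50, -60, -75],
        [-55, -58, -70], [-60, -55, -66], [-65, -50, -60]]
smap = build_signal_map(track, rssi)
print(locate(smap, np.array([-52, -59, -73])))  # position whose fingerprint is closest
```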
A Quantitative, Multimodal Wearable Bioelectronic For Comprehensive Stress Assessment And Sub-Classification
A multimodal, wireless wearable device enabling continuous and detailed stress assessment and subclassification.
Daytime Adaptive Deep Brain Stimulation for Parkinson's
Brief description not available
Improving Self-Regulation Of Internal Distraction
A Predictive ML Model For Cancer Early Relapse
Automated Optimized Adaptive Neurostimulation
Labelless, Efficient, Optimization Of Neurostimulation
Biophysically-Informed Deep Learning Model for Predicting Individualized Alzheimer’s Disease Progression
Brain Activity Imbalance Biomarker For Dementia
AI-driven Infrastructure for Student Audio Response Collection, Transcription, and Analysis
AI infrastructure that collects, transcribes, and analyzes student audio responses to deliver actionable insights on learning experiences.
Automated Diagnosis Code Selection Based On Clinical Notes
PEINT (Protein Evolution IN Time)
UC Berkeley researchers have developed a sophisticated computer-implemented framework that leverages transformer architectures to model the evolution of biological sequences over time. Unlike traditional phylogenetic models that often assume sites evolve independently, this framework utilizes a coupled encoder-decoder transformer to parameterize the conditional probability of a target sequence given multiple unaligned sequences. By capturing complex interactions and dependencies across different sites within a protein or genomic sequence, the model estimates the transition likelihood for each position. This estimation allows for a high-fidelity simulation of evolutionary trajectories. This approach enables a deeper understanding of how proteins change across different timescales and environmental pressures.
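A minimal PyTorch sketch of the general architecture is shown below: a coupled encoder-decoder transformer whose encoder reads unaligned source sequences and whose decoder scores a target sequence position by position. Tokenization, dimensions, and the untrained weights are placeholders, not the PEINT implementation.

```python
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"                  # 20 amino acids
tok = {a: i + 1 for i, a in enumerate(AA)}   # 0 reserved for padding / start token

def encode(seqs):
    """Map one or more protein strings to a single padded token tensor (batch=1)."""
    ids = [tok[a] for s in seqs for a in s]
    return torch.tensor(ids).unsqueeze(0)    # (1, total_len)

class SeqEvolutionModel(nn.Module):
    """Encoder reads the (unaligned) source sequences; decoder scores the target."""
    def __init__(self, vocab=21, d_model=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, d_model)
        self.transformer = nn.Transformer(d_model=d_model, nhead=4,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, src_ids, tgt_in):
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_in.size(1))
        h = self.transformer(self.emb(src_ids), self.emb(tgt_in), tgt_mask=tgt_mask)
        return self.head(h)                  # per-position logits over residues

model = SeqEvolutionModel()
src = encode(["MKTAYIAK", "MKTHYIAR"])        # related / ancestral sequences
tgt = encode(["MKTAYIAR"])                    # putative descendant
bos = torch.zeros(1, 1, dtype=torch.long)     # toy start token
tgt_in = torch.cat([bos, tgt[:, :-1]], dim=1) # shift right for teacher forcing
logits = model(src, tgt_in)
# Transition log-likelihood of the target given the sources (toy, untrained weights).
log_prob = torch.log_softmax(logits, -1).gather(-1, tgt.unsqueeze(-1)).sum()
print(float(log_prob))
```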
Non-Invasive Tool That Assesses Bruise Injuries Across All Skin Types
An innovative non-invasive device that accurately determines the age of bruises for all skin types and tones, designed to assist in forensic investigations and medical diagnostics.
Method for Unlearning Content for Large Language Models
Researchers at the University of California, Davis have developed an unlearning method that precisely removes specific data influences from trained large language models while preserving their overall knowledge and performance.
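For concreteness, the sketch below implements one common unlearning baseline, gradient ascent on a forget set regularized by a retain set, applied to a toy language model; it illustrates the forget-versus-retain objective only and is not necessarily the mechanism of the UC Davis method.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Stand-in causal language model; any next-token LM exposing logits works."""
    def __init__(self, vocab=100, d=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, vocab)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return self.out(h)

def nll(model, ids):
    """Next-token negative log-likelihood of a batch of token sequences."""
    logits = model(ids[:, :-1])
    return nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                       ids[:, 1:].reshape(-1))

def unlearn_step(model, opt, forget_ids, retain_ids, alpha=1.0):
    """One update: raise loss on the forget set while holding loss on the retain set.

    Gradient-ascent-style objective: minimize  -NLL(forget) + alpha * NLL(retain).
    """
    loss = -nll(model, forget_ids) + alpha * nll(model, retain_ids)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
forget = torch.randint(0, 100, (4, 16))   # sequences whose influence should be removed
retain = torch.randint(0, 100, (4, 16))   # sequences whose behaviour must be preserved
for _ in range(3):
    print(unlearn_step(model, opt, forget, retain))
```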
Using AI to Find Evidence-Based Actions to Achieve Modelable Goals
Researchers at the University of California, Davis have developed an AI-powered framework that bridges the gap between predictive feature analysis and actionable interventions by extracting evidence-based recommendations from scientific literature.
Gamified Speech Therapy System and Methods
Historically, speech therapy apps have relied primarily on online, cloud-based speech recognition systems like those used in digital assistants (Cortana, Siri, Google Assistant), which are designed to make a best guess at what was said rather than to critically evaluate articulation errors. For children with cleft palate specifically, a condition affecting 1 in 700 babies globally, speech therapy is essential follow-up care after reconstructive surgery. Approximately 25% of children with clefts produce compensatory articulation errors, and when these patterns become habituated during ages 3-5, they become particularly resistant to change in therapy. Traditional approaches to mobile speech therapy apps have included storybook-style narratives, which proved expensive with low replayability and engagement, as well as fast-paced arcade-style games that failed to maintain user interest. Common speech therapy applications require a facilitator to evaluate speech performance and typically depend on continuous internet connectivity, creating barriers for users in areas with poor network coverage or for those concerned about data privacy and roaming costs.

The shift toward gamified therapy solutions showed that game elements can serve as powerful motivators for otherwise tedious activities. On-device speech recognition systems, however, face inherent accuracy limitations compared to cloud-based solutions and require substantial processing power and memory, which can impact device performance and battery life, particularly on older mobile devices. Automatic speech recognition (ASR) models also struggle significantly with children's speech due to non-fluent pronunciation and variability in speech patterns, with phoneme error rates reaching almost 12% and consonant recognition errors undermining the reliability of speech disorder detection. The challenge becomes even more pronounced for populations with speech impairments, as conventional ASR systems are optimized for typical adult speech rather than the atypical articulation patterns of cleft palate speech or developmental disabilities.

Moreover, maintaining user engagement over extended therapy periods is difficult, and many apps fail to provide sufficient motivation for the daily practice that is essential for speech improvement.
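Since the description cites phoneme error rates, the short sketch below shows the standard way such a rate is computed: Levenshtein edit distance over phoneme sequences divided by the reference length. The transcriptions are invented for illustration.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two phoneme sequences (lists of symbols)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1]

def phoneme_error_rate(ref, hyp):
    """PER = (substitutions + insertions + deletions) / reference length."""
    return edit_distance(ref, hyp) / len(ref)

# Target word "sun" /s ah n/; a glottal-stop substitution typical of compensatory
# articulation is recognised as /q ah n/ (symbols are illustrative, ARPAbet-like).
print(phoneme_error_rate(["s", "ah", "n"], ["q", "ah", "n"]))  # 0.333...
```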
Methods and Systems for Annotating Floorplans
Traditional approaches to indoor mapping relied heavily on manual floor plan tracing or rule-based computer vision algorithms, which proved fragile when confronted with the wide variety of graphical representations used in architectural drawings. While Computer-Aided Design (CAD) floor plans in formats like DWG or DWF exist for most modern buildings, these detailed technical drawings are typically proprietary and inaccessible to the public. Mappers often work with low-quality images (JPEG or PDF format) of floor plans, necessitating manual digitization processes. RGB-D cameras, which capture both color and depth information, emerged as promising tools for 3D indoor scanning, though they face limitations including restricted range (typically less than 5 meters), sensitivity to lighting conditions, noisy point clouds at object edges, and computational demands for real-time processing. Automatic floor plan vectorization algorithms remain highly sensitive to image quality and graphical symbol variations, often requiring substantial manual editing even with state-of-the-art deep learning approaches.
Learning Multimodal Sim-To-Real Robot Policies With Generative Audio
The deployment of robotic systems in real-world environments is often limited by the "sim-to-real gap," where policies trained in digital simulations fail to account for the complex, multisensory feedback of physical reality. Researchers at UC Berkeley have developed a novel method for training multimodal sim-to-real robot policies by integrating generative audio models with traditional physics-based simulators. This framework uses a generative model to synthesize realistic audio data that corresponds to simulated physical interactions, creating a rich, multimodal dataset for policy learning. By training on both simulated physics and generated sensory data, the system enables robots to develop more robust and adaptive behaviors that translate seamlessly from virtual training environments to complex real-world tasks.
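A minimal PyTorch sketch of the idea follows: a stand-in generative audio model maps simulated contact events to audio features, which are fused with the proprioceptive simulator state inside the policy. Network shapes, the imitation-style loss, and all data are illustrative, not the Berkeley implementation.

```python
import torch
import torch.nn as nn

class ToyAudioGenerator(nn.Module):
    """Stand-in for a generative audio model: maps a simulated contact/impact
    descriptor (e.g. impulse magnitude, material id) to an audio feature vector."""
    def __init__(self, event_dim=4, audio_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(event_dim, 32), nn.ReLU(),
                                 nn.Linear(32, audio_dim))

    def forward(self, event):
        return self.net(event)

class MultimodalPolicy(nn.Module):
    """Fuses proprioceptive simulator state with (generated) audio features."""
    def __init__(self, state_dim=10, audio_dim=16, action_dim=4):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(state_dim + audio_dim, 64), nn.ReLU(),
                                  nn.Linear(64, action_dim))

    def forward(self, state, audio_feat):
        return self.fuse(torch.cat([state, audio_feat], dim=-1))

audio_gen = ToyAudioGenerator()
policy = MultimodalPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One imitation-learning-style update on a simulated batch: physics state plus
# audio synthesized from the corresponding contact events, regressed to expert actions.
state = torch.randn(32, 10)
events = torch.randn(32, 4)
expert_actions = torch.randn(32, 4)
audio_feat = audio_gen(events).detach()        # generated sensory channel
loss = nn.functional.mse_loss(policy(state, audio_feat), expert_actions)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```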
Synthesis Flow Framework for IC Design
Digital integrated circuit design has evolved significantly over the past several decades, with synthesis becoming increasingly automated and sophisticated. The traditional synthesis flow emerged in the 1980s when commercial logic synthesis packages from companies like Cadence and Synopsys revolutionized chip design by automatically converting hardware description languages (HDL) into gate-level netlists. Electronic design automation (EDA) tools evolved from simple netlist extraction to complex optimization processes, progressing through gate-level optimization, register-transfer-level synthesis, and eventually algorithmic synthesis. However, as designs have grown exponentially in complexity, synthesis times have become a major bottleneck, with full synthesis often taking hours or days for large designs, significantly impacting designer productivity and iteration cycles. Long synthesis runtimes prevent designers from rapid iteration, with typical synthesis taking 3+ days for complex designs, forcing designers to carefully consider when to submit jobs and wait for delayed feedback. The traditional register-transfer level (RTL) design flow suffers from critical limitations including the inability for RTL engineers to identify and resolve top-level timing issues early in the design process, routing congestion problems that cannot be detected until placement is completed, and insufficient feedback on power consumption during early architectural phases. Additionally, even small design changes trigger full re-synthesis of large blocks, wasting computational resources on unchanged portions of the design, while inter-module optimization requirements often degrade quality-of-results (QoR) when designs are artificially partitioned.
CRISPRware
Clustered regularly interspaced short palindromic repeats (CRISPR) screening is a cornerstone of functional genomics, enabling genome-wide knockout studies to identify genes involved in specific cellular processes or disease pathways. The success of CRISPR screens depends critically on the design of effective guide RNA (gRNA) libraries that maximize on-target activity while minimizing off-target effects. Current CRISPR screening lacks tools that can natively integrate next-generation sequencing (NGS) data for context-specific gRNA design, despite the wealth of genomic and transcriptomic information available from modern sequencing approaches. Traditional gRNA design tools have relied on static libraries with limited genome annotations and outdated scoring methods, lacking the flexibility to incorporate context-specific genomic information. Off-target effects are also a concern, with CRISPR-Cas9 systems tolerating up to three mismatches between single guide RNA (sgRNA) and genomic DNA, potentially leading to unintended mutations that could disrupt essential genes and compromise genomic integrity. Additionally, standard CRISPR library preparation methods can introduce bias through PCR amplification and cloning steps, resulting in non-uniform gRNA representation.
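To make the mismatch-tolerance point concrete, the sketch below flags candidate genomic sites that lie within three mismatches of a 20-nt sgRNA spacer; real gRNA scoring in tools like CRISPRware is considerably more sophisticated, and the sequences here are invented.

```python
def mismatches(spacer, site):
    """Count base mismatches between a 20-nt sgRNA spacer and a genomic site."""
    assert len(spacer) == len(site)
    return sum(a != b for a, b in zip(spacer, site))

def flag_off_targets(spacer, candidate_sites, max_mismatches=3):
    """Cas9 tolerates up to ~3 spacer/protospacer mismatches, so any candidate
    within that distance is flagged as a potential off-target cut site."""
    return [(site, mismatches(spacer, site))
            for site in candidate_sites
            if mismatches(spacer, site) <= max_mismatches]

spacer = "GACGTTACCGGATTACCGTA"
candidates = [
    "GACGTTACCGGATTACCGTA",   # perfect on-target match
    "GACGTAACCGGATTACCGTA",   # 1 mismatch: likely still cleaved
    "GACGTAACCGGTTTACGGTA",   # 3 mismatches: within Cas9 tolerance, flagged
    "TTTTTTTTTTTTTTTTTTTT",   # unrelated site, not flagged
]
for site, mm in flag_off_targets(spacer, candidates):
    print(site, mm)
```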
StoryAI: An AI-Based Narrative Storytelling App
An innovative AI tool that improves English communication skills among K-12 students using personalized storytelling.
Method And System For Quantized Machine Learning And Federated Learning
QAFeL is a novel asynchronous federated learning framework that combines buffered aggregation with bidirectional quantized communications, achieving up to 8× lower communication costs while preserving convergence speed and accuracy.
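A minimal sketch of the two ingredients named above, bidirectional quantization and buffered aggregation, is given below using a uniform stochastic quantizer and a fixed buffer size; it illustrates the communication pattern only and is not the QAFeL algorithm itself.

```python
import numpy as np

def quantize(x, bits=4):
    """Uniform stochastic quantization of an update vector to the given bit width."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    scaled = (x - lo) / (hi - lo) * levels
    q = np.floor(scaled) + (np.random.rand(*x.shape) < (scaled - np.floor(scaled)))
    return lo + q / levels * (hi - lo)

class BufferedServer:
    """Aggregates quantized client updates once the buffer holds K of them,
    then broadcasts a quantized copy of the new global model."""
    def __init__(self, model, buffer_size=3, lr=1.0, bits=4):
        self.model, self.K, self.lr, self.bits = model.copy(), buffer_size, lr, bits
        self.buffer = []

    def receive(self, client_update):
        self.buffer.append(quantize(client_update, self.bits))   # uplink quantization
        if len(self.buffer) >= self.K:
            self.model += self.lr * np.mean(self.buffer, axis=0)
            self.buffer.clear()
        return quantize(self.model, self.bits)                   # downlink quantization

# Asynchronous-style arrival: clients push updates whenever they finish local work.
rng = np.random.default_rng(0)
server = BufferedServer(model=np.zeros(8))
for _ in range(7):
    broadcast = server.receive(rng.normal(size=8) * 0.1)
print(broadcast)
```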