Efficient Compressive Learning

Machine learning has been transitioning from traditional supervised learning toward more resource-efficient sketching and federated techniques. Early compressive learning relied on hand-crafted random projections and task-specific iterative solvers. While these methods reduced data volume, they were inflexible: a change in data distribution or task required a complete redesign of the projection. Concurrently, privacy requirements drove the rise of federated learning and differential privacy, but these methods often struggled with high communication costs and an inability to merge model updates effectively across different architectures. Until recently, the state of the art remained bifurcated: one could have either high-accuracy iterative training on raw data, or efficient but brittle, task-specific compressed representations that did not generalize across diverse analytical tasks such as Principal Component Analysis (PCA), regression, and clustering.
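
For intuition, here is a minimal sketch of the classical random-projection baseline whose limitations this entry targets. It is illustrative only (the dimensions, data, and distance check are arbitrary assumptions), not the new technology:

```python
# Classical compressive learning via a fixed Gaussian random projection
# (Johnson-Lindenstrauss style). All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 1000, 512, 32          # samples, ambient dim, sketch dim
X = rng.normal(size=(n, d))      # stand-in dataset

# Hand-crafted projection: fixed once, independent of the task and data.
P = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
X_sketch = X @ P                 # each sample compressed from d to k numbers

# Pairwise distances are approximately preserved, so distance-based tasks
# such as clustering can run on the sketch ...
i, j = 3, 7
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(X_sketch[i] - X_sketch[j]))

# ... but the projection is task-agnostic and brittle: a task that needs
# directions P discards requires redesigning P from scratch, which is the
# limitation described above.
```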

Solar-Powered Robot For Persistent Monitoring Applications

An autonomous, solar-powered robot designed to travel along suspended wires for long-term, non-invasive environmental monitoring in hard-to-reach natural areas.

Accurate Pedestrian Tracking

The Global Navigation Satellite System (GNSS) is the family of satellite navigation systems (GPS, Galileo, GLONASS, BeiDou) that provide global positioning and navigation from orbiting satellites, and it is one of the major inputs for phone location. Accurate pedestrian localization in “urban canyons” has long been limited by GNSS multipath errors and blocked lines of sight, especially for blind and low-vision pedestrians who need sidewalk-level accuracy. In dense downtowns, GNSS positioning errors often reach tens of meters because skyscrapers block satellites, create multipath, and degrade signal quality, making it hard even to tell which side of a street a pedestrian is on. For blind and low-vision users, conventional smartphone navigation (pure GNSS, camera-based visual positioning systems, or beacon infrastructure) does not offer reliable, hands-free, street-side-accurate guidance. Most accuracy-focused approaches to date require detailed 3D models, specialized hardware, and/or substantial map annotations, which limits scalability across urban environments and makes deployment in mainstream apps challenging. Moreover, for blind and low-vision pedestrians, integrating precise localization with usable, low-attention interaction (e.g., no constant camera use, minimal glances at the screen) and robust crossing guidance remains an open problem.

Host-Based Intrusion Detection Systems Powered By Large Language Models

SHIELD leverages a customized large language model pipeline to detect and investigate sophisticated cyber threats with high accuracy and interpretability.

An Architecture For Adaptive Split Computing In Vision-Language Models

An intent-aware, dual-stream AI architecture that adapts compute allocation and inference depth on embedded platforms, balancing rapid triage and detailed analysis for real-time visual understanding.

Optimized Sensitivity-Based Current Profiles for Battery Parameter Identification

Researchers at the University of California, Davis have developed a method to design optimized current profiles for lithium-ion batteries using analytic sensitivity functions. By leveraging a reduced electrochemical model, the approach enables fast and accurate identification of key parameters, improving battery management systems and reducing testing time.
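
The input-design idea can be illustrated on a toy first-order RC battery model. Everything below (the model, nominal parameters, candidate profiles, and the D-optimality score) is an assumption for illustration, not the UC Davis method or its reduced electrochemical model:

```python
# Hypothetical sketch: score candidate current profiles by the Fisher
# information of the terminal voltage w.r.t. the parameters to identify.
import numpy as np

R0, R1, C1 = 0.01, 0.02, 2000.0   # nominal parameters (ohm, ohm, farad)
dt, T = 1.0, 600                  # 1 s steps, 10 min horizon

def sensitivities(I):
    """Integrate the RC state and the analytic sensitivity ODEs for a
    current profile I; return the T x 3 sensitivity matrix of the
    terminal voltage w.r.t. (R0, R1, C1)."""
    v1 = sR1 = sC1 = 0.0
    tau = R1 * C1
    S = np.zeros((len(I), 3))
    for k, i_k in enumerate(I):
        dv1  = -v1 / tau + i_k / C1
        dsR1 = -sR1 / tau + v1 / (R1**2 * C1)
        dsC1 = -sC1 / tau + v1 / (R1 * C1**2) - i_k / C1**2
        v1, sR1, sC1 = v1 + dt * dv1, sR1 + dt * dsR1, sC1 + dt * dsC1
        # V = OCV - I*R0 - v1, so dV/dtheta = -(I, sR1, sC1)
        S[k] = [-i_k, -sR1, -sC1]
    return S

def d_optimality(I):
    """D-optimal score: log-det of the Fisher information S^T S."""
    return np.linalg.slogdet(sensitivities(I).T @ sensitivities(I))[1]

t = np.arange(T) * dt
constant = np.full(T, 2.0)                                # 2 A discharge
pulsed = 2.0 * np.sign(np.sin(2 * np.pi * t / 120.0))     # 2 A square wave

# The profile with the larger log-det excites the parameters better and
# would be preferred for a shorter, more informative identification test.
print("constant:", d_optimality(constant), " pulsed:", d_optimality(pulsed))
```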

Signal Space Based Navigation

Researchers at the University of California, Davis have developed a navigation system that constructs a sensing map from wireless signal observations and pedestrian dead reckoning (PDR) data to enable accurate indoor navigation without relying on traditional geographic localization maps.
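
A hypothetical minimal sketch of the signal-space idea follows, with a stand-in path-loss model and a simple nearest-neighbor matcher (both assumptions, not the UC Davis design):

```python
# Positions are anchored by PDR-integrated steps during a survey walk;
# later queries are matched in RSSI space rather than on a geographic map.
import numpy as np

rng = np.random.default_rng(1)

# --- survey phase: PDR gives relative positions, radios give RSSI vectors ---
steps = rng.normal([0.7, 0.0], 0.05, size=(200, 2))   # ~0.7 m strides
pdr_path = np.cumsum(steps, axis=0)                   # dead-reckoned track

def rssi_at(p):
    """Stand-in signal model: log-distance path loss to six fixed APs."""
    aps = np.array([[x, y] for x in (0, 60, 120) for y in (-10, 10)])
    d = np.linalg.norm(aps - p, axis=1) + 1.0
    return -40.0 - 20.0 * np.log10(d)

fingerprints = np.array([rssi_at(p) for p in pdr_path])  # the sensing map

# --- navigation phase: locate a new observation purely in signal space ---
query = rssi_at(np.array([42.0, 3.0])) + rng.normal(0, 1.0, 6)  # noisy scan
nearest = np.argmin(np.linalg.norm(fingerprints - query, axis=1))
print("matched to surveyed step", nearest, "at PDR coords", pdr_path[nearest])
```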

A Quantitative, Multimodal Wearable Bioelectronic For Comprehensive Stress Assessment And Sub-Classification

A multimodal, wireless wearable device enabling continuous and detailed stress assessment and subclassification.

A Predictive ML Model For Cancer Early Relapse

Brief description not available

Automated Optimized Adaptive Neurostimulation

Brief description not available

Brain Activity Imbalance Biomarker For Dementia

Brief description not available

AI-driven Infrastructure for Student Audio Response Collection, Transcription, and Analysis

AI infrastructure that collects, transcribes, and analyzes student audio responses to deliver actionable insights on learning experiences.

PEINT (Protein Evolution IN Time)

UC Berkeley researchers have developed a sophisticated computer-implemented framework that leverages transformer architectures to model the evolution of biological sequences over time. Unlike traditional phylogenetic models that often assume sites evolve independently, this framework utilizes a coupled encoder-decoder transformer to parameterize the conditional probability of a target sequence given multiple unaligned sequences. By capturing complex interactions and dependencies across different sites within a protein or genomic sequence, the model estimates the transition likelihood for each position. This estimation allows for a high-fidelity simulation of evolutionary trajectories. This approach enables a deeper understanding of how proteins change across different timescales and environmental pressures.
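
As a rough, heavily simplified illustration of this architecture class (toy dimensions, tokenization, and separator scheme are assumptions; this is not the PEINT implementation):

```python
# Condition per-position target-residue probabilities on several unaligned
# source sequences at once via a coupled encoder-decoder transformer.
import torch
import torch.nn as nn

VOCAB = 22  # 20 amino acids + pad + separator
D = 64

class SeqEvolver(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D)
        self.model = nn.Transformer(d_model=D, nhead=4,
                                    num_encoder_layers=2, num_decoder_layers=2,
                                    dim_feedforward=128, batch_first=True)
        self.head = nn.Linear(D, VOCAB)

    def forward(self, sources, target):
        # sources: (B, S) - several unaligned sequences joined by separators,
        # so self-attention can couple sites within and across sequences.
        # target: (B, T) - the sequence whose evolution is being scored.
        mask = nn.Transformer.generate_square_subsequent_mask(target.size(1))
        h = self.model(self.embed(sources), self.embed(target), tgt_mask=mask)
        return self.head(h)  # (B, T, VOCAB): per-position transition logits

model = SeqEvolver()
sources = torch.randint(0, VOCAB, (1, 120))  # e.g. 3 sequences + separators
target = torch.randint(0, VOCAB, (1, 40))
log_probs = model(sources, target).log_softmax(-1)
# Position-wise transition likelihoods, usable to score or simulate
# evolutionary trajectories one step at a time.
print(log_probs.shape)  # torch.Size([1, 40, 22])
```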

Non-Invasive Tool That Assesses Bruise Injuries Across All Skin Types

An innovative non-invasive device that accurately determines the age of bruises for all skin types and tones, designed to assist in forensic investigations and medical diagnostics.

Method for Unlearning Content for Large Language Models

Researchers at the University of California, Davis have developed an unlearning method that precisely removes specific data influences from trained large language models while preserving their overall knowledge and performance.
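
The listing does not disclose the mechanism, so the sketch below is NOT the UC Davis method; for context only, it shows a common unlearning baseline (gradient ascent on a forget set, regularized by ordinary likelihood on a retain set) against which such methods are typically compared. The toy model and data are stand-ins:

```python
import torch
import torch.nn as nn

vocab, d = 100, 32

class TinyLM(nn.Module):
    """Toy next-token language model standing in for a trained LLM."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.head = nn.Linear(d, vocab)
    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

lm = TinyLM()
opt = torch.optim.Adam(lm.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

forget = torch.randint(0, vocab, (8, 16))  # data whose influence to remove
retain = torch.randint(0, vocab, (8, 16))  # data whose behavior to keep

for step in range(100):
    opt.zero_grad()
    # next-token losses: logits at position t predict the token at t+1
    lf = loss_fn(lm(forget)[:, :-1].reshape(-1, vocab),
                 forget[:, 1:].reshape(-1))
    lr = loss_fn(lm(retain)[:, :-1].reshape(-1, vocab),
                 retain[:, 1:].reshape(-1))
    # ascend on the forget loss, descend on the retain loss
    (-lf + 1.0 * lr).backward()
    opt.step()
```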

Using AI to Find Evidence-Based Actions to Achieve Modelable Goals

Researchers at the University of California, Davis have developed an AI-powered framework that bridges the gap between predictive feature analysis and actionable interventions by extracting evidence-based recommendations from scientific literature.

Gamified Speech Therapy System and Methods

Historically, speech therapy apps have relied primarily on online, cloud-based speech recognition systems like those used in digital assistants (Cortana, Siri, Google Assistant), which are designed to make a best guess at speech rather than critically evaluate articulation errors. For children with cleft palate specifically, a condition affecting 1 in 700 babies globally, speech therapy is essential follow-up care after reconstructive surgery. Approximately 25% of children with clefts use compensatory articulation errors, and when these patterns become habituated during ages 3-5, they become particularly resistant to change in therapy.

Traditional approaches to mobile speech therapy apps have included storybook-style narratives, which proved expensive and suffered from low replayability and engagement, as well as fast-paced arcade-style games that failed to maintain user interest. Common speech therapy applications require a facilitator to evaluate speech performance and typically depend on continuous internet connectivity, creating barriers for users in areas with poor network coverage or those concerned about data privacy and roaming costs. The shift toward gamified therapy solutions showed that game elements can serve as powerful motivators for otherwise tedious activities.

On-device speech recognition systems, however, face inherent accuracy limitations compared to cloud-based solutions and require substantial processing power and memory, which can impact device performance and battery life, particularly on older mobile devices. Automatic speech recognition (ASR) models struggle significantly with children's speech due to non-fluent pronunciation and variability in speech patterns, with phoneme error rates reaching almost 12% and consonant recognition errors affecting the reliability of speech disorder detection. The challenge becomes even more pronounced for populations with speech impairments, as conventional ASR systems are optimized for typical adult speech rather than the atypical articulation patterns of cleft palate speech or developmental disabilities. Moreover, maintaining user engagement over extended therapy periods is hard, and many apps fail to provide sufficient motivation for the daily practice that is essential for speech improvement.

Methods and Systems for Annotating Floorplans

Traditional approaches to indoor mapping relied heavily on manual floor plan tracing or rule-based computer vision algorithms, which proved fragile when confronted with the wide variety of graphical representations used in architectural drawings. While Computer-Aided Design (CAD) floor plans in formats like DWG or DWF exist for most modern buildings, these detailed technical drawings are typically proprietary and inaccessible to the public. Mappers often work with low-quality images (JPEG or PDF format) of floor plans, necessitating manual digitization processes. RGB-D cameras, which capture both color and depth information, emerged as promising tools for 3D indoor scanning, though they face limitations including restricted range (typically less than 5 meters), sensitivity to lighting conditions, noisy point clouds at object edges, and computational demands for real-time processing. Automatic floor plan vectorization algorithms remain highly sensitive to image quality and graphical symbol variations, often requiring substantial manual editing even with state-of-the-art deep learning approaches.

Learning Multimodal Sim-To-Real Robot Policies With Generative Audio

The deployment of robotic systems in real-world environments is often limited by the "sim-to-real gap," where policies trained in digital simulations fail to account for the complex, multisensory feedback of physical reality. Researchers at UC Berkeley have developed a novel method for training multimodal sim-to-real robot policies by integrating generative audio models with traditional physics-based simulators. This framework uses a generative model to synthesize realistic audio data that corresponds to simulated physical interactions, creating a rich, multimodal dataset for policy learning. By training on both simulated physics and generated sensory data, the system enables robots to develop more robust and adaptive behaviors that translate seamlessly from virtual training environments to complex real-world tasks.
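
A hypothetical sketch of the data flow only, with the simulator, the generative audio model, and the training objective all stubbed out as assumptions (the real system's models, features, and objective are not specified here):

```python
# Pair each simulated physical interaction with synthesized audio features
# and train a policy on the fused, multimodal observation.
import torch
import torch.nn as nn

def simulate_step(state, action):
    """Stand-in physics simulator: next state + contact-event features."""
    contact = torch.relu(state + action).mean(dim=-1, keepdim=True)
    return state + 0.1 * action, contact

def generative_audio(contact):
    """Stub for a generative audio model: contact features -> audio embedding."""
    return torch.tanh(contact @ torch.ones(1, 16))  # (B, 16) fake embedding

policy = nn.Sequential(nn.Linear(8 + 16, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

state = torch.randn(32, 8)
for step in range(100):
    expert_action = -0.5 * state[:, :4]        # scripted "expert" in sim
    _, contact = simulate_step(state, torch.zeros(32, 8))
    audio = generative_audio(contact)          # synthesized, not recorded
    obs = torch.cat([state, audio], dim=-1)    # multimodal observation
    loss = ((policy(obs) - expert_action) ** 2).mean()  # behavior cloning
    opt.zero_grad(); loss.backward(); opt.step()
```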
