
Transmission Imaging for Medical Applications

Quantum‑correlated photon imaging experiments first used pairs of entangled photons so that an image was recovered only from correlations between the two detection paths rather than from either beam alone. Similar correlation and entanglement ideas have since been extended to higher photon energies, including positron‑annihilation photons, motivating quantum‑based Positron Emission Tomography (PET) concepts in which the additional quantum information carried by annihilation photon pairs could enhance image quality or add new types of contrast beyond conventional PET. In parallel, quantum‑inspired transmission imaging has been proposed as an alternative to Computed Tomography (CT), which today relies on a well‑characterized but fundamentally stochastic X‑ray source and is limited by Poisson photon statistics, dose requirements, and capped soft‑tissue contrast. Traditional X‑ray and CT imaging are governed by Poisson statistics: independent, random photon arrivals make the count variance equal to its mean, which fundamentally bounds the SNR achievable at a given dose. Research on quantum‑correlated transmission schemes has therefore explored image formation from higher‑order correlations between photons (rather than simple independent counting), so that performance is no longer capped by standard Poisson statistics, which can in principle yield superior SNR and sharper anatomical detail at a given dose. To date, quantum‑based X‑ray implementations of this idea have largely relied on spontaneous parametric down‑conversion (SPDC) to generate entangled or correlated photon pairs, but SPDC at X‑ray energies has extremely low conversion efficiency and pair rates, often only a few pairs per second, rendering such medical or biological imaging impractical.
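The Poisson dose–SNR trade‑off described above can be checked numerically. A minimal sketch (illustrative only, not part of the listed technology; the photon count is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson-limited detection: for independent photon arrivals the
# variance of the count equals its mean, so SNR = mean/std = sqrt(N).
mean_photons = 10_000
counts = rng.poisson(mean_photons, size=100_000)

snr_empirical = counts.mean() / counts.std()
snr_theory = np.sqrt(mean_photons)

print(f"empirical SNR: {snr_empirical:.1f}")  # ~100
print(f"sqrt(N) bound: {snr_theory:.1f}")     # 100.0
```

Because SNR scales as the square root of the count, doubling SNR under Poisson statistics requires roughly four times the photons, and hence roughly four times the dose, which is the trade‑off the correlation‑based schemes aim to relax.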
Quantum‑correlated Annihilation Photon Imaging (QAPI) brings these correlation concepts into a PET‑like regime by using positron annihilation as a bright source of 511 keV gamma‑ray pairs while assuming a transmission‑imaging role similar to CT. QAPI is designed to exploit the strengths of both worlds: unlike CT, it can count the incident annihilation photons via the idler channel and operate in a high‑transmission regime in which transmitted counts follow binomial rather than Poisson statistics. The PET‑like 511 keV photons introduce challenges that do not exist for CT, including low interaction probability in tissue and detectors, reduced single‑photon detection efficiency, and the need for precise coincidence timing between the signal and idler counts. For any high‑energy, photon‑based imaging, including emerging quantum schemes, there is a fundamental tension between dose (especially for biological tissues, which are highly susceptible to damage, cell death, or mutation when exposed to ionizing radiation) and the photon statistics needed for adequate SNR. Moreover, the dose‑normalized performance of quantum approaches is still not well established.
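The statistical advantage of counting incident photons via the idler channel can be illustrated with a toy comparison (the photon numbers and transmission probability below are arbitrary assumptions, not QAPI specifications):

```python
import numpy as np

rng = np.random.default_rng(1)
n_incident = 10_000   # photons per exposure (illustrative)
p_transmit = 0.9      # high-transmission regime (illustrative)
trials = 200_000

# CT-like case: the incident flux itself is Poisson, so the
# transmitted count is also Poisson with variance N*p.
incident = rng.poisson(n_incident, size=trials)
ct_counts = rng.binomial(incident, p_transmit)

# Idler-counted case: the incident number is known exactly, so the
# transmitted count is binomial with variance N*p*(1-p), smaller by
# a factor (1-p) in the high-transmission regime.
tagged_counts = rng.binomial(n_incident, p_transmit, size=trials)

print(f"Poisson-limited variance : {ct_counts.var():.0f}")    # ~ N*p = 9000
print(f"Binomial variance        : {tagged_counts.var():.0f}")# ~ N*p*(1-p) = 900
```

With 90% transmission, knowing the incident count reduces the variance of the transmitted count by an order of magnitude at the same mean signal, which is the statistical leverage the listing attributes to the idler channel.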

A Predictive ML Model For Cancer Early Relapse

Brief description not available

Automated Optimized Adaptive Neurostimulation

Brief description not available

Synthetically Generating Medical Images Using Deep Convolutional Generative Adversarial Networks

An advanced AI-driven system for synthetic medical data generation and precise segmentation of cardiac MRI to enhance accuracy and efficiency in cardiovascular health.

Using AI to Find Evidence-Based Actions to Achieve Modelable Goals

Researchers at the University of California, Davis have developed an AI-powered framework that bridges the gap between predictive feature analysis and actionable interventions by extracting evidence-based recommendations from scientific literature.

Gamified Speech Therapy System and Methods

Historically, speech therapy apps have relied primarily on online, cloud-based speech recognition systems like those used in digital assistants (Cortana, Siri, Google Assistant), which are designed to produce a best guess of the intended words rather than to critically evaluate articulation errors. For children with cleft palate specifically, a condition affecting 1 in 700 babies globally, speech therapy is essential follow-up care after reconstructive surgery. Approximately 25% of children with clefts use compensatory articulation errors, and when these patterns become habituated during ages 3-5, they become particularly resistant to change in therapy. Traditional approaches to mobile speech therapy apps have included storybook-style narratives, which proved expensive to produce and offered low replayability and engagement, as well as fast-paced arcade-style games that failed to maintain user interest. Common speech therapy applications require a facilitator to evaluate speech performance and typically depend on continuous internet connectivity, creating barriers for users in areas with poor network coverage or those concerned about data privacy and roaming costs. The shift toward gamified therapy solutions has shown that game elements can serve as powerful motivators for otherwise tedious activities. On-device speech recognition systems, for their part, face inherent limitations in accuracy compared to cloud-based solutions and require substantial processing power and memory, which can impact device performance and battery life, particularly on older mobile devices. Automatic speech recognition (ASR) models struggle significantly with children's speech due to non-fluent pronunciation and variability in speech patterns, with phoneme error rates reaching almost 12% and consonant recognition errors affecting the reliability of speech disorder detection.
The challenge becomes even more pronounced for populations with speech impairments, as conventional ASR systems are optimized for typical adult speech rather than atypical articulation patterns of cleft palate speech or developmental disabilities. Moreover, maintaining user engagement over extended therapy periods is hard, and many apps fail to provide sufficient motivation for daily practice, which is essential for speech improvement.
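Phoneme error rate, the metric cited above, is conventionally computed as the phoneme-level edit distance between the recognizer's output and a reference transcription, normalized by the reference length. A minimal sketch (the phoneme labels are illustrative):

```python
def phoneme_error_rate(reference, hypothesis):
    """Levenshtein edit distance between two phoneme sequences,
    normalized by the reference length (the standard PER definition)."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m

# A child saying "tat" for "cat": /k/ -> /t/ is one substitution
# out of three reference phonemes.
per = phoneme_error_rate(["k", "ae", "t"], ["t", "ae", "t"])
print(f"PER = {per:.2f}")  # 0.33
```

Substitutions like the /k/ → /t/ fronting above are exactly the compensatory patterns a best-guess recognizer tends to silently correct, which is why a therapy app needs phoneme-level scoring rather than word-level transcription.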

Inferring Dynamic Hidden Graph Structure in Heterogeneous Correlated Time Series

Current methods for treating nervous system disorders often rely on generalized approaches that may not optimally address the individual patient's specific pathology, leading to suboptimal outcomes. This innovation, developed by UC Berkeley researchers, provides a method to identify the most critical, or "influential," nodes within a patient's functional connectivity network derived from time-series data of an organ or organ system. The method involves obtaining multiple time-series datasets from an affected organ/system, using them to map the functional connectivity network, and then determining the most influential nodes within that network. By providing this specific and personalized information to a healthcare provider, a treatment can be prescribed that precisely targets the respective organ corresponding to these influential nodes. This personalized, data-driven approach offers a significant advantage over conventional treatments by focusing intervention on the most impactful biological targets, potentially leading to more effective and efficient patient care.
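The listing does not specify how node influence is scored; one standard choice for a correlation-based functional connectivity network is eigenvector centrality, sketched below on toy data (illustrative only, not the UC Berkeley method):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 2000

# Toy multi-channel time series: node 0 is a common driver, nodes 1-3
# follow it with noise, nodes 4-5 are independent background activity.
driver = rng.normal(size=n_samples)
series = np.stack(
    [driver]
    + [driver + 0.3 * rng.normal(size=n_samples) for _ in range(3)]
    + [rng.normal(size=n_samples) for _ in range(2)]
)

# Functional connectivity network: absolute pairwise correlations,
# with self-connections removed.
fc = np.abs(np.corrcoef(series))
np.fill_diagonal(fc, 0.0)

# Eigenvector centrality via power iteration: a node is influential
# when it connects strongly to other influential nodes.
c = np.ones(fc.shape[0])
for _ in range(200):
    c = fc @ c
    c /= np.linalg.norm(c)

most_influential = int(np.argmax(c))
print("most influential node:", most_influential)  # node 0, the driver
```

Here the driver node is correctly ranked highest because its correlations to the follower nodes exceed the followers' correlations to each other.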

Using Machine Learning And 3D Projection To Guide Surgery

A medical device that uses machine learning and augmented reality to project precise surgical guides onto 3D patient anatomy, enabling real-time surgical guidance and remote expert collaboration.

Method And System For Quantized Machine Learning And Federated Learning

QAFeL is a novel asynchronous federated learning framework that combines buffered aggregation with bidirectional quantized communications, achieving up to 8× lower communication costs while preserving convergence speed and accuracy.
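QAFeL's quantizer is not detailed in this blurb; the kind of unbiased stochastic uniform quantization commonly used in quantized federated learning can be sketched as follows (a 32-bit-float to 4-bit-code example, which happens to match the 8× figure; the bit width is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(7)

def quantize(update, bits=4):
    """Stochastic uniform quantizer: maps each coordinate to one of
    2**bits levels spanning [min, max], rounding up or down at random
    so the quantized value is unbiased in expectation."""
    lo, hi = update.min(), update.max()
    levels = 2**bits - 1
    scaled = (update - lo) / (hi - lo) * levels
    floor = np.floor(scaled)
    # round up with probability equal to the fractional part
    q = floor + (rng.random(update.shape) < (scaled - floor))
    return q.astype(np.uint8), lo, hi

def dequantize(q, lo, hi, bits=4):
    return lo + q / (2**bits - 1) * (hi - lo)

update = rng.normal(size=10_000).astype(np.float32)  # stand-in model update
q, lo, hi = quantize(update, bits=4)
recovered = dequantize(q, lo, hi, bits=4)

# 4-bit codes instead of 32-bit floats: 8x smaller messages, at the
# cost of bounded, zero-mean quantization noise.
print("compression ratio:", 32 // 4)
print("mean error:", float((recovered - update).mean()))
```

Because the rounding is unbiased, the quantization noise averages out across clients and rounds, which is what lets such schemes preserve convergence while shrinking each message.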

Communication-Efficient Federated Learning

A groundbreaking algorithm that significantly reduces communication time and message size in distributed machine learning, ensuring fast and reliable model convergence.

3D Cardiac Strain Analysis

An advanced geometric method for comprehensive 3D cardiac strain analysis, enhancing diagnosis and monitoring of myocardial diseases.
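The geometric method itself is not described in this blurb; as background, 3D strain is conventionally derived from the deformation gradient F via the Green–Lagrange tensor E = ½(FᵀF − I). A minimal sketch:

```python
import numpy as np

def green_lagrange_strain(F):
    """Green-Lagrange strain tensor E = 0.5 * (F^T F - I) for a 3x3
    deformation gradient F. Diagonal entries describe stretch along
    each axis, off-diagonal entries describe shear."""
    return 0.5 * (F.T @ F - np.eye(3))

# Uniform 10% stretch along the x-axis, no shear:
F = np.diag([1.1, 1.0, 1.0])
E = green_lagrange_strain(F)
print(np.round(E, 4))
# E[0, 0] = 0.5 * (1.1**2 - 1) = 0.105; all other entries are 0.
```

In cardiac applications F would be estimated from tracked myocardial geometry between imaging frames, and E then decomposed into radial, circumferential, and longitudinal components.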

Machine Learning Framework for Inferring Latent Mental States from Digital Activity (MAILA)

Scalable assessments of mental illness, the leading driver of disability worldwide, remain a critical roadblock toward accessible and equitable care. Researchers at UC Berkeley have introduced MAILA (MAchine-learning framework for Inferring Latent mental states from digital Activity), an innovation demonstrating that everyday human-computer interactions encode multiple dimensions of self-reported mental health and their changes over time. MAILA was trained to predict 1.3 million mental-health self-reports from 20,000 cursor and touchscreen recordings, identifying cognitive signatures of psychological function that go beyond what is conveyed by language. Key features and benefits include the ability to track dynamic mental states along three orthogonal dimensions, achieve near-ceiling accuracy in group-level predictions, and translate insights from general to clinical populations to identify individuals with self-reported mental illness.

Organoid Training System and Methods

Advances in biological research have been greatly influenced by the development of organoids, a specialized form of 3D cell culture. Created from pluripotent stem cells, organoids are effective in vitro models for replicating the structure and progression of organ development, providing an exceptional tool for studying the complexities of biology. Among these, cerebral cortex organoids (hereafter "organoids") have become particularly instrumental in providing valuable insights into brain formation, function, and pathology. Modern methods of interfacing with organoids involve any combination of encoding information, decoding information, or perturbing the underlying dynamics through various timescales of plasticity. Our knowledge of biological learning rules has not yet translated to reliable methods for consistently training neural tissue in goal-directed ways. In vivo training methods commonly exploit principles of reinforcement learning and Hebbian learning to modify biological networks. However, in vitro training has not seen comparable success, and often cannot utilize the underlying, multi-regional circuits enabling dopaminergic learning. Successfully harnessing in vitro learning methods and systems could uniquely reveal fundamental mesoscale processing and learning principles. This may have profound implications, from developing targeted stimulation protocols for therapeutic interventions to creating energy-efficient bio-electronic systems.
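As background for the learning rules mentioned above, the basic Hebbian update ("cells that fire together wire together") can be written in a few lines. A minimal sketch with illustrative activity patterns, not the organoid training protocol itself:

```python
import numpy as np

# Hebbian rule: dw = eta * outer(y, x), so a weight strengthens only
# when its pre-synaptic (x) and post-synaptic (y) units are co-active.
eta = 0.1
x = np.array([1.0, 0.0, 1.0])   # presynaptic activity pattern
y = np.array([1.0, 0.0])        # postsynaptic activity pattern
w = np.zeros((2, 3))            # weights from 3 inputs to 2 outputs

for _ in range(10):
    w += eta * np.outer(y, x)

print(w)
# Only w[0, 0] and w[0, 2] grow; every weight touching an inactive
# unit stays at zero.
```

Real protocols add terms this sketch omits, such as weight decay and reward-gated (dopaminergic) modulation, which is precisely the multi-regional machinery the listing notes is hard to access in vitro.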

AI-Powered Trabecular Meshwork Identification for Glaucoma Surgeries

A revolutionary software that integrates with surgical microscopes to accurately locate the trabecular meshwork (TM), enhancing the safety and efficiency of glaucoma surgeries.
