Available Technologies

Find technologies available for licensing from UC Santa Cruz.

Separation of Methionine Sulfoxide Diastereomers

Methionine (Met) is a common amino acid found in almost all proteins. When it undergoes oxidation (a common process in aging and disease), it transforms into methionine sulfoxide (Met-SO). The challenge is that this chemical reaction creates a new chiral center at the sulfur atom, meaning that for every oxidized methionine, two different mirror-image versions (diastereomers) can exist: the (S,S) form and the (S,R) form.

Before this invention, researchers struggled to separate these two forms, which created two major technical hurdles. First, standard techniques such as High Performance Liquid Chromatography (HPLC) or fractional crystallization (a method dating back to 1947) were unreliable, difficult to reproduce, and failed to produce high-purity samples. Second, because the two forms were so difficult to separate, almost all previous research on methionine oxidation used a mixture of both; if one form were toxic and the other harmless, the results would be averaged out, hiding the true biological mechanism.

A core motivation for this invention is the "staggering degree of disagreement" in Alzheimer's Disease research regarding the protein amyloid beta (Aβ42). Some studies claimed that oxidized Aβ42 increased brain plaque toxicity, while others claimed it decreased it. It is plausible that these contradictions exist because previous researchers did not know which specific diastereomer, (S,S) or (S,R), they were testing.

Once these two forms are created, they are remarkably stable. The energy barrier to flip from one form to the other is roughly 45.2 kcal/mol, significantly higher than for typical enantiomeric interconversions. This means that in the human body, the "wrong" version won't just flip back to the "right" one; it stays in that specific shape, potentially causing long-term damage if not properly regulated by specific enzymes (reductases).
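To put the 45.2 kcal/mol barrier in perspective, the Eyring equation gives a rough interconversion rate at body temperature. The sketch below is purely illustrative (our arithmetic, not part of the disclosure); the physical constants are standard, and only the barrier height comes from the text above.

```python
import math

# Illustrative estimate (not from the source): how slowly the (S,S) and (S,R)
# sulfoxide forms would interconvert, using the Eyring equation
#   k = (k_B * T / h) * exp(-dG / (R * T))
# with the ~45.2 kcal/mol barrier quoted above, evaluated at body temperature.

K_B = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34      # Planck constant, J*s
R = 1.987204e-3         # gas constant, kcal/(mol*K)

def eyring_rate(barrier_kcal_mol: float, temp_k: float) -> float:
    """First-order rate constant (1/s) for crossing the given free-energy barrier."""
    return (K_B * temp_k / H) * math.exp(-barrier_kcal_mol / (R * temp_k))

T_BODY = 310.15  # 37 C in kelvin
k = eyring_rate(45.2, T_BODY)
half_life_s = math.log(2) / k
half_life_years = half_life_s / (3600 * 24 * 365.25)

print(f"rate constant: {k:.2e} 1/s")
print(f"half-life:     {half_life_years:.2e} years")
# A half-life on the order of 1e11 years: on biological timescales the
# diastereomers never flip, consistent with the claim that each form persists
# unless enzymatically reduced.
```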

Scalable, Multi-Energy Detection and Imaging

Comprehensive radiation detection across the spectral range requires distinct systems for ionizing and non-ionizing imaging because each technology faces unique architectural hurdles. Modern visible-light detection has successfully transitioned from passive plates to digital Active Pixel Sensors (APS) by leveraging Complementary Metal-Oxide-Semiconductor (CMOS) technology to provide every pixel with its own dedicated amplifier and active circuitry. Ionizing radiation detection, such as X-ray and gamma-ray imaging, has instead relied on exotic scintillators to convert radiation into light, a process prone to lateral light scattering and degraded spatial resolution. Recent advancements have shifted toward direct-conversion materials like amorphous selenium (a-Se), which transform X-rays directly into electrical charges. However, these direct-conversion devices do not scale to larger areas without incurring significant noise. This is primarily due to their thin-film transistor (TFT) backplanes, which, unlike their CMOS counterparts, lack the local amplification necessary to maintain a high signal-to-noise ratio.
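The scaling problem can be shown with a toy signal-to-noise model. Everything numeric here is an assumption for illustration (typical electron counts and read-noise figures, not measured values from this work): shot noise grows with the signal, while read noise is fixed per readout, so a passive TFT line whose noise grows with panel size quickly swamps the deposited charge.

```python
import math

# Toy model (assumed figures, not from the source): why per-pixel amplification
# matters. A detected X-ray deposits `signal` electrons of charge; the readout
# chain adds Gaussian read noise. In an APS/CMOS pixel the charge is buffered
# locally, so the long data line's noise barely matters; in a passive TFT pixel
# the tiny charge drives the data line directly and the full line noise is
# added before any amplification.

def snr(signal_e: float, read_noise_e: float) -> float:
    """Shot-noise-limited SNR with additive read noise (electrons RMS)."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

signal = 1000.0            # electrons from one absorption event (assumed)
cmos_read_noise = 5.0      # e- RMS with an in-pixel amplifier (assumed)
tft_small = 300.0          # e- RMS, passive line on a small panel (assumed)
tft_large = 1500.0         # e- RMS, longer lines on a large panel (assumed)

for label, noise in [("CMOS APS", cmos_read_noise),
                     ("TFT small panel", tft_small),
                     ("TFT large panel", tft_large)]:
    print(f"{label:16s} read noise {noise:7.1f} e-  ->  SNR {snr(signal, noise):6.2f}")
# The passive readout's noise grows with line capacitance (panel size), so the
# same deposited charge yields a rapidly shrinking SNR on large-area TFT arrays.
```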

Efficient Compressive Learning

Machine learning has transitioned from traditional supervised learning toward more resource-efficient sketching and federated techniques. Early compressive learning relied on hand-crafted random projections and task-specific iterative solvers. While these methods reduced data volume, they were inflexible: a change in data distribution or task required a complete redesign of the projection. Concurrently, privacy-preserving needs led to the rise of federated learning and differential privacy; however, these methods often struggled with high communication costs and the inability to merge model updates effectively across different architectures. Until recently, the state of the art remained bifurcated: one could have either high-accuracy iterative training on raw data, or efficient but brittle, task-specific compressed representations that did not generalize across diverse analytical tasks such as Principal Component Analysis (PCA), regression, and clustering.
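A minimal sketch of the classic approach, assuming a Gaussian random projection and a ridge-regression task (hypothetical data and dimensions, not the disclosed method), shows both the compression win and the task-coupling: the projection P and the solver are designed together, so a new task or distribution means a new design.

```python
import numpy as np

# Classic compressive learning, sketched: compress each sample with a fixed
# random projection, then fit a task-specific model on the sketch. All data
# and dimensions below are synthetic/assumed.

rng = np.random.default_rng(0)
n, d, k = 2000, 512, 64              # samples, ambient dim, sketch dim

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

P = rng.normal(size=(d, k)) / np.sqrt(k)   # hand-crafted random projection
Z = X @ P                                  # sketched data: n x k instead of n x d

# Ridge regression solved directly on the sketch (the "task-specific solver").
lam = 1e-2
w_sketch = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y)

pred = Z @ w_sketch
rel_err = np.linalg.norm(pred - y) / np.linalg.norm(y)
print(f"stored floats: {Z.size} vs {X.size} raw ({Z.size / X.size:.0%})")
print(f"relative prediction error on the sketch: {rel_err:.3f}")
# Swapping regression for clustering or PCA would require a different P and a
# different solver, which is exactly the inflexibility described above.
```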

Loop-Free and Multi-Path Network Methods

The recent state of the art in network routing has been dominated by the Border Gateway Protocol (BGP). While BGP is the standard for inter-domain routing, it primarily relies on single-path propagation based on shortest-path or policy-driven criteria and metrics. Traditional multi-path approaches are challenged by routing loops and slow convergence when network topology changes. More recently, Software-Defined Networking (SDN) and Segment Routing have attempted to provide more granular control; however, ensuring loop-free paths across multiple autonomous systems without impractical overhead remains a stubborn issue. At the scale of modern Internet Protocol (IP) networks, demands for resilience and bandwidth make the ability to utilize multiple paths simultaneously, without the risk of circular routing, increasingly desirable.
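The basic loop-freedom test that underlies much multi-path routing work fits in a few lines. The sketch below uses the standard loop-free-alternate condition, dist(N, D) < dist(S, D), on a hypothetical four-node topology; it illustrates the general technique, not necessarily the method claimed here.

```python
import heapq

# Loop-free multipath, sketched: a neighbor N of node S is a safe next hop
# toward destination D if dist(N, D) < dist(S, D). Because every hop is
# strictly closer to D, traffic can never cycle, even when several next hops
# are used at once.

GRAPH = {  # undirected weighted topology (hypothetical)
    "A": {"B": 1, "C": 2},
    "B": {"A": 1, "C": 1, "D": 3},
    "C": {"A": 2, "B": 1, "D": 1},
    "D": {"B": 3, "C": 1},
}

def dijkstra(src):
    """Shortest-path distances from src to every node."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in GRAPH[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

dist = {node: dijkstra(node) for node in GRAPH}

def loop_free_next_hops(src, dst):
    """All neighbors strictly closer to dst than src is."""
    return [n for n in GRAPH[src] if dist[n][dst] < dist[src][dst]]

print(loop_free_next_hops("A", "D"))   # ['B', 'C']: both paths usable at once
```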

Library Preparation and Normalization of Copied DNA

Monitoring of viral infections such as SARS-CoV-2 was vital to detecting and characterizing new variants before they became widespread, allowing public health agencies to deploy resources and develop policies in advance of new waves of the virus. The ARTIC Network developed a panel of primers and a workflow for whole-genome sequencing of SARS-CoV-2 using multiplex PCR, and this became a popular sequencing strategy. The ARTIC protocol generates overlapping PCR amplicons that span the SARS-CoV-2 genome using a defined multiplex PCR primer set; the amplicons are sequenced and mapped to the SARS-CoV-2 genome to generate a high-quality consensus sequence of the variant in the sample. While ARTIC was developed for SARS-CoV-2, the protocol is readily adaptable to a wide array of viruses. Despite its clear utility, challenges arose: new variants emerged that the consensus primers did not recognize, compromising all testing for those variants; normalization of samples with highly variable amounts of starting template proved difficult; and sequencing library preparation was not optimized for convenience, speed, or cost.
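The tiling idea behind such amplicon schemes is easy to sketch. The parameters below (amplicon length, overlap, two alternating pools) are assumptions chosen to resemble ARTIC-style designs, not the actual primer coordinates.

```python
# Tiling amplicon scheme, sketched: fixed-length amplicons overlap their
# neighbors so reads span the whole genome, and adjacent amplicons go into
# alternating PCR pools so overlapping products never compete in the same
# multiplex reaction. Lengths and overlap below are assumed for illustration.

GENOME_LEN = 29903      # SARS-CoV-2 genome length (bases)
AMPLICON_LEN = 400      # ARTIC-style amplicon size (assumed)
OVERLAP = 100           # overlap between neighboring amplicons (assumed)

def tile_amplicons(genome_len, amplicon_len, overlap):
    step = amplicon_len - overlap
    tiles = []
    start = 0
    while start + amplicon_len < genome_len:
        tiles.append((start, start + amplicon_len))
        start += step
    tiles.append((genome_len - amplicon_len, genome_len))  # flush final tile
    return tiles

tiles = tile_amplicons(GENOME_LEN, AMPLICON_LEN, OVERLAP)
pools = {1: tiles[0::2], 2: tiles[1::2]}   # alternate amplicons between pools
print(f"{len(tiles)} amplicons; pool 1: {len(pools[1])}, pool 2: {len(pools[2])}")
print("first three:", tiles[:3])
# A variant mutating inside a primer-binding window breaks that one amplicon,
# which is the dropout failure mode the text describes.
```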

Immobilization Devices for Biological Tissues

Organoid and brain-slice immobilization for microelectrode arrays (MEAs) and organoid-on-chip platforms has traditionally depended on hydrogels, harp-style grids, or microfluidic confinement, each with its own trade-offs in stability, standardization, and impact on electrophysiology. Hydrogels (e.g., polyethylene glycol (PEG) or extracellular matrix such as Matrigel) are widely used to immobilize 3D neural tissues on MEAs, but they are known to swell, drift, and alter the mechanical microenvironment, which in turn modulates network firing, synchrony, and bursting behavior. Mechanical retention via harp slice grids or similar devices is a long-standing practice in acute brain-slice and organoid electrophysiology, but these devices are typically standardized, fragile, and poorly matched to diverse well and tissue geometries. Microfluidic organoid chips and specialized 3D MEAs (e.g., e-Flower, organoid-on-chip platforms) have recently emerged to enable hydrogel-free trapping and encapsulation of organoids for imaging and recording, but they often require bespoke chip designs and overly complex flow-control setups. There remains a lack of geometry-agnostic devices for mechanically immobilizing diverse organoids on commercial MEAs that provide consistent stability, uniform or tailored contact, and minimal perturbation of electrophysiological readouts.

Transmission Imaging for Medical Applications

Quantum-correlated photon imaging experiments first used pairs of entangled photons so that an image was recovered only from correlations between the two detection paths rather than from either beam alone. Similar correlation and entanglement ideas have been extended to higher energies and to positron-annihilation photons, motivating quantum-based Positron Emission Tomography (PET) concepts in which the additional quantum information carried by annihilation photon pairs could enhance image quality or add new types of contrast beyond conventional PET. In parallel, quantum-inspired transmission imaging has been proposed as an alternative to Computed Tomography (CT), which today relies on a well-characterized but fundamentally stochastic X-ray source and is limited by Poisson photon statistics, dose requirements, and capped soft-tissue contrast. Traditional X-ray and CT imaging are governed by Poisson statistics, in which independent, random photon arrivals make the variance equal to the mean, fundamentally bounding SNR for a given dose. Research on quantum-correlated transmission schemes has therefore looked at image formation with higher-order correlations between photons (rather than simple independent counting), such that performance is no longer capped by standard Poisson statistics, which can in principle yield superior SNR and sharper anatomical detail at a given dose. To date, quantum-based X-ray implementations of this idea have largely relied on spontaneous parametric down-conversion (SPDC) to generate entangled or correlated photon pairs, but SPDC at X-ray energies has extremely low conversion efficiency and pair rates, often only a few pairs per second, rendering such medical or biological imaging impractical.

Quantum correlation of Annihilation Photon Imaging (QAPI) brings these correlation concepts into a PET-like regime by using positron annihilation as a bright source of 511 keV gamma-ray pairs while assuming a transmission-imaging role similar to CT. QAPI is designed to exploit the strengths of both worlds: unlike CT, it can count the incident annihilation photons via the idler channel and operate in a high-transmission regime that permits binomial transmission statistics.

The PET-like 511 keV photons introduce challenges that do not exist for CT, including low interaction probability in tissue and detectors, reduced single-photon detection efficiency, and the need for precise coincidence timing between the signal and idler counts. For any high-energy, photon-based imaging, including emerging quantum schemes, there is a fundamental tension between dose (especially for biological tissues, which are highly susceptible to damage, cell death, or mutation when exposed to ionizing radiation) and the photon statistics needed for adequate SNR. Moreover, the dose-normalized performance of quantum approaches is still not well established.
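The statistical advantage of counting incident photons can be checked numerically. The Monte Carlo sketch below (illustrative parameters, not from the source) compares a CT-like estimator of the transmission fraction, where the incident count is known only on average, against a QAPI-like estimator that tags each incident photon via the idler channel.

```python
import numpy as np

# Why counting incident photons helps, sketched. Estimating a transmission
# fraction t from N0 incident photons:
#   CT-like:   incident count unknown, detected counts ~ Poisson(N0 * t)
#   QAPI-like: incident count tagged by the idler, detected ~ Binomial(N0, t)
# Relative variances are 1/(N0*t) vs (1-t)/(N0*t): the binomial estimator wins
# by a factor (1-t), a large gain in the high-transmission regime (t near 1).
# N0, t, and trial counts below are assumed for illustration.

rng = np.random.default_rng(1)
N0, t, trials = 10_000, 0.9, 20_000

poisson_est = rng.poisson(N0 * t, size=trials) / N0    # incident count assumed
binom_est = rng.binomial(N0, t, size=trials) / N0      # incident count measured

for name, est in [("Poisson (CT-like)", poisson_est),
                  ("Binomial (QAPI-like)", binom_est)]:
    print(f"{name:22s} std {est.std():.5f}  SNR {t / est.std():8.1f}")

# Theory check: the binomial SNR should exceed the Poisson SNR by 1/sqrt(1-t).
print("predicted SNR ratio:", 1 / np.sqrt(1 - t))   # ~3.2x at t = 0.9
```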

Accurate Pedestrian Tracking

Global Navigation Satellite Systems (GNSS) are a family of satellite navigation systems (GPS, Galileo, GLONASS, BeiDou) that provide global positioning and navigation from orbiting satellites, and GNSS is one of the major inputs for smartphone location. Accurate pedestrian localization in “urban canyons” has long been limited by GNSS multipath errors and blocked lines of sight, especially for blind and low-vision pedestrians who need sidewalk-level accuracy. GNSS-based positioning in dense downtowns is often off by tens of meters because skyscrapers block satellites, create multipath, and degrade signal quality, producing errors large enough that it is hard to know which side of a street a pedestrian is on. For blind and low-vision users, conventional smartphone navigation (pure GNSS, camera-based visual positioning systems, or beacon infrastructure) does not offer reliable, hands-free, street-side-accurate guidance. Most accuracy-focused approaches to date require detailed 3D models, specialized hardware, and/or substantial map annotations, limiting scalability across urban environments and complicating deployment in mainstream apps. Moreover, for blind and low-vision pedestrians, integrating precise localization with usable, low-attention interaction (i.e., no constant camera use, minimal screen glances) and robust crossing guidance remains an open problem.