Efficient Compressive Learning

Tech ID: 34700 / UC Case 2025-962-0

Background

Machine learning has transitioned from traditional supervised learning toward more resource-efficient sketching and federated techniques. Early compressive learning relied on hand-crafted random projections and task-specific iterative solvers. While these methods reduced data volume, they were inflexible: a change in data distribution or task required a complete redesign of the projection. Concurrently, privacy-preserving needs drove the rise of federated learning and differential privacy, but these methods often struggled with high communication costs and an inability to merge model updates effectively across different architectures. Until recently, the state of the art remained bifurcated: one could have either high-accuracy iterative training on raw data, or efficient but brittle, task-specific compressed representations that lacked generalizability across diverse analytical tasks such as Principal Component Analysis (PCA), regression, and clustering.

Technology Description

To help address these challenges, researchers at UC Santa Cruz (UCSC) developed a unique Compressive Meta-Learning framework that replaces rigid, hand-crafted data compression with a pair of jointly trained neural networks. The Sketch Network applies a learned non-linear projection to individual data samples, which are then combined via a permutation-invariant pooling operation into a single, fixed-size dataset-level embedding. The Query Network is trained end-to-end with the Sketch Network to synthesize model parameters directly from this sketch. This novel meta-learning approach allows the system to predict parameters for various tasks, such as PCA, clustering, and regression, without ever re-accessing the original raw data samples after the initial single-pass sketch is generated.
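The sketch-then-query pipeline described above can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not UCSC's implementation: the dimensions, the `tanh` non-linearity, and the randomly initialized weight matrices (`W_sketch`, `W_query`) are all hypothetical stand-ins for the jointly trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, illustrative dimensions only.
D_IN, D_SKETCH, D_PARAMS = 8, 16, 4

# Random weights stand in for the two jointly trained networks.
W_sketch = rng.normal(size=(D_IN, D_SKETCH))
W_query = rng.normal(size=(D_SKETCH, D_PARAMS))

def sketch_network(X):
    """Learned non-linear projection applied per sample, then
    permutation-invariant mean pooling into one fixed-size embedding."""
    per_sample = np.tanh(X @ W_sketch)   # shape (n, D_SKETCH)
    return per_sample.mean(axis=0)       # shape (D_SKETCH,), order-independent

def query_network(sketch):
    """Maps the dataset-level sketch directly to model parameters."""
    return np.tanh(sketch @ W_query)     # shape (D_PARAMS,)

X = rng.normal(size=(100, D_IN))
params = query_network(sketch_network(X))

# Permutation invariance: shuffling the samples leaves the sketch unchanged.
shuffled = rng.permutation(X, axis=0)
assert np.allclose(sketch_network(X), sketch_network(shuffled))
```

Because the pooling step is order-independent, the single fixed-size sketch summarizes the whole dataset in one pass, and the query step never needs the raw samples again.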

Applications

  • Healthcare - private informatics
  • Banking - private informatics
  • Model updates / training
  • Edge computing / IoT

Features/Benefits

  • Single-pass dataset processing and pooling of sample-level embeddings eliminate the need for inefficient iterative optimization steps.
  • Joint end-to-end training of the sketch and query networks allows adaptation to a diverse set of analytical tasks.
  • Privacy-aware noise calibration and vector-arithmetic merging enable easy addition/subtraction of sketches for federated updates.
  • Approach is compatible with both ResNet-style and Transformer-style implementations and across a wide variety of data types.
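The vector-arithmetic merging noted above can be sketched in a few lines. A minimal illustration, assuming a sum-pooled sketch (sums of per-sample projections combine linearly); the weight matrix `W`, dimensions, and site data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, D_SKETCH = 8, 16
W = rng.normal(size=(D_IN, D_SKETCH))  # stand-in for a trained projection

def sketch(X):
    """Sum-pooled sketch: per-sample projections added together, so
    sketches from disjoint datasets combine by plain vector arithmetic."""
    return np.tanh(X @ W).sum(axis=0)

site_a = rng.normal(size=(50, D_IN))
site_b = rng.normal(size=(30, D_IN))

# Federated aggregation: each site ships only its sketch; the server adds them.
merged = sketch(site_a) + sketch(site_b)
assert np.allclose(merged, sketch(np.vstack([site_a, site_b])))

# Removing one site's contribution (e.g., after data deletion) is subtraction.
without_b = merged - sketch(site_b)
assert np.allclose(without_b, sketch(site_a))
```

In a privacy-aware deployment, calibrated noise would be added to each site's sketch before it leaves the site; that step is omitted here for clarity.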

Inventors

Alexander Ioannidis
Daniel Mas Montserrat
David Bonet Sole

Intellectual Property Information

Patent Pending

Related Materials

Contact


Other Information

Keywords

sketch, sketching, network, data, dataset, parameter, parameters, privacy, meta-learning, Compressive Meta-Learning, compression, pooling, federated, end-to-end training, differential privacy, federated aggregation, single-pass, Principal Component Analysis, PCA
