Communication-Efficient Federated Learning

Tech ID: 34170 / UC Case 2025-818-0

Brief Description

A groundbreaking algorithm that significantly reduces communication time and message size in distributed machine learning, ensuring fast and reliable model convergence.

Full Description

This technology presents a novel approach to federated learning that addresses the critical challenge of communication bottlenecks by transmitting only a single scalar value instead of high-dimensional parameter sets during the model update phase. This method dramatically lowers bandwidth requirements and communication time, facilitating scalable, efficient, and privacy-preserving machine learning across distributed networks.
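The listing does not disclose how the scalar encoding works, but the general idea of replacing a full parameter vector with a single transmitted number can be sketched. A minimal sketch, assuming server and clients share a random seed from which both derive the same pseudorandom direction; each client then sends only the projection of its local gradient onto that direction, and the server expands the averaged scalar back into a full update. All names here (`shared_direction`, `client_encode`, `server_decode`) are illustrative assumptions, not the patented method.

```python
import random

def shared_direction(seed, dim):
    # Assumption: server and clients derive the same pseudorandom
    # unit direction from a shared seed, so only a scalar must travel.
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def client_encode(gradient, direction):
    # Each client transmits a single scalar: the projection of its
    # local gradient onto the shared direction.
    return sum(g * d for g, d in zip(gradient, direction))

def server_decode(scalars, direction, lr=0.1):
    # The server averages the received scalars and expands them back
    # into a full-dimensional update along the shared direction.
    avg = sum(scalars) / len(scalars)
    return [-lr * avg * d for d in direction]

# One communication round: 3 clients, 4-dimensional model.
dim, seed = 4, 42
direction = shared_direction(seed, dim)
client_grads = [[0.5, -0.2, 0.1, 0.0],
                [0.4, -0.1, 0.2, 0.1],
                [0.6, -0.3, 0.0, -0.1]]
scalars = [client_encode(g, direction) for g in client_grads]
update = server_decode(scalars, direction)
print(len(scalars), len(update))  # each client sent 1 number, not dim numbers
```

In this sketch, per-round upload cost drops from one float per parameter to one float per client, which is the kind of bandwidth reduction the description claims; the actual algorithm and its convergence guarantee are not public.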

Suggested uses

  • Scalable federated learning solutions for Internet of Things (IoT) devices and mobile applications. 
  • Efficient distributed learning systems for edge computing environments. 
  • Resource-constrained scenarios requiring minimal data transmission and low energy consumption.

Advantages

  • Drastic reduction in data transmitted, minimizing communication time and bandwidth usage. 
  • Guaranteed convergence for reliable and efficient model updates. 
  • Significant scalability improvements allowing for more devices to participate in federated learning. 
  • Potential for enhanced security and privacy through gradient obfuscation. 
  • Simple and robust encoding process for easy integration into existing systems.

Patent Status

Patent Pending

Contact

5270 California Avenue / Irvine, CA 92697-7700 / Tel: 949.824.2683