Technique for Safe and Trusted AI

Tech ID: 33855 / UC Case 2024-9B0-0

Abstract

Researchers at the University of California, Davis, have developed a technology that enables provable editing of deep neural networks (DNNs) to meet specified safety criteria without altering their architecture.

Full Description

This invention presents systems and methods for editing deep neural networks (DNNs) so that they satisfy given safety specifications. Unlike traditional approaches that may require retraining from scratch, this method expresses safety specifications as quantified linear formulas and uses efficient linear-programming solvers to adjust the network's parameters, ensuring the DNN adheres to the specified input-output criteria without modifying its architecture.
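The listing does not disclose the inventors' algorithm, so the following is only a minimal sketch of the underlying idea, with all names and shapes being our own assumptions: because a ReLU network's output is linear in its final-layer parameters, a point-wise safety specification ("these inputs must be classified as class 0") becomes a set of linear constraints on those parameters, and a smallest satisfying edit can be found in closed form (here, shifting a single output bias) with the architecture left untouched.

```python
import numpy as np

# Hypothetical tiny ReLU network; weights and shapes are illustrative only.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def forward(x, b2=b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Point-wise safety spec: every input in `points` must be classified as class 0.
points = [rng.normal(size=4) for _ in range(5)]
target, eps = 0, 1e-3

# The output is linear in the final-layer bias, so the spec constraints are
# linear in b2; the smallest bias shift satisfying all of them equals the
# worst-case margin violation over the listed points.
violation = max(
    out[j] - out[target] + eps
    for out in (forward(x) for x in points)
    for j in range(len(b2)) if j != target
)
b2_edited = b2.copy()
b2_edited[target] += max(violation, 0.0)

# The edited network provably satisfies the spec on the listed points,
# and no layer, neuron, or connection was added or removed.
assert all(np.argmax(forward(x, b2_edited)) == target for x in points)
```

A real system would edit many parameters at once, minimizing the change under all constraints with an LP solver; this sketch only shows why no retraining or architectural change is needed.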

Applications

  • Enhancement of safety-critical applications such as self-driving cars and healthcare systems. 
  • Improvement of pattern recognition and problem-solving in computational models. 
  • Development of more reliable and efficient neural network editing tools.

Features/Benefits

  • Supports safety specifications expressed as quantified linear formulas, covering infinitely many inputs in high-dimensional spaces. 
  • Maintains the original architecture of the DNN, avoiding complex structural changes. 
  • Provides a provable editing approach that ensures DNNs meet specified safety criteria. 
  • Significantly reduces the time, processor resources, memory, and power typically required for DNN editing. 
  • Avoids the time-consuming, resource-intensive retraining otherwise needed to correct errors. 
  • Provides guidance for correcting DNNs that verifiers flag as violating a specification. 
  • Makes it easier to certify DNNs for safety-critical applications.
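The first bullet above notes that the specifications quantify over infinitely many inputs, not a finite test set. As a minimal sketch, under our own assumptions (interval bound propagation over an input box, which is not necessarily the inventors' method), a quantified linear property such as "for all x in [lo, hi], f(x) stays below a threshold" can be checked over the entire region at once:

```python
import numpy as np

# Hypothetical tiny ReLU network with a single output; illustrative only.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def affine_bounds(W, b, lo, hi):
    """Sound output bounds of W @ x + b for all x with lo <= x <= hi."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Input box: every coordinate ranges over [-0.1, 0.1].
lo, hi = np.full(4, -0.1), np.full(4, 0.1)
lo, hi = affine_bounds(W1, b1, lo, hi)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
lo, hi = affine_bounds(W2, b2, lo, hi)

# If hi <= t, the quantified spec "forall x in the box: f(x) <= t"
# provably holds; otherwise the box is a candidate for editing.
```

Such a bound covers all (infinitely many) inputs in the box in one pass, which is how a quantified specification can be checked, and an edit certified, without enumerating inputs.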

Patent Status

Patent Pending


Inventors

  • Tao, Zhe
  • Thakur, Aditya

Other Information

Keywords

artificial intelligence, deep neural networks (DNN), safety-critical application enhancement, pattern recognition, quantified linear formulas, DNN correction
