TT-PINN: A Tensor-Compressed Neural Partial Differential Equation Solver for Edge Computing

Tech ID: 32898 / UC Case 2023-842-0

Background

Physics-informed neural networks (PINNs) can solve a wide range of problems involving partial differential equations (PDEs), with applications in fluid mechanics, materials modeling, safety verification, control of autonomous systems, and much more. The multilayer perceptron (MLP) architecture is effective at learning complex systems, but achieving greater expressive power requires large networks, which significantly increases the memory and computing cost of training a PINN. Furthermore, in practice a PINN often has to be retrained whenever the problem setting changes. It is therefore increasingly important to enable PINN training on the multitude of available edge devices, which have very limited memory and computing power. 
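To make the idea concrete, the sketch below evaluates the physics-informed loss of a tiny MLP on a 1D Poisson problem. The network sizes, test problem, and use of finite differences (in place of the automatic differentiation a real PINN would use) are illustrative assumptions, not part of the UCSB technology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP u_theta(x): 1 -> 32 -> 1 (hypothetical sizes for illustration)
W1 = rng.standard_normal((1, 32)) * 0.5
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.5
b2 = np.zeros(1)

def u(x):
    """Network approximation of the PDE solution, tanh hidden layer."""
    h = np.tanh(x[:, None] @ W1 + b1)
    return (h @ W2 + b2).ravel()

def pinn_loss(xs, eps=1e-3):
    """Physics-informed loss for u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.

    Here f(x) = -pi^2 sin(pi x), so the true solution is u(x) = sin(pi x).
    The second derivative is approximated by central finite differences;
    a PINN would normally use automatic differentiation instead.
    """
    u_xx = (u(xs + eps) - 2 * u(xs) + u(xs - eps)) / eps**2
    f = -np.pi**2 * np.sin(np.pi * xs)
    residual = np.mean((u_xx - f) ** 2)                      # PDE residual term
    boundary = u(np.array([0.0]))[0]**2 + u(np.array([1.0]))[0]**2
    return residual + boundary

xs = np.linspace(0.05, 0.95, 19)  # interior collocation points
loss = pinn_loss(xs)
print(loss)
```

Training drives this loss toward zero by gradient descent on the weights; the memory cost the Background describes comes from storing and differentiating through every weight matrix, which is what motivates the compression below.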

Description

Researchers at the University of California, Santa Barbara have invented an end-to-end tensor-compressed technique for training PINNs on edge devices, combining a tensor-train (TT) compressed model representation with a PINN to approximate the solutions of PDEs. This technology marks the first time that TT decomposition has been applied to PINNs; it drastically reduces the number of trainable parameters in each hidden layer, replacing a large weight matrix with multiple small 3-way tensors (TT-cores), thus enabling the training of PINNs on edge devices. The technology, called TT-PINN, not only reduces the parameter count to make training affordable for edge devices, but also retains the expressive power of a larger PINN. Experimental results show that TT-PINNs significantly outperform PINNs of similar or larger size while using fewer parameters, and achieve similarly accurate predictions with 15x smaller models. The network-size reduction is expected to be even higher for large PINNs. 
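The compression described above can be sketched as follows: a hidden-layer weight matrix is reshaped and factored into small 3-way TT-cores, which are stored and trained in place of the full matrix. The matrix sizes and TT-rank below are hypothetical choices for illustration, not the parameters used in the UCSB work:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hidden-layer weight matrix with factorized shape:
# rows = m1*m2, cols = n1*n2 (hypothetical sizes for illustration)
m = (16, 16)   # 256 output units
n = (16, 16)   # 256 input units
rank = 4       # small TT-rank

# 3-way TT-cores: core_k has shape (r_{k-1}, m_k*n_k, r_k), boundary ranks 1.
# These small tensors are the trainable parameters instead of the full matrix.
core1 = rng.standard_normal((1, m[0] * n[0], rank))
core2 = rng.standard_normal((rank, m[1] * n[1], 1))

def tt_to_matrix(core1, core2, m, n):
    """Contract the TT-cores back into the full (m1*m2) x (n1*n2) matrix."""
    # Sum over the shared TT-rank index (boundary ranks are size 1).
    t = np.einsum('aib,bjc->ij', core1, core2)     # (m1*n1, m2*n2)
    t = t.reshape(m[0], n[0], m[1], n[1])
    # Reorder to (m1, m2, n1, n2), then merge row and column index groups.
    return t.transpose(0, 2, 1, 3).reshape(m[0] * m[1], n[0] * n[1])

W = tt_to_matrix(core1, core2, m, n)
full_params = W.size                       # 256 * 256 = 65536
tt_params = core1.size + core2.size        # 1024 + 1024 = 2048
print(W.shape, full_params // tt_params)   # 32x fewer trainable parameters
```

During training only the TT-cores are updated, and the forward pass can contract them with the layer input directly, so the full matrix never needs to be materialized in memory.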

Advantages

  • Enables PINN training on edge devices by drastically reducing training requirements
  • Retains the expressive power of a larger PINN
  • Outperforms PINNs of similar or larger sizes and achieves similarly accurate predictions with 15x smaller models

Applications

  • AI and Neural Networks
  • Medical Imaging
  • Electronic Design Automation
  • Fluid Dynamics
  • Digital Twins
  • Autonomous Systems
  • Multi-Agent Robots
  • Safety-Aware Learning-Based Verification


Inventors

  • Liu, Ziyue
  • Yu, Xinling
  • Zhang, Zheng

Other Information

Keywords

neural network, physics-informed neural networks, partial differential equations, PDEs, tensor-train, PINN, prediction, AI, autonomous systems, medical imaging