With the growing range of applications for Deep Neural Networks (DNNs), the demand for higher accuracy has directly driven up the depth of state-of-the-art models. Although deeper networks achieve higher accuracy, they suffer from long training times, slow convergence, and high computational complexity.
Researchers at UC San Diego have invented a network modification algorithm that takes a conventional CNN architecture as input and enforces a small-world property on its topology to generate a new network, called SWNet. The approach leverages a quantitative metric of small-worldness and devises a customized rewiring algorithm. The algorithm restructures the inter-layer connections of the input CNN to find a topology that balances regularity and randomness, the key characteristic of small-world networks (SWNs). In a CNN, small-world properties translate to an architecture in which all layers are interlinked via sparse connections. Such networks match the prediction quality and trainable parameter count of their baseline feed-forward architectures, but the added sparse links and near-optimal SWN connectivity enable better data flow. As a result, the architecture modification offers three main benefits: faster convergence, improved parameter efficiency, and accuracy on par with the baseline. A minimal sketch of the rewiring idea appears below.
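The disclosure does not reproduce the inventors' exact metric or rewiring procedure. The following Python sketch only illustrates the general idea under stated assumptions: it uses the standard sigma small-worldness measure, sigma = (C / C_rand) / (L / L_rand), and a Watts-Strogatz-style rewiring sweep over a graph whose nodes stand for CNN layers and whose edges stand for inter-layer connections. All names and parameters here (sigma_estimate, best_rewiring, n_layers, k, ps) are illustrative, not part of the patented algorithm.

```python
# Hypothetical sketch of small-world rewiring of a "layer graph",
# NOT the inventors' implementation.
import networkx as nx


def sigma_estimate(g: nx.Graph, n_rand: int = 5, seed: int = 0) -> float:
    """Estimate the sigma small-worldness metric:
    sigma = (C / C_rand) / (L / L_rand),
    where C is average clustering and L is average shortest-path length,
    each normalized against degree-preserving random references."""
    c = nx.average_clustering(g)
    l = nx.average_shortest_path_length(g)
    c_rand = l_rand = 0.0
    for i in range(n_rand):
        # Degree-preserving random rewiring that keeps the graph connected.
        r = nx.random_reference(g, niter=5, seed=seed + i)
        c_rand += nx.average_clustering(r) / n_rand
        l_rand += nx.average_shortest_path_length(r) / n_rand
    return (c / c_rand) / (l / l_rand)


def best_rewiring(n_layers: int = 30, k: int = 4,
                  ps=(0.0, 0.05, 0.1, 0.2, 0.5, 1.0)):
    """Sweep the rewiring probability p and keep the layer graph with the
    highest small-worldness. p = 0 is a fully regular ring lattice, p = 1 is
    fully random; small-world topologies sit between the two extremes."""
    candidates = [
        (p, nx.connected_watts_strogatz_graph(n_layers, k, p, seed=0))
        for p in ps
    ]
    return max(candidates, key=lambda pg: sigma_estimate(pg[1]))


if __name__ == "__main__":
    p, g = best_rewiring()
    print(f"best rewiring probability p = {p}; sigma = {sigma_estimate(g):.2f}")
```

Running the sketch shows the familiar Watts-Strogatz sweet spot: intermediate rewiring probabilities score the highest sigma, mirroring the balance of regularity and randomness that the SWNet rewiring algorithm seeks in a CNN's inter-layer connectivity.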
The disclosed technology can enhance the efficiency of any DNN used in applications such as image recognition, autonomous vehicles, and robotics.
In benchmarking experiments across various network architectures, SWNets achieved an average 2.1-fold reduction in the training iterations required to reach classification accuracy comparable to the baseline models on popular image classification benchmarks, including CIFAR-10, CIFAR-100, and ImageNet. A comparison of SWNet with the state-of-the-art DenseNet model shows that SWNets match its training performance with 10× fewer parameters.
Working software has been developed and is available for evaluation.
The idea is patent pending and available for licensing.