2019 Symposium Posters


Modular Neural Networks for Low-Power Computer Vision


Primary Investigator:
Yung-Hsiang Lu

Project Members:
Abhinav Goel, Aniesh Chawla, Sara Aghajanzadeh, Caleb Tung, George K. Thiruvathukal, Yung-Hsiang Lu and Shuo-Han Chen

Embedded devices are generally small, battery-powered computers with limited hardware resources. Running Deep Neural Networks (DNNs) on these devices is difficult because DNNs perform millions of operations and consume a significant amount of energy. Prior research has shown that a considerable fraction of a DNN's memory accesses and computations are redundant when performing tasks like image classification. To reduce these redundancies, and thereby the energy consumption of DNNs, we introduce the Modular Neural Network-Tree (MNN-Tree) architecture. In this architecture, categories are grouped together based on their similarity to one another, and small DNNs (called modules) classify inputs into these groups. Once a module selects a group, the predicted category is known to lie within that group; another module then processes the data further to classify among smaller subgroups of similar categories within the selected group. As a result, the operations associated with categories in other groups are avoided, reducing redundant computation and memory accesses. Evaluation on several major image datasets shows that, compared with existing DNN architectures performing image classification on the Raspberry Pi 3, the proposed solution reduces memory requirements by 53%-97%, inference time by 66%-96%, energy consumption by 67%-95%, and the number of operations by 96%-99%.
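The routing idea above can be sketched in a few lines. The following is a minimal, illustrative Python sketch of tree-structured inference, not the authors' implementation: the `Module` class, its placeholder `classify` method, and the toy category tree are all assumptions standing in for the small DNNs the abstract describes. Each level runs only one small module, so modules for unrelated groups are never executed.

```python
class Module:
    """A small classifier that routes an input to one of its child groups.

    In the MNN-Tree, each module would be a small DNN; here classify() is a
    stand-in that reads the answer directly from the input tuple.
    """

    def __init__(self, children):
        # children: dict mapping a group label -> child Module or leaf category
        self.children = children

    def classify(self, x, depth):
        # Placeholder for small-DNN inference: pick the group label for this
        # level of the hierarchy from the input.
        return x[depth]


def predict(root, x):
    """Walk from the root to a leaf, running one small module per level."""
    node, depth = root, 0
    while isinstance(node, Module):
        group = node.classify(x, depth)   # select a group of categories
        node = node.children[group]       # descend; other subtrees are skipped
        depth += 1
    return node  # leaf = final predicted category


# Toy hierarchy: coarse groups first, finer subgroups below them.
tree = Module({
    "animal": Module({"cat": "cat", "dog": "dog"}),
    "vehicle": Module({"car": "car", "bus": "bus"}),
})

print(predict(tree, ("animal", "dog")))   # -> dog
print(predict(tree, ("vehicle", "bus")))  # -> bus
```

Because an input labeled "animal" never reaches the vehicle module, the work associated with vehicle categories is skipped entirely, which is the source of the computation and memory-access savings the abstract reports.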