Journal of Shanghai Jiao Tong University ›› 2023, Vol. 57 ›› Issue (9): 1203-1213. doi: 10.16183/j.cnki.jsjtu.2022.077

Special Issue: Journal of Shanghai Jiao Tong University 2023 Special Topic on "Electronic Information and Electrical Engineering"

• Electronic Information and Electrical Engineering •

A Structured Pruning Method Integrating Characteristics of MobileNetV3

LIU Yu, LEI Xuemei

  1. School of Electronic Information Engineering, Inner Mongolia University, Hohhot 010021, China
  • Received: 2022-03-21  Revised: 2022-05-30  Accepted: 2022-06-06  Online: 2023-09-28  Published: 2023-09-27
  • Contact: LEI Xuemei  E-mail: ndlxm@imu.edu.cn

Abstract:

Because of their large computational cost and memory footprint, traditional deep neural networks are difficult to deploy on embedded platforms, so lightweight models have developed rapidly. Among them, the lightweight MobileNet architecture proposed by Google has been widely used. To improve performance, MobileNet has evolved from MobileNetV1 to MobileNetV3; however, the model has become more complex and its scale continues to grow, which makes it hard to fully exploit the advantages of a lightweight model. To ease the deployment of MobileNetV3 on embedded platforms while maintaining its performance, a structured pruning method integrating the characteristics of MobileNetV3 is proposed to prune the lightweight MobileNetV3-Large model and obtain a more compact lightweight model. First, the model is trained with sparse regularization to obtain a sparse network. Then, the product of the sparsity value of each convolution layer and the scale factor of the corresponding batch-normalization layer is used to identify redundant filters, which are removed by structured pruning. Experiments are conducted on the CIFAR-10 and CIFAR-100 datasets. The results show that the proposed compression method effectively reduces the model parameters while the compressed model still maintains good performance. With accuracy unchanged, the number of parameters on CIFAR-10 is reduced by 44.5% and the amount of computation by 40%.
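The abstract describes scoring each filter by the product of a convolution-layer sparsity value and the batch-normalization scale factor. The sketch below illustrates one plausible reading of that scoring rule in PyTorch, using MobileNetV3-Large from torchvision. The specific sparsity proxy (mean absolute filter weight), the 40% prune ratio, and the way conv/BN pairs are located are assumptions for illustration; they are not the paper's exact formulation.

```python
# Hedged sketch: filter-importance scoring as (conv-layer sparsity) x (BN gamma),
# applied to MobileNetV3-Large. Details of the metric and ratio are assumptions.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_large


def filter_importance(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> torch.Tensor:
    """Score each output filter by the product of its mean absolute
    convolution weight (a per-filter sparsity proxy) and the absolute
    batch-normalization scale factor gamma."""
    # conv.weight has shape (out_channels, in_channels, k, k)
    conv_sparsity = conv.weight.detach().abs().mean(dim=(1, 2, 3))
    gamma = bn.weight.detach().abs()
    return conv_sparsity * gamma


def redundant_filters(conv: nn.Conv2d, bn: nn.BatchNorm2d,
                      prune_ratio: float = 0.4) -> torch.Tensor:
    """Return indices of the lowest-scoring filters as pruning candidates."""
    scores = filter_importance(conv, bn)
    n_prune = int(prune_ratio * scores.numel())
    return torch.argsort(scores)[:n_prune]


if __name__ == "__main__":
    model = mobilenet_v3_large(weights=None)
    # Walk the feature extractor and score each Conv2d that is immediately
    # followed by a BatchNorm2d (illustrative pairing, not the paper's exact scheme).
    mods = list(model.features.modules())
    for a, b in zip(mods, mods[1:]):
        if isinstance(a, nn.Conv2d) and isinstance(b, nn.BatchNorm2d):
            idx = redundant_filters(a, b)
            print(f"{a.out_channels} filters -> {idx.numel()} pruning candidates")
```

In practice, the sparse-regularization training step mentioned in the abstract would typically add an L1 penalty on the BN scale factors to the loss before this scoring is applied, and the selected filters would then be physically removed (structured pruning) together with the matching input channels of the following layer.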

Key words: deep neural network, lightweight model, structured pruning, MobileNetV3
