
SPConv.pytorch

[IJCAI-20, 12.9% acceptance rate] Split to Be Slim: An Overlooked Redundancy in Vanilla Convolution

This repo provides a PyTorch implementation of the IJCAI 2020 paper
"Split to Be Slim: An Overlooked Redundancy in Vanilla Convolution".

Requirements

The basic training code is borrowed from the NVIDIA DALI tutorials.

  • Python 3
  • PyTorch 1.1
  • NVIDIA DALI for the GPU data loader
  • NVIDIA APEX for mixed-precision training (a minimal setup sketch follows this list)
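
For orientation, the snippet below shows how APEX mixed precision is typically wired into a PyTorch training loop. This is a minimal sketch, not this repo's actual training script: the ResNet-50 model, the `train_loader`, and the `"O1"` opt level are assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision
from apex import amp  # NVIDIA APEX

model = torchvision.models.resnet50().cuda()  # assumed stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Wrap model and optimizer for mixed precision ("O1" is an assumed opt level).
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for images, labels in train_loader:  # assumed: a DALI (or other) GPU dataloader
    loss = F.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    # Scale the loss so that FP16 gradients do not underflow.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```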

Introduction to SPConv

Figure: Redundancy in Feature Maps

Abstract

We reveal that many feature maps within a layer share similar but not identical patterns. However, it is difficult to tell whether features with similar patterns are redundant or contain essential details. Therefore, instead of directly removing uncertain redundant features, we propose a split-based convolutional operation, namely SPConv, that tolerates features with similar patterns while requiring less computation.

Specifically, we split the input feature maps into a representative part and an uncertain redundant part. Intrinsic information is extracted from the representative part through relatively heavy computation, while the tiny hidden details in the uncertain redundant part are processed with lightweight operations.
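
To make the split concrete, here is a minimal PyTorch sketch of a split-based convolution in the spirit of SPConv. It is an illustration under assumptions, not this repo's implementation: `SPConvSketch`, the plain 3x3 heavy path, and the exact fusion details are this sketch's own choices (the paper describes a parameter-free fusion of the two branches).

```python
import torch
import torch.nn as nn

class SPConvSketch(nn.Module):
    """Minimal sketch of a split-based convolution in the spirit of SPConv.

    Input channels are split by ratio alpha into a representative part
    (heavy 3x3 convolution) and an uncertain redundant part (cheap 1x1
    convolution); the branch outputs are fused with parameter-free
    softmax weights derived from global average pooling.
    """

    def __init__(self, in_channels, out_channels, alpha=0.5, stride=1):
        super().__init__()
        self.rep = int(in_channels * alpha)   # representative channels
        self.red = in_channels - self.rep     # uncertain redundant channels
        # Heavy path: extracts intrinsic information from the representative part.
        self.conv3x3 = nn.Conv2d(self.rep, out_channels, kernel_size=3,
                                 stride=stride, padding=1, bias=False)
        # Light path: recovers tiny hidden details from the redundant part.
        self.conv1x1 = nn.Conv2d(self.red, out_channels, kernel_size=1,
                                 stride=stride, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        x_rep, x_red = torch.split(x, [self.rep, self.red], dim=1)
        y_rep = self.conv3x3(x_rep)
        y_red = self.conv1x1(x_red)
        # Parameter-free fusion: per-channel softmax over GAP statistics.
        stats = torch.stack([y_rep.mean(dim=(2, 3)), y_red.mean(dim=(2, 3))])
        w = torch.softmax(stats, dim=0)       # (2, N, C) branch weights
        out = (y_rep * w[0, :, :, None, None] +
               y_red * w[1, :, :, None, None])
        return self.bn(out)
```

In the paper's full module, the heavy path combines a group-wise 3x3 convolution with a point-wise 1x1 convolution; the sketch collapses this into a single 3x3 for brevity.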

Figure: The SPConv Module

Performance

SPConv outperforms state-of-the-art baselines in both accuracy and GPU inference time, while sharply reducing FLOPs and parameter counts.

Small Scale Classification

CIFAR_10 - VGG_16

| Model | FLOPs | FLOPs Reduced | Params | Params Reduced | Acc@1 |
|---|---|---|---|---|---|
| VGG_16-Baseline | 349.51M | - | 16.62M | - | 94.00% |
| Ghost-VGG_16-s2 | 158M | 45.20% | 7.7M | 53.67% | 93.70% |
| SPConv-VGG_16-α1/2 | 118.27M | 66.24% | 5.6M | 66.30% | 94.40% |
| HetConv-VGG_16-P4 | 105.98M | 69.67% | 5.17M | 68.89% | 93.92% |
| SPConv-VGG_16-α1/4 | 79.34M | 77.29% | 3.77M | 77.31% | 93.94% |
| HetConv-VGG_16-P8 | 76.89M | 78.00% | 3.54M | 78.70% | 93.86% |
| SPConv-VGG_16-α1/8 | 59.87M | 82.87% | 2.85M | 82.85% | 93.77% |
| SPConv-VGG_16-α1/16 | 55.14M | 84.22% | 2.39M | 85.62% | 93.43% |

CIFAR_10 - ResNet_20

| Model | FLOPs | FLOPs Reduced | Params | Params Reduced | Acc@1 |
|---|---|---|---|---|---|
| ResNet_20-Baseline | 41.62M | - | 0.27M | - | 92.00% |
| SPConv-ResNet_20-α1/2 | 17.91M | 56.96% | 0.10M | 63.00% | 92.23% |
| SPConv-ResNet_20-α1/4 | 12.89M | 75.88% | 0.071M | 73.70% | 91.15% |

Large Scale Classification

ImageNet2012 - ResNet50

| Model | FLOPs | FLOPs Reduced | Params | Params Reduced | Acc@1 | Acc@5 | GPU Inference Time | Download |
|---|---|---|---|---|---|---|---|---|
| ResNet50-Baseline | 4.14G | - | 25.56M | - | 75.91% | 92.78% | 1.32 ms | - |
| SPConv-ResNet50-α1/2 | 2.97G | 28.26% | 18.34M | 28.24% | 76.26% | 93.05% | 1.23 ms | model |
| HetConv-ResNet50-P4 | 2.85G | 30.32% | - | - | 76.16% | - | - | - |
| SPConv-ResNet50-α1/4 | 2.74G | 33.82% | 16.93M | 33.76% | 75.95% | 92.99% | 1.19 ms | model |
| SPConv-ResNet50-α1/8 | 2.62G | 36.72% | 16.22M | 36.54% | 75.40% | 92.77% | 1.17 ms | model |
| OctConv-ResNet50-α0.5† | 2.40G | 42.00% | 25.56M | 0.00% | 76.40% | 93.14% | 3.51 ms | - |
| Ghost-ResNet50-s2 | 2.20G | 46.85% | 13.0M | 49.00% | 75.00% | 92.30% | - | - |
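
In the tables, α is the split ratio assigned to the representative part: a smaller α routes fewer channels through the heavy 3x3 path, trading a little accuracy for large FLOPs and parameter savings. As a hypothetical illustration with the `SPConvSketch` module above (this reproduces the trend only, not the tables' exact numbers):

```python
# Parameter counts of the sketch module for several alphas (illustrative only).
for alpha in (1/2, 1/4, 1/8):
    m = SPConvSketch(in_channels=256, out_channels=256, alpha=alpha)
    n_params = sum(p.numel() for p in m.parameters())
    print(f"alpha=1/{int(1 / alpha)}: {n_params / 1e3:.1f}K parameters")
```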

Citation

If you find this work or code helpful in your research, please cite:

```bibtex
@inproceedings{zhang2020spconv,
  title={Split to Be Slim: An Overlooked Redundancy in Vanilla Convolution},
  author={Zhang, Qiulin and Jiang, Zhuqing and Lu, Qishuo and Han, Jia'nan and Zeng, Zhengxin and Gao, Shang-Hua and Men, Aidong},
  booktitle={International Joint Conference on Artificial Intelligence (IJCAI)},
  year={2020}
}
```
