Quantization Caffe for TEE Compute Stick

We add a quantization layer to the official Caffe so that it can train quantized (1-bit/3-bit) models.

The trained quantized model can then be converted with the conversion tool into a model that runs on the TEE Compute Stick.
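Training with the quantization layer follows the usual Caffe workflow, since the layer only needs to appear in the training prototxt. The sketch below uses the standard pycaffe interface; the solver path, weight path, and the assumption that the net prototxt already declares this repository's quantization layer are illustrative only (the exact layer type is defined by this fork and not shown here).

import caffe

# Use the GPU build if available; CPU mode also works but is slower.
caffe.set_device(0)
caffe.set_mode_gpu()

# Load a standard Caffe solver. The net prototxt referenced by the solver
# is assumed to already include this fork's quantization layer among its
# layer definitions (hypothetical path).
solver = caffe.SGDSolver('models/quant_example/solver.prototxt')

# Optionally fine-tune from full-precision weights (hypothetical path).
solver.net.copy_from('models/quant_example/full_precision.caffemodel')

# Run the training loop; snapshots are written as configured in the solver.
solver.solve()

The resulting .caffemodel snapshot is what you would feed to the conversion tool for the TEE Compute Stick.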

License and Citation

Caffe is released under the BSD 2-Clause license. The BAIR/BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}
