Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation)
Updated May 3, 2024 (Shell)
Code for "Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?" [ICML 2023]
Deploy KoGPT with Triton Inference Server
Japanese NLP sample codes
RWKV Wiki website (archived, please visit official wiki)
Reference PyTorch code for Hugging Face transformers
Implementation of the paper "Audio Mamba: Bidirectional State Space Model for Audio Representation Learning" in PyTorch
A radically simple, reliable, and high-performance template to get you building multi-agent applications quickly
Master's Final Degree Project on Artificial Intelligence and Big Data
Scripts run to produce the RIBO-former paper
Setup transformers development environment using Docker