
Streamlit UI for OpenAI's Whisper transcription & analytics

Demo video: demo.mp4

This is a simple Streamlit UI for OpenAI's Whisper speech-to-text model. It lets you select media via a YouTube URL or local files & then runs Whisper on them. It then displays some basic analytics on the transcription. Feel free to send a PR if you want to add any more analytics or features!

Setup

This was built & tested on Python 3.9 but should also work on Python 3.7+ (as with the original Whisper repo). You'll need ffmpeg installed on your system. Then, install the Python requirements with pip:

pip install -r requirements.txt
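ffmpeg isn't installable via pip, so grab it from your system's package manager; a few common options (pick the one matching your platform) are:

```shell
# macOS (Homebrew)
brew install ffmpeg

# Ubuntu / Debian
sudo apt update && sudo apt install ffmpeg

# Windows (Chocolatey)
choco install ffmpeg
```

You can verify the install with `ffmpeg -version` before running the app.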

Usage

Once you're set up, you can run the app with:

streamlit run 01_Transcribe.py

This will open a new tab in your browser with the app. You can then enter a YouTube URL or select a local file & click "Run Whisper" to run the model on the selected media.
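If you'd rather skip the UI, the underlying `whisper` package can be called directly from Python. A minimal sketch (the model size & audio path below are placeholders, not anything this repo ships):

```python
import whisper

# Load one of Whisper's pretrained checkpoints ("tiny" is the smallest & fastest;
# larger models like "base", "small", "medium" trade speed for accuracy).
model = whisper.load_model("tiny")

# Transcribe a local audio file; ffmpeg is used under the hood to decode it.
# The result is a dict containing the full text plus per-segment timestamps.
result = model.transcribe("audio.mp3")

print(result["text"])
for segment in result["segments"]:
    print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")
```

This is roughly what the app does behind the "Run Whisper" button, minus the download & analytics steps.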

License

Whisper is licensed under MIT & Streamlit under Apache 2.0. Everything else in this repo is licensed under MIT.
