Create captions for videos using AI.
This small project uses OpenAI's Whisper to generate captions for videos.
- FFmpeg
- Python
- Clone the repo
- Install Python from here if you have not already installed it.
- You can install FFmpeg from here or use your package manager to install it.
- Install the requirements by running `py setup.py` inside the project directory.
- If you have not already, run the `setup.py` file to install the required packages: `python setup.py`
- Then run `main.py` with your own arguments: `python main.py -v <video path> -m <whisper model name> -p` (the `-p` flag is optional)
- Example: `python main.py -v video.mp4 -m tiny.en -p`
- Arguments
  - `-v` or `--video` - The path to the video file.
  - `-m` or `--model` - The name of the model to use. (The list of models can be found below.)
  - `-p` or `--preview` - If you want to preview the generated video.
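  The repository's own argument handling is not shown here; as a rough sketch (an assumption, not the project's actual code), these flags map naturally onto Python's `argparse`:

  ```python
  # Minimal sketch of parsing the flags described above with argparse.
  # Illustrative only: the defaults and help texts are assumptions, not the project's code.
  import argparse

  parser = argparse.ArgumentParser(description="Create captions for videos using AI.")
  parser.add_argument("-v", "--video", required=True, help="The path to the video file.")
  parser.add_argument("-m", "--model", default="tiny.en", help="The name of the Whisper model to use.")
  parser.add_argument("-p", "--preview", action="store_true", help="Preview the generated video.")
  args = parser.parse_args()

  print(args.video, args.model, args.preview)
  ```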
- Models

  | Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
  |---|---|---|---|---|---|
  | tiny | 39 M | tiny.en | tiny | ~1 GB | ~32x |
  | base | 74 M | base.en | base | ~1 GB | ~16x |
  | small | 244 M | small.en | small | ~2 GB | ~6x |
  | medium | 769 M | medium.en | medium | ~5 GB | ~2x |
  | large | 1550 M | N/A | large | ~10 GB | 1x |

  For English-only applications, the `.en` models tend to perform better, especially the `tiny.en` and `base.en` models. These models are provided by OpenAI. More Info
It may take some time to run for the first time since the AI model needs to be downloaded.
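The captions themselves come from the Whisper models listed above. As a rough sketch of what that step can look like with the `openai-whisper` package (the file name and model choice here are assumptions, not the project's exact code):

```python
# Sketch of loading a Whisper model and transcribing extracted audio.
# load_model() downloads the model weights on first use, which is why the
# first run can take a while. The file name and model size are assumptions.
import whisper

model = whisper.load_model("tiny.en")
result = model.transcribe("audio.mp3")

# Each segment carries start/end timestamps, which is what caption formats
# like WebVTT are built from.
for segment in result["segments"]:
    print(f"{segment['start']:.2f} --> {segment['end']:.2f} {segment['text']}")
```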
The output of the program will be in the `output` folder. It will contain a video called `output.mp4`, a transcript file called `captions.vtt`, and an audio file called `audio.mp3`.
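If you want to reuse the generated `captions.vtt` elsewhere, for example attached to a video as a soft subtitle track rather than burned in, FFmpeg can mux it into an MP4. A minimal sketch using Python's `subprocess`, assuming FFmpeg is on your PATH and using the file names from the `output` folder (the output file name is just an example):

```python
# Sketch: mux the generated WebVTT captions into an MP4 as a soft subtitle track.
# Assumes FFmpeg is on PATH; the result file name is an arbitrary example.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "output/output.mp4",    # video produced by the program
    "-i", "output/captions.vtt",  # generated transcript
    "-c", "copy",                 # copy audio/video streams without re-encoding
    "-c:s", "mov_text",           # MP4 containers need mov_text subtitles
    "video_with_subtitles.mp4",
], check=True)
```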
Open an issue to report a bug or request a feature.
- 0.3
  - Using FFmpeg instead of moviepy
  - Significantly increased processing speed
- 0.2
  - Added the ability to preview the generated video.
  - Added better argument parsing.
- 0.1.1
  - Fixed video getting cut off
- 0.1.0
  - Initial Release
This project is licensed under the GNU General Public License v3.0.