
Text-to-video model

From Wikipedia, the free encyclopedia
A video generated using OpenAI's unreleased Sora text-to-video model, from the prompt: A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.

A text-to-video model is a machine learning model that uses a natural language description as input to produce a video relevant to the input text.[1] Advancements during the 2020s in the generation of high-quality, text-conditioned videos have largely been driven by the development of video diffusion models.[2]

Models


Numerous text-to-video models exist, including several that are open source. CogVideo, which takes Chinese-language input,[3] is the earliest text-to-video model "of 9.4 billion parameters" to be developed, with a demo version of its open source code first presented on GitHub in 2022.[4] That year, Meta Platforms released a partial text-to-video model called "Make-A-Video",[5][6][7] and Google Brain (later Google DeepMind) introduced Imagen Video, a text-to-video model with a 3D U-Net.[8][9][10][11][12]

In March 2023, a research paper titled "VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation" was published, presenting a novel approach to video generation.[13] The VideoFusion model decomposes the diffusion process into two components, base noise and residual noise, with the base noise shared across frames to ensure temporal coherence. By using a pre-trained image diffusion model as its base generator, the model efficiently produces high-quality, coherent videos. Fine-tuning the pre-trained model on video data addresses the domain gap between image and video data, enhancing its ability to produce realistic and consistent video sequences.[14] In the same month, Adobe introduced Firefly AI, its family of generative features.[15]
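The shared-noise idea behind this decomposition can be illustrated with a small sketch: each frame's noise is a mix of one base-noise map common to the whole clip and a per-frame residual, which correlates the noise across time. This is a minimal NumPy illustration only; the mixing weight `lambda_base` and the variable names are illustrative choices, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_frames, h, w = 8, 16, 16
lambda_base = 0.8  # illustrative weight of the shared base component

# One base-noise map shared by every frame of the clip.
base_noise = rng.standard_normal((h, w))
# An independent residual-noise map for each frame.
residual_noise = rng.standard_normal((num_frames, h, w))

# Per-frame noise = shared base component + frame-specific residual,
# scaled so each frame's noise stays approximately unit-variance.
frame_noise = (np.sqrt(lambda_base) * base_noise
               + np.sqrt(1.0 - lambda_base) * residual_noise)

# Because frames share a common component, their noise is correlated
# across time, which encourages temporally coherent samples.
corr = np.corrcoef(frame_noise[0].ravel(), frame_noise[1].ravel())[0, 1]
```

With a larger `lambda_base`, consecutive frames share more of their noise and the generated clip changes more slowly from frame to frame.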

In January 2024, Google announced development of a text-to-video model named Lumiere, which is anticipated to integrate advanced video editing capabilities.[16] Matthias Niessner and Lourdes Agapito at the AI company Synthesia work on developing 3D neural rendering techniques that can synthesise realistic video using 2D and 3D neural representations of shape, appearance, and motion for controllable video synthesis of avatars.[17] In June 2024, Luma Labs launched its Dream Machine video tool.[18][19] That same month,[20] Kuaishou extended its Kling AI text-to-video model to international users. In July 2024, TikTok owner ByteDance released Jimeng AI in China through its subsidiary Faceu Technology.[21] By September 2024, the Chinese AI company MiniMax had debuted its video-01 model, joining established Chinese AI model companies such as Zhipu AI, Baichuan, and Moonshot AI in the country's rapidly developing AI sector.[22]

Alternative approaches to text-to-video models include[23] Google's Phenaki, Hour One, Colossyan,[24] Runway's Gen-3 Alpha,[25][26] and OpenAI's unreleased (as of August 2024) Sora,[27] available only to alpha testers.[28] Several additional text-to-video models, such as Plug-and-Play, Text2LIVE, and Tune-A-Video, have emerged.[29] Google is also preparing to launch a video generation tool named Veo for YouTube Shorts in 2025.[30]

Architecture and training


Several architectures have been used to create text-to-video models. Similar to text-to-image models, they can be trained using recurrent neural networks (RNNs) such as long short-term memory (LSTM) networks, which have been used for pixel-transformation models and stochastic video-generation models, aiding consistency and realism respectively.[31] Transformer models are an alternative. Generative adversarial networks (GANs), variational autoencoders (VAEs), which can aid in the prediction of human motion,[32] and diffusion models have also been used to develop the image-generation aspects of these models.[33]
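Of these families, diffusion models underpin most recent systems. As a rough illustration, the forward ("noising") process that a diffusion model is trained to reverse can be sketched as follows; the scalar `alpha_bar` stands in for a full noise schedule, and all names and shapes here are illustrative, not taken from any particular model.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(x0, alpha_bar):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar) * x0, (1 - alpha_bar) * I).

    A diffusion model is trained to predict eps (the injected noise)
    from the noisy input x_t, conditioned on the text prompt; sampling
    then runs this corruption process in reverse.
    """
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps

frame = rng.standard_normal((16, 16))              # stand-in for one video frame
noisy_frame, eps = add_noise(frame, alpha_bar=0.5) # halfway through the schedule
```

Real systems apply this per-frame corruption jointly across a whole clip (often in a learned latent space) so that the denoiser can model temporal structure as well as appearance.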

Text-video datasets used to train models include, but are not limited to, WebVid-10M, HDVILA-100M, CCV, ActivityNet, and Panda-70M.[34][35] These datasets contain millions of original videos of interest, generated videos, captioned videos, and textual information that help train models for accuracy. Text-prompt datasets used to train models include, but are not limited to, PromptSource, DiffusionDB, and VidProM.[34][35] These datasets provide the range of text inputs needed to teach models how to interpret a variety of textual prompts.

The video generation process involves synchronizing the text inputs with video frames, ensuring alignment and consistency throughout the sequence.[35] This predictive process is subject to decline in quality as the length of the video increases due to resource limitations.[35]
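In many modern systems this text-frame alignment is implemented with cross-attention, in which each frame (or patch) token attends over the encoded prompt tokens. Below is a minimal single-head NumPy sketch with the learned projection matrices omitted for brevity; the shapes and names are illustrative assumptions, not any specific model's interface.

```python
import numpy as np

def cross_attention(frame_tokens, text_tokens):
    """Each frame token attends over the text tokens: softmax(QK^T / sqrt(d)) V.

    Here the frame tokens act as queries and the text tokens as both
    keys and values (projections omitted for simplicity).
    """
    d = frame_tokens.shape[-1]
    scores = frame_tokens @ text_tokens.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ text_tokens

rng = np.random.default_rng(2)
frames = rng.standard_normal((4, 8))   # 4 frame tokens, dimension 8
text = rng.standard_normal((3, 8))     # 3 encoded prompt tokens, dimension 8
out = cross_attention(frames, text)    # each frame token is now text-conditioned
```

Because every frame token mixes in information from the prompt at each layer, the generated frames stay tied to the text throughout the sequence; doing this independently per frame, however, is one reason coherence degrades over longer clips.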

Limitations


Despite rapid progress in the performance of text-to-video models, a primary limitation is that they are very computationally demanding, which restricts their capacity to provide high-quality and lengthy outputs.[36][37] Additionally, these models require large amounts of specific training data to generate high-quality and coherent outputs, which raises the issue of accessibility.[37][36]

Moreover, models may misinterpret textual prompts, resulting in video outputs that deviate from the intended meaning. This can occur due to limitations in capturing semantic context embedded in text, which affects the model’s ability to align generated video with the user’s intended message.[37][35] Various models, including Make-A-Video, Imagen Video, Phenaki, CogVideo, GODIVA, and NUWA, are currently being tested and refined to enhance their alignment capabilities and overall performance in text-to-video generation.[37]

Ethics


The deployment of text-to-video models raises ethical considerations related to content generation. These models have the potential to create inappropriate or unauthorized content, including explicit material, graphic violence, misinformation, and likenesses of real individuals without consent.[38] Ensuring that AI-generated content complies with established standards for safe and ethical usage is essential, as content generated by these models may not always be easily identified as harmful or misleading. The ability of AI to recognize and filter out NSFW or copyrighted content remains an ongoing challenge, with implications for both creators and audiences.[38]

Impacts and applications


Text-to-video models offer a broad range of applications in fields from education and promotion to the creative industries. They can streamline the creation of training videos, movie previews, gaming assets, and visualizations, making it easier to generate high-quality, dynamic content.[39] These capabilities can lower production costs and make video creation accessible to individual users.

Comparison of existing models

| Model/Product | Company | Year released | Status | Key features | Capabilities | Pricing | Video length | Supported languages |
|---|---|---|---|---|---|---|---|---|
| Synthesia | Synthesia | 2019 | Released | AI avatars, multilingual support for 60+ languages, customization options[40] | Specialized in realistic AI avatars for corporate training and marketing[40] | Subscription-based, starting around $30/month | Varies based on subscription | 60+ |
| InVideo AI | InVideo | 2021 | Released | AI-powered video creation, large stock library, AI talking avatars[40] | Tailored for social media content with platform-specific templates[40] | Free plan available, paid plans starting at $16/month | Varies depending on content type | Multiple (not specified) |
| Fliki | Fliki AI | 2022 | Released | Text-to-video with AI avatars and voices, extensive language and voice support[40] | Supports 65+ AI avatars and 2,000+ voices in 70 languages[40] | Free plan available, paid plans starting at $30/month | Varies based on subscription | 70+ |
| Runway Gen-2 | Runway AI | 2023 | Released | Multimodal video generation from text, images, or videos[41] | High-quality visuals, various modes like stylization and storyboard[41] | Free trial, paid plans (details not specified) | Up to 16 seconds | Multiple (not specified) |
| Pika Labs | Pika Labs | 2024 | Beta | Dynamic video generation, camera and motion customization[42] | User-friendly, focused on natural dynamic generation[42] | Currently free during beta | Flexible, supports longer videos with frame continuation | Multiple (not specified) |
| Runway Gen-3 Alpha | Runway AI | 2024 | Alpha | Enhanced visual fidelity, photorealistic humans, fine-grained temporal control[43] | Ultra-realistic video generation with precise key-framing and industry-level customization[43] | Free trial available, custom pricing for enterprises | Up to 10 seconds per clip, extendable | Multiple (not specified) |
| OpenAI Sora | OpenAI | 2024 (expected) | Alpha | Deep language understanding, high-quality cinematic visuals, multi-shot videos[44] | Capable of creating detailed, dynamic, and emotionally expressive videos; still under development with safety measures[44] | Pricing not yet disclosed | Expected to generate longer videos; duration specifics TBD | Multiple (not specified) |


References

  1. ^ Artificial Intelligence Index Report 2023 (PDF) (Report). Stanford Institute for Human-Centered Artificial Intelligence. p. 98. Multiple high quality text-to-video models, AI systems that can generate video clips from prompted text, were released in 2022.
  2. ^ Melnik, Andrew; Ljubljanac, Michal; Lu, Cong; Yan, Qi; Ren, Weiming; Ritter, Helge (2024-05-06). "Video Diffusion Models: A Survey". arXiv:2405.03150 [cs.CV].
  3. ^ "Text-to-Video Generative AI Models: The Definitive List". AI Business. Retrieved 19 August 2024.
  4. ^ CogVideo, THUDM, 2022-10-12, retrieved 2022-10-12
  5. ^ Davies, Teli (2022-09-29). "Make-A-Video: Meta AI's New Model For Text-To-Video Generation". Weights & Biases. Retrieved 2022-10-12.
  6. ^ Monge, Jim Clyde (2022-08-03). "This AI Can Create Video From Text Prompt". Medium. Retrieved 2022-10-12.
  7. ^ "Meta's Make-A-Video AI creates videos from text". www.fonearena.com. Retrieved 2022-10-12.
  8. ^ "google: Google takes on Meta, introduces own video-generating AI". The Economic Times. 6 October 2022. Retrieved 2022-10-12.
  9. ^ Monge, Jim Clyde (2022-08-03). "This AI Can Create Video From Text Prompt". Medium. Retrieved 2022-10-12.
  10. ^ "Nuh-uh, Meta, we can do text-to-video AI, too, says Google". www.theregister.com. Retrieved 2022-10-12.
  11. ^ "Papers with Code - See, Plan, Predict: Language-guided Cognitive Planning with Video Prediction". paperswithcode.com. Retrieved 2022-10-12.
  12. ^ "Papers with Code - Text-driven Video Prediction". paperswithcode.com. Retrieved 2022-10-12.
  13. ^ Luo, Zhengxiong; Chen, Dayou; Zhang, Yingya; Huang, Yan; Wang, Liang; Shen, Yujun; Zhao, Deli; Zhou, Jingren; Tan, Tieniu (2023). "VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation". arXiv:2303.08320 [cs.CV].
  14. ^ "VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation". ar5iv. Retrieved 2024-08-30.
  15. ^ "Adobe launches Firefly Video model and enhances image, vector and design models". Adobe Newsroom.
  16. ^ "Google announces the development of Lumiere, an AI-based next-generation text-to-video generator".
  17. ^ "Text to Speech for Videos". Retrieved 2023-10-17.
  18. ^ "Luma AI debuts 'Dream Machine' for realistic video generation, heating up AI media race". VentureBeat. Retrieved 16 August 2024.
  19. ^ "Apple Debuts Intelligence, Mistral Raises $600 Million, New AI Text-To-Video". Forbes. Retrieved 16 August 2024.
  20. ^ "What you need to know about Kling, the AI video generator rival to Sora that's wowing creators". VentureBeat. Retrieved 16 August 2024.
  21. ^ "ByteDance joins OpenAI's Sora rivals with AI video app launch". Reuters. Retrieved 16 August 2024.
  22. ^ "Chinese AI "tiger" MiniMax launches text-to-video-generating model to rival OpenAI's Sora".
  23. ^ Text2Video-Zero, Picsart AI Research (PAIR), 2023-08-12, retrieved 2023-08-12
  24. ^ "Text-to-Video Generative AI Models: The Definitive List". AI Business. Retrieved 16 August 2024.
  25. ^ "Runway's Sora competitor Gen-3 Alpha now available". The Decoder. Retrieved 16 August 2024.
  26. ^ "Generative AI's Next Frontier Is Video". Bloomberg. Retrieved 16 August 2024.
  27. ^ "OpenAI teases 'Sora,' its new text-to-video AI model". NBC News. Retrieved 16 August 2024.
  28. ^ "Toys R Us creates first brand film to use OpenAI's text-to-video tool". Marketing Dive. Retrieved 16 August 2024.
  29. ^ Jin, Jiayao; Wu, Jianhang; Xu, Zhoucheng; Zhang, Hang; Wang, Yaxin; Yang, Jielong (2023-08-04). "Text To Video: Enhancing Video Generation Using Diffusion Models And Reconstruction Network". IEEE: 108–114. doi:10.1109/CCPQT60491.2023.00024. ISBN 979-8-3503-4269-7.
  30. ^ "Google's Veo text-to-video AI generator is coming to YouTube Shorts". PCMag.
  31. ^ Bhagwatkar, Rishika; Bachu, Saketh; Fitter, Khurshed; Kulkarni, Akshay; Chiddarwar, Shital (2020-12-17). "A Review of Video Generation Approaches". IEEE: 1–5. doi:10.1109/PICC51425.2020.9362485. ISBN 978-1-7281-7590-4.
  32. ^ Kim, Taehoon; Kang, ChanHee; Park, JaeHyuk; Jeong, Daun; Yang, ChangHee; Kang, Suk-Ju; Kong, Kyeongbo (2024-01-03). "Human Motion Aware Text-to-Video Generation with Explicit Camera Control". IEEE: 5069–5078. doi:10.1109/WACV57701.2024.00500. ISBN 979-8-3503-1892-0.
  33. ^ Singh, Aditi (2023-05-09). "A Survey of AI Text-to-Image and AI Text-to-Video Generators". IEEE: 32–36. doi:10.1109/AIRC57904.2023.10303174. ISBN 979-8-3503-4824-8.
  34. ^ a b Miao, Yibo; Zhu, Yifan; Dong, Yinpeng; Yu, Lijia; Zhu, Jun; Gao, Xiao-Shan (2024-09-08), T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models, doi:10.48550/arXiv.2407.05965, retrieved 2024-10-27
  35. ^ a b c d e Zhang, Ji; Mei, Kuizhi; Wang, Xiao; Zheng, Yu; Fan, Jianping (August 2018). "From Text to Video: Exploiting Mid-Level Semantics for Large-Scale Video Classification". IEEE: 1695–1700. doi:10.1109/ICPR.2018.8545513. ISBN 978-1-5386-3788-3.
  36. ^ a b Bhagwatkar, Rishika; Bachu, Saketh; Fitter, Khurshed; Kulkarni, Akshay; Chiddarwar, Shital (2020-12-17). "A Review of Video Generation Approaches". IEEE: 1–5. doi:10.1109/PICC51425.2020.9362485. ISBN 978-1-7281-7590-4.
  37. ^ a b c d Singh, Aditi (2023-05-09). "A Survey of AI Text-to-Image and AI Text-to-Video Generators". IEEE: 32–36. doi:10.1109/AIRC57904.2023.10303174. ISBN 979-8-3503-4824-8.
  38. ^ a b Miao, Yibo; Zhu, Yifan; Dong, Yinpeng; Yu, Lijia; Zhu, Jun; Gao, Xiao-Shan (2024-09-08), T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models, doi:10.48550/arXiv.2407.05965, retrieved 2024-10-27
  39. ^ Singh, Aditi (2023-05-09). "A Survey of AI Text-to-Image and AI Text-to-Video Generators". IEEE: 32–36. doi:10.1109/AIRC57904.2023.10303174. ISBN 979-8-3503-4824-8.
  40. ^ a b c d e f "Top AI Video Generation Models of 2024". Deepgram. Retrieved 2024-08-30.
  41. ^ a b "Runway Research | Gen-2: Generate novel videos with text, images or video clips". runwayml.com. Retrieved 2024-08-30.
  42. ^ a b Sharma, Shubham (2023-12-26). "Pika Labs' text-to-video AI platform opens to all: Here's how to use it". VentureBeat. Retrieved 2024-08-30.
  43. ^ a b "Runway Research | Introducing Gen-3 Alpha: A New Frontier for Video Generation". runwayml.com. Retrieved 2024-08-30.
  44. ^ a b "Sora | OpenAI". openai.com. Retrieved 2024-08-30.