OpenAI has unveiled its new AI model, Sora, which can create minute-long videos from text prompts. "We're teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction," OpenAI writes in its blog post announcing Sora.
What is OpenAI Sora?
Sora is an AI model developed by OpenAI that builds on its earlier research on the DALL·E and GPT models. It generates videos from text instructions and can also animate a still image, turning it into a moving scene. Sora can create an entire video in a single pass or extend an existing video to make it longer, and it can produce clips up to one minute long while maintaining visual quality and accuracy to the prompt.
Is it available, and how can you use it?
For now, Sora is available only to red teamers (experts in areas such as misinformation, hateful content, and bias) who are assessing the model for potential harms and risks. OpenAI is also granting access to a group of visual artists, designers, and filmmakers to gather feedback on how to improve the model. The company does, however, intend to make Sora available to the wider public eventually. As the blog post puts it, "We're sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon."