OpenAI Releases New Sora Videos Hinting at the Future of Video Creation

Ben Desai video with Sora

Professionally Made Clips Showcase Advanced AI Capabilities

OpenAI has released seven new videos on its YouTube channel, all created with its Sora model. Each was made by a professional filmmaker entirely from Sora-generated content.

Sora, first revealed earlier this year, is still limited to a select group of filmmakers and creative professionals, and OpenAI indicates this exclusivity will continue for the foreseeable future.

One of the new videos, by artist Ben Desai, is a black-and-white, documentary-style film that places animals in unexpected roles. Another, by Tammy Lovin, showcases a neon dreamscape with scenes such as car washes and people walking on clouds.

These clips demonstrate that Sora still holds an edge over current models, even after the release of Runway Gen-3 and Luma's Dream Machine, but the gap is narrowing. Longer initial generations give Sora better consistency across a clip, and the natural motion suggests it is nearing an open-world model. Competitors are catching up, however, and may match Sora before it becomes publicly available.

Tammy Lovin, a digital artist and creative director of Tammy Studio, created the first of the videos. Specializing in 3D and emerging technology rather than filmmaking, Lovin noted, “What I love most about Sora is that I feel like I am co-creating with it… It feels like teamwork, in the smoothest and most idealistic way possible. Not being on a solo journey has been pretty significant to me.” The experience has catalyzed new creative processes for her. “It feels a bit like magic to me, being able to actually show in video form what, until now, I was only picturing in my imagination. Ever since I was a kid, I kind of had these montages and surreal visuals about certain things I was seeing in reality, and I’d picture them differently. But since I didn’t become a producer or director, they never really came to life until now. So this is a dream come true.”

Lovin’s video transitions from a neon car wash to a man walking through clouds and a woman lighting up the beach, the kind of surreal imagery she says she had pictured since childhood but could never bring to life until now.

Tammy Lovin using Sora

Benjamin Desai, a creative technologist and digital artist focused on augmented reality and immersive content, created another video. It blends early 20th-century film aesthetics with whimsical scenarios, placing animals in unexpected roles. The video opens with a bear cycling and a gorilla skateboarding, progressing to a dancing panda, a man riding a dinosaur, and a woman on a giant turtle. Desai explained, “This work aims to ignite a sense of wonder while showcasing the potential of today’s technology. Creating with Sora is still an experimental process, involving a lot of iteration and fine-tuning. It’s much more of a human-AI collaboration than a magic button solution.”

Ben Desai using Sora

Tim Fu, a designer formerly with Zaha Hadid Architects and the founder of @StudioTimFu, a high-tech architectural practice specializing in computational design and AI, has also created a video using Sora.

Tim Fu using Sora

“Sora revolutionizes architecture by allowing us to vividly explore concepts, while we can build these ideas to life,” Fu stated. “Beyond images and videos, generative visualization serves as a design process. Spatial quality and materiality can be readily explored in unprecedented speeds, allowing architects and designers to focus on the core values of design instead of the production of visuals,” he explained. For Fu, the appeal is that Sora shifts architects’ attention away from producing visuals and back toward the design itself.

Future Availability of Sora

OpenAI has not provided a specific timeline for when Sora might be publicly available. Earlier this year, CTO Mira Murati suggested a potential summer release, but that now looks unlikely. If Sora is released this year, it would arrive after the U.S. presidential election in November, possibly alongside a major ChatGPT update.

OpenAI is expanding access to Sora to a broader group of professionals, including VFX experts, architects, choreographers, engineering artists, and other creatives. The goal, a spokesperson explained, is to “help us understand the model’s capabilities and limitations, shaping the next phase of research to create increasingly safe AI systems over time.”

Implications for the Future of Video Creation

The release of Sora-generated videos signals a shift in the video production landscape. These AI-driven tools open new creative possibilities, allowing artists to visualize ideas that were previously difficult to realize. Generating high-quality, consistent content with natural motion can streamline production, reduce costs, and put polished creative output within reach of almost anyone.

For filmmakers and digital artists, Sora and similar AI models provide a powerful tool to enhance their storytelling capabilities. This technology can democratize video creation, making advanced production techniques accessible to a broader range of creators.

However, OpenAI’s cautious approach raises questions about the broader implications of AI in creative fields. As competitors catch up, the urgency to release Sora may increase. The potential for widespread AI-driven video creation could transform the entertainment, advertising, and education industries, offering new ways to engage audiences and convey information.

As AI models like Sora continue to evolve, the future of video creation looks set to be more innovative and dynamic, providing creators (and enthusiasts) with unprecedented tools to bring their visions to life.