Sora is OpenAI’s advanced video generation model, designed to create realistic video from text prompts, images, or existing video clips, opening new possibilities for storytelling and creativity. Today, OpenAI is launching Sora Turbo, an updated version of the model, available to ChatGPT Plus and Pro users. However, Sora is currently unavailable in the UK, Switzerland, and the European Economic Area, and access is restricted to users aged 18 or older. The rollout includes features such as storyboard tools for more structured prompting and community feeds showcasing user creations, both of which OpenAI plans to refine further.
Sora Turbo builds on the February version with notable improvements in performance and usability, especially in processing speed. It employs a diffusion model framework, transforming noisy video into coherent visuals, and integrates a transformer architecture to ensure consistency and smooth transitions across frames. The model uses space-time patches during training to enhance scalability and efficiency, although their direct role in output generation is not fully detailed. Drawing on techniques from DALL·E 3, Sora Turbo leverages a recaptioning method to improve alignment with user prompts.
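Sora’s internals are not public, but the space-time patch idea can be illustrated with a toy sketch. The snippet below (plain NumPy; the video dimensions, patch sizes, and the trivial “denoising step” are all invented for illustration) chops a small video tensor into space-time patches and flattens them into token-like vectors, the kind of unit a diffusion transformer would iteratively denoise.

```python
import numpy as np

# Toy illustration, not OpenAI's implementation: chop a video tensor into
# space-time patches, the token-like units described above.
T, H, W, C = 16, 64, 64, 3                        # 16 frames of 64x64 RGB
video = np.random.rand(T, H, W, C).astype(np.float32)

pt, ph, pw = 4, 16, 16                            # patch = 4 frames x 16x16 pixels

# Reshape into a grid of patches, then flatten each patch into a vector.
patches = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
patches = patches.transpose(0, 2, 4, 1, 3, 5, 6)  # group the three grid axes first
tokens = patches.reshape(-1, pt * ph * pw * C)    # (64, 3072): 64 space-time tokens

# A diffusion transformer would predict the noise in these tokens and subtract
# a fraction of it at each step; here the "prediction" is a zero placeholder.
predicted_noise = np.zeros_like(tokens)
tokens = tokens - 0.1 * predicted_noise
print(tokens.shape)
```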
While Sora Turbo delivers on several fronts, limitations remain. Video outputs are capped at 1080p resolution and 20 seconds in duration, and the model struggles with maintaining coherence in complex physics simulations or extended narratives. OpenAI acknowledges these constraints and has signaled plans to expand resolution, improve safety mechanisms, and make the technology more accessible over time. Additionally, the subscription model imposes limits on usage, with ChatGPT Plus users allowed up to 50 videos at 480p resolution per month, while Pro subscribers gain access to higher resolutions and longer durations. Tailored pricing plans for different user needs are under consideration for the future.
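For teams budgeting generations, the limits quoted above can be kept in a small lookup. The snippet below simply restates the figures mentioned in this post (they are subject to change, and the Pro quota is not specified here); the helper function is illustrative, not an OpenAI API.

```python
# Illustrative only: restates the tier limits as described in this post.
TIER_LIMITS = {
    "plus": {"videos_per_month": 50, "max_resolution": "480p"},
    "pro":  {"videos_per_month": None, "max_resolution": "1080p"},  # quota not stated here
}
HARD_CAPS = {"max_resolution": "1080p", "max_duration_seconds": 20}

def within_monthly_quota(tier: str, videos_used: int) -> bool:
    """True if another generation fits the tier's monthly quota (None = unspecified)."""
    limit = TIER_LIMITS[tier]["videos_per_month"]
    return limit is None or videos_used < limit

print(within_monthly_quota("plus", 49))  # True: one generation left this month
```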
Safety remains a central focus, with Sora Turbo embedding C2PA metadata for provenance, applying visible watermarks by default, and employing multi-layered filtering systems. OpenAI has developed classifiers to detect and moderate sensitive content with high precision. However, the 97% accuracy figure cited in relation to filtering applies specifically to child safety-related classifiers and does not reflect overall performance.
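Because provenance is embedded as C2PA metadata, downstream tools can inspect it. Here is a minimal sketch, assuming the open-source c2patool CLI from the Content Authenticity Initiative is installed on PATH and prints the manifest store as JSON (its default behavior at the time of writing); the filename is hypothetical, and Sora’s exact manifest fields are not documented in this post.

```python
import json
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the C2PA manifest store embedded in a media file, if any.

    Relies on the c2patool CLI (https://github.com/contentauth/c2patool).
    """
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest found, or the tool could not read the file
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("sora_clip.mp4")  # hypothetical filename
print("C2PA provenance found" if manifest else "No C2PA manifest")
```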

Sora enters an increasingly competitive landscape of AI video generation tools but distinguishes itself in several significant ways. Platforms like Runway’s Gen-2 and Google’s Veo offer impressive text-to-video capabilities at 1080p but are generally limited to 10–16 seconds of footage. Sora, with its 20-second clips, stands out not only for its extended output but also for its space-time patch approach, which helps maintain coherence in longer sequences. Unlike specialized tools such as Synthesia or DeepBrain AI, which focus on corporate-friendly talking-head videos, Sora aims to be a versatile, general-purpose video generation system for storytelling and creative workflows.
Its high computational demands and lingering challenges with physics simulation position Sora closer to premium offerings designed for larger organizations, potentially limiting accessibility for individual developers and smaller teams. Open models are also making steady progress, further underscoring the need for differentiation and innovation in the rapidly evolving AI video generation market.

Sora’s struggles with maintaining coherence in longer narratives make it less suitable for professional cinematic applications. Additionally, its iterative workflow, requiring multiple attempts and detailed prompt engineering, reflects the need for continued refinements in user experience. OpenAI has acknowledged these areas for improvement and emphasized its iterative deployment strategy.
Beyond these technical limitations, OpenAI’s safety measures, such as C2PA metadata, visible watermarking, and advanced content filtering, are commendable steps toward responsible use. However, geographic restrictions and persistent risks of misuse, such as deepfakes and misinformation, underscore the challenges of deploying such technology responsibly. OpenAI says it is iterating on its safety approach while working to expand availability in currently restricted markets.

For teams evaluating Sora, its limitations must be balanced against its creative potential. While its capabilities open new frontiers in storytelling, the high computational costs, regulatory constraints, and restricted availability pose significant barriers. The success of Sora—and similar tools—will depend not only on technical innovation but also on delivering practical value while navigating the ethical and regulatory complexities of the AI video generation ecosystem.
Related Content
- The Impact of Text-to-Video Models on Video Production
- The Future of Creativity: The Intersection of AI and Copyright
If you enjoyed this post, please support our work by encouraging your friends and colleagues to subscribe to our newsletter:
