OpenAI released Sora 2, an AI-powered app that creates high-definition videos from text prompts and allows users to insert realistic “cameos” of themselves and others into AI-generated content. The app immediately surged to become the most popular video app on iOS, but its ability to generate copyrighted characters like Mario and Pikachu has sparked significant copyright and deepfake concerns among legal experts and researchers.
What you should know: Sora 2 represents a major upgrade from OpenAI’s original Sora model, now featuring synchronized dialogue and sound effects alongside video generation.
- Users can create lifelike videos from simple text descriptions, and the app uses a one-time recording of each user to enable realistic cameo appearances.
- The app reached the top of the iOS App Store’s Photo & Video category within 24 hours of launch, though access remains invitation-only as OpenAI gradually expands availability.
The copyright controversy: Early user creations have featured protected characters from major entertainment franchises, raising immediate legal red flags.
- Videos circulating online include Nintendo characters like Mario, Luigi, and Princess Peach, as well as other copyrighted figures like Lara Croft and Ronald McDonald.
- According to The Wall Street Journal, the app lets users generate copyrighted material unless rights holders specifically opt out; blanket opt-outs aren’t available, so holders must instead submit examples of offending content.
What legal experts are saying: UCLA law professor Mark McKenna draws a crucial distinction between training AI models and generating copyrighted outputs.
- “If OpenAI is taking an aggressive approach that says they’re going to allow outputs of your copyright-protected material unless you opt out, that strikes me as not likely to work. That’s not how copyright law works,” McKenna explained.
- “The early indications show that training AI models on legitimately acquired copyright material can be considered fair use. There’s a very different question about the outputs of these systems.”
Deepfake concerns: Beyond copyright issues, Sora 2’s realistic capabilities have raised alarms about potential misuse for harmful content.
- One popular early creation depicted OpenAI CEO Sam Altman committing theft, demonstrating how easily the tool can generate false depictions of real people engaging in criminal activity.
- The high-quality outputs have sparked broader concerns about gory content, child-safety risks, and the spread of deepfakes.
Safety measures and limitations: OpenAI has implemented several authentication methods, though experts question their effectiveness.
- All Sora 2 videos include moving watermarks and invisible metadata indicating AI generation, but OpenAI’s own documentation acknowledges this “is not a silver bullet” since metadata can be “easily removed either accidentally or intentionally” (see the sketch after this list).
- Siwei Lyu of the University at Buffalo’s Media Forensic Lab noted that while these measures provide “an additional layer of protection,” their effectiveness requires more testing, particularly for the invisible watermarks, which only OpenAI can evaluate internally.
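For readers curious what the metadata layer actually looks like, here is a minimal Python sketch that dumps a video’s container-level tags with ffprobe (part of FFmpeg) and shows why stripping them is trivial. The file name is hypothetical, and this inspects only generic container tags; OpenAI’s invisible metadata is reportedly a C2PA-style “content credentials” manifest, which lives in its own dedicated structure and requires C2PA tooling to verify properly.

```python
# Minimal sketch: inspect a video's container-level metadata with ffprobe
# (ships with FFmpeg). The file name "sora_clip.mp4" is hypothetical.
import json
import subprocess

def read_container_tags(path: str) -> dict:
    """Return the format-level metadata tags ffprobe finds in the file."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout).get("format", {}).get("tags", {})

if __name__ == "__main__":
    print(read_container_tags("sora_clip.mp4"))
    # The fragility OpenAI concedes: a simple remux such as
    #   ffmpeg -i sora_clip.mp4 -map_metadata -1 -c copy stripped.mp4
    # drops the container tags without even re-encoding the video.
```

That fragility is why the pixel-level invisible watermark matters more than the metadata, and why Lyu’s caveat above, that outside researchers can’t yet test it, carries weight.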
Legal landscape: OpenAI faces mounting copyright litigation, including high-profile lawsuits from authors such as Ta-Nehisi Coates and Jodi Picoult, as well as The New York Times.
- Competitor Anthropic recently agreed to pay $1.5 billion to settle claims from authors who alleged illegal use of their books for AI training.
- Other AI companies face similar scrutiny, with China’s ByteDance and its Seedance video generation model also attracting copyright concerns.
Pikachu at war and Mario on the street: OpenAI’s Sora 2 thrills and alarms the internet