In a significant stride into the social networking sphere, OpenAI has introduced Sora 2, an advanced AI model that generates both video and audio content, accompanied by a dedicated social app reminiscent of TikTok, but with an AI-generated twist. The invite-only application, also named Sora, lets users create short clips, insert themselves into AI-generated scenes, and scroll through an algorithmically curated feed of user-generated content, marking OpenAI’s official foray into social networking.
The standout feature of Sora 2 is not its social aspect but the substantial upgrade in the AI model itself. A sequel to last year’s Sora, the new model respects physical laws, unlike its predecessor, which often produced dreamlike, physics-defying clips. In Sora 2, missed shots bounce realistically off the backboard, and beach volleyball rallies, skateboard tricks, and cannonball splashes appear grounded in reality rather than like an acid trip.
The app’s headline feature is “cameos.” Users can upload a short verification video of themselves to generate a realistic digital likeness. This likeness can then be inserted into various scenes, from surfing and breakdancing to winning a volleyball game. Users can grant friends access to use their likeness, enabling groups of AI-generated versions of users and their friends to star in clips together. While this feature opens doors to creative expression, it also raises concerns about potential misuse.
OpenAI is initially launching the iOS app for free in the US and Canada, with plans for swift global expansion. Monetization is currently limited to charging for extra video generations during peak demand. Meanwhile, ChatGPT Pro subscribers will gain early access to the Sora 2 Pro model without needing an invite.
Like TikTok or Instagram Reels, the Sora feed is algorithmically tailored, but it draws on more inputs than most. OpenAI considers users’ Sora activity, location, post history, and even ChatGPT conversations (though the latter can be toggled off) to curate the feed. Parents also gain controls over their children’s usage through ChatGPT, including limits on scrolling and on who can send them direct messages.
However, the risks are evident. Giving friends permission to use one’s likeness requires trust that they won’t abuse it. While OpenAI promises users can revoke access at any time, the specter of deepfake-style harms and identity theft looms large.
The introduction of Sora 2 raises critical questions. Does a platform built on AI-generated content and user likenesses represent a creative innovation, or does it open Pandora’s box for deepfake abuse and identity theft? Should such platforms require stricter safeguards than traditional social media, or are OpenAI’s friend-permission controls and revocation features sufficient protection?
We invite you to share your thoughts in the comments below or reach out to us via our Twitter or Facebook.