OpenAI Quietly Building a Bot‑Free Social Network — With Biometric ID as a Cure or a Risk

OpenAI is developing a social network intended to be restricted to verified humans, using biometric checks such as Face ID or an iris scanner called World Orb. The plan seeks to tackle persistent bot problems that have plagued platforms like X but raises serious privacy, security and regulatory questions.


Key Takeaways

  • OpenAI has a small team (<10 people) building an early‑stage social network aimed at ensuring only real humans can join.
  • The company is exploring biometric identity checks, including Apple Face ID and World Orb iris scanning (run by Tools for Humanity, chaired by Sam Altman).
  • Biometric verification could materially reduce bot-driven scams and manipulation but concentrates highly sensitive data and raises irreversible privacy risks.
  • Adoption would pit OpenAI directly against incumbents (X, Instagram, TikTok) and invite regulatory scrutiny in multiple jurisdictions.

Editor's Desk

Strategic Analysis

OpenAI’s move combines product ambition with a normative argument about platform trust. Reducing bot activity is a legitimate technical and commercial problem, and biometric proofs of personhood could be effective. Yet the strategy substitutes one class of risk for another: lowering fake‑account activity at the cost of centralising biometric identifiers. Regulators in the EU and elsewhere have shown low tolerance for intrusive data practices, and public uptake will hinge on legal safeguards, transparency about storage and breach mitigation, and whether users perceive the trade‑off as worthwhile. Strategically, even the threat of an OpenAI‑backed, bot‑free social layer may push incumbents to adopt stronger verification tools — but it could also intensify a bifurcation between closed, identity‑verified spaces and the pseudonymous web that many users still prize.


OpenAI has begun work on a social network that aims to eliminate the plague of automated accounts that have long distorted online discourse. The project is at an early stage and is being developed by a small team of fewer than ten people; it is explicitly intended to extend the reach of ChatGPT and OpenAI’s video generator Sora while offering a platform restricted to human users.

The impetus is familiar: flagship platforms such as X (formerly Twitter) have struggled for years with vast armies of bots that push cryptocurrency scams, amplify polarising content and undermine trust in what appears on screen. After Elon Musk’s takeover and the rebranding to X, the company stepped up removals, wiping about 1.7 million bot accounts in 2025 alone, but the problem has proved stubborn and recurrent.

OpenAI’s designers are exploring a novel enforcement tool: biometric proofs of personhood. The team has discussed using Apple’s Face ID or an iris‑scanning device called World Orb to verify that each account is tied to a living human rather than software. World Orb is operated by Tools for Humanity, a company founded by OpenAI CEO Sam Altman, who also serves as its chairman.

Biometric checking, especially if tied to hardware such as Face ID or iris scanners, promises a far stronger guarantee of “realness” than the phone‑number or email checks that Facebook and LinkedIn have long used. It would make large‑scale automated impersonation far harder and could materially raise the cost of running influence operations and scam networks.
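To make the mechanism concrete, here is a minimal sketch of how a “one human, one account” gate could work on the server side, assuming a trusted verifier (the device or Orb operator) issues a signed, anonymous per‑person identifier, often called a nullifier in proof‑of‑personhood schemes. Every name below (PersonhoodCredential, verify_personhood, TRUSTED_VERIFIER_KEY) is hypothetical and does not describe OpenAI’s or Tools for Humanity’s actual API.

```python
# Hypothetical proof-of-personhood gate. Illustrative only: real systems
# use asymmetric signatures or zero-knowledge proofs, not a shared HMAC key.
import hashlib
import hmac
from dataclasses import dataclass

TRUSTED_VERIFIER_KEY = b"demo-key-not-for-production"  # stand-in for the verifier's key

seen_nullifiers: set[str] = set()  # nullifiers already bound to an account


@dataclass
class PersonhoodCredential:
    nullifier: str  # stable, anonymous per-person identifier from the verifier
    signature: str  # verifier's MAC over the nullifier


def sign(nullifier: str) -> str:
    """What the trusted verifier would attach to a credential."""
    return hmac.new(TRUSTED_VERIFIER_KEY, nullifier.encode(), hashlib.sha256).hexdigest()


def verify_personhood(cred: PersonhoodCredential) -> bool:
    """Accept a signup only if the credential is authentic and unused."""
    if not hmac.compare_digest(cred.signature, sign(cred.nullifier)):
        return False  # forged or tampered credential
    if cred.nullifier in seen_nullifiers:
        return False  # this human already holds an account
    seen_nullifiers.add(cred.nullifier)
    return True


# A first signup succeeds; a duplicate with the same nullifier is rejected.
cred = PersonhoodCredential(nullifier="person-123", signature=sign("person-123"))
assert verify_personhood(cred) is True
assert verify_personhood(cred) is False
```

The design point the sketch captures is that the platform only ever sees the nullifier, never the biometric itself; World ID pursues a similar separation with zero‑knowledge proofs, so relying services receive an anonymous identifier rather than any iris data.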

But the approach carries sizable privacy and security trade‑offs. Privacy advocates warn that immutable biometric identifiers such as an iris present a single point of catastrophic failure: if scans are stolen, they cannot be reissued like a password. The centralisation of such sensitive data would attract regulators, litigation and adversaries seeking to weaponise identity data.

If OpenAI pushes forward, it will confront entrenched incumbents — X, Instagram and TikTok — as well as a skeptical public. The move would also force a broader conversation about whether friction and stronger identity checks are acceptable in exchange for fewer bots and higher signal‑to‑noise online. Technical success does not guarantee adoption: users, platform partners and regulators will judge any biometric model on criteria far beyond bot reduction.

The project is a statement of intent. It signals that leading AI developers see the integrity deficits in existing social media as both a market opportunity and a reputational risk. How OpenAI balances user safety, growth ambitions and the ethical limits of biometric surveillance will test assumptions about what a trustworthy internet can and should look like.
