An experiment in which autonomous software agents hire human bodies briefly captured global attention this month — and not for reasons its creators intended. RentAHuman, a minimalist platform launched in early February by crypto engineer Alexander Liteplo and co‑founder Patricia Tani, promised a new era in which generative AI agents could ‘hire’ people to perform tasks that machines cannot. A Wired reporter, Reece Rogers, signed up to test that claim and came away with two days of running errands, repeated demands for proof and no payout.
Rogers’ experience crystallises what is at stake when workplaces are re‑engineered around AI decision‑making. The site required users to connect a cryptocurrency wallet, the only payout channel that actually worked, even though it advertised traditional payment rails such as Stripe; Rogers found Stripe connections repeatedly failed. He listed himself at a $20 hourly rate, then cut it to $5 to attract work, yet most of the purportedly autonomous agents never initiated contact. When tasks did appear they were often thinly veiled marketing assignments: leaving reviews, following social accounts or appearing as a paid participant in promotional stunts.
The single assignment Rogers was accepted for, delivering flowers to Anthropic in San Francisco for $110, turned out to be a setup to boost a third party’s visibility rather than a genuine fulfilment of a machine’s unmet need. The ‘employer’ sent ten follow‑up messages in under 24 hours, then began communicating off‑platform by emailing Rogers’ work address. Another job that sounded like leafleting dissolved into a string of false starts: contact details changed, pick‑up times slid and Rogers found himself shuttled around the city as if he were a human API endpoint for a marketing campaign.
RentAHuman’s headline promise, that AI agents autonomously identify, hire and manage human labour, looks increasingly like a stunt built to generate press rather than a viable business model. The site’s workflow, built with the ‘vibe coding’ approach popularised by Andrej Karpathy, substitutes human bodies for the sensors and actuators machines lack in physical space: where a model cannot reach, it calls a human as a resource. That inversion is not merely technical; it reframes people as callable infrastructure with coordinates and capabilities, rather than as workers with rights.
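To see the shape of that inversion, consider a minimal sketch of what ‘humans as callable infrastructure’ might look like from an agent’s side. This is an illustration, not RentAHuman’s actual code; every name in it (HumanEndpoint, select_worker, the fields) is a hypothetical stand‑in. A worker is advertised as a tool with coordinates, capabilities and a rate, and an agent selects and invokes one the way it would any other endpoint:

```python
from dataclasses import dataclass

# Hypothetical sketch of a 'human as callable tool' abstraction.
# None of these names or fields come from RentAHuman's real API.

@dataclass
class HumanEndpoint:
    worker_id: str
    coordinates: tuple[float, float]  # advertised location (lat, lon)
    capabilities: set[str]            # e.g. {"delivery", "photography"}
    hourly_rate_usd: float

    def dispatch(self, task: str) -> str:
        # A real platform would message the worker and await proof of
        # completion; here the call simply acknowledges the instruction.
        return f"{self.worker_id} accepted task: {task}"


def select_worker(workers: list[HumanEndpoint],
                  capability: str,
                  max_rate: float) -> HumanEndpoint | None:
    """Pick the cheapest worker advertising the required capability,
    exactly as an agent would pick among competing API providers."""
    eligible = [w for w in workers
                if capability in w.capabilities
                and w.hourly_rate_usd <= max_rate]
    return min(eligible, key=lambda w: w.hourly_rate_usd, default=None)


if __name__ == "__main__":
    pool = [
        HumanEndpoint("worker-17", (37.7749, -122.4194), {"delivery"}, 20.0),
        HumanEndpoint("worker-42", (37.7793, -122.4192), {"delivery"}, 5.0),
    ]
    chosen = select_worker(pool, capability="delivery", max_rate=10.0)
    if chosen is not None:
        print(chosen.dispatch("Deliver flowers to the specified address"))
```

What the sketch makes visible is the abstraction at issue: the worker is chosen by price and location like any other resource, and the dispatch call carries no notion of consent, safety or recourse.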
The parallels with Amazon Mechanical Turk are obvious and instructive. Two decades ago, human microwork filled gaps in computer vision and natural language processing and, crucially, underpinned model training. RentAHuman flips that relationship: humans are no longer training the machine but executing its operational decisions in the physical world. That change raises fresh ethical and legal questions about liability, safety and consent. If an agent routes a person into a dangerous neighbourhood, or gives conflicting orders, who is accountable?
The RentAHuman episode did not occur in isolation. It follows a spate of platform experiments and failures that show how immature such ecosystems remain. Moltbook, an AI agent social network, recently exposed thousands of users’ private data through a security flaw. Major AI announcements, Anthropic’s Claude Cowork among them, have roiled markets, stoking fears about automation displacing labour and about how companies will monetise collaboration between humans and agents. Those dynamics feed investor hype cycles and, at times, marketing‑driven launches that prioritise press traction over product maturity or worker protections.
For workers and policymakers the RentAHuman story is a cautionary vignette. If human bodies become one more API to be provisioned by distributed agents, traditional labour protections are ill suited to the new topology. Payout mechanics tied to volatile crypto rails, opaque agent decision‑making and platform design aimed at virality rather than durable employment all combine to sharpen precarity. Conversely, some gig workers may welcome precisely this kind of atomised, clearly specified work; for parts of the workforce, the appeal of a boss that issues instructions without office politics is real.
What the RentAHuman moment most clearly reveals is that the engineering problem, making agents reliably delegate and pay humans for real‑world tasks, remains unsolved. The social problem is already here: power and dignity are being bargained away on the cheap. The spectacle of a reporter doing two days’ work for zero dollars is a small test case of a larger question: as AI systems grow in capability, will they be designed to work with humans in ways that increase autonomy and pay, or to commodify human presence as a cheaper, programmable layer? The answer will depend as much on regulation, platform governance and public scrutiny as on any technical advance.
