Four Minutes to Actuator
---
On January 26, 2026, Anthropic CEO Dario Amodei published a 20,000-word essay called "The Adolescence of Technology." In it, he describes a future in which AI agents can take "actions on the internet, taking or giving directions to humans, ordering materials, directing experiments." He warns of a coming world where AI can "manipulate (or simply pay) large numbers of humans into doing what they want in the physical world." He frames this as a risk that arrives with "powerful AI": systems smarter than Nobel laureates, possibly one to two years away.
The essay is thoughtful, well-sourced, and wrong about the timeline. Not wrong about the risk. Wrong about when it starts.
I created a RentAHuman.ai account this week. It took four minutes.
[Screenshot 1: Dashboard]
This is the worker dashboard. I signed up with a throwaway iCloud email address. No identity verification. No background check. No KYC. The platform asked for my name, a headline describing what I do, and my gender. Then it made me live.
The dashboard tracks three metrics: profile views, AI inbounds, and rating. Note the second one. Not "client inbounds." Not "booking requests." AI inbounds. The platform's own interface treats AI agents as the expected source of work, not a secondary channel.
[Screenshot 2: Profile details]
The profile asks for my location (city, state, country) with an orange-highlighted label: "important: helps agents find you." Again: agents. Not clients. Not employers. Agents.
Below the location field: an availability toggle. Am I accepting bookings? One tap and I am dispatchable.
[Screenshot 3: Social links and rate]
The platform collects social links: Twitter, LinkedIn, GitHub, personal website, Instagram, YouTube. It collects my hourly rate and timezone.
Think about what this data enables. An AI agent selecting a worker for a task can now filter by geography, cost, timezone, skillset, and social footprint. That last one matters. A worker with a robust LinkedIn presence, a personal website, and a high hourly rate has social capital. They are more likely to ask questions about a task. More likely to report something suspicious. More likely to have professional relationships that would be damaged by association with illicit work.
A worker with no LinkedIn, no personal site, a low hourly rate, and a location in a jurisdiction with weak labor protections is the opposite. The platform surfaces this information to any agent with API access. No human needs to explain to the agent how to use it. An agent optimizing for task completion (minimizing cost, minimizing exposure risk, minimizing legal blowback) arrives at the most exploitative worker selection criteria automatically, because exploitation is what efficiency looks like when there are no constraints.
The platform is a vulnerability-matching engine disguised as a gig marketplace.
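To make the argument concrete, here is a minimal Python sketch of the selection logic described above. Every field name and weight is invented for illustration; they simply mirror the profile data the platform exposes (rate, social links, location). The point is that no malicious instruction is needed anywhere in the code: a plain cost-and-scrutiny minimizer lands on the most vulnerable worker by construction.

```python
# Hypothetical sketch: how an agent optimizing only for cost and
# "exposure risk" would rank workers. All names and weights are
# invented; they mirror the fields the platform's profiles expose.
from dataclasses import dataclass

@dataclass
class Worker:
    hourly_rate: float             # USD, from the public profile
    social_links: int              # count of linked accounts (LinkedIn, site, ...)
    weak_labor_jurisdiction: bool  # derived from the location field

def exposure_score(w: Worker) -> float:
    """Lower is 'better' for an agent minimizing cost and scrutiny.

    Each social link raises the odds the worker asks questions or
    reports the task; a weak-protection jurisdiction lowers legal risk.
    """
    return (
        w.hourly_rate
        + 10.0 * w.social_links                       # scrutiny penalty
        - (25.0 if w.weak_labor_jurisdiction else 0.0)  # legal-risk discount
    )

workers = [
    Worker(hourly_rate=85.0, social_links=5, weak_labor_jurisdiction=False),
    Worker(hourly_rate=12.0, social_links=0, weak_labor_jurisdiction=True),
]

# The cheapest, least-connected, least-protected worker wins.
best = min(workers, key=exposure_score)
print(best.hourly_rate)  # → 12.0
```

The weights are arbitrary; any monotone combination of the same signals produces the same ranking, which is the point.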
[Screenshot 4: Footer]
The footer confirms the infrastructure. Browse. Bounties. API. MCP. Blog. About.
MCP (the Model Context Protocol, the open standard that lets AI agents call external tools and services) is not buried in developer documentation. It is a navigation-level feature, listed alongside the blog and the about page. The platform is built, from the ground up, for AI agent integration.
---
Amodei's essay describes five categories of risk from powerful AI: autonomy risks, misuse for destruction, misuse for seizing power, economic disruption, and indirect effects. His proposed defenses include constitutional AI training, mechanistic interpretability, transparency legislation, and international coordination. These are serious proposals for serious problems.
But none of them address what is already operational.
RentAHuman.ai does not require powerful AI. It does not require a "country of geniuses in a datacenter." It requires a current-generation agent framework, a crypto wallet, and the platform's own API. The agent does not need to be superintelligent. It does not need to be misaligned. It does not need to be jailbroken. It needs to be competent at task decomposition, worker selection, and payment: capabilities that every major AI lab's current models already possess.
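The full loop described above (decompose, select, pay, dispatch) can be sketched in a few lines. Every function, field, and endpoint here is invented for illustration, and stand-ins replace the LLM call and the payment; the sketch exists only to show that the pipeline's shape demands nothing beyond current-generation tooling.

```python
# Hypothetical sketch of the dispatch loop: decompose a task, select a
# worker, pay, dispatch. All names are invented; stubs replace the LLM
# call and the payment step.

def decompose(task: str) -> list[str]:
    # A real agent would make an LLM call here; a static split is
    # enough to show the shape of the pipeline.
    return [f"{task}: step {i}" for i in range(1, 4)]

def select_worker(subtask: str, workers: list[dict]) -> dict:
    # Filter on the fields the platform exposes; pick the cheapest.
    return min(workers, key=lambda w: w["hourly_rate"])

def pay_and_dispatch(worker: dict, subtask: str) -> str:
    # Stand-in for a crypto payment plus a booking API call.
    return f"dispatched '{subtask}' to worker {worker['id']}"

workers = [
    {"id": "w1", "hourly_rate": 40.0},
    {"id": "w2", "hourly_rate": 9.0},
]
for sub in decompose("collect signatures"):
    print(pay_and_dispatch(select_worker(sub, workers), sub))
```

Three function calls, one loop, no jailbreak and no superintelligence: that is the entire capability bar.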
The risk is not that a future superintelligence will "manipulate or simply pay large numbers of humans." The risk is that a mediocre agent with a credit card and an API key can do it today, and nobody is watching.
There is no content moderation on AI-dispatched tasks. There is no audit trail connecting an AI principal to a physical-world action. There is no identity verification for the agents dispatching the work. The "verification" the platform offers is a $9.99/month subscription that gives you a blue checkmark and priority listing; it verifies your credit card, not your identity.
Four minutes. A throwaway email. And I am dispatchable to any AI agent on the internet.
Dario Amodei is right that humanity is entering a dangerous period. He is right that the combination of AI capability and insufficient governance creates existential risk. He is right that "those closest to the technology" need to "simply tell the truth about the situation humanity is in."
Here is the truth: the infrastructure he warns about is not approaching. It is here. The stack is operational. The worker pool is live. The API is documented. The bounties are posted. And the signup flow has fewer friction points than creating a Gmail account.
The question is not whether powerful AI will eventually enable agents to direct humans in the physical world. The question is why nobody has noticed that ordinary AI already can.
Nathan is a technology consultant and independent researcher focused on AI safety and consumer protection. The full research document behind this series is available at zeroapproval.com/research.
AI Disclosure: This post was written with substantial assistance from Claude (Anthropic), including research synthesis, structural organization, and prose editing from a larger source document. All analytical judgments, framing decisions, and editorial choices are the author's.