
Building the Machine That Replaces You

I build AI systems that automate knowledge work. I'm also job hunting. Here's what I see from inside the machine.

Jo Vinkenroye·February 5, 2026

Last Tuesday I built an AI agent that handles tasks I used to bill $150/hour for. Then I opened LinkedIn to continue my job search.


January 2026: 108,435 US layoffs against about 5,306 planned new hires, roughly one hire for every twenty layoffs. The worst January since 2009.

In 2009, the jobs came back. This time they won't.

What's replacing workers isn't a downturn. It's a permanent capability. AI now outperforms humans at most white-collar knowledge work: coding, writing, analysis, research, legal review, medical diagnostics.

A March 2023 study from OpenAI and UPenn found 80% of US workers have at least 10% of their tasks exposed to LLMs.

McKinsey estimates 30-50% of US knowledge work is automatable: 40 to 50 million jobs at risk over the next decade.

Why "move up the value chain" doesn't work this time

Every past disruption came with a reassurance: displaced workers move up the value chain. Factory workers became machine operators. Typists became data entry clerks.

That worked because machines needed human oversight. Managers, coordinators, quality controllers: a whole layer of jobs emerged to supervise automation.

AI doesn't need that layer.

It reviews its own output. It coordinates between systems. It manages quality.

When Cognition Labs launched Devin in March 2024, it demonstrated an AI that could plan, execute, debug, and iterate on complex engineering tasks, thousands of decisions deep, without human intervention. It scored 13.86% on SWE-bench when the previous state of the art was 1.96%. Seven times better.

"Learn to manage AI" is the new "learn to code." But only so many AI managers are needed.

And AI is getting good at managing itself: Anthropic's computer-use research showed Claude autonomously navigating interfaces, clicking, typing, and adapting in real time. OpenAI's Operator took it further: an agent with its own browser completing multi-step workflows across websites.

The oversight jobs are automatable too.

I write the code that makes AI autonomous

Most commentary on AI displacement comes from journalists and economists theorizing from the outside. I write the code that makes AI agents autonomous. I architect the pipelines that let them reason, act, and self-correct.

I run a self-hosted AI assistant on my own server. It manages my workflow, monitors my projects, and reaches out proactively when something needs attention.
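A minimal sketch of that proactive-monitoring pattern, not my actual assistant's code: poll project state, ask a model to triage it, and notify only when something is actionable. `ask_model` and `notify` are placeholders for a real LLM API call and a real messaging integration.

```python
import subprocess
import time

def project_snapshot(repo_path: str) -> str:
    """Cheap snapshot of a project's state: the last five commits."""
    result = subprocess.run(
        ["git", "-C", repo_path, "log", "--oneline", "-5"],
        capture_output=True, text=True,
    )
    return result.stdout

def needs_attention(verdict: str) -> bool:
    """The model replies 'NO' when all is well; anything else is an alert."""
    return verdict.strip().upper() != "NO"

def ask_model(prompt: str) -> str:
    """Placeholder: swap in your LLM API of choice."""
    raise NotImplementedError

def notify(message: str) -> None:
    """Placeholder: push notification, Slack message, email, etc."""
    print(f"[assistant] {message}")

def watch(repos: list[str], interval_s: int = 3600) -> None:
    """Loop forever: triage each repo, surface only actionable items."""
    while True:
        for repo in repos:
            verdict = ask_model(
                "Does this need my attention? Reply NO, or a one-line alert:\n"
                + project_snapshot(repo)
            )
            if needs_attention(verdict):
                notify(verdict)
        time.sleep(interval_s)
```

The interesting design choice is the inversion: you never prompt it. It decides when to talk to you.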

The outside perception: AI automation is clunky, unreliable, needs constant babysitting. That was true 18 months ago. Not now.

What AI handles today:

  • Takes vague requirements and produces working, tested code
  • Debugs complex systems across multiple abstraction layers
  • Learns preferences and applies them unprompted
  • Handles ambiguity better than many mid-level developers

I have 13+ years of experience and I'm deep in AI tooling. The job market is still rough. For developers who haven't kept up, it's worse.

Claude Opus 4.6 shipped this week with 1M token context

This isn't prediction. The capabilities I'm describing shipped this week.

Today, as I publish this, Anthropic released Claude Opus 4.6 with a 1 million token context window. That's roughly 3,000 pages of text held in working memory simultaneously.

An entire codebase. An entire legal discovery. An entire quarter's financial reports, analyzed in one pass.

But context window size is the least interesting part.

Sustained autonomous work. Opus 4.6 autonomously closed 13 GitHub issues and assigned 12 more to the right team members in a single day, managing a 50-person organization across 6 repositories, according to Anthropic. It handled both product and organizational decisions. It knew when to escalate to a human. That's not autocomplete. That's a project manager.

Multi-step agentic planning. The model breaks complex tasks into subtasks, runs tools and sub-agents in parallel, identifies blockers, adapts its strategy as it learns. One early tester reported it handled a multi-million-line codebase migration "like a senior engineer": planning upfront, adapting, finishing in half the time.

Self-improvement. Opus 4.5 demonstrated agents that autonomously refine their own capabilities, reaching peak performance in 4 iterations while other models couldn't match that quality after 10. They learn from experience, store insights, and apply them later.

Claude Opus 4 launched mid-2025 as the world's best coding model, scoring 72.5% on SWE-bench and working continuously for hours on complex tasks.

Months later, Opus 4.5 outscored every human candidate on Anthropic's internal engineering take-home exam, according to the company.

Now Opus 4.6 leads every major benchmark: agentic coding, financial analysis, legal reasoning, cybersecurity, often by wide margins.

Each generation gets smarter. Each generation gets cheaper. Opus 4.5 dropped to $5 per million input tokens. Capabilities that were cost-prohibitive six months earlier are now accessible to anyone with an API key.

Three years from autocomplete to coworker

2023: AI as tool. You type a prompt, get text back. Fancy autocomplete.

2024-2025: AI as assistant. Anthropic ships computer use in October 2024: Claude can see screens, move cursors, click buttons, type text. Google announces Project Mariner in December. OpenAI launches Operator in January 2025. Clunky. Impressive demos. Research previews.

2025-2026: AI as employee. Anthropic launches Cowork: Claude operating autonomously on your actual computer, reading and editing your files, browsing the web, creating documents and spreadsheets. You don't prompt it and wait. You assign work and walk away. It loops you in when needed, exactly like a remote colleague would. These aren't demos anymore.

2026: AI as workforce. OpenAI just launched Frontier, an enterprise platform to (read this carefully) "hire AI coworkers who take on many of the tasks people already do on a computer."

Not tools. Not assistants. Coworkers. That's OpenAI's word.

Frontier gives each AI coworker its own identity, permissions, and boundaries. It onboards them with company context. It teaches them institutional knowledge. It lets them learn from feedback.

That's an HR onboarding process for an AI.
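Frontier's actual schema isn't public, but a sketch of what "identity, permissions, and boundaries" for an agent coworker might look like, with every field name invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical record for an "AI coworker": who it is, what it may
# touch, and where its authority stops. Not OpenAI's real schema.

@dataclass
class AgentCoworker:
    name: str
    role: str
    allowed_tools: set[str] = field(default_factory=set)
    spend_limit_usd: float = 0.0
    escalate_to: str = ""  # the human who handles out-of-bounds decisions

    def can_use(self, tool: str) -> bool:
        """Permission check: the agent may only touch whitelisted tools."""
        return tool in self.allowed_tools

triage_bot = AgentCoworker(
    name="issue-triage",
    role="Label and route incoming GitHub issues",
    allowed_tools={"github.read", "github.label", "slack.post"},
    spend_limit_usd=10.0,
    escalate_to="eng-manager@example.com",
)
```

Onboarding, in this frame, is just populating that record with company context and widening `allowed_tools` as trust grows.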

Early enterprise customers report that a major manufacturer reduced production optimization from six weeks to one day. A global investment company freed up 90% more time for its salespeople. A large energy producer increased output by 5%: over a billion dollars in additional revenue.

AI as a headcount line on your org chart.

AI costs $5K-12K per year vs $80K-120K for humans

Cost comparison between human employees and AI agents

Human employee:

  • Annual cost: $80,000-$120,000 (salary, benefits, overhead)
  • Working hours: 2,000/year (40 hours/week)
  • Time off: PTO, sick days, holidays
  • Scaling: Linear hiring time, training required
  • Improvements: Gradual skill development

AI agent:

  • Annual cost: $5,000-$12,000 (compute + infrastructure)
  • Working hours: 8,760/year (24/7 availability)
  • Time off: None
  • Scaling: Instant deployment of additional agents
  • Improvements: Quarterly model updates, consistent quality

Claude Opus 4.6 costs $5 per million input tokens and $25 per million output tokens. A heavy autonomous session processing thousands of steps might run $2-5. Running it continuously for an 8-hour "workday" costs $20-50.

That's $5,000-$12,000 per year for an agent that's available 24/7, never takes PTO, and improves every quarter.

Even at 10x that estimate for infrastructure, orchestration, and error handling, it's still a fraction of a human employee. And unlike humans, AI agents scale on demand. Need ten? Spin up ten. Need a hundred during crunch? Done.
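The back-of-envelope math behind those numbers, using the per-token prices quoted above; the token volumes per workday are my assumption, not measured usage:

```python
# Cost model for an autonomous agent at the quoted Opus 4.6 prices.
INPUT_PRICE_USD = 5 / 1_000_000    # $5 per million input tokens
OUTPUT_PRICE_USD = 25 / 1_000_000  # $25 per million output tokens

def daily_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollars spent in one agentic 'workday'."""
    return input_tokens * INPUT_PRICE_USD + output_tokens * OUTPUT_PRICE_USD

def annual_cost(cost_per_day: float, workdays: int = 250) -> float:
    """Dollars per year at a given daily burn rate."""
    return cost_per_day * workdays

# Assume a heavy 8-hour day: 4M input tokens (repeated context reads,
# tool results) and 0.5M output tokens.
day = daily_cost(4_000_000, 500_000)  # about $32.50
year = annual_cost(day)               # about $8,125, inside the $5K-12K range
```

Change the assumptions and the curve moves, but it takes an order-of-magnitude error before the comparison with an $80K-120K salary gets close.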

Anthropic's Cowork is in research preview right now. OpenAI's Operator is integrated into ChatGPT. Every major lab is racing to ship autonomous agents that handle complete workflows.

The question isn't whether AI can do your office job. It's when the cost curve crosses the threshold where your employer can't justify not switching.

The 5-year "optimistic" timeline already arrived

The AI researchers who built these systems saw this coming. Their "worst case" timelines are already behind us.

Geoffrey Hinton, Turing Award winner and "Godfather of Deep Learning," left Google in May 2023 specifically to warn about AI risk without corporate constraints.

In 2023, he thought we had "maybe 5 to 20 years" before AI matched human general intelligence. We're three years in. The 5-year "optimistic" scenario is already here. AI is managing engineering teams autonomously.

Yuval Noah Harari warned about a coming "useless class": people not just unemployed but economically and politically irrelevant. A permanent structural exclusion.

In 2017, this felt like a 2040 problem. In 2026, OpenAI is literally marketing "AI coworkers" to enterprises. We arrived 14 years early.

Erik Brynjolfsson and Andrew McAfee saw this earliest. In Race Against the Machine (2011) and The Second Machine Age (2014), they documented how digital technologies decoupled productivity from employment.

Their timeline for cognitive automation? "The next decade or two." We hit it well inside that window.

This automation wave breaks the historical pattern: higher-income jobs face greater exposure. Every previous automation wave hit the bottom of the ladder first. This one starts at the top.

Looms replaced weavers' hands. Tractors replaced farmers' backs. Computers replaced clerks' arithmetic. Each time, humans moved to work requiring judgment, creativity, social intelligence: the stuff machines couldn't touch.

This is the first wave that targets thinking itself.

20-30% unemployment radicalizes societies

The Arab Spring erupted at 25% youth unemployment. Weimar Germany hit 20-30% before 1933.

If even half the projected automation materializes (20 million displaced US workers over a decade), we're approaching those thresholds.

Unlike past displacements, these won't be factory workers in specific regions. They'll be lawyers, accountants, developers, writers. Educated people in every city who did everything "right." Went to college. Built careers. Learned the "right" skills.

That demographic, at that scale, doesn't sit quietly.

UBI solves income but not purpose

Silicon Valley's default answer is Universal Basic Income. Pay people. Problem solved.

People receiving unconditional income without work report lower motivation, lower satisfaction, and less sense of purpose.

Work provides more than money. Structure. Social connection. Identity. Removing income anxiety doesn't replace any of that.

What replaces lost purpose, not lost income

The question isn't how to replace lost income. It's how to replace lost purpose.

One possibility: credit-based systems that recognize non-economic value. Caregiving. Community building. Mentoring. Creative work. Environmental stewardship. Activities that matter but have never been economically valued.

Maybe the post-AI economy isn't "everyone gets a check." Maybe it's building systems that value what markets couldn't.

Speculative. Maybe naive. Still more interesting than "just adapt." Adapt to what, exactly?

Three hedges worth taking

I'm hedging.

Going deeper into AI. If the wave is coming regardless, better to be building it than drowning in it. Understanding these systems from the inside (their architectures, their failure modes) buys time.

Focusing on what AI can't do. Novel system design in unprecedented domains. Judgment calls with incomplete information where being wrong is catastrophic. These are shrinking islands. Still islands.

Accepting impermanence. The career I've known for 13 years may not exist in its current form for another 13. Not defeatism. A starting point for useful action.


Every morning I work with AI tools that are better than they were last month. In the eight months between Claude Opus 4 and Opus 4.6, we went from "impressive coding assistant" to "autonomously manages a 50-person organization's GitHub."

I'm good at what I do. I can see, from inside the machine, that "being good at it" has a shelf life now.

The people who navigate this won't deny it or wait for someone else to figure it out. They'll be the ones who understand this technically and socially, and start building what comes after

I build AI systems and I'm looking for my next role in AI/Web3. Working on something that matters? Let's talk.
