Why We Still Need Humans

“Why would I ever hire again?”

Recently, the questions I receive about AI have shifted.

Last year, the questions were anxious: “How will this affect my industry? How do we stay ahead?” Now, they are manic: “Every role I’d hire for, AI can do it faster and cheaper. Why would I ever hire again?”

Both anxiety and mania are warranted. Artificial intelligence will soon surpass the internet in historical importance. New models are outpacing exponential trend lines, breaking benchmarks faster than harder benchmarks can be designed.

We experience anxiety and mania as opposites, but they produce the same blindness. Anxiety about falling behind narrows vision to what is immediately reachable. Manic excitement chases everything shiny, mistaking motion for direction.

Both miss the more important question: what's not changing?

Execution Wants to Be Free

Rich Sutton published The Bitter Lesson in 2019. He'd watched AI researchers spend decades building clever systems that leveraged human expertise to overcome technological constraints. General methods with more compute surpassed every one of them, without exception. 

Compute kept advancing until every workaround became irrelevant. When you bet against changing capabilities, you lose.

Leaders today are making the same bet. Bolting AI onto offerings. Designing elaborate solutions to reduce customer support overhead. Creating human review layers for AI output. The capability ceiling keeps rising. When it does, the advantages built around it evaporate.

The pressure to keep pace makes the bet invisible. A day spent refining the roadmap feels like a week of falling behind. So we sprint towards what feels immediately reachable, forgetting to ask what will still matter when the ceiling rises again.

Don’t bet on execution when execution wants to be free.

After the Unbundling

Unbundling reveals what parts of the package were actually driving value. The internet unbundled media, separating distribution from content. Most content had been surviving on the back of distribution.

AI is unbundling work. Hires come bundled as a package: the execution you can see, and the intangibles that you can’t. Unlike content, the intangibles in human work weren't passengers on execution's back. Execution buried them.

Now, execution is being stripped away, cheaply and permanently. After the unbundling, we can finally see those intangibles. 

No amount of compute can make a model care about what happens next, bear the cost of being wrong, or dream up an explanation the world has never seen. These capabilities are irreducibly human.

Only humans can care.

AI has no vision. It can generate wonderful simulations of caring, but it lacks the capacity to care about what comes next. Effortless execution, but only by request.

Models perform better when prompted to plan their responses with chain-of-thought reasoning. But a model only initiates that planning when asked. Initiative always traces back to a person with a pulse.

Leaders who automate initiative away become the dependency for every action. They find themselves at the hub of an ever-expanding wheel, with every spoke routing back to them.

Open loops multiply. Each waiting, always waiting, on that next human push.

Only humans can be accountable.

AI floats in a warm tub of blissful abstraction. Your feedback is compressed into markdown files, with the signal overwritten by fresh context. Someone has to get out of bed and live with the consequences.

When our name is attached to an outcome, we bring more of ourselves. The feedback loop closes. Quality compounds with reputation. 

Automation promised to free us from responsibility. Instead, responsibility became harder to locate. We became the bottleneck for a more complex system. 

When accountability is lost, the casualty is trust. Reputations attach to people, not processes. We trust someone because they've stood behind their outcomes long enough to see the pattern. 

Remove the person, and trust has nowhere to land.

Only humans can imagine.

AI is an oracle more powerful than any in Greek myth. But the oracle only knows what has already been explained. All training data is locked in the past. The record is complete but contains only destinations that have already been mapped.

Imagination starts where explanation stops. Asking a question nobody thought to ask. Finding the novel frame that reveals our previous map as incomplete. These are leaps, not predictions. No training set can produce them, because they don’t yet exist to be learned from.

My fear is that imagination is already on life support. A difficult question is posed, and our instinctive move is to prompt. We reach for a pre-packaged answer before our question is fully developed. 

The discomfort of not knowing, the productive friction calling us to adventure, gets bypassed before it has a chance to work. Leaps feed on friction.

The Second Curve

The P&L case for automation looks clean because the liabilities are off-balance-sheet. 

Automation is necessary. The tradeoffs just don’t show up on any dashboard. Optimizing for faster, cheaper execution means ruthless competition over dimensions that are being commoditized.

Every business is now shaped by these two diverging curves: AI capability and human capacity.

Sutton’s curve, AI capability, advances faster than any adaptation to it. Time works against you. Competing on execution means running faster just to stay in place. There is no moat against perpetual progress.

The second curve, the curve we can control, is human capacity. Time is a tailwind. The human curve compounds, regardless of AI capabilities. The capacity to care, to bear consequences, and to imagine new destinations all strengthen with use. 

"Why would I ever hire again?" Hire for the second curve. Automate everything else.

What doesn't change is what's worth building around.

Chris Sparks