// Article · Labor economics

After the LLMs: what a senior engineer is worth in 2027

Generative AI has commoditized average code. The labor-economics literature has the framework for what happens next, and it is not the framework being used in the AI hype cycle.
By T.Y. · 2026-05-11

If you sit in a senior engineering role in mid-2026, you have probably had some version of this conversation in the last year:

You: I think we should refactor the auth layer; the current shape will not survive the consent-management changes coming in Q3.
PM: Can we just have Copilot do that?
You: It can write some of the code. It will not pick the right shape.
PM: How sure are you?
You: Sure enough that I have been doing this for fifteen years and have seen this pattern fail four times.
PM: Hm. Could we just try Copilot first and see?

The conversation is everywhere. It is not a stupid conversation. The PM is responding to an actual, measurable change in the productivity of mid-skill engineering work. GitHub's own 2023 study (Peng et al., The Impact of AI on Developer Productivity: Evidence from GitHub Copilot) found that developers using Copilot completed a defined task 55% faster than the control group. Atlassian's 2024 State of Developer Experience survey showed comparable productivity gains across enterprise teams. The numbers are real.

What the conversation gets wrong — and what almost every CTO conversation in 2024-2025 got wrong — is the inference from "average task is faster" to "all tasks are faster" and from there to "engineers are interchangeable now."

This is the deskilling thesis, and it has a hundred and fifty years of history.

Braverman, Autor, Acemoglu

The case that technology deskills labor goes back to Marx in Capital Vol. I (1867), specifically the chapters on machinery and modern industry. Marx's argument was that capital adopts machinery not primarily to lower production cost but to weaken the position of skilled workers, who can withhold labor in a way that unskilled workers cannot.

The thesis got a sharp 20th-century formulation in Harry Braverman's Labor and Monopoly Capital (Monthly Review Press, 1974). Braverman, an industrial worker turned editor, argued that under monopoly capitalism, mental labor undergoes the same fragmentation that manual labor underwent during the first industrial revolution. The skilled craftsman becomes the assembly-line worker. The skilled office worker becomes the data-entry clerk. The skilled programmer — Braverman did not get to that one; he died in 1976 — would become, by extension, the prompt-typist.

The Braverman thesis was hugely influential in industrial sociology in the 1970s and 1980s and then went out of fashion when the empirical record turned out to be more complicated. Erik Olin Wright's mid-1980s work on "contradictory class locations" punched holes in the simple deskilling story. Many jobs got more complex, not less, as technology advanced. The 1990s computer revolution, in particular, was widely seen as creating a new class of high-skill, high-autonomy "knowledge workers" — the opposite of the deskilling prediction.

Then David Autor, at MIT, published his 2003 paper with Levy and Murnane, The Skill Content of Recent Technological Change (Quarterly Journal of Economics). Autor's framework distinguished routine cognitive tasks (susceptible to automation) from non-routine cognitive tasks (complementary with computers) from manual tasks (a third bucket with its own dynamics). The empirical record showed that the deskilling thesis was right for routine cognitive work and wrong for non-routine cognitive work. Computers were polarizing the labor market, not flattening it. High-skill non-routine workers got more productive and more highly paid. Mid-skill routine workers got hollowed out.

This is the framework that Daron Acemoglu has been building on for twenty years and that earned him the 2024 Nobel Prize in Economics (jointly with Simon Johnson and James Robinson, though for different work). Acemoglu's recent book Power and Progress (PublicAffairs, 2023, with Johnson) extends the framework to AI specifically. The argument: whether AI augments labor or substitutes for it depends on choices in how the AI is deployed, and historically the choices have skewed toward substitution because substitution is easier to measure on a quarterly earnings call than augmentation is.

What the AI hype cycle gets wrong

The 2023-2025 AI discourse, in my reading, conflates three very different propositions:

Proposition 1. Generative AI can produce competent average output across a wide range of cognitive tasks.

Proposition 2. Most cognitive workers spend most of their time on average tasks.

Proposition 3. Therefore, most cognitive workers are about to be displaced by AI.

Proposition 1 is true. I have used GPT-5, Claude 4.7 Opus, and Gemini 2.5 Pro in production engineering work in 2025-2026. They are, all three of them, competent. They write code that compiles. They write code that mostly does what you asked. They do this fast.

Proposition 2 is partially true but importantly misleading. Most of an engineer's visible time goes to average tasks, because average tasks are the ones with measurable output. The reason your senior engineers are still senior, however, is the small fraction of their time spent on the non-average tasks: the architecture choices, the things-you-cannot-tell-the-PM-without-context decisions, the post-incident write-ups, the mentoring, the code reviews where they catch the bug nobody else would have caught. Those tasks are the residual claim on the salary differential. They are also exactly the tasks that LLMs do worst.

Proposition 3 — the displacement conclusion — does not follow. What follows is the Autor-Acemoglu polarization story, transplanted to software engineering. Mid-skill engineering work gets squeezed (and is, in fact, already getting squeezed; entry-level hiring at FAANG-tier companies dropped roughly 40% in 2024-2025 according to internal numbers from at least three sources I have spoken to off the record). High-skill engineering work gets more leveraged, because the senior engineer now wields a productivity multiplier that did not exist five years ago.

The labor-economics literature predicted this. The AI-investor discourse missed it. The reason it missed it is that the labor literature is dry, the AI literature is exciting, and the people writing the second do not read the first.

What the senior engineer is worth in 2027

Three observations, in declining order of confidence.

One. The salary differential between mid-skill and senior engineers will widen, not narrow. This is contrary to the simple deskilling story but consistent with every prior episode of skill-biased technological change. The senior engineer's productivity is rising faster than the mid-skill engineer's, in absolute terms, because the AI tooling amplifies the senior more (better prompts, better context, better evaluation of outputs). The market will price this.
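The absolute-gap arithmetic is easy to miss, so here is a toy calculation. Every number below is a hypothetical assumption for illustration, not a measurement: suppose a senior produces at 2.0x a mid-skill baseline, and AI tooling multiplies the senior's output by 1.6 but the mid-skill engineer's by only 1.3.

```python
# Toy model of skill-biased amplification.
# All multipliers are hypothetical assumptions, not measured values.
mid_base, senior_base = 1.0, 2.0   # output relative to a mid-skill baseline
mid_mult, senior_mult = 1.3, 1.6   # hypothetical AI productivity multipliers

mid_after = mid_base * mid_mult            # 1.3
senior_after = senior_base * senior_mult   # 3.2

gap_before = senior_base - mid_base   # 1.0 baseline-units of output
gap_after = senior_after - mid_after  # 1.9 baseline-units of output

print(f"absolute gap before AI tooling: {gap_before:.1f}")
print(f"absolute gap after AI tooling:  {gap_after:.1f}")
```

Even though both engineers get faster, the absolute gap between them nearly doubles, and it is the absolute gap that the market prices.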

Two. The composition of senior engineering work will shift toward two activities: judgment about which code to have written (system architecture, technology selection, build-vs-buy, scope discipline) and judgment about whether code is correct (review, validation, post-incident analysis). Writing code, as a fraction of the senior engineer's time, will fall. This is happening already; the trend will accelerate.

Three. A new tier of "post-LLM specialist" will emerge. These are engineers whose differentiating asset is fluency in domains, languages, or tools that LLMs do not handle well — either because the domain is too small to have generated training data (specialized embedded systems, certain bioinformatics pipelines, particular regulatory-compliance codebases) or because the language itself is new enough not to be in the corpus. This is the niche that languages like Quantum Code, niche dependently-typed proof assistants, and certain hardware-description languages occupy. The wages will be high. The hiring will be slow. The networks will be small. (I am partially speaking my own book here. I have built one of those tools. So discount appropriately.)

The case I want to make to non-engineers

The argument so far has been about engineering specifically. The same logic applies, with appropriate domain shifts, to most cognitive professions.

Lawyers: the deposition prep work, the contract review, the standard motion drafting — all under pressure from generative AI. The trial strategy, the negotiation, the judgment about which battle to fight — not under pressure, and increasingly leveraged.

Doctors: the differential diagnosis from a chart, the imaging interpretation, the routine prescription — under pressure. The judgment about an ambiguous case, the conversation with a frightened family, the procedural skill in surgery — not under pressure.

Journalists: the news brief, the press-release rewrite, the basic explainer — under pressure. The investigative reporting, the source cultivation, the editorial judgment about what is worth covering — not under pressure.

In every case, the AI hype cycle's "X profession is finished" headline is wrong. What is finished is the middle of X. The top of X is, in Autor-Acemoglu terms, becoming more leveraged, more highly paid, and more important. The bottom of X is getting squeezed. The middle of X — the bulk of the workforce — is bifurcating: some will move up, some will move down, very few will stay where they are.

The institutions that figure this out earliest will reorganize their hiring, training, and compensation around the new gradient. The institutions that do not will be eaten by ones that do.

What to do if you are senior and unsure

Five practical things, in no particular order.

  1. Stop telling junior engineers to "use AI more." They are. The thing they need from you is the judgment layer that AI cannot provide. If you abdicate that, you are abdicating the part of your job that is still increasing in value.
  2. Take the productivity gains. You can ship more, with fewer people, faster, than you could three years ago. The temptation is to use this to do the same amount of work in fewer hours. The correct move, almost always, is to use it to attempt projects that were previously out of scope.
  3. Invest in a niche that the generic models cannot reach. This does not have to be exotic. It can be your specific company's codebase, the specific compliance environment you operate in, the specific hardware target you ship on. The point is that it is a domain where the LLM is not your competitor.
  4. Read the labor-economics literature. Specifically: Autor's 2015 Why Are There Still So Many Jobs? in Journal of Economic Perspectives; Acemoglu and Restrepo's The Race Between Man and Machine (American Economic Review, 2018); Brynjolfsson and McAfee's The Second Machine Age (W. W. Norton, 2014, somewhat dated but the framing is durable). Twenty hours of reading. It will reorient how you think about the next five years.
  5. Mentor someone. The post-LLM specialist tier will be small. The people who get into it will be the ones whose senior engineers gave them the time and the trust to learn things that did not have an immediate ROI. If your career has been gated, at any point, by someone who took that bet on you, pay it forward. The system reproduces itself through these decisions.

A closing note

I will end with a thought I have been turning over for a year.

Every previous episode of skill-biased technological change — water wheel, steam, electricity, transistor, internet — produced winners and losers. The losers were not the skilled workers; the skilled workers adapted. The losers were the workers in the middle, who were skilled enough to be expensive but not skilled enough to differentiate from the new technology. The middle hollowed out. The top got richer. The bottom got reabsorbed elsewhere.

This is not a moral argument. It is an observation about how the dynamics have always run.

What is different now is the rate. The previous transitions took decades. This one is taking quarters. The institutions that absorb skilled workers — universities, professional bodies, mentorship traditions — operate on decade timescales. They are going to be late. The early movers, the people who get this right in 2026 instead of 2030, will own the next twenty years.

I think that is what is going on. I could be wrong. The literature suggests I am not.

