Cole's Notes

A Simple Blog Powered by Go

AI, Redundancy, and the Self-Cannibalizing Organization

Posted by cole on Apr 20, 2026 22:13

What Happens When Efficiency Eats the Institution

One of the stranger patterns emerging in the age of AI is that institutions are not only using efficiency tools to reduce labor. They are also weakening the very human capacities that made those institutions coherent in the first place.

That is the ouroboros.

An organization cuts people in the name of efficiency. It reduces headcount, compresses roles, thins teams, centralizes decision-making, and offloads more of the work onto automation. At first, this can look successful. Costs drop. Outputs stay stable long enough to reassure leadership. The system appears to work.

But often what the organization has actually done is begin consuming the human judgment, tacit knowledge, maintenance culture, and relational capacity that had been making the work legible all along.

The AI did not become magically competent in a vacuum.

It entered a world that humans had already built, maintained, documented, repaired, interpreted, and kept socially coherent.

When the humans who carried that coherence are treated as excess, the institution starts hollowing itself out.

The Mirror of Individual Atrophy

At the personal level, a familiar fear is that people will surrender too much thought to AI.

They will stop writing and start prompting. Stop investigating and start summarizing. Stop learning the shape of a craft and start relying on the machine to simulate the outcome.

That fear is real.

But institutions are doing something similar at organizational scale.

They are using AI to remove the people who remember why a system is structured the way it is, who know where the edge cases are, who understand what the users actually need, who mentor newer workers, who catch subtle failures, and who carry forward a culture of care, skepticism, and repair.

In other words, institutions are risking the same kind of atrophy they fear in individual workers.

They are surrendering judgment to a machine while simultaneously weakening the humans needed to supervise, challenge, and contextualize that machine.

The Competence Illusion

A great deal of AI competence is contextual.

Models appear powerful because they are deployed inside systems that still contain enormous amounts of accumulated human structure:

  • established workflows;
  • historical documentation;
  • product conventions;
  • design libraries;
  • coding standards;
  • support teams;
  • domain experts;
  • editors;
  • reviewers;
  • maintainers;
  • and institutional memory.

This means some organizations are making a category mistake.

They see AI working inside a human-built environment and conclude that the human environment is no longer necessary.

But the environment was part of the capability.

Once that surrounding structure is cut too deeply, the AI starts operating in a thinner, more brittle world. Errors become harder to catch. Knowledge gaps get papered over. Fewer people understand the system end to end. Institutional memory erodes. The visible short-term gain becomes long-term strategic confusion.

The organization mistakes assisted competence for autonomous competence.

Redundancy Is Not the Same as Waste

One of the most damaging habits of modern management has been treating redundancy as waste.

But some forms of redundancy are exactly what make a system resilient.

Two people understanding a workflow is not always inefficiency.

A senior person mentoring a junior person is not always overhead.

A team carrying more context than is minimally required for today's deadline is not always bloat.

Sometimes that is what keeps the institution alive when a crisis hits, when a key person leaves, when the documentation is incomplete, when a system breaks, or when a new technology enters faster than the organization can interpret it.

Human redundancy often looks expensive right before it becomes essential.

AI changes the cost profile of certain tasks, but that does not erase the need for human overlap, judgment, and continuity. In some cases it makes that need more urgent.

Accessibility and the False Promise of Efficiency

This matters especially for accessibility and disability.

For many disabled workers, creatives, researchers, and students, AI can remove genuinely inaccessible layers of labor. It can reduce formatting burden, translation friction, rote drafting, repetitive testing, and other forms of cognitive or administrative overhead that have historically extracted too much energy from people already carrying too much.

That is real value.

But there is a profound difference between using AI to make work more accessible and using AI to decide that accessible workers are now easier to replace.

One approach returns time, energy, and agency to human beings.

The other extracts the human again, just at a more technologically convenient moment.

When organizations talk about efficiency without asking who becomes more replaceable, who loses accommodations, who loses mentorship, who loses context, and who loses the right to be slower in the places where slowness protects quality, they are not becoming more advanced.

They are becoming more extractive.

The Return of the Small, the Open, and the Bespoke

This is where the picture gets interesting.

When large organizations cut too deeply in pursuit of AI-driven efficiency, they can unintentionally create the conditions for their own competitive weakening.

The people made redundant do not disappear.

Many of them begin building elsewhere.

They create smaller tools, sharper consultancies, more humane workflows, open alternatives, local-first systems, niche services, bespoke products, and lighter organizations with far less overhead and far more judgment.

What used to require a large team may now be possible for a much smaller one, especially when the smaller team is not spending half its energy sustaining the reporting structures, coordination burdens, rent-seeking, and internal politics of a much larger institution.

This is one reason the minimum efficient scale of serious work appears to be shrinking.

Not because institutions no longer matter.

But because many of the layers that once looked like institutional strength turn out to have been coordination artifacts rather than core capability.

The Open-Source Pressure Point

There is another irony here.

When incumbent firms squeeze workers, consolidate tools, raise prices, narrow access, and optimize for extraction, they create cultural and economic pressure for alternatives.

Open-source tools improve.

Smaller ecosystems gain traction.

Users become more willing to leave.

Developers who once sustained a dominant platform redirect their energy into systems that undercut the old model.

This does not always happen cleanly or quickly, and not every open alternative is mature enough to replace a dominant system outright.

But the pattern is real: organizations that optimize too aggressively for efficiency and rent can help produce the conditions of their own destabilization.

They teach people how expensive dependence has become, and then wonder why the market starts rebuilding elsewhere.

Not Every Institution Deserves the Same Critique

It is important not to flatten all organizations into one category.

Some companies and institutions choose a different path. They protect workers longer. They accept lower margins. They keep prices or membership value steadier than the market demands. They invest in continuity, training, service, and human trust even when extractive logic would reward them for doing the opposite.

Those organizations should be taken seriously.

They demonstrate that efficiency does not have to mean cannibalization.

It can mean using better tools to preserve dignity, widen access, support workers, and make the institution more humane rather than more hollow.

That distinction matters.

The problem is not optimization itself.

The problem is optimization that consumes the social and human substrate on which all real capability depends.

Institutions Are Not Dead

I do not think this leads to a world without institutions.

Institutions still matter wherever trust, legitimacy, accountability, stewardship, and continuity matter.

Universities matter.

Hospitals matter.

Public institutions matter.

Research institutions matter.

Serious companies still matter.

But AI is changing what institutions must actually be good at.

If scale used to be a moat, that moat is getting shallower in some domains.

If coordination used to justify headcount, that justification is getting weaker in some layers and stronger in others.

If the institution's real value was never size but judgment, trust, memory, and responsibility, then AI is not eliminating the institution.

It is revealing it.

From Scale Advantage to Judgment Advantage

This is the shift I keep coming back to.

The next advantage may not belong to the organization that can simply become largest, fastest, or most extractive.

It may belong to the organization that can:

  • preserve human judgment;
  • use AI without hollowing out expertise;
  • protect institutional memory;
  • keep accessibility central;
  • maintain trust with workers and users;
  • know what should be automated and what should remain human;
  • and turn efficiency gains into human capacity rather than only cost savings.

That is a very different theory of organizational success.

It asks whether the institution exists to support human flourishing, or whether humans exist to sustain the efficiency story of the institution.

The Real Reset

So yes, I think we may be entering a larger reset.

Not a total collapse of institutions.

Not a simple victory of AI over labor.

Something more structurally interesting:

the exposure of which parts of large organizations were real capability and which parts were only expensive coordination.

The institutions that survive this well will not be the ones that automate most aggressively.

They will be the ones that understand what the automation was standing on.

The sharpest sentence I know for that is this:

AI is not only replacing labor. It is exposing which parts of institutions were real capability and which parts were only expensive coordination.

And if an institution consumes the humans whose judgment made it coherent, then efficiency has stopped being strategy.

It has become self-cannibalization.