Photo by Olek Buzunov on Unsplash
Author’s Note
This is not an argument against AI, nor is it an argument that organizations are broadly incapable of making sound decisions.
AI, when applied to well-understood domains with clearly defined inputs and outputs, works. It scales execution, improves throughput, and reduces the burden of routine work. I use it extensively in exactly those contexts: to pressure test ideas, support editorial workflows, and automate tasks with predictable requirements. In those cases, the outcomes are consistently effective.
The pattern described here emerges under different conditions.
Specifically, it emerges when organizations move beyond augmenting work and begin attempting to replace entire roles without fully understanding what those roles actually contained. The issue is not the presence of AI, but the assumption that visible tasks represent the entirety of the work being performed.
There is now a growing body of evidence suggesting that this assumption does not hold. In many cases, organizations that reduce headcount based on automation later find themselves rehiring for the same or adjacent roles, after encountering gaps in coordination, judgment, and system resilience that were not captured in initial models.
There are alternative approaches.
IKEA provides a useful counterexample. Rather than pursuing workforce reduction as a primary outcome of automation, the company retrained approximately 8,500 employees into new interior design roles, shifting capacity toward higher-value, customer-facing work while still leveraging AI to handle routine tasks. The result was not a removal of human capability, but a redistribution of it.
The distinction matters.
The question is not whether AI can replace human work in isolated cases. It clearly can. The question is whether organizations understand the full scope of what is being replaced when they attempt to do so at the level of entire roles.
This piece is concerned with that boundary.
~Dom
There’s a subtle pattern emerging in how organizations talk about AI, and a far more noticeable one in how they act.
On one side, the language is familiar: augmentation, empowerment, freeing people to focus on higher-value work. On the other, the outcomes are harder to reconcile.
A recent analysis by Forrester found that 55% of companies that implemented AI and automation-driven workforce reductions ended up rehiring within months, often for the same roles they had just eliminated. The pattern is consistent across industries: leadership sees an opportunity to reduce costs through automation, executes the reduction, and then discovers that the eliminated positions were performing work that didn’t fit neatly into task descriptions or performance metrics.
This isn’t a story about AI failing. In many of these cases, the technology is performing exactly as designed. It is producing outputs quickly, scaling tasks efficiently, and operating within the boundaries it was given. The failure appears upstream, in how the work itself was understood, and in the assumptions made about what could be removed without consequence.
Because what looks like a “task” from a distance is often something else entirely.
It is context, accumulated over time, and judgment, built through exposure to edge cases. It is a layer of coordination, exception handling, and interpretation that rarely appears in documentation, and almost never in a metric.
When organizations move to automate, they tend to map what is visible: inputs, outputs, and steps in between. AI fits neatly into that model for most routine work. It can replicate the shape of the work, and it can even improve the speed at which that shape is executed.
But it does not inherently carry the weight of what made that work function in the first place. And so the pattern repeats.
A role is reduced to its most observable components, and those components are automated. The surrounding system – largely invisible, poorly measured, but essential – is removed along with it. At first, the results look like progress. Costs go down, and throughput increases. Dashboards reflect success.
Then the edge cases return.
The system becomes brittle where it was once adaptive, and even small issues require disproportionate effort to resolve. Work that once flowed begins to stall, fragment, or degrade in quality.
And eventually, someone notices that the problem isn’t the tool. It’s that something else was doing the work all along.
Healthcare provides a particularly stark example. Organizations have deployed AI to improve patient access, automating appointment scheduling, triaging inquiries, and streamlining intake processes. The technology performs well at these specific tasks. But in parallel, many of the same organizations have reduced care coordination staff, the people who handled the exceptions, navigated insurance complications, and built continuity for complex cases. The result is a system that processes routine cases faster while degrading in precisely the areas where human judgment was most critical.
This is the part most conversations about AI avoid because it’s difficult to quantify. The value being removed was never fully understood, so its absence is experienced as friction rather than recognized as loss. Which raises a more uncomfortable question than whether AI is ready to replace human work:
What, exactly, did we think that work was?
And more importantly: what did we decide it was worth?
The failure is not in the technology. The technology is doing exactly what it was designed to do. The failure is in how organizations understood the work being replaced. Or more precisely, in how they didn’t.
Because the problem predates AI entirely. AI doesn’t create organizational dysfunction. It reveals and accelerates it.
Culture as Message vs. Culture as Constraint
To understand why organizations remove things they don’t understand, it helps to start with how they define what matters in the first place.
Most organizations present themselves as guided by values: people-first, ethical, committed to quality, or some variation on that theme. These statements appear in mission documents, onboarding materials, and leadership communications. They function as messages – declarations of intent meant to shape perception, both internally and externally.
But culture doesn’t live in what is merely communicated. An organization’s actual culture is visible in the decisions it refuses to make, even when those decisions would be profitable, efficient, or defensible by the numbers alone.
If a stated value does not function as a limit on behavior, it is not culture. It is branding.
The distinction matters because branding optimizes for perception, while culture shapes action. Branding asks, “How do we want to be seen?” Culture asks, “What won’t we do, regardless of the cost?”
When culture is treated primarily as a message to communicate rather than a constraint on decisions, organizations lose the internal anchors that would otherwise prevent harmful drift. Under pressure, decisions default to what can be measured and optimized. And because culture-as-branding does not function as a limit, it does not prevent those decisions from being made.
What remains is compliance.
Compliance Is Not Culture
When culture does not constrain, something else must. What fills the vacuum is usually compliance.
Corporate compliance training, most recognizable in the form of harassment prevention, safety protocols, and ethics acknowledgments, exists primarily to meet legal requirements, create audit trails, and reduce liability. It is not, as anyone who has ever taken one of the 20-minute courses can tell you, designed to shape behavior, enforce values, or build judgment.
In many cases, when the training is purchased from an external vendor, it includes disclaimers that it should not be considered professional advice and does not create any guarantee or attorney-client relationship. In this form, it is culture only insofar as culture can be reduced to its legally defensible minimum.
This creates a functional problem: if culture only exists where it is legally required, it isn’t culture. It’s risk management. The organization is not guided by internalized principles but by the boundaries of consequence. And within those boundaries, decisions are again made purely on the basis of measurable optimization.
The result is a framework where anything not explicitly prohibited becomes permissible, and anything that cannot be measured becomes expendable.
Which raises the next question: what does measurable optimization actually optimize for?
And what does it ignore?
The Productivity Misunderstanding
There is a long-standing confusion about where productivity growth actually comes from. Historically, it has not come primarily from reducing cost or headcount. Instead, it has come from better use of existing resources: improved processes, better coordination, and insight into where inefficiencies lived and why they persisted.
The pattern is straightforward: understand the system, identify the constraint, redesign the workflow, and distribute the gains. Under that framework, productivity increases because the same resources are deployed more effectively.
What’s happening now reverses that sequence.
Instead of understanding the system and improving it, organizations are removing components they assume to be redundant or replaceable. The immediate result looks like productivity; costs drop, and outputs scale. But the underlying system has not been improved. It has been simplified, often in ways that are not immediately visible.
The simplification is not random. It follows a predictable pattern, because measurable optimization can only see measurable work. When organizations optimize for what they can see without constraints rooted in culture, certain kinds of value are systematically deprioritized:
Institutional knowledge. The accumulated understanding of how systems actually function under real conditions, which rarely matches what shows up in documentation. Instead, this knowledge lives in people who have been present long enough to see patterns, recognize red flags, and know which shortcuts are safe and which aren’t.
Judgment under ambiguity. The ability to assess situations that don’t fit standard categories, weigh tradeoffs that can’t be reduced to single metrics, and make decisions when the optimal path is unclear. This is not the same as following a process, but rather the capacity to navigate when the process doesn’t apply.
Coordination and exception handling. The informal networks and adaptive behaviors that keep work flowing when conditions change. Much of this happens invisibly: quick clarification messages, small adjustments around updates, and proactive communication that prevents larger problems from forming. It doesn’t appear in task lists, but its absence is immediately felt.
These elements don’t scale cleanly, and they can’t be automated without loss. Further, they’re rarely measured directly, which makes them easy to dismiss as inefficiency when the pressure is on to reduce costs.
But reading resilience as inefficiency is rarely sustainable.
When these elements are removed, the system doesn’t collapse immediately. Instead, it becomes fragile and loses the capacity to adapt. Left alone, it often begins to fail in ways that are harder to diagnose and more expensive to repair than the original savings justified.
The productivity gains turn out to be temporary, masking a deeper erosion of system capability. Rather than achieving cost-efficient capacity scaling, this is efficient stagnation, and in some cases, efficient decay.
AI Scales the Framework, Not the Judgment
This is the framework AI enters, though as an accelerant more often than a cause.
AI is exceptionally good at scaling measurable outputs. It can process large volumes of structured work, identify patterns, automate repetitive tasks, and improve throughput in domains where the inputs and desired outputs are well-defined.
What it does not do, and cannot do currently, is originate understanding of what should be measured in the first place.
That part of the work always requires human judgment: recognizing which variables matter, understanding second-order effects, and identifying tradeoffs where optimization in one area creates degradation in another. It requires institutional knowledge and the accumulated experience of how systems actually function under stress, and what kinds of interventions work when the standard process breaks down.
When organizations use AI primarily to reduce costs, they are optimizing for what is already visible. The measurable components of work improve: task completion rates rise, processing times fall, and output volumes grow. But the unmeasured components, like context awareness, adaptability, and coordination, are removed along with the people who carried them.
This is not a flaw in the technology itself. It is a flaw in the framework AI was deployed within.
AI scales execution; it does not originate understanding. When the people who understood the system are removed in favor of a tool that executes tasks efficiently, the organization is left with a system that performs well under expected conditions and fails badly under unexpected ones.
The framework was already optimizing for the visible at the expense of the invisible. AI did not change that. It just made the pattern faster, and harder to reverse.
This also points to an alternative deployment model that is often underexplored.
When AI is used to absorb routine, well-defined work, it does not have to result in the removal of human roles. It can instead expand the capacity of those roles, allowing the same people to operate at a higher level of judgment, handle more complex cases, and engage more directly in areas where human context is most valuable.
In this model, AI does not replace human contribution. It redistributes it toward work that was previously constrained by time and throughput. The outcome is not reduced headcount, but increased capability.
Stakeholders vs. Shareholders
If the framework is faulty and AI accelerates it, the next question is why the pattern persists.
Why don’t organizations correct course when the consequences begin to appear?
The answer is structural, and it has to do with who absorbs the cost first.
Every publicly traded organization operates under competing incentives, but is ultimately held accountable to shareholders, rather than stakeholders. Fiduciary responsibility demands that decisions be made not only to ensure the ongoing viability of a business, but to ensure growth in shareholder value.
What gets left behind by decisions framed against this standard is every other stakeholder.
Stakeholders include employees, customers, and communities. When employees disengage, either because they no longer trust the organization’s stated values or because the systems they work within have become incoherent, execution degrades. When customers lose confidence, revenue follows. When communities weaken, the broader environment that supports the organization erodes.
These consequences do not appear immediately in quarterly reports. They accumulate as second-order effects: higher turnover, lower retention, reputational damage, and weakened trust. Over time, they feed back into the organization as reduced performance, increased fragility, and higher costs to maintain the same level of function.
The lag is what makes the pattern self-reinforcing. By the time shareholder metrics reflect the damage, the institutional knowledge that would have prevented it is already gone. The feedback loop that should correct the behavior is delayed long enough for the corrective capacity itself to be removed.
Culture-as-branding may satisfy shareholders and regulatory requirements in the short term by delivering measurable gains. But it erodes stakeholder alignment over the long term, creating conditions where the organization becomes less capable of executing even as it becomes more efficient at producing outputs.
Shareholders may determine direction. But stakeholders determine whether that direction survives contact with reality.
This is the paradox: an organization can become better at doing the wrong things, faster.
The Pattern, Fully Assembled
With each piece in place, the full mechanism becomes visible.
Culture is defined rhetorically rather than enforced structurally. It functions as branding, not as a constraint on decisions. Without that constraint, decisions under pressure default to measurable optimization, because measurable optimization is the only framework left that can be defended.
Compliance becomes the only remaining floor. It shapes the minimum, not the direction. And because compliance only addresses what is legally required, it leaves everything else open to optimization.
Measurable optimization can only see measurable work. It systematically deprioritizes institutional knowledge, judgment under ambiguity, and informal coordination – the precise elements that made the system resilient in the first place. The removal looks like productivity gains because the metrics cannot detect what was lost.
AI adoption accelerates the cycle. It excels at scaling measurable outputs, and in doing so, it reinforces the existing framework. It automates visible work, removes human layers, and ignores the informal systems that made the work function. It does not change the logic of the decision. It just makes the decision faster, cheaper, and harder to reverse.
Short-term gains mask long-term degradation. Costs drop, throughput increases, and dashboards reflect success, while the organization quietly loses resilience, adaptability, and trust.
Stakeholders absorb the impact first. Employees disengage, customers lose confidence, and communities weaken. Eventually, this feeds back into the organization as reduced performance, increased rework, and structural fragility – but by then, the people who would have known how to repair it are gone.
The system degrades, not because AI replaced people, but because the organization removed human systems it never understood in pursuit of metrics that could not capture what those systems were doing.
What This Means
The central insight is not that AI is dangerous, or that automation is inherently harmful. The insight is that AI reveals what was already broken: the gap between what organizations claim to value and what they actually protect when those values become expensive.
When culture functions as branding rather than constraint, organizations optimize for what can be measured, comply with what is required, and passively remove everything that makes them resilient. AI does not create this dynamic, but it excels at accelerating it.
The solution is not to resist AI, but to recognize that the failure mode is structural, rather than technological. It originates in how organizations define culture, understand value, and make decisions under pressure.
If culture does not constrain behavior in ways that prevent an organization from making decisions that are profitable but harmful, it is not culture. It is marketing.
No amount of technological capability can compensate for a framework that treats judgment, context, and human adaptability as overhead to be minimized rather than capacity to be preserved.
The question is not whether AI can replace human work. The question is whether we understood what that work was in the first place.
And whether we’re willing to protect it, even when the numbers suggest otherwise.