Four astronauts lifted off from Kennedy Space Center aboard Orion, bound for the Moon for the first time in more than fifty years. The mission had taken decades to reach the pad. Thousands of engineers, across dozens of organizations, had contributed systems that had to work together precisely; not just within their own teams, but at every point where one team’s output became another team’s input.
In 1999, that boundary cost NASA a spacecraft.
The Mars Climate Orbiter was lost 286 million miles from Earth. The cause was not a failed component. No hardware malfunctioned, and no software contained an error within its own logic.
One engineering team transmitted navigation data in pound-force seconds. The receiving system expected newton-seconds.
Both teams were competent. Both systems were correct. The failure occurred in the space between them – in the assumption, never verified, that they shared the same definition of a number.
The loss cost $327 million and nearly a year of mission timeline. It was caused by the absence of something that sounds almost administrative: a shared unit of measurement.
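The mismatch is easy to reproduce in miniature. A hypothetical sketch, in Python: a raw number carries no unit, so a value in pound-force seconds and a value in newton-seconds look identical; tagging the value with its unit makes the translation explicit and makes an unknown unit fail loudly. The class and function names here are illustrative, not from any flight software.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Impulse:
    value: float
    unit: str  # "N*s" or "lbf*s"

# One pound-force second expressed in newton-seconds.
LBF_S_TO_N_S = 4.448222

def as_newton_seconds(i: Impulse) -> float:
    """Convert a tagged impulse to newton-seconds, or fail loudly."""
    if i.unit == "N*s":
        return i.value
    if i.unit == "lbf*s":
        return i.value * LBF_S_TO_N_S
    raise ValueError(f"unknown unit: {i.unit}")

# A bare 1.0 could mean either quantity, and the ~4.45x discrepancy
# would be silent. A tagged value forces the conversion into view.
ground_reading = Impulse(1.0, "lbf*s")
print(as_newton_seconds(ground_reading))  # 4.448222
```

The shared definition does not live in either team's code. It lives at the boundary, in the tag both sides agree to check.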
Most organizations will never lose a spacecraft. They lose things the same way anyway.
A decision moves from the team that made it to the team that must execute it, and arrives changed. A metric that means one thing to Finance means something adjacent but distinct to Operations. A policy written to govern one context gets applied in another where its underlying logic no longer holds. A process designed around one set of assumptions runs headlong into a reality where those assumptions were never valid.
None of the people involved are wrong, exactly. Each is working correctly within their own frame of reference. The failure lives at the interface: in the absence of a shared definition that would have made the translation visible before the cost was realized.
Definitions, processes, protocols, and policies are almost universally experienced as friction: the bureaucratic residue of someone else’s past mistake. Restrictions layered on top of work that was moving well enough without them. A tax on speed, imposed by people who are no longer in the room.
That perception is understandable. It is also precisely backwards.
What those structures actually do, when they are working, is what a shared unit of measurement does for two engineering teams across 286 million miles of empty space. They do not restrict the work. They make the work transferable.
They are not walls in the system. They are its grammar – the mechanism by which meaning survives the journey from one context to the next without changing shape along the way.
The Climate Orbiter did not need more or fewer engineers. It needed one document: an agreed definition of what a number meant before anyone sent one.
The Failure Anatomy
The failure is rarely inside a component. It is almost always at the interface.
In complex organizations, translation failures follow a consistent geometry. Systems behave as designed, people act with competence, and outputs are locally correct. What breaks is the continuity between them: the point where one context hands off to another without a shared frame for what the output means.
Three patterns show up repeatedly.
Definition drift.
The same word carries different meanings across functions. “Customer” in Finance maps to the bill-to entity. In Sales, it maps to the account relationship. In Operations, it maps to the ship-to location. Each definition is rational within its domain. None are interchangeable.
For a time, most systems tolerate this. Dashboards still populate and conversations still move forward. The divergence goes unnoticed because each team is internally consistent. The fracture appears when outputs cross boundaries: when a metric reconciles in one system and fails in another, or a growth number can be defended and disputed simultaneously. Disagreement becomes personal because the language appears shared while the meaning is not.
Nothing looks broken, until someone asks what the other side thinks the word means.
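One way to make the divergence visible is to stop sharing the ambiguous word at all. A hypothetical sketch, assuming a shared record type: each function's meaning of "customer" becomes a separately named field, so any consumer must say which one it wants. The field names and sample values are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerRecord:
    bill_to_entity: str    # Finance's "customer"
    account_name: str      # Sales' "customer"
    ship_to_location: str  # Operations' "customer"

record = CustomerRecord(
    bill_to_entity="Acme Holdings LLC",
    account_name="Acme",
    ship_to_location="Plant 7, Dayton OH",
)

# Code that asks for bill_to_entity cannot accidentally receive the
# ship-to location; the ambiguity is resolved at the interface.
print(record.account_name)  # Acme
```

Nothing about the three definitions changes. What changes is that the word no longer has to carry all three at once.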
Context collapse.
A policy or control is designed under a specific set of conditions, then applied outside them.
A pricing rule built for direct sales is extended to distributor channels. A data validation written for a stable schema is applied to a system under active migration. A governance control intended for high-risk transactions becomes a blanket requirement for all work.
In each case, the policy may not be incorrect. It may simply be misapplied.
From a distance, objections read as resistance to governance or rigidity. Up close, the real problem is a loss of context. The original conditions that made the rule valid are no longer present, but the rule persists because it has been detached from the environment it was designed to operate in.
The system doesn’t fail because the control exists. It fails because the control is no longer anchored to reality.
Assumption inheritance.
Processes accumulate premises that are never written down.
A reconciliation step assumes upstream data is complete by a certain time. A reporting pipeline assumes identifiers are unique end-to-end. A workflow assumes a specific sequence of events that was true when it was built.
Those assumptions are invisible when they hold. They become obvious when they don’t.
Over time, environments change. Systems are replaced, ownership shifts, and volumes increase. Edge cases become exceptions, then normal cases. The original context erodes, but the process continues to run as if those conditions still apply.
When it breaks, the response is usually to add another step: another check, another manual control, another exception path. The process grows more complex rather than more correct, because the underlying assumption was never surfaced or revisited.
The failure is not in the step that breaks. It is in the premise that was inherited without being named.
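Naming the premise can be as simple as writing it down as a check that runs with the process. A hypothetical sketch: the unwritten assumptions from above ("upstream data is complete by a cutoff", "identifiers are unique") stated explicitly, so they fail visibly when the environment changes. The cutoff time and function names are illustrative.

```python
from datetime import time

# Assumption, now named: the upstream feed is complete by 06:00 UTC.
UPSTREAM_CUTOFF = time(6, 0)

def check_premises(last_upstream_row_at: time, ids: list[str]) -> None:
    """Fail loudly if an inherited premise no longer holds."""
    if last_upstream_row_at > UPSTREAM_CUTOFF:
        raise AssertionError(
            "premise violated: upstream feed not complete by cutoff")
    if len(ids) != len(set(ids)):
        raise AssertionError(
            "premise violated: identifiers are not unique")

# Premises hold today; the point is that the day they stop holding,
# the failure names itself instead of surfacing three systems later.
check_premises(time(5, 45), ["a-1", "a-2", "a-3"])
```

The check adds no new logic to the process. It only moves an inherited assumption from someone's memory into the system itself.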
Across all three patterns, the structure is the same. Local correctness is preserved, but global coherence is not.
The system behaves exactly as it was defined to behave within each boundary. What it lacks is a mechanism to ensure that meaning survives the crossing between them.
Why the Friction Framing Persists
The perception that definitions, processes, and policies are friction is not irrational. It emerges naturally from how these structures are experienced. At the point of contact, they are constraints.
A required field blocks submission. A validation rule rejects an invalid entry. A policy requires an approval that wasn’t needed before. A process introduces a step that slows something that was moving faster yesterday.
That moment is the one people feel, and the one they remember. What they do not experience directly is what those same structures prevent.
A metric that lands in another team’s system and means the same thing it meant at the source causes no confusion. A decision that survives a handoff without reinterpretation doesn’t need to be relitigated. A process that runs without requiring reconciliation, escalation, or translation by a third party simply runs.
When those things happen, nothing appears to have happened at all. The system works.
This creates a predictable asymmetry. Friction is visible at the point of enforcement. Translation is invisible at the point of success. No one references the orbiter that didn’t crash.
Over time, organizations optimize for what they can see. Teams remember the delays, the approvals, the rejected inputs, and the additional steps. They cannot remember the failures that never occurred, or the conflicts that never materialized because definitions held across boundaries.
So the structures that make work transferable are experienced as the things that slow it down.
That perception carries a cost, but not the one most people expect. It is not speed lost at the point of enforcement, but instead the work created downstream when translation fails.
Rework appears when outputs must be reshaped to fit a different interpretation. Arbitration appears when two systems produce different answers that both appear correct. Parallel dashboards emerge when teams stop trusting shared metrics and build their own. Decisions are revisited because the original intent did not survive the handoff into execution.
None of this registers as a single failure. It accumulates as operational drag, in reconciliation that should not be necessary, in meetings that exist to re-explain decisions already made, in individuals who become intermediaries because the system itself cannot carry the context.
From within, this feels like complexity. From outside, it looks like inconsistency. In practice, it is the absence of shared language expressing itself as repeated work.
The organization is not moving slower because it has too many constraints. It is moving slower because it must continually repair meaning after it has been lost.
What These Structures Actually Do
The mistake is treating these structures as controls. They are more accurately described as translation systems. Each exists to preserve meaning as work moves across boundaries, between people, systems, and contexts where assumptions no longer match.
When they are working, they are invisible. When they are missing, everything downstream becomes negotiation.
Definitions are not just rules. They are the precondition for agreement.
Without a shared definition, alignment is an illusion. Two teams can use the same word, report the same metric, and make decisions that are internally consistent and externally incompatible.
A definition constrains interpretation. In doing so, it makes interpretation transferable.
It establishes what a term refers to, what it excludes, and at what level of abstraction it operates. It is the difference between a number that can be shared confidently and a number that must be re-explained every time it crosses a boundary.
Processes are not arbitrary restrictions. They are encoded memory.
A process captures a sequence of decisions that have already been made: what order things must happen in, what conditions must be met, and what checks are necessary before moving forward.
Without that encoding, those decisions do not disappear. They are remade, repeatedly, by different people, under different conditions, often with less context than the person who made them the first time.
A well-defined process does not slow work. It prevents the organization from having to rediscover the same constraints under pressure.
Protocols are not bureaucracy. They are interface specifications.
A protocol defines how two systems, or two teams, exchange information without requiring either side to understand the internal structure of the other. In practice, most people encounter them as the request form or the ticketing system.
A protocol specifies format, timing, expectations, and failure handling. It establishes what must be true at the boundary for the interaction to succeed.
Without one, integration becomes a matter of interpretation. Each side infers intent from incomplete signals, and correctness becomes contingent on assumptions that were never explicitly stated.
A protocol does not add overhead. It removes ambiguity at the point where ambiguity is most expensive.
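A protocol's job of removing ambiguity at the boundary can itself be sketched as code. A hypothetical example, assuming a metric handoff between two teams: the message format, the required unit for each metric, and the failure behavior are all explicit, so neither side has to infer the other's expectations. Every name here is illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricMessage:
    name: str
    value: float
    unit: str
    as_of: str  # ISO date the value is valid for

# The agreed interface: which metrics exist, and in what unit.
REQUIRED_UNITS = {"revenue": "USD", "headcount": "people"}

def validate(msg: MetricMessage) -> MetricMessage:
    """Accept a message only if it satisfies the boundary contract."""
    expected = REQUIRED_UNITS.get(msg.name)
    if expected is None:
        raise ValueError(f"unknown metric: {msg.name}")
    if msg.unit != expected:
        # Fail at the boundary, loudly, instead of downstream, silently.
        raise ValueError(f"{msg.name} must be in {expected}, got {msg.unit}")
    return msg

validate(MetricMessage("revenue", 1.2e6, "USD", "2026-02-01"))  # passes
```

A message that violates the contract is rejected at the point of exchange, where the fix is cheap, rather than surfacing later as a reconciliation problem.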
Policies are not constraints on work. They are how intent survives context change.
A policy carries a decision across conditions. It expresses what must hold true when circumstances shift, when new teams are involved, or when the original decision-makers are no longer present.
Without that layer, intent degrades as it moves. A decision made in one environment is reinterpreted in another, often in ways that contradict the original purpose while still appearing compliant.
A policy does not limit action without reason. It ensures that action remains aligned with the decision it was meant to implement.
Across all four, the function is the same. They do not control the system arbitrarily. They make it coherent, ensuring that a number sent is the same number received, that a decision made is the same decision executed, and that a process run tomorrow produces the same result it did today.
Without them, organizations lose more than capability. They lose continuity. And without continuity, every boundary becomes a point of failure.
When the Reframe Fails
The argument made so far deserves an honest complication.
If definitions, processes, protocols, and policies are translation systems, then the obvious question is why so many of them feel like anything but. Not because people misunderstand their purpose, but because many of them were never designed for translation in the first place.
The same structures that make work transferable can be built, deliberately or by drift, to serve entirely different purposes. And when they are, the friction they create is not a misperception. It is the point.
Consider what happens to a protocol in an organization under chronic resource pressure. An intake process that is sufficiently agonizing – opaque enough, redundant enough, requiring enough signatures from people with no real context on the work – will naturally reduce demand. Users give up, and requests don’t get submitted. The boundary becomes a moat, and the friction is not a design failure. It is a load-balancing strategy.
In this case, the interface was never meant to translate; it was meant to exhaust.
Or consider what happens to a process when accountability is the primary design constraint. An approval chain that requires sign-off from six directors who have no meaningful context on the project is not translating a decision across boundaries. It is diffusing responsibility so thinly that no single person can be held accountable if something goes wrong.
In this example, the structure exists not to make meaning transferable, but to make blame transferable. The interface carries liability in place of intent.
In both cases, the structure looks identical from the outside to one that is working correctly. It has the same form: a process, a protocol, an approval chain. What differs is what it was built to carry.
This is where the friction framing becomes partially correct, and why it persists even among people who should know better. Some of what gets labeled bureaucracy genuinely is bureaucracy. These are not translation systems operating invisibly. They are political or defensive structures operating exactly as intended, at the expense of the people trying to cross the boundary.
The diagnostic is not found in the structure itself. It is found in where the cognitive load lands.
A translation system moves cognitive load onto the structure and away from the person crossing the boundary. It specifies format so the user doesn’t have to infer it. It confirms receipt so the user doesn’t have to follow up. It routes automatically so the user doesn’t have to know the internal geography of the organization. The boundary crossing becomes, as much as possible, someone else’s problem to manage.
A bureaucratic structure does the opposite. It requires the user to learn internal terminology that serves no external purpose. It demands redundant inputs that exist for the organization’s convenience rather than to advance the work. It routes responsibility back onto the person submitting, leaving them to navigate a system designed around needs they don’t share and constraints they can’t see.
The tell is simple: after crossing the boundary, who did the work?
If the answer is the system, the structure is translating. If the answer is the person who needed something, the structure is defending. Both can be called a process, but only one deserves to be.
In practice, however, the difference between a translation system and a bureaucratic one rarely requires a cultural overhaul to resolve. It usually requires legibility at a very local level.
The Signal That Couldn’t Travel
A new report was being introduced, and it was meeting resistance. Users were encountering situations where numbers didn't match expectations, or a new acronym was only vaguely defined. Questions and feedback were inevitable.
But the path for submitting them wasn’t.
When something looked wrong, the options were unclear. Was this an email to the analyst? A comment in the next meeting? A message to whoever happened to be nearby? Most of the time, the answer was none of those. The concern was noted internally, mentioned offhand in a conversation, or simply absorbed as a general sense that something wasn’t quite right.
By the time it surfaced formally, context had been lost, the moment had passed, or the person who noticed it could no longer remember the specifics.
When feedback did arrive, it was vague. The data might be wrong. The definition might be unclear. The logic might not match expectations. Without a defined path for submitting that signal, it arrived stripped of the detail needed to act on it. The team went back through the pipelines, validated the outputs, checked the transformations. Everything held.
The feedback continued anyway, because the real problem was never in the data. It was in the absence of an interface between the people experiencing the issue and the system responsible for it.
There was no agreed protocol for how questions or feedback were supposed to travel. No defined format, no designated path, no confirmation that anything had arrived or was being addressed. Feedback took whatever shape the sender could manage, and arrived without the structure needed to make it usable.
The solution was not a data audit. It was a protocol, and a deliberately simple one.
A button, labeled “Errors & Feedback,” placed consistently in the lower left corner of every page of every large report. Same position, same label, every time. Behind it, a PowerApp built in a few hours: instant confirmation of receipt, automatic notification to the report owner, and tracking from submission to resolution.
The right thing to do was made the easiest thing to do, and obvious enough that it required no instruction.
The measure of success wasn't the volume of submissions. It was that people started using it before the feature was formally announced: organically, without being told to, because the path was clear and the friction was gone.
The interface didn’t need to be explained. It simply needed to exist.
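The shape of that protocol is worth making concrete. A hypothetical sketch in Python (the original was a PowerApp; every name and field here is illustrative): a submission gets an identifier, a timestamp, a confirmation, and a trackable status, so the sender never has to wonder whether the signal arrived.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from itertools import count

_ids = count(1)  # monotonically increasing submission IDs

@dataclass
class Feedback:
    report: str
    description: str
    submitter: str
    id: int = field(default_factory=lambda: next(_ids))
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    status: str = "open"  # open -> acknowledged -> resolved

def submit(report: str, description: str, submitter: str) -> Feedback:
    """Accept a submission and return a trackable, confirmed record.
    In the real system this would also notify the report owner."""
    return Feedback(report, description, submitter)

fb = submit("Regional P&L", "FY totals differ from ledger", "a.user")
print(fb.id, fb.status)
```

The value is not in the code, which is trivial, but in what it guarantees: a defined format, a receipt, and a status the sender can check without chasing anyone.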
What it replaced is harder to count precisely, but easier to recognize. Every question that would have been absorbed into a hallway conversation. Every concern that would have surfaced three weeks later, stripped of context, in a meeting that shouldn’t have needed to happen. Every report owner who would have learned about a problem through a chain of forwarded emails rather than a direct, trackable submission. Multiplied across teams, across regions, and across every report that now carries the same button in the same corner.
The hours that won’t be spent reconciling feedback that arrived too late, too vague, or not at all don’t appear on any dashboard. They are the orbiter that didn’t crash.
The Interface Is the Design
The difference between a lost orbiter in 1999 and a crew returning from lunar orbit in 2026 was not entirely technology or intelligence. In the places that mattered most, it was legibility.
The mission that lifted off from Kennedy Space Center did not depend on the right people being in the room, holding the right assumptions in their heads. The lessons of the Mars Climate Orbiter were not simply remembered. They were encoded: written into interface specifications, unit standards, and shared definitions that did not require interpretation at the point where interpretation had previously been fatal.
The void between Earth and the Moon has not become simpler to navigate. The organizations navigating it have become more coherent. That is the difference.
Every organization operates across a similar distance, compressed differently. Decisions move from strategy to execution. Data moves from source systems to reporting layers. Intent moves from one function into another where the original context no longer exists. Each of those movements is an interface. Each interface requires translation.
The question is never whether translation happens. It always does. The question is whether it was designed, or left to the people crossing the boundary.
When it is left to people, it shows up in familiar ways. Meetings that exist to re-explain decisions already made. Side conversations that carry context the system cannot. Corrections that happen offline and never make it back into the record. Individuals who become indispensable not because of what they build, but because of what only they can interpret.
That translation has a cost. It is not paid as a single failure. It is paid continuously, in rework, in delay, in decisions that must be revisited because they did not survive the journey intact.
When the interface is designed – when definitions are explicit, protocols are clear, and processes carry the decision instead of deferring it – that cost does not disappear. The complexity remains. But it is carried by the structure instead of the people.
Not every broken interface is the result of bad faith or defensive design. Most persist for a simpler reason: no one made the boundary legible. No one defined what needed to be shared, how it should move, or what it should mean on the other side. So the work of translation falls back onto the people, and the organization absorbs the cost without ever naming it.
Most failures will not look like the loss of a spacecraft. They will look like complexity, inconsistency, work that takes longer than it should, and decisions that do not hold.
The difference is rarely found in a transformation program or a cultural reset. It is found in whether the next boundary someone crosses is clear enough that they do not have to stop and ask what anything means.
That is where the system either holds, or asks its people to do the work it was never designed to carry.