Photo by Joyce Romero on Unsplash
Author’s Note
The opening story in this piece is fictional—but it’s not invented. It draws from publicly available reporting, court documents, and personal accounts related to the Purdue Pharma case and the wider opioid crisis. The names, moments, and characters have been fictionalized, but the architecture of the story—how well-meaning people became part of a deeply harmful system—is all too real.
That said, this article isn’t about the pharmaceutical industry. It’s not even primarily about opioids.
The deeper focus here is on something more transferable: the collision between ethics, responsibility, and the fragmented, hierarchical, and role-bound structures that define most modern organizations. When systems are built to reward performance over reflection, compliance over conscience, it becomes dangerously easy for harm to scale without malice—and often, without anyone feeling personally accountable.
The story at the beginning is simply a lens. What follows is the more urgent question: What happens when good people do their jobs well inside a flawed system?
We’re not meant to read this with distance, as a case study of something long resolved. We’re meant to see the echoes—across industries, across roles, across the quiet decisions we make every day.
The goal isn’t to assign blame. It’s to illuminate the gaps between intent and impact, and to suggest that responsibility doesn’t end at the edge of our job descriptions.
~Dom
Nobody Meant To: A Sales Rep’s Recollection
I joined the company in the spring of 2001. Fresh out of college, hungry for a career that meant something. During onboarding, they told us that OxyContin was going to change lives. Not just for patients in pain—but for people like me. This was our moonshot, they said. A painkiller with a low risk of addiction, backed by studies, by doctors. It wasn’t just legal. It was noble.
I still remember the first doctor I pitched. Dr. Lowell, family practice, near Cincinnati. I was nervous—my pitch too scripted, my smile a little too wide. But he was kind. He listened. He asked questions. When he finally agreed to trial the product, I drove to a rest stop off I-75 and cried in my car. Relief. Pride. The quiet satisfaction of doing something that mattered.
Over the next year, I got good. Learned to work the angles. If a doctor hesitated, I had brochures with smiling grandmothers and reassuring graphs. If they mentioned addiction, I had a line ready—“Actually, studies show the addiction risk is under 1%.” And it wasn’t a lie. At least, not to me. That’s what we were taught. That’s what the company said. That’s what the experts said. Who was I to question it?
The bonuses helped. Quarterly targets, then stretch goals, then regional competitions. The sales team had this energy—driven, polished, almost missionary. We weren’t pushing a product; we were expanding access to relief. When my territory broke into the top 10%, they flew us to Scottsdale for a recognition retreat. Poolside. Open bar. Plaques with our names etched in glass. “You’re changing lives,” my manager said.
And maybe we were.
But somewhere around 2006, the questions got harder. A doctor mentioned a teenager who died of an overdose after taking pills prescribed to his father. A nurse pulled me aside to say the waiting room was filling with patients she swore were gaming the system. I reported those conversations—flagged them in my weekly summaries, like we were supposed to.
The feedback was always the same. “That’s unfortunate, but we’re not responsible for diversion. That’s on the doctors and pharmacies. Our job is education and access.”
I started skipping offices that made me uneasy. But I didn’t stop selling.
The news coverage ramped up a few years later. Lawsuits. County officials talking about rural overdose spikes. Stories about mothers finding their sons cold and blue in the basement. Still, the company denied everything. “A few bad actors. A misunderstanding of addiction. Media sensationalism.”
I held onto that longer than I should have.
It wasn’t until 2017 that I really saw it. Not just in the news, but in the silence around me. Reps I used to know were gone. Doctors were being investigated. Some of the clinics I helped “educate” were now shuttered or sued. And when the attorney general finally named us—named us—as one of the core drivers of the opioid epidemic, I couldn’t pretend anymore.
I wasn’t a monster. I didn’t lie—not on purpose. I did what I was taught. I followed the rules. I believed in the work. And still… I was part of it. A cog in a wheel I didn’t build, but one I helped turn.
Sometimes, when I try to make sense of it, I think about a question no one asked us during training:
“What if following the process still makes you complicit?”
But that question never helped when the story rolled through the news cycle, fueled by fractured communities, broken families, and death toll counters.
Beyond One Crisis
We’re not here to re-litigate the opioid crisis, or cast blame where the dust has already settled.
By now, you’ve likely heard enough, whether through documentaries, news coverage, or lawsuits that named not just Purdue Pharma, but distributors, marketing agencies, and entire hospital networks. The intent of the story above (fictional, but grounded in reporting and settlement documents) isn’t to rehash the facts. It’s to illuminate something more uncomfortable, and more transferable:
The sales rep in that story didn’t do anything illegal.
She did her job. She followed the training. She hit her targets. She reported concerns through the proper channels. She never forged data, never falsified a prescription, never knowingly harmed a patient.
And still, she helped build a system that harmed millions.
That’s the point.
Because this doesn’t just happen in pharma. It happens anywhere a system allows people to believe that compliance is the same as morality. And the more complex the machine, the easier it becomes to confuse the two.
It happens:
- In tech, when engineers design engagement algorithms that radicalize users or spread misinformation—because they were told to “optimize for retention,” not truth.
- In retail, when frontline workers enforce policies that disproportionately harm vulnerable customers—because “that’s the returns policy,” even if it results in someone going without a basic need.
- In finance, when analysts package and sell products they don’t understand—because “it’s all rated AAA,” even as the foundation begins to collapse.
- In real estate, when investors buy up homes to convert into short-term rentals or flip for profit—because “the ROI is great,” even as local families are pushed out, rent prices surge, and housing becomes scarce.
- In social media moderation, when underpaid workers are asked to review violent content at inhuman speed—with quotas, but no psychological support.
- In government contracting, when teams deliver incomplete or non-functional tools to schools, prisons, or public agencies—because the deliverables checked the procurement boxes, even if they don’t actually work.
In each case, individuals are doing what they’re told. They’re inside the lines. They’re meeting expectations.
But the system is broken. And they’re part of how it stays that way.
That’s the deeper tension we want to examine.
Responsibility Doesn’t End at Obedience
Sometimes, following the rules does ensure ethical behavior. But sometimes, it becomes the mechanism through which harm is enacted at scale.
And most systems are designed to protect themselves precisely by fragmenting responsibility. No one person designs the whole machine. No one person sees the full outcome. Everyone just does their part, often with good intentions.
That’s how ethical people become instruments of flawed, irresponsible, or even evil systems.
And the hardest part? You usually don’t know until it’s too late… until the headlines come, or the lawsuits, or the history books. And by then, the line between “I did my job” and “I was complicit” is likely blurry, painful, and hard to speak aloud.
The Architecture of Ethical Fragmentation
No one wakes up hoping to contribute to something harmful. Most people, in most companies, are trying to do good work. They follow the rules, meet their targets, support their teams. And yet, systems made up entirely of well-meaning people can still cause enormous damage—quietly, incrementally, and without a clear villain to point to.
This is the unsettling truth behind ethical fragmentation. In modern organizations, responsibility is divided so finely that it all but disappears. You do your part, someone else does theirs, and before long, the whole machine is in motion. But no one individual ever feels like they’re steering it.
Start with role isolation. We’ve carved up business into silos for the sake of efficiency: marketing, legal, finance, operations, compliance. Each team has its own metrics, its own mandates, its own sphere of concern. If your numbers look good, your job is safe. But the cumulative effect of everyone’s “piece” can produce outcomes no single team would have signed off on. You can be exceptional at your role and still help reinforce a system that, from the outside, looks predatory, extractive, or inhumane.
Then add procedure. Policies are supposed to protect us—from inconsistency, from bias, from legal risk. But over time, they become shields we hide behind. When a policy says to deny a claim, issue a termination, escalate a fee—it no longer feels like a choice. The moral weight is gone. “That’s just the process.” And questioning that process is often seen not as integrity, but as insubordination. Objections, even those driven by concern or compassion, become a liability.
Incentives compound the problem. Most performance systems are built around metrics that are easy to quantify: speed, volume, efficiency, compliance. What we rarely measure are things like dignity, fairness, or ethical restraint. There’s no KPI for the harm you didn’t cause. So long as the dashboard looks good, the system congratulates itself. Harm becomes invisible when it’s profitable and unmeasured.
And finally, there’s the issue of distributed agency. Decisions pass through so many hands, systems, and tools—algorithms, shared services, legal reviews, vendor agreements—that no one owns the outcome. By the time something unethical happens, it’s not clear who approved it. In some cases, no one did. It just… happened. Because that’s how the system is designed to function. It’s not sabotage. It’s just the machine running smoothly.
This is how people who consider themselves ethical become part of something that, in hindsight, should never have existed in the form it ultimately took. Not because they were careless or cruel—but because the system they were in rewarded performance, not perspective. It protected process, not people.
And because we’ve built our organizations to value alignment over interrogation, most people never get far enough outside their role to see the whole shape of what they’re part of. They never look down the line to ask, What happens after I click approve? After I file the report? After I hit the target? Most of us are too busy doing our jobs, and too dependent on them for survival, to ask if the job should be done at all, much less if it should be done this way.
When Doing Your Job Is the Problem
If this still feels abstract, let’s bring it down to earth—because these aren’t rare occurrences or historical outliers. They’re happening all around us, and they often start with people who are just doing their jobs well.
Take social media. A product manager at a major platform might be assigned a clear, measurable goal: increase user engagement by 15% this quarter. They’re smart, driven, and good at what they do. They A/B test thumbnails, optimize headlines, tweak algorithms, and eventually discover what works: outrage. Posts that spark anger, tribal identity, and us-versus-them rhetoric get more clicks, more shares, more comments. And so the system leans into it. Not because anyone said “let’s radicalize the population,” but because someone said “we need to hit our engagement targets.” And the product manager and dev teams delivered.
(Rage clicks: Study shows how political outrage fuels social media engagement)
Or consider the 2008 financial crisis. A mortgage broker in 2005 isn’t trying to crash the global economy. He’s sitting across from a young couple with average credit and a dream of owning a home. He offers them a subprime loan—not because he’s malicious, but because the underwriting guidelines say it’s acceptable, and the couple qualifies. His job is to match people with mortgage products and close deals. He does it well. His firm celebrates him. And in doing so, he helps flood the market with bad debt that will later collapse the housing market and destroy millions of lives.
(Subprime Mortgage Crisis)
Then there’s Wells Fargo. Employees under immense pressure to hit aggressive sales quotas were encouraged to open new accounts for customers—often without the customers knowing. Many of these employees didn’t see themselves as fraudsters. They were following instructions, hitting their targets, and doing exactly what the system trained and rewarded them for. It wasn’t until the scandal broke that the full scope of the harm became clear: millions of fake accounts, shattered trust, and lives financially disrupted. Not because any one teller or branch manager wanted to deceive, but because the system incentivized them to act without questioning whether what they were doing was right.
(Wells Fargo Agrees to Pay $3 Billion to Resolve Criminal and Civil Investigations into Sales Practices Involving the Opening of Millions of Accounts without Customer Authorization)
Each of these people was competent, compliant, and—in their view—doing their job, helping people, or at the very least not causing harm. But they were operating inside systems that trained them to look at performance, not consequence. They weren’t asked to see the whole picture. In many cases, they were actively discouraged from doing so.
And that’s the common thread. When responsibility is reduced to role, and success is measured only in metrics, you don’t need villains. Momentum will suffice.
Obedience Without Thought Is Not Virtue
Hannah Arendt, a political philosopher who chronicled the rise of totalitarian systems, was struck not by the presence of evil—but by its banality: how it moved through ordinary people doing ordinary jobs. Reporting on the trial of Adolf Eichmann, she didn’t find a monster in the courtroom. She found a bureaucrat—an efficient, orderly man who spoke in clichés, followed procedures, and believed himself innocent because he had never physically harmed anyone.
Her phrase for it was unforgettable: “the banality of evil.”
“The sad truth,” she wrote, “is that most evil is done by people who never make up their minds to be good or evil.”
That quote lingers because it still applies—to governments, yes, but also to corporations, to platforms, to markets. When we build systems that prize performance over principle, we risk creating cultures where the absence of intent becomes the foundation of harm.
And here’s the uncomfortable part: the more sophisticated the system, the easier it is to hide behind it. We can point to the algorithm. The dashboard. The legal department. The incentive structure. “I didn’t decide that.” But that’s Arendt’s point. When no one decides, the system decides—and often, it does so without conscience.
Immanuel Kant, centuries earlier, would have had little patience for that abdication. His categorical imperative insisted that we act only in ways we could will to become universal law—not based on what’s permitted, profitable, or expected, but on what’s right. That kind of reasoning doesn’t scale easily. It demands inner judgment, not external approval.
And yet, that’s exactly the kind of judgment modern work culture often discourages. We talk about ownership, but we rarely mean moral ownership. We want initiative, but only within sanctioned lanes. The unwritten rule is clear: don’t make waves. Do your job. Trust the system.
But what if the system is wrong?
What if, as Arendt warned, the ability to commit harm isn’t found in some deep inner malice—but in the ability to stop thinking at the moment when thought is most needed?
That’s not just a philosophical risk. It’s a practical one. History is full of examples where societies didn’t fall because everyone agreed on something evil—but because everyone agreed not to look too closely.
What It Looks Like to Choose Otherwise
Not every company lets responsibility vanish into a flowchart. Some have chosen to confront ethical ambiguity head-on—to structure themselves, speak publicly, or build policies in ways that prevent the easy slide into “just doing my job.” They’re not perfect. But they’re trying something different. And that difference matters.
Patagonia is often cited for its environmental commitments—but it’s the company’s willingness to own tradeoffs publicly that sets it apart. When they launched campaigns discouraging overconsumption, or when they exposed the ethical limits of their supply chain (like the traceability challenges in down and wool), they didn’t hide behind marketing. They acknowledged complexity. They didn’t pretend to be clean—they explained how they were getting cleaner. That kind of transparency resists the illusion of ethical simplicity and forces moral questions back into the spotlight where they belong.
Mozilla, the nonprofit behind the Firefox browser, is another example. While most tech companies prioritize monetization of user data, Mozilla has remained committed to user privacy—often to its financial detriment. They’ve built in anti-tracking by default, published clear policy rationales, and openly documented the ethical stances behind their design choices. Employees don’t just write code—they participate in discussions about what the software should be allowed to do. Moral discourse isn’t an afterthought—it’s embedded in the engineering process.
In the world of journalism, ProPublica takes a similar approach. Rather than racing to break news for clicks, they invest in slow, investigative reporting—often in partnership with local outlets—designed to expose systems of harm. But what’s worth noting here is how they publish: often including data sets, methodologies, and sourcing notes alongside the article itself. They treat transparency not as a legal defense, but as a public good. And in doing so, they invite the reader not just to consume information, but to examine the structure behind it.
Valve takes a different kind of stand—structural more than moral, but no less meaningful. I won’t pretend that Valve is perfect; they’ve faced criticism for internal opacity and a lack of DEI accountability. But their famously flat hierarchy removes the buffer layers that typically shield decision-makers from consequence through plausible deniability. Employees choose their work, and in doing so, retain ownership over the outcomes. That structure doesn’t allow for the usual deflections—“It wasn’t my call,” or “I was just following orders.” At Valve, you build what you believe in, or you walk away. The clarity is uncomfortable—but it’s honest.
And when it comes to transparency, Valve’s approach to content restrictions is almost unheard of. When payment processors pressure platforms to remove games—often for sexual content or political controversy—most stores cave quietly. They enforce the ban, cite vague policy violations, and pretend the decision was internal.
Valve didn’t pretend. They labeled it. They told developers: We would publish this, but our payment processor won’t allow it. That choice sparked debate. It made people angry. But more importantly, it made people aware. Suddenly, power had a name… and an email address, and phone numbers (much to Visa and MasterCard’s chagrin).
That’s the kind of discomfort organizations must be willing to face if they want to remain ethically coherent in a complex world. When you refuse to hide behind systems, you force yourself—and your users—to confront the actual levers of control.
And that’s where responsibility begins.
Quiet Questions, Clearer Choices
No single person can change a system. That’s a reality worth acknowledging—especially in organizations built for scale, consistency, and control. You can be ethical in your work, intentional in your choices, and still find that the outcomes stretch far beyond your role, your team, or your visibility.
So this isn’t a call for rebellion. It’s a call for attention.
Not dramatic action, but deliberate questioning. The kind that starts quietly and builds over time.
Because sometimes the most impactful thing you can do isn’t to refuse your role—it’s to understand it more fully. To ask: Where does this decision go once it leaves my hands? Who is affected? What assumptions am I making when I follow this policy, or approve this process, or optimize this number?
It’s easy to measure success by what works. But ethical responsibility begins when we also ask: What if it works, but causes harm? What if it fails, and someone else pays the price?
These are not accusations. They’re invitations—to see the system you’re part of a little more clearly, and to imagine your role not just as a function, but as a point of influence. To speak when something doesn’t sit right. To trace the line between intent and outcome. To make visible what is often hidden behind procedures, platforms, or performance metrics.
And over time, when enough people begin to carry that kind of awareness—across roles, across departments, across industries—the system becomes more permeable. More human. More accountable.
We don’t need perfect answers. But we do need more people willing to ask better questions.
Responsibility isn’t always about taking the blame. Sometimes, it’s just about choosing to see.




