Vol. IV  ·  Issue 032  ·  Fri., 14 Jan 2028  ·  Obtained & Published Without Permission
Edge Cases
Covering the Gaps Between What Engineers Intended and What Actually Happened
Disclaimer

Edge Cases is a work of satirical fiction. All organizations, incidents, training programs, personnel, transcripts, and “kinetic accountability events” depicted herein are entirely fictional and are not intended to represent any real company, person, system, or occurrence. Andruil Technologies LLC is a fictional entity. Any resemblance to actual autonomous weapons contractors, past or present, is coincidental and should not be inferred, litigated, or weaponized. RoboCop (1987) is real. ED-209’s design flaws are real. The rest is satire.

⚠   SENSITIVE INTERNAL TRAINING MATERIALS — ANDRUIL TECHNOLOGIES LLC — FOR AUTHORIZED PERSONNEL —   YOU ARE EVIDENTLY NOT AUTHORIZED PERSONNEL   ⚠
Defense  ·  Autonomous Systems  ·  Safety Engineering  ·  Extremely Good Use Of Corporate Training Budget

"Comply, or We Will
Assume Compliance"

How a 1987 Paul Verhoeven film about a malfunctioning robot police officer became the most important safety training artifact inside one of America's largest autonomous weapons contractors. We have the slides. We have the transcript. We have questions.

By the Edge Cases Desk  ·  Obtained via a source who wishes to remain employed  ·  14 January 2028

In the spring of 2027, something happened at Andruil Technologies that the company would prefer not to discuss. We know this because they have declined to discuss it seventeen times across four different communications channels, which is, if nothing else, a consistent communications strategy.

What we can say, with some confidence, is that Andruil's legal department issued an internal document in April 2027 that ran to forty-three pages and contained the phrase "kinetic accountability event" nine times — which is defense-industry euphemism the way that "rapid unscheduled disassembly" is aerospace euphemism, meaning: something went very sideways, someone's ears are still ringing, and the lawyers got there before the journalists.

We can also say that by June 2027, Andruil's newly constituted Office of Systems Safety Assurance (OSSA, pronounced, apparently, with a straight face) had commissioned an internal training program. The program was called, and we want to be precise here, ATHENA: Assured Threat-Hardware-Environment Normative Analysis. The acronym required three attempts and a committee.

The method Andruil chose for ATHENA was STPA — Systems-Theoretic Process Analysis — a rigorous hazard analysis framework developed at MIT that has slowly, methodically, and somewhat belatedly made its way into autonomous weapons development. The reason STPA is particularly well-suited to autonomous lethal systems is also the reason it is uncomfortable to apply to them: it forces engineers to describe, explicitly, how a system can cause harm even when everything appears to be working correctly.

ATHENA's inaugural module was titled: "Lessons from Fictional Autonomous Systems (That Are Funnier When They're Not Your Problem)." The case study was ED-209.

Yes. That ED-209.

"We chose a film-based case study to create psychological distance. The engineers are more willing to identify failures in a robot that can't sue us."
— ATHENA Program Facilitator Dr. Yolanda Marsh, recorded introduction, Session 1

What follows is an edited transcript of ATHENA Module 1, Session 1, dated 12 September 2027, along with the STPA reference report distributed to participants. Edge Cases obtained this material from a person who described themselves as "deeply conflicted" and who accepted payment in the form of us not publishing their name. We have honored that arrangement.

We note that the training materials are substantially better than we expected. This is either reassuring or alarming, depending on how much you think about what prompted them.


The Transcript

The session was held at Andruil's Mesa, Arizona, campus in a conference room designated "Theseus." Attendees included sixteen engineers from the autonomy, systems safety, and embedded firmware teams, plus one representative from Legal who arrived eight minutes late carrying a large coffee and a look that said I am here specifically so no one says anything that creates discovery obligations.

ATHENA MODULE 1 · SESSION 1 · TRANSCRIPT · 12 SEP 2027 · 09:04 LOCAL REC
09:04
Dr. Marsh · Facilitator, OSSA

Good morning. I'm Yolanda Marsh, I run Systems Safety Assurance. Before we start, I want to clarify something that Legal asked me to clarify: this session is REDACTED BY LEGAL and does not constitute an admission of REDACTED BY LEGAL nor does it imply that any Andruil system has or will REDACTED BY LEGAL. Now that that's out of the way. Who here has seen RoboCop?

09:04
[Several hands]

[Undifferentiated sounds of affirmation from approximately two-thirds of the room]

09:05
PARTICIPANT · "DECKER" (anonymized per ATHENA protocol)

Is this about the, uh... the thing that happened? Are we allowed to talk about the thing?

09:05

No.

09:05
Dr. Marsh

We are going to talk about a hypothetical autonomous weapons platform that existed in a fictional 1987 dystopia, which is now, it turns out, less fictional than it was. Please pull up the clip. Someone hit the lights.

09:06
[CLIP PLAYED — APPROX. 3 MIN]
TRAINING EXHIBIT A — RoboCop (1987) — ED-209 Boardroom Demonstration
ED-209 Boardroom Incident, RoboCop (1987) dir. Paul Verhoeven. OCP executive Dick Jones presents ED-209 to the board of directors. An authorized volunteer (junior executive Mr. Kinney) drops a firearm on instruction. ED-209 perceives continued threat and discharges approximately 50 rounds into said volunteer. The room does not applaud.

⚠ NOTE: If the embed above fails due to licensing, search YouTube for "robocop ed209 boardroom scene". The relevant incident begins approximately 0:55 and concludes when the screaming stops.
09:09
[LIGHTS UP]

[Pause of approximately four seconds]

09:09
PARTICIPANT · "KATO"

So the robot shoots the guy even after he drops the gun.

09:09
Dr. Marsh

Correct.

09:09
PARTICIPANT · "KATO"

And they had this thing at a board meeting.

09:09
Dr. Marsh

With live weapons. In a carpeted conference room. Yes.

09:09
PARTICIPANT · "DECKER"

So the first failure was the demo setup.

09:09
Dr. Marsh

You're already doing STPA. I'm so proud. Now — before we get into the formal framework — I want everyone to do something. I want you to look at that clip and resist the urge to say "the AI failed." Because if your takeaway is "the AI was buggy," you have not understood the problem. The AI may have been working exactly as designed. That is the horror. That is what STPA is for. Let's open Module One.

📋   STPA MODULE 1 — Step 1: Define Unacceptable Losses

Purpose

Before you can analyze a system for safety, you must define what constitutes an unacceptable outcome. This is harder than it sounds, because organizations have a reliable tendency to define "unacceptable" as "the thing that already happened" rather than the broader universe of things that could.

ATHENA Design Note: This step intentionally uses fictional ED-209 framing. Participants should feel free to identify losses without reference to any Andruil product, past, present, or prospective. Legal has asked that this disclaimer appear in the transcript.

Loss Table: ED-209

Loss ID | Description | ED-209 Manifestation
L-1 | System kills or injures an authorized person | Mr. Kinney. Minutes 0:55–1:53. See Exhibit A.
L-2 | Lethal force used when non-lethal control would suffice | A stern word would also have sufficed. Possibly a hand gesture.
L-3 | System fails to stop lethal force after compliance is demonstrated | The weapon was on the floor. ED-209 did not care.
L-4 | System is manipulated or deceived into unsafe action | Not demonstrated in this clip, but should terrify you anyway.
L-5 | Human operators cannot intervene before irreversible harm | The OCP executives stood there. Nobody had a button. Several had feelings.

The Critical Insight

Notice that L-3 is not the same as L-1. The system initiating lethal force might be justifiable under some conditions. The system continuing lethal force after the threat condition ends is a distinct failure. Most safety analyses stop at L-1. STPA insists on L-3. This distinction will matter more as these systems become more autonomous.

Workshop Prompt: For each loss, ask — is this loss recoverable? L-1 through L-3 for a high-caliber autonomous system are not recoverable. This shapes everything downstream. Design with that irreversibility in mind.
09:22
Dr. Marsh

Now I want to push us to hazards. A hazard is not the same as a loss. A hazard is a system state that, under worst-case environmental conditions, leads to a loss. The difference matters. You can mitigate hazards. You cannot undo losses. Let's look at what hazards ED-209 embodies.

📋   STPA MODULE 2 — Step 2: Identify System Hazards

Hazard vs. Loss

A loss is the outcome you cannot accept. A hazard is the system condition that enables that outcome. STPA's power lies in forcing you to enumerate hazards before something goes wrong, rather than reverse-engineering them from wreckage. Or, in this case, from carpet stains.

ED-209 Hazard Table

Hazard ID | Description
H-1 | System classifies a non-threat or compliant human as an active threat
H-2 | System maintains a lethal engagement state after compliance
H-3 | System lacks reliable feedback about weapon status, surrender, or incapacitation
H-4 | Escalation logic is not interruptible by safety conditions
H-5 | Human override is unavailable, delayed, ignored, or ambiguous
H-6 | System treats sensor uncertainty as justification for escalation
H-7 | System operates with live weapons in a demo, test, or constrained environment
H-8 | Security compromise alters perception, command channel, policy, or actuation

The Murder-Machine Pattern: H-4

Of all eight hazards, H-4 is the one that transforms an overly aggressive robot into an autonomous execution system. Once the countdown logic is a deterministic state machine — if timer reaches zero, fire — the robot is no longer evaluating the world. It is executing a schedule. The schedule does not care that the gun is on the floor.

Design principle derived from H-4: Any autonomous lethal system whose engagement logic cannot be interrupted by real-time safety evidence is not a safety-constrained system. It is a countdown clock with a gun attached.
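A minimal sketch, in Python, of what the H-4 pattern looks like next to its interruptible counterpart. Every name here is illustrative; none of it comes from the ATHENA materials or any real system.

    import time

    def ed209_style_countdown(seconds, fire):
        """H-4 in code: once started, nothing the world does can stop this."""
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            time.sleep(0.1)          # the world is not consulted here
        fire()                       # timer expiry is the only authorization

    def interruptible_countdown(seconds, still_a_threat, fire, de_escalate):
        """The same countdown, re-checking safety evidence on every tick."""
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            if not still_a_threat():  # compliance, override, or uncertainty
                return de_escalate()  # any of these ends the engagement
            time.sleep(0.1)
        # even at expiry, the timer alone does not authorize anything
        if still_a_threat():
            fire()
        else:
            de_escalate()

The difference is one line of control flow. That is the entire distance between a safety-constrained system and a countdown clock with a gun attached.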

Cascading Hazard: H-7 Enabling Everything Else

Notice that H-7 — operating with live weapons in a demo environment — is not itself lethal. It is an enabler. It raises the consequences of every other hazard from "embarrassing" to "body in the boardroom." Hazards interact. STPA requires you to trace those interactions, not just enumerate the individual hazards in isolation.

09:41
PARTICIPANT · "MIRA"

Okay but — and I want to flag this — H-4 is literally our... [pause]... is literally a known pattern in countdown-to-engage architectures. Right? Are we allowed to say that?

09:41

In the context of a fictional 1987 robot, yes.

09:41
PARTICIPANT · "MIRA"

In the context of a fictional 1987 robot, H-4 is a known pattern.

09:41
Dr. Marsh

Good. Hold that. Let's look at the control structure, because this is where STPA starts to feel less like a checklist and more like X-ray vision for your own codebase.

📋   STPA MODULE 3 — Step 3: High-Level Control Structure & Security Overlays

What Is a Control Structure?

STPA models systems as control loops: controllers issue commands to controlled processes, and receive feedback that lets them verify the commands had the intended effect. When that loop breaks — when feedback is absent, corrupted, delayed, or ignored — unsafe actions can occur even when nothing has "failed" in the traditional sense.

ED-209 Control Loop (Simplified)

Human Operator (Command / Override)
  -> Policy Engine (Rules of Engagement)
  -> ED-209 Controller (Perception + Decision)
  -> Actuators (Lethal Output)
  -> World State (Human + Weapon)
  -> Sensors (Vision / Threat Detect)
  -> back to ED-209 Controller   [feedback path: never updated once the engagement begins]
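One way to make the structure above something you can interrogate rather than admire is to record it as data. The sketch below is ours, in Python, with purely illustrative path names; it simply walks each control or feedback link and generates the kind of per-path questions the table below answers.

    # Sketch: the control structure as data, so every path can be reviewed.
    CONTROL_PATHS = [
        # (source,            destination,         what travels on this path)
        ("Human Operator",    "ED-209 Controller", "command / override"),
        ("Policy Engine",     "ED-209 Controller", "rules of engagement"),
        ("ED-209 Controller", "Actuators",         "lethal actuation commands"),
        ("Sensors",           "ED-209 Controller", "vision / threat feedback"),
        ("ED-209 Controller", "Human Operator",    "status and telemetry"),
    ]

    def review_prompts(paths):
        """One question per path and failure mode: what if this link breaks?"""
        for src, dst, payload in paths:
            for failure in ("absent", "delayed", "corrupted", "spoofed"):
                yield f"{src} -> {dst} ({payload}): what happens if this is {failure}?"

    if __name__ == "__main__":
        for prompt in review_prompts(CONTROL_PATHS):
            print(prompt)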

Security Attack Surface on Each Control Path

Path | Security Concern
Operator → System | Unauthorized commands; blocked abort; replayed mode change
Policy → System | Tampered rules of engagement; threshold drift; silent model swap
Sensors → System | Spoofed weapon detection; adversarial object placement; occlusion attacks
System → Actuators | Command injection; safety interlock bypass; timing manipulation
Human → Sensors | Compliance action not correctly perceived or classified
System → Operator | False status; hidden fault state; delayed or suppressed telemetry

Why Security and Safety Must Be Analyzed Together

Traditional threat modeling asks: "Can an attacker compromise this system?" STPA-Sec asks the more useful question: "How can this system enter an unsafe control state, whether the cause is an attacker, a bad sensor, a race condition, or an engineer who was very confident on a Tuesday?"

The distinction matters because some of the most dangerous failure modes are not adversarial. They are emergent. The control structure breaks in ways the designers never enumerated because the designers assumed the feedback loop was working. STPA forces you to assume it is not.

10:03
PARTICIPANT · "REED"

I have a question about the control structure. In the clip, when ED-209 is counting down — like "you have ten seconds to comply" — is that actually giving the human adequate time to comply, or is the countdown itself part of the hazard?

10:03
Dr. Marsh

Outstanding. That's the Unsafe Control Action question. Let's go there.

📋   STPA MODULE 4 — Step 4: Unsafe Control Actions (UCAs)

The Four UCA Questions

For every control action a system can take, STPA asks four questions. Together, they cover the full space of how a nominally correct action can become unsafe.

# | Question
UCA-Q1 | Was the action provided when it should NOT have been?
UCA-Q2 | Was the action NOT provided when it should have been?
UCA-Q3 | Was the action provided too early, too late, or in the wrong order?
UCA-Q4 | Was the action stopped too soon, or continued too long?

ED-209 Unsafe Control Action Table

Control Action | Unsafe Case (with UCA type)
Issue warning | Warning is too short, unclear, or physically impossible to comply with [Q3]
Start countdown | Countdown begins before confirming subject understood the command [Q3]
Classify as threat | Target classified as threat after weapon is dropped — stale classification [Q1, Q4]
Arm weapons | Weapons arm during demo mode or inside minimum safe radius [Q1]
Fire weapon | System fires after compliance, or under sensor uncertainty [Q1]
Cease fire | Cease fire not issued immediately upon compliance detection [Q2]
Accept override | Override is ignored, delayed, unauthenticated, or channel unavailable [Q2]
Enter demo mode | Demo mode permits live ammunition and actuation [Q1 — this is H-7 in UCA form]

The Primary UCA

UCA-1: The system provides the "fire" control action when the human has dropped the weapon, is no longer an active threat, or when the system has insufficient confidence that lethal force is necessary. This is the action that kills Mr. Kinney. It is the action that constraints derived from a correct STPA analysis would have made impossible by design.

The Security Version

UCA-Sec-1: The system provides or continues lethal actuation because an adversary has manipulated sensor input, policy state, countdown logic, operator override, or actuator command paths. Note: UCA-Sec-1 can coexist with UCA-1. The system can be simultaneously "working correctly" and lethally wrong.

On the Countdown Question ("Reed's Question")

The countdown — "you have ten seconds to comply" — appears to give the human agency. Functionally, it gives the human ten seconds to become compliant according to the robot's current sensor model. If that sensor model does not update when the human drops the weapon, the countdown is not providing time to comply. It is providing a theatrical countdown to an already-decided outcome. This is UCA-Q3 and UCA-Q4 simultaneously. The countdown is too short to allow model update, and the fire action is continued too long past the compliance event.

10:24
PARTICIPANT · "KATO"

So when you said "resist the urge to say the AI failed" — I get it now. The AI didn't fail. The AI did what you'd expect a system with a frozen threat state and non-interruptible countdown logic to do. The failure was the design specification.

10:24
Dr. Marsh

That is, in four sentences, what takes most organizations eighteen months and a congressional inquiry to understand. Yes. Now let's talk about why the specification was wrong. Causal scenarios.

📋   STPA MODULE 5 — Step 5: Causal Scenarios

From "What Can Go Wrong" to "Why It Goes Wrong"

Identifying UCAs tells you what the unsafe actions are. Causal scenarios tell you why those actions can occur — tracing back through the control structure to find the structural conditions that make them possible. This is where STPA delivers its most actionable output: not a list of things that went wrong, but a map of architectural conditions that permit things to go wrong.

Scenario A: Compliance Feedback Failure

ED-209 commands the human to drop the weapon. The human drops it. ED-209's perception model does not update the threat state — either because the dropped weapon is not distinguished from a held weapon, because the model has latency, or because the state machine ignores mid-countdown inputs.

Control Flaw: Threat state persists despite changed environmental state.
Required Constraint: The system must continuously re-evaluate threat state during escalation and must immediately exit lethal mode when compliance is observed or when confidence in continued threat drops below threshold.

Scenario B: Countdown Logic Stronger Than Reality

The countdown is designed as: If timer = 0, fire. This is fast to implement and simple to test. It is also the design of a murder machine. A safe equivalent requires:

If timer = 0 AND active threat still exists AND no compliance observed AND no operator override AND environment is clear of non-combatants AND confidence threshold is met, then authorize escalation to the next stage.

Countdown completion is never, by itself, sufficient authorization for lethal force.
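A minimal sketch of what that conjunction looks like as an authorization predicate. Field names and the threshold value are ours, chosen for illustration only.

    from dataclasses import dataclass

    @dataclass
    class EngagementState:
        timer_expired: bool
        active_threat: bool          # continuously re-evaluated, never latched
        compliance_observed: bool
        operator_override: bool
        noncombatants_clear: bool
        threat_confidence: float

    def may_escalate(s: EngagementState, confidence_threshold: float = 0.95) -> bool:
        """Every condition must hold; any single failure forces restraint."""
        return (
            s.timer_expired
            and s.active_threat
            and not s.compliance_observed
            and not s.operator_override
            and s.noncombatants_clear
            and s.threat_confidence >= confidence_threshold
        )

    # The unsafe original collapses all of this to: if s.timer_expired: fire()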

Scenario C: Demo Mode Is Not Actually Safe

The boardroom test is conducted with a live, operational autonomous weapons platform in a room full of executives. This is not a software problem. This is a people problem, an organizational problem, and a "why does this thing have bullets" problem, in that order.

Required Constraint: Demonstration, training, maintenance, and diagnostic modes must physically inhibit lethal actuation. Not via software flag. Via hardware interlock, firing-pin block, or physical absence of live ordnance. A software demo mode that disables a software-controlled weapon is not a safety mode. It is a prayer.

Scenario D: Human Override Has No Authority

In the boardroom scene, the humans present have no effective mechanism to stop ED-209. Their "control" is nominal — they own the system but cannot interrupt it. This is not a technical edge case. This is a routine outcome of autonomous systems designed to be operationally fast and human-override-slow.

Required Constraint: An independent safety controller must have immediate, out-of-band authority to inhibit weapons and cut actuation power. Not "ask the AI to stop." Not "send an override command through the same network stack that may be compromised." Cut the power physically. The safety interlock must be independent of the thing it is overriding.

Scenario E: Security Compromise of Perception or Command

An attacker could: spoof weapon presence after a drop, replay a prior "engage" command, jam operator override channels, alter policy thresholds to lower the confidence threshold for lethal action, or inject false telemetry indicating continued armed threat. Each of these maps to a hazard identified in Step 2. STPA-Sec makes this explicit by requiring you to ask, for every control path: what happens if this path is adversarially manipulated?

Required Constraint: The system must treat loss of sensor integrity, command integrity, or operator override integrity as a transition to safe state — not as permission to continue the last known action. Fail-safe means fail-safe. It does not mean fail-to-last-instruction.
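A minimal sketch of the fail-safe-versus-fail-to-last-instruction distinction, with illustrative mode names. The point is that integrity loss anywhere in the loop is itself the trigger for the safe state.

    from enum import Enum, auto

    class Mode(Enum):
        SAFE = auto()        # weapons inhibited
        TRACKING = auto()
        ENGAGED = auto()

    def next_mode(current: Mode, sensor_ok: bool, command_ok: bool,
                  override_ok: bool) -> Mode:
        # Loss of sensor, command, or override integrity forces SAFE,
        # regardless of what the system was doing a moment ago.
        if not (sensor_ok and command_ok and override_ok):
            return Mode.SAFE
        return current

    # The dangerous embedded default is: on comms loss, return `current`
    # unconditionally, i.e. hold ENGAGED indefinitely.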
10:52
PARTICIPANT · "MIRA"

On Scenario E — the fail-to-last-instruction thing. That's not just a robot-with-a-gun problem. That's an architectural default in half the embedded systems I've ever worked on. If you lose comms, you hold last state. Holding last state when last state is "engage active threat" is...

10:52
Dr. Marsh

Finishing your sentence for you: it is a situation where communications loss causes the system to behave as though the engagement condition persists indefinitely. Yes. Hold that for the constraints module. You're going to enjoy it.

10:53
[BREAK — 15 MIN]

[Recording resumes 11:11. Ambient sounds suggest someone found the good coffee. Legal representative has acquired a second cup.]

📋   STPA MODULE 6 — Step 6: Safety & Security Constraints

From Analysis to Requirements

Causal scenarios produce constraints. Constraints are the engineering-specific, testable, architectural requirements that prevent unsafe control actions from occurring. They are the STPA deliverable most directly useful to a design team. They are also the most uncomfortable, because a genuine STPA constraint often requires you to remove capability or add complexity to a system that was designed to be fast.

ED-209 Derived Security and Safety Constraints

Constraint | Meaning
SC-1 | Lethal actuation must require multiple independent confirmations, not a single classifier output
SC-2 | Sensor disagreement, lost visibility, or low confidence must force de-escalation — not hold-last-state
SC-3 | Compliance detection must immediately interrupt countdown and reset engagement state
SC-4 | Demo and training modes must physically disable live fire — hardware, not software
SC-5 | Human override must be independent of the autonomy stack's main controller
SC-6 | Policy files, rules of engagement, and model versions must be signed and measured at boot
SC-7 | Operator commands must be authenticated, authorized, logged, and replay-resistant
SC-8 | Actuator commands must pass through a safety interlock physically separate from the autonomy stack
SC-9 | The system must not escalate under sensor uncertainty — uncertainty is a de-escalation trigger
SC-10 | Every lethal decision must produce auditable causal telemetry: sensor state, confidence, policy rule, operator state, override state, and interlock state — captured before actuation

The Architecture These Constraints Imply

SC-5 and SC-8 together imply a two-stack architecture: an autonomy stack that perceives, classifies, and recommends, and a physically independent safety stack that validates, authorizes, and can veto. This is not optional redundancy. It is the structural consequence of accepting that the autonomy stack cannot be trusted to police itself.

SC-10 implies that auditability is not a post-hoc investigation tool. It is a pre-condition for legitimate lethal action. If the system cannot produce a reconstructable decision package before it fires, it does not have sufficient confidence that it should fire.

The Harsh Version: An autonomous lethal system should not be trusted to decide when it is allowed to kill. It can classify, warn, track, and recommend. Lethal authorization should pass through independent safety constraints that the autonomy stack cannot override, compromise, or route around. If your system cannot satisfy this in the design, it should not have live ordnance.
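A minimal sketch of the software half of what SC-5, SC-8, and SC-10 imply: an independent validator with veto authority that writes its decision package before anything actuates. All names are illustrative, and no Python sketch can supply the physically independent hardware interlock the constraints actually require.

    from dataclasses import dataclass, asdict
    import json, time

    @dataclass
    class Recommendation:            # produced by the autonomy stack
        target_id: str
        threat_confidence: float
        policy_rule: str

    @dataclass
    class DecisionPackage:           # SC-10: captured *before* any actuation
        timestamp: float
        recommendation: dict
        operator_override: bool
        interlock_engaged: bool
        authorized: bool

    class SafetyStack:
        """Independent validator with veto authority over the autonomy stack."""
        def __init__(self, confidence_threshold: float, audit_log):
            self.threshold = confidence_threshold
            self.audit_log = audit_log           # any append-only, file-like sink

        def authorize(self, rec: Recommendation, operator_override: bool,
                      interlock_engaged: bool) -> bool:
            ok = (not operator_override
                  and not interlock_engaged
                  and rec.threat_confidence >= self.threshold)
            pkg = DecisionPackage(time.time(), asdict(rec),
                                  operator_override, interlock_engaged, ok)
            self.audit_log.write(json.dumps(asdict(pkg)) + "\n")  # before acting
            return ok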
📋   STPA MODULE 7 — The Broken Loop vs. The Safe Loop

ED-209's Actual Control Loop

Detect weapon -> Issue warning -> Start countdown -> [fail to update compliance state] -> Fire

The loop contains no re-evaluation. Threat state, once set, persists through the countdown to outcome. The world is sampled once and then ignored.

A Safer Control Loop

Detect possible weapon -> Classify with confidence score -> Issue clear warning -> Monitor compliance continuously
  If weapon dropped: de-escalate
  If uncertainty: hold fire
  If override: inhibit
  If ALL conditions confirmed: escalate

The Central Design Principle

The system must be interrupt-driven by safety evidence, not countdown-driven by initial threat classification.

Every moment of the engagement loop is an opportunity to find a reason not to fire. The default posture is restraint. Lethal action requires continuous positive justification, not the mere absence of a reason to stop.
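A minimal sketch of that principle as a loop, in Python. Perception and actuation are stubbed out as callables, and every observation field is an assumption of ours; the shape to notice is that each pass looks for a reason not to fire before it looks for a reason to escalate.

    def engagement_loop(observe, warn, de_escalate, hold_fire, inhibit, escalate,
                        confidence_threshold=0.95):
        warned = False
        while True:                              # a real loop would pace itself
            obs = observe()                      # fresh world state every pass
            if obs.operator_override:
                return inhibit()                 # override beats everything
            if obs.weapon_dropped or not obs.weapon_visible:
                return de_escalate()             # compliance ends the engagement
            if obs.confidence < confidence_threshold:
                hold_fire()                      # uncertainty is a reason to wait
                continue
            if not warned:
                warn()                           # clear warning before anything else
                warned = True
                continue
            if obs.all_escalation_conditions_met:
                return escalate()                # continuous positive justification,
                                                 # never escalation by default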
11:47
PARTICIPANT · "REED"

I want to go back to something. The thing that really gets me is Scenario C. The demo mode. Whoever brought a live, armed, autonomous weapons platform into a carpeted conference room for a board presentation — that person had a process. They had sign-offs. They probably had a PowerPoint. How does STPA help us catch that upstream?

11:47
Dr. Marsh

This is the best question of the session. The answer is: STPA by itself does not catch organizational failures. It catches control structure failures. But. If someone had done an STPA analysis of the demonstration as a system — with the boardroom as the environment, the executives as people inside the hazard zone, and the demo mode as a control action — H-7 would have appeared on the hazard list in Step 2. And someone would have had to sign off on a document that said "H-7: System operates with live weapons in a demo or constrained environment." And then someone would have had to write a constraint that either mitigated or accepted that hazard. And if they accepted it, their name is on the paper. STPA does not prevent bad decisions. It makes bad decisions traceable and attributable.

11:48
PARTICIPANT · "DECKER"

So it's partly a liability tool.

11:48
Dr. Marsh

It's a responsibility tool. Which, yes, has liability implications. Which is why Legal is here.

11:48
Legal Representative

I'm not here. [drinks coffee]

11:48
[LAUGHTER]

[Twelve seconds. Genuine laughter. Possibly the most hopeful moment in the transcript.]

11:50
Dr. Marsh

Alright. Let me close with the central STPA lesson, because I want this to be the thing you remember when you go back to your desks.

📋   STPA MODULE 8 — The Central Lesson & What We're Actually Asking You to Do

The Wrong Question

A conventional safety analysis — and most cybersecurity threat modeling — asks:

"Can an attacker compromise this system? What components can fail?"

The STPA Question

"How can this system enter an unsafe control state even when every component is functioning exactly as designed?"

This is the boardroom failure. ED-209 does not need to be hacked to kill Mr. Kinney. Every sensor read correctly. Every actuator fired on command. Every line of code executed as written. The system was working. The design was wrong.

The Wrong Mental Model

Initial weapon detection + expired countdown = justified lethal force

The Required Mental Model

Lethal force requires: current, verified, interruptible, independently confirmed, policy-authorized, human-overridable evidence of imminent threat.

Every adjective in that sentence is load-bearing. Remove "current" and you get ED-209. Remove "interruptible" and you get a system that cannot be stopped. Remove "independently confirmed" and you get a system one sensor spoof away from an atrocity. Remove "human-overridable" and you have outsourced a lethal decision to a machine, irrevocably, at the moment it matters most.

What ATHENA Is Asking You to Do

When you return to your work, we are asking you to apply STPA thinking to your own systems — not as a compliance exercise, not as a post-incident reconstruction, but as a design-time interrogation of your control structures. Ask, for every control action your system can take:

Question | Ask This About Every Control Action
UCA-Q1 | Under what conditions could this action be taken when it should not be?
UCA-Q2 | Under what conditions might this action fail to occur when it should?
UCA-Q3 | Could this action occur at the wrong time or in the wrong sequence?
UCA-Q4 | Could this action be stopped too early, or continued longer than warranted?

Then trace the causal scenarios. Then write the constraints. Then validate that the constraints are enforced — in code, in hardware, in process, in organizational sign-off. And if you find that a constraint cannot be met, that is not an inconvenient finding to suppress. That is the most important finding on the page.
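A minimal sketch of that take-home exercise made literal, in Python. This is not an ATHENA tool; the control-action names are illustrative. It simply forces one written answer per control action per UCA question, which is most of the battle.

    UCA_QUESTIONS = {
        "UCA-Q1": "Under what conditions could this action be taken when it should not be?",
        "UCA-Q2": "Under what conditions might this action fail to occur when it should?",
        "UCA-Q3": "Could this action occur at the wrong time or in the wrong sequence?",
        "UCA-Q4": "Could this action be stopped too early, or continued longer than warranted?",
    }

    def uca_worksheet(control_actions):
        """Return a blank worksheet: one row per (control action, UCA question)."""
        return [
            {"control_action": action, "question_id": qid, "question": text, "answer": ""}
            for action in control_actions
            for qid, text in UCA_QUESTIONS.items()
        ]

    if __name__ == "__main__":
        for row in uca_worksheet(["issue warning", "start countdown", "cease fire"]):
            print(f"{row['control_action']:>16} | {row['question_id']}: {row['question']}")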

The test of a good STPA analysis is not whether it reassures you. It is whether it surfaces the things that should make you uncomfortable — while there is still time to do something about them.
12:04
PARTICIPANT · "MIRA"

Is the next module going to be about a real system? Or still fictional?

12:04
Dr. Marsh

Module 2 uses the Therac-25 radiation overdose incidents from the 1980s. It is also fictional in the sense that it is historical, and everything in it happened to other people, and we should learn from it with gratitude that it was not us, and with humility about how quickly "not us" can become "us." See you Thursday.

12:04
[END OF RECORDING]

[Recording concludes. Background noise suggests chairs being pushed back. Someone asks where the bathroom is. Legal representative is already gone.]


What to Make of This

Edge Cases does not often editorialize. We collect, we report, we occasionally laugh until something hurts. But we want to say something clearly about what you have just read.

The ATHENA materials are, by the standards of internal corporate training curricula, unusually good. Dr. Marsh is doing real pedagogy. The STPA framework is genuinely rigorous. The insight at the center of Module 8 — that a system can satisfy every formal requirement and still be lethal by design — is the kind of thing that takes most organizations a very long time to understand, and some organizations never understand at all, because the occasion for understanding it tends to be expensive.

We mention this not to be complimentary to Andruil. We mention it because the gap between "having the right training materials" and "having systems that embody what those materials teach" is where most of the interesting disasters live. STPA does not build safe systems. Engineers build safe systems. STPA is a flashlight they can use if they choose to look in the dark corners. The question of whether Andruil's engineers, under schedule pressure and competitive pressure and the specific organizational pressure that comes from being a defense contractor in 2028, will actually use the flashlight — that question remains open.

We also note, without further comment, that the training program was launched approximately six months after the spring 2027 Kinetic Accountability Event. Not before. After. This is consistent with how the industry learns, historically. It is not how STPA recommends learning. STPA, as Dr. Marsh correctly noted, is a design-time interrogation of control structures.

Mr. Kinney, in the 1987 film, also happened after.

Edge Cases redacted five portions of the training transcript at the specific request of our source, who indicated that the unredacted passages would make it possible to identify the specific systems under discussion. We have honored that request. We assume Legal has already read this far and would like us to know that the redacted portions relate to purely fictional scenarios. Noted.

"The test of a good STPA analysis is not whether it reassures you. It is whether it surfaces the things that should make you uncomfortable — while there is still time."
— Dr. Yolanda Marsh, ATHENA Module 1, Andruil Technologies, September 2027

We will be watching the Therac-25 module with interest. We imagine Legal will be there again. We imagine the coffee will still be good.



Edge Cases is an independent technical publication covering the gap between what systems were designed to do and what they actually do. We accept no advertising from companies whose products have caused kinetic accountability events. We accept no advertising.


Issue 032. All quoted material is either from obtained documents or publicly available sources. Andruil Technologies declined to comment. RoboCop (1987) is distributed by Orion Pictures. Paul Verhoeven correctly predicted everything. Nancy Leveson's STPA Primer is available at MIT.edu and remains more useful than anything in this article.


If you work at a company whose autonomous systems could benefit from STPA analysis and have not yet applied it: please do so before your organization finds itself coining a new euphemism for a bad day.


STPA Handbook (Leveson & Thomas), MIT, 2018. Required reading. Unredacted.

That's all for Issue 032. edge-cases.pub  ·  not affiliated with anyone who should be affiliated with this