The Engine
The Last Prompt Engine is the proprietary technology behind The Mandate. It is a decision-intelligence evaluation system that can be wrapped in any thematic skin — survival, corporate, diplomatic, scientific. The same evaluation logic runs in every world.
The engine evaluates the quality of human reasoning under uncertainty. It is content-agnostic, skin-agnostic, and domain-agnostic. The only constant is the rubric.
A prompt, in its oldest sense, is a nudge — a cue that moves thinking in a new direction without dictating where it lands. In The Mandate, your advisors prompt you. Your questions prompt them. The crisis itself prompts them both.
But the last prompt — the only input that actually moves the simulation — is yours. The plan you write, in your own words, which a neutral AI then receives and evaluates. We named the engine for that moment: the last word before consequences unfold.
This has no connection to AI prompt engineering, LLM tooling, or anything related to writing prompts for AI systems. The word means what it meant before large language models existed.
The scarce skill is no longer information recall.
AI-mediated systems are increasing the complexity of human decision-making at every level. The bottleneck is not access to information — it is the ability to reason structurally under uncertainty.
The Last Prompt Engine is built on a single thesis: better reasoning produces better outcomes. The engine operationalises that thesis by making the quality of your written plan the direct cause of what happens next in the simulation.
The AI does not drive, decide, or progress the simulation. It only evaluates how well the player thought through the problem.
Five steps. Every cycle.
The same loop runs in every skin. The context changes. The engine does not.
1. A dynamic event surfaces — shaped by your previous decisions and the current state of your world. No two crises are identical.
2. Query your team — but they're human. Each advisor sees the world through their own bias, fear, and expertise. Their advice is incomplete by design — not as a trick, but because that is what genuine domain knowledge looks like.
3. No menus. No options. You write a free-form strategy in plain English: your goal, your actions, your contingencies, your communication. Your reasoning is the move.
4. A neutral AI evaluator scores your reasoning quality — not your choices — across six criteria. It runs at temperature 0. It cannot be gamed. It has no pity. It rewards structured thinking.
5. The simulation applies outcomes based on how well you reasoned — not what you chose. Poor thinking compounds. Strong thinking builds resilience. The world you face next is the world your reasoning built.
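The five-step cycle above can be sketched as a single function. Everything here is illustrative: the names, signatures, and data shapes are assumptions for the sake of the sketch, not the proprietary implementation.

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    stats: dict                      # engine keys, e.g. {"Stat_01": 7} - labels come from the skin
    flags: set = field(default_factory=set)  # history of consequential decisions

def run_cycle(state, surface_event, gather_advice, read_player_plan,
              evaluate_plan, apply_outcomes):
    """One engine cycle (hypothetical sketch of the five steps)."""
    event = surface_event(state)                 # 1. dynamic event from prior decisions
    advice = gather_advice(event)                # 2. partial, biased advisor perspectives
    plan = read_player_plan(event, advice)       # 3. free-form written strategy
    score = evaluate_plan(plan)                  # 4. neutral rubric evaluation (0-12)
    return apply_outcomes(state, event, score)   # 5. consequences driven by reasoning quality
```

The point of the shape is that the evaluator sits between the plan and the outcome: the simulation only ever advances through step 4's score, never through the AI choosing on the player's behalf.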
Chronosymbiosis
Most simulations resolve one decision at a time. The Last Prompt Engine does not. The engine is built around a single design principle: your decisions don't vanish after one moment — they resonate forward across time.
Today's trade-offs quietly strengthen or weaken tomorrow's options. Good reasoning compounds into resilience. Flawed reasoning compounds into fragility. The flags, stat deltas, and conditional events aren't mechanical features — they are the engine implementing Chronosymbiosis at a structural level.
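One way to picture Chronosymbiosis as data: flags set by earlier decisions gate which events may surface later. The event names and schema below are hypothetical, chosen only to show the gating logic.

```python
# Illustrative conditional-event table: "requires" flags must all be present
# in the player's history; any "excludes" flag suppresses the event.
CONDITIONAL_EVENTS = [
    {"id": "supply_riot",  "requires": {"rationing_imposed"}, "excludes": set()},
    {"id": "ally_returns", "requires": {"envoy_sent"},        "excludes": {"envoy_betrayed"}},
]

def eligible_events(flags):
    """Return events whose flag requirements the current history satisfies."""
    return [e["id"] for e in CONDITIONAL_EVENTS
            if e["requires"] <= flags and not (e["excludes"] & flags)]
```

A decision made weeks ago thus changes the menu of possible futures, which is the structural meaning of "your decisions resonate forward across time".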
The compounding-consequence design has a structural parallel in theoretical physics. Observer Patch Holography proposes that reality emerges from overlapping limited observers who must stay consistent where their patches meet.
In The Mandate: advisors are limited-patch observers. The AI evaluator enforces consistency. Outcomes emerge from the quality of the player's synthesis across partial perspectives. This parallel was confirmed by the physicist who developed the framework.
Read the theoretical grounding

Six criteria. Zero mercy.
The AI evaluator scores your plan against six criteria, each rated 0–2. The total score (0–12) determines your quality band — and the quality band determines what happens next in the simulation.
A Poor score compounds. A Strong score builds resilience. The simulation does not care about your intentions — only the quality of your reasoning.
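As a worked example of the arithmetic: six criteria scored 0–2 each give a 0–12 total, which maps to a quality band. The thresholds and the middle band's name below are illustrative assumptions; the source names only Poor and Strong.

```python
def quality_band(scores):
    """Map six rubric scores (each 0-2) to a quality band.

    Band cut-offs are hypothetical; only the 0-2 per-criterion scale
    and the 0-12 total are taken from the engine's description.
    """
    assert len(scores) == 6 and all(s in (0, 1, 2) for s in scores)
    total = sum(scores)          # 0-12
    if total <= 4:
        return "Poor"            # poor reasoning compounds
    if total <= 8:
        return "Adequate"        # illustrative middle band
    return "Strong"              # strong reasoning builds resilience
```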
The evaluator cannot be gamed.
Substantial guardrails prevent players from gaming the system, asking for full marks, or exploiting the AI's tendency to be agreeable.
The evaluator runs at zero temperature. No creative drift. The same plan gets the same score every time.
The AI must not assume positive outcomes unless the player explicitly describes the mechanism. Vague plans are penalised.
Plans under 20 words, or lacking contingencies, are immediately penalised. The evaluator is not a cheerleader.
Every rubric score must include a reasoning string. The evaluator is accountable for every point it awards or withholds.
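A minimal sketch of two of those guardrails: the length/contingency penalty and the mandatory reasoning string. Function names, penalty wording, and the contingency keyword list are assumptions, not the engine's actual rules.

```python
CONTINGENCY_WORDS = {"if", "unless", "fallback", "contingency"}  # assumed heuristic

def guardrail_check(plan: str) -> list:
    """Pre-evaluation checks; returns a list of penalty reasons."""
    words = [w.strip(".,;:") for w in plan.lower().split()]
    penalties = []
    if len(words) < 20:
        penalties.append("plan under 20 words")
    if not (CONTINGENCY_WORDS & set(words)):
        penalties.append("no contingency stated")
    return penalties

def validate_rubric(rubric: dict) -> None:
    """Every awarded score must carry a non-empty reasoning string."""
    for criterion, entry in rubric.items():
        if not entry.get("reasoning", "").strip():
            raise ValueError(f"{criterion}: score awarded without reasoning")
```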
Advisors never reference numeric outcomes. They think in human consequences: 'Morale will shatter' — not '+2 Cohesion'.
Specialists only see the world through their role. A Security advisor cannot comment on social cohesion. Advice is humanly incomplete by design.
Variable count is not cosmetic.
The number of active variables in a skin directly determines the cognitive complexity of the simulation. The engine supports any number.
Ethical Compression
Binary trade-offs, moral tension. Fewer variables amplify the emotional weight of each decision.
Systems Leadership
Interdependency and prioritisation. Decisions ripple across multiple systems simultaneously.
Executive Strategy
High-complexity environments requiring abstraction, delegation, and long-horizon thinking.
Engine vs. Skin
The engine is the unseen hand. The skin is the sensory experience. They are completely decoupled.
The engine: content-agnostic. It never uses the words "Food", "Health", or "Colony" — it pulls every label from the active skin config.
The skin: the sensory experience and context. Defined entirely in JSON — swappable without touching engine code.
Data-Driven Variable Mapping
| ENGINE KEY | COLONY SKIN | CORPORATE SKIN |
|---|---|---|
| Stat_01 | Sustenance | Cash Flow |
| Stat_02 | Health | Employee Well-Being |
| Stat_03 | Security | Regulatory Compliance |
| Stat_04 | Cohesion | Team Engagement |
| Stat_05 | Infrastructure | Operational Infrastructure |
| Time_Unit | Week | Quarter |
| Entity_Name | The Colony | The Enterprise |
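The mapping table translates directly into a skin config. The JSON shape below (key names, nesting) is an assumed schema consistent with the table, not the shipped format.

```python
import json

# Hypothetical skin config: engine keys on the left of the table become
# lookup keys; the skin supplies every player-facing label.
COLONY_SKIN = json.loads("""
{
  "Entity_Name": "The Colony",
  "Time_Unit": "Week",
  "stats": {
    "Stat_01": "Sustenance",
    "Stat_02": "Health",
    "Stat_03": "Security",
    "Stat_04": "Cohesion",
    "Stat_05": "Infrastructure"
  }
}
""")

def label(engine_key, skin):
    """Engine code never hard-codes 'Food' or 'Colony'; it looks labels up."""
    return skin["stats"].get(engine_key) or skin[engine_key]
```

Swapping in a corporate skin would mean loading a different JSON file in which `Stat_01` maps to "Cash Flow" and `Time_Unit` to "Quarter", with no engine changes.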
Have a domain? Build a skin.
The engine is modular. If you work in medicine, diplomacy, urban planning, education, or any field where structured reasoning under uncertainty matters — the Last Prompt Engine can be adapted to your context.
We are looking for collaborators who are frustrated by polarised thinking and inspired by the idea of lateral reasoning as a trainable skill.
Formal architecture paper
“Last Prompt: Operationalising Partial Perspectives in Decision Intelligence Training” — Miranda Kelly and Jonathan Kelly. Working paper, May 2026.