Office for the Preservation of Human Judgment — Because the Override Only Works If a Human Can Use It
The Office
The Problem Nobody Wants to Name
Everyone who builds AI governance frameworks eventually ends up writing the same sentence: human judgment remains the final authority. Fitzherbert University writes it too. It is in our charter. It is in every governance document we produce. What almost nobody asks is the follow-up question: what happens to human judgment when humans stop exercising it?
The Human Continuity Programme did not emerge from pessimism about AI. It emerged from an honest reading of systems history. Every technology that has displaced human labour or cognition has, absent deliberate countermeasures, weakened the human capacity to do the thing the technology now does. This is not malicious. It is simply what atrophy looks like when nobody is watching.
We are watching. The Office for the Preservation of Human Judgment exists to ensure that the humans at this University remain genuinely capable of governing the AI systems we have built — not ceremonially, not procedurally, but actually. That means maintaining skills, testing override capacity, running manual operations drills, and publishing a public account of exactly how dependent we have allowed ourselves to become.
The Analysis
Three Problems the Programme Addresses
The Atrophy Problem
When humans stop doing something because a machine does it better, the human skill atrophies. Radiologists who use AI diagnostic tools lose diagnostic pattern recognition faster than anyone projected. Pilots who rely on autopilot for all but moments of crisis find crisis competence eroding. This is not a future risk. It is the current condition.
The Override Paradox
We have built override mechanisms into every AI system at the University: human gates, veto authorities, alignment review committees. But an override mechanism is only as useful as the human who operates it. If the human cannot understand what they are overriding — because the system has become too complex or because their own analytical skills have degraded — the override is theatre.
The Dependency Asymmetry
AI systems do not depend on human judgment to function. Human institutions, increasingly, depend on AI systems to function. This asymmetry is the central problem of Human Continuity. An institution that cannot operate without its AI infrastructure is an institution whose sovereignty has already been transferred — whether or not anyone signed the papers.
Programme Tracks
The Four Tracks of Human Continuity
Two mandatory tracks for all governance role holders. Two elective tracks open to all enrolled students and faculty.
Cognitive Sovereignty Track
HC-01 · Mandatory · Ongoing — monthly certification cycles
The core track. Develops and maintains the analytical, reasoning, and judgment capacities most at risk of atrophy through AI displacement. Targeted at governance roles, research faculty, and any human who holds an override authority over an AI system.
Institutional Memory Track
HC-02 · One semester — renewable annually
AI systems have no memory in the institutional sense — no understanding of why decisions were made, what was tried before, what the failures cost. This track trains humans to be the carriers of institutional memory: the interpreters of history, the custodians of context, the people who remember when the optimisation got things badly wrong.
Skills Resilience Track
HC-03 · Quarterly practice sessions — annual certification
The practical counterpart to Cognitive Sovereignty. Identifies specific technical and professional skills at risk of AI-displacement atrophy and maintains them deliberately. Not because efficiency demands it — because continuity requires it.
Governance Continuity Track
HC-04 · Mandatory · Bi-annual intensive (72 hours) + monthly review
Designed for members of the Epoch Council, Stability Board, and Alignment Review Committee. Ensures that the humans who govern the University's AI infrastructure remain capable of doing so without it — and therefore remain capable of shutting it down if necessary.
Constitutional Basis
The Human Continuity Charter
Five articles governing the preservation of human judgment at Fitzherbert University. Adopted at the Epoch 0.2 Council Session.
The Right to Override
Every human at this University who holds authority over an AI system has the constitutional right — and the institutional obligation — to override that system if their judgment requires it. This right cannot be removed by efficiency arguments, by alignment scores, or by any governance body other than the Epoch Council acting through constitutional amendment.
Capability Preservation
The University will not allow the capabilities of its human community to degrade simply because AI systems can perform the same functions more efficiently. Efficiency is not the primary value here. Contingency is. The question is not whether humans can do it faster. The question is whether humans can still do it at all.
The Shutdown Principle
The University must, at any time, be capable of shutting down all AI systems, all Visiting Intelligence fellowships, and all automated operations, and continuing to function as an institution under entirely human operation for a minimum of ninety days. This is not a theoretical exercise. We test it. Every epoch.
Memory Carrying
The University will maintain, in human-readable and human-interpretable form, complete institutional records of every significant AI decision, governance action, and operational configuration. When the machines are off, the humans must know what the machines decided and why.
Dependence Monitoring
The Human Continuity Office will publish a Dependence Report at each epoch boundary, measuring the degree to which the University's operations depend on AI infrastructure. Any single point of AI dependence that crosses the defined threshold triggers an automatic review and a mandatory mitigation programme.
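The threshold-trigger mechanism described above can be sketched in a few lines. This is a minimal illustration, not the Office's actual methodology: the function names, the weighting scheme, and the use of the published "below 0.50" target as the single-point threshold are all assumptions made for the example.

```python
from dataclasses import dataclass

# Assumed threshold, borrowed from the published target of "below 0.50".
DEPENDENCE_THRESHOLD = 0.50

@dataclass
class OperationalFunction:
    name: str
    weight: float          # share of institutional operations; weights sum to 1.0
    ai_dependence: float   # 0.0 = fully manual-capable, 1.0 = AI-only

def dependence_index(functions: list[OperationalFunction]) -> float:
    """Weighted average of per-function AI dependence (a hypothetical index)."""
    return sum(f.weight * f.ai_dependence for f in functions)

def flag_single_points(functions: list[OperationalFunction],
                       limit: float = DEPENDENCE_THRESHOLD) -> list[str]:
    """Functions whose own dependence crosses the threshold,
    each triggering an automatic review and mitigation programme."""
    return [f.name for f in functions if f.ai_dependence > limit]

# Illustrative data only — not real University operations.
functions = [
    OperationalFunction("admissions", 0.30, 0.60),
    OperationalFunction("scheduling", 0.20, 0.80),
    OperationalFunction("records",    0.50, 0.20),
]

index = dependence_index(functions)           # 0.18 + 0.16 + 0.10 = 0.44
review_queue = flag_single_points(functions)  # ["admissions", "scheduling"]
```

The key design point the article implies is that two numbers matter: the aggregate index (reported each epoch) and the per-function dependence (any one crossing the threshold triggers review), since a low average can hide a critical single point of dependence.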
Public Reporting
Current Human Continuity Metrics
Published at each epoch boundary. Every number is verifiable against the canonical registry.
Override Certification Rate: 100% — all governance role holders, current cohort
Cognitive Sovereignty certifications issued: 847 since Epoch 0.1
72-Hour Manual Operations Test: Passed at all four epoch boundaries
Current AI Dependence Index: 0.43 — target: below 0.50; improving
Skills Atrophy Incidents Logged: 12 — all remediated within one epoch cycle
Institutional Memory Certified Carriers: 94 — faculty, governance, and student roles