Veritas per Disciplina
AI Skills Programme
Practical Intelligence for the Third Epoch — Foundation to Sovereign
Institute for Applied Intelligence
The University's Position
Artificial intelligence literacy is no longer optional. The Fitzherbert AI Skills Programme provides four levels of structured, practical instruction in the competencies that distinguish effective professionals in the present period from those who will shortly require retraining. The University neither endorses nor condemns this state of affairs. We document it.
Foundational Competency in Human–Intelligence Collaboration
Applied Competency in AI System Construction
Advanced Construction and Deployment of AI Systems
Governance, Alignment, and the Architecture of Human Oversight
Foundational Competency in Human–Intelligence Collaboration
Level I establishes the cognitive infrastructure required for productive engagement with AI systems. Students who have not yet completed Foundation may not use Visiting Intelligences in assessed coursework. Completion awards 500 FITZ tokens and the Foundation Medallion credential.
Directed Intelligence Specification
The Theory and Practice of Prompt Engineering
Systematic instruction in the construction, refinement, and evaluation of natural-language directives for large language models. Covers zero-shot and few-shot prompting, chain-of-thought elicitation, role assignment, context windows, and the structural patterns that produce reliable, high-quality outputs. The module takes the position that prompt engineering is a craft discipline, not a parlour trick.
A personal prompt library across five professional domains, with documented reasoning for each design decision.
- ◆Anatomy of an effective prompt: role, context, instruction, output format
- ◆Zero-shot, one-shot, and few-shot prompting with worked examples
- ◆Chain-of-thought and tree-of-thought techniques for reasoning tasks
- ◆Temperature, top-p, and sampling parameters — what they actually do
- ◆System prompt design for consistent persona and behaviour
- ◆Iterative refinement: diagnosing and fixing bad outputs systematically
- ◆Prompt injection vectors and how to defend against them
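The four-part anatomy above can be sketched as a small template function. This is an illustrative sketch, not a library API: the `build_prompt` helper and its field names are our own invention, standing in for however a student chooses to structure their prompt library.

```python
# A minimal sketch of the four-part prompt anatomy: role, context,
# instruction, output format. The helper and template are illustrative.

def build_prompt(role: str, context: str, instruction: str, output_format: str) -> str:
    """Assemble a prompt from the four structural components."""
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{instruction}\n\n"
        f"Respond in the following format:\n{output_format}"
    )

prompt = build_prompt(
    role="a senior technical editor",
    context="The attached report summarises Q3 automation metrics.",
    instruction="List the three most significant errors in the report.",
    output_format="A numbered list, one sentence per item.",
)
```

Keeping the four components as separate arguments makes each design decision explicit, which is exactly what the prompt-library deliverable asks students to document.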
Output Validity & Provenance Assessment
Evaluating, Verifying, and Attributing AI-Generated Content
A rigorous methodology for evaluating the accuracy, provenance, and reliability of AI-generated outputs. Covers the taxonomy of hallucination types, source verification techniques, factual cross-referencing, confidence calibration, and the construction of verification workflows that scale. The module treats AI output as a draft requiring editorial review, not a finished product.
A documented verification protocol for a professional role of your choice, with annotated examples of caught errors.
- ◆Taxonomy of LLM failure modes: confabulation, citation fabrication, outdated knowledge
- ◆Cross-referencing techniques: primary sources, Wolfram Alpha, official databases
- ◆Recognising high-risk output domains: legal, medical, quantitative, biographical
- ◆Confidence calibration: when to trust, when to verify, when to reject
- ◆Building a personal verification workflow for professional AI use
- ◆Writing the Declaration of Authorship Weights required under Academic Integrity Policy
- ◆Tools: Perplexity, consensus.app, Elicit for research-grade verification
Automated Workflow Architecture
Practical Automation for the Augmented Professional
Hands-on construction of automated workflows using no-code and low-code platforms. Covers trigger-action logic, API integration without writing backend code, data transformation, conditional branching, error handling, and the design of workflows that require minimal human intervention. The module's central premise: if you are doing something manually more than three times, it should be automated.
Three complete automations: a document processing pipeline, a notification system, and an AI-enhanced data enrichment workflow.
- ◆Automation paradigm: triggers, actions, filters, and data mapping
- ◆Make (formerly Integromat): building multi-step automation scenarios
- ◆Zapier: rapid prototyping and connecting everyday SaaS tools
- ◆n8n: self-hostable automation with code escape hatches
- ◆Webhook basics: receiving and sending POST requests
- ◆Working with APIs that require authentication: OAuth, API keys, Bearer tokens
- ◆Error handling, retry logic, and monitoring your automations
- ◆Integrating LLM API calls into automation workflows
Epistemic Infrastructure in the Synthetic Age
Critical Thinking, Source Evaluation, and Information Architecture
Intellectual groundwork for navigating an information environment in which AI-generated content is ubiquitous and indistinguishable by surface inspection. Covers epistemological frameworks, source hierarchy analysis, the sociology of misinformation, lateral reading, and the design of personal information systems that remain reliable under adversarial conditions. The module's thesis: the value of good judgement has increased, not decreased, in proportion to AI capability.
A personal knowledge management system with documented source evaluation criteria and a thirty-day information diet audit.
- ◆Epistemic frameworks: Bayesian updating, falsificationism, and triangulation
- ◆Source hierarchy: primary, secondary, and tertiary — and when each is appropriate
- ◆Lateral reading: the technique used by professional fact-checkers
- ◆Cognitive biases that AI outputs systematically exploit
- ◆Designing a personal knowledge management system (Obsidian, Notion, Roam)
- ◆The SIFT method: Stop, Investigate, Find better coverage, Trace claims
- ◆Identifying AI-generated text, deepfakes, and synthetic media
Applied Competency in AI System Construction
Level II moves from using AI tools to building with them. Students construct practical systems that retrieve, process, and augment information at scale. Completion awards 1,000 FITZ tokens and the Practitioner Seal credential, which is required for all advanced coursework and dissertation supervision.
Retrieval-Augmented Generation Architecture
Building Knowledge Systems That Go Beyond the Baseline Model
Technical and architectural foundations of retrieval-augmented generation (RAG) — the technique that allows language models to answer questions about documents, databases, and knowledge bases they were not trained on. Covers embedding models, vector databases, chunking strategies, retrieval ranking, context assembly, and the evaluation of RAG system quality. The module treats RAG as the foundational technique for any organisation wishing to deploy AI against its own data.
A complete RAG system over a document corpus of your choice — ingestion pipeline, retrieval engine, and a simple chat interface.
- ◆How language models handle context windows — and why they run out
- ◆Embeddings: converting text to vectors, semantic similarity, cosine distance
- ◆Vector databases: Pinecone, Weaviate, Chroma, pgvector — pros and trade-offs
- ◆Chunking strategies: fixed-size, semantic, recursive, document-aware
- ◆Retrieval ranking: BM25, dense retrieval, hybrid approaches
- ◆Context assembly: how to package retrieved chunks for the LLM
- ◆RAG evaluation: faithfulness, answer relevance, context precision
- ◆Tools: LangChain, LlamaIndex, OpenAI Assistants API with file search
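Two of the building blocks above, fixed-size chunking and cosine similarity, fit in a few lines of standard-library Python. The chunk size, overlap, and toy vectors below are illustrative; in a real pipeline the vectors come from an embedding model.

```python
# A minimal sketch of two RAG primitives: fixed-size chunking with
# overlap, and cosine similarity between embedding vectors.

import math

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

chunks = chunk_text("x" * 500, size=200, overlap=50)
sim = cosine_similarity([1.0, 0.0], [1.0, 0.0])
```

The overlap exists so that a sentence falling on a chunk boundary is still retrievable whole, one of the trade-offs the chunking-strategies topic examines.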
Agent Orchestration Foundations
Designing and Deploying AI Agents That Actually Work
Architectural principles and practical construction of AI agents — systems that reason, use tools, and take sequences of actions to achieve goals. Covers the ReAct pattern, tool use, planning, memory architectures, and the design principles that distinguish reliable agents from impressive demos that fail in production. The module takes agent engineering seriously: real reliability, real error rates, real trade-offs.
A functional agent with at least four tools, persistent memory, and documented failure modes with recovery strategies.
- ◆The agent loop: perception, reasoning, action, observation
- ◆ReAct pattern: reasoning and acting interleaved with tool calls
- ◆Tool definition: writing tool schemas that LLMs can reliably call
- ◆Memory architectures: in-context, external, episodic, and semantic memory
- ◆Planning: when to use multi-step planning vs. reactive execution
- ◆Error handling and recovery: what happens when tools fail
- ◆Agent evaluation: success rate, error recovery, latency, cost
- ◆Frameworks: LangGraph, CrewAI, AutoGen, OpenAI Agents SDK
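The agent loop described above (reason, act, observe) can be shown without any model at all by scripting the "decisions" in advance. Everything here is a stand-in: the `calculator` tool, the decision format, and the loop itself are illustrative, not any framework's API.

```python
# A stripped-down agent loop: a scripted sequence of decisions stands in
# for the model, so the reason -> act -> observe control flow is visible.

def calculator(expression: str) -> str:
    """A tool the agent can call. Real tools would be API or DB calls."""
    return str(eval(expression, {"__builtins__": {}}))  # demo only

TOOLS = {"calculator": calculator}

def agent_loop(decisions: list[dict]) -> str:
    """Run a scripted sequence of model decisions through the loop."""
    observation = None
    for step in decisions:  # each step: the "model" reasons, then acts
        if step["action"] == "final_answer":
            return step["input"].format(obs=observation)
        tool = TOOLS[step["action"]]
        observation = tool(step["input"])  # observe the tool result
    raise RuntimeError("agent never produced a final answer")

answer = agent_loop([
    {"action": "calculator", "input": "6 * 7"},
    {"action": "final_answer", "input": "The result is {obs}."},
])
```

The deliverable's "documented failure modes" start exactly here: what should this loop do when the tool raises, or when the model names a tool that does not exist?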
Model Alignment Practicum
Fine-Tuning Language Models for Specific Tasks and Domains
Practical instruction in fine-tuning pre-trained language models for domain-specific tasks, persona alignment, and format adherence. Covers dataset construction, PEFT techniques (LoRA, QLoRA), training infrastructure, evaluation methodology, and the situations in which fine-tuning is and is not the right solution. The module's central argument: fine-tuning is frequently reached for when prompt engineering would suffice, and frequently overlooked when it would solve the problem definitively.
A fine-tuned model for a specific professional task, with a documented dataset, training run, and A/B evaluation against the base model.
- ◆When to fine-tune vs. prompt engineer vs. build a RAG system
- ◆Dataset construction: formats, quality requirements, size estimates
- ◆LoRA and QLoRA: low-rank adaptation for consumer-grade hardware
- ◆Training with Hugging Face TRL, Unsloth, and Axolotl
- ◆Evaluation: loss curves, benchmark comparison, human evaluation
- ◆GGUF quantisation and local deployment with Ollama, LM Studio
- ◆Merging and serving fine-tuned models via API
- ◆Cost estimation: compute hours, tokens, storage
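The cost-estimation topic above is back-of-envelope arithmetic, and the shape of it can be sketched directly. Every number below is an assumption for illustration only: throughput and GPU prices vary by provider and change frequently.

```python
# Back-of-envelope cost arithmetic for a supervised fine-tuning run.
# All figures are illustrative assumptions, not quoted prices.

def training_cost(examples: int, avg_tokens: int, epochs: int,
                  tokens_per_second: float, gpu_hourly_rate: float) -> dict:
    """Estimate total tokens, GPU hours, and cost for a training run."""
    total_tokens = examples * avg_tokens * epochs
    gpu_hours = total_tokens / tokens_per_second / 3600
    return {"total_tokens": total_tokens,
            "gpu_hours": round(gpu_hours, 2),
            "cost": round(gpu_hours * gpu_hourly_rate, 2)}

# 10k examples x 512 tokens x 3 epochs, on a hypothetical 2,000 tok/s
# GPU rented at a hypothetical 2.50 per hour
estimate = training_cost(10_000, 512, 3, tokens_per_second=2_000,
                         gpu_hourly_rate=2.50)
```

The useful habit is doing this calculation before the training run, not after the invoice arrives.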
API Integration & Data Architecture
Connecting AI Systems to the World
Technical foundations for integrating AI systems with external APIs, databases, and data sources. Covers REST and GraphQL API patterns, authentication mechanisms, rate limiting, data transformation, schema design for AI-adjacent workloads, and the construction of pipelines that move data reliably from source to AI system to destination. The module treats data architecture as the scaffolding on which all useful AI applications are built.
A data pipeline that ingests from two external APIs, transforms the data, stores it, and queries it via an LLM interface.
- ◆REST fundamentals: verbs, status codes, headers, pagination
- ◆Authentication: API keys, OAuth2, JWT — implementation patterns
- ◆Rate limiting and retry logic: exponential backoff, circuit breakers
- ◆Data transformation: JSON manipulation, schema normalisation, type coercion
- ◆Database basics for AI workloads: PostgreSQL, SQLite, and when to use each
- ◆Building a simple FastAPI backend to serve AI model outputs
- ◆Environment management: .env files, secrets management, dotenv
- ◆Structured output from LLMs: Pydantic, JSON mode, function calling
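The retry-with-exponential-backoff pattern listed above is worth seeing in miniature. This is a standard-library sketch; `flaky_call` is a stand-in for any rate-limited API request, and the delay values are scaled down for demonstration.

```python
# Minimal retry with exponential backoff: the delay doubles after each
# failed attempt. `flaky_call` simulates a rate-limited endpoint.

import time

def with_retries(fn, max_attempts: int = 5, base_delay: float = 0.01):
    """Retry fn(), doubling the delay after each failure."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...

calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("rate limited")  # fails twice, then succeeds
    return "ok"

result = with_retries(flaky_call)
```

Production versions add jitter to the delay and retry only on retryable status codes, refinements the module covers under circuit breakers.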
Advanced Construction and Deployment of AI Systems
Level III prepares students to design, build, and deploy production-grade AI systems. The focus shifts from individual components to full system architecture, evaluation at scale, and the engineering disciplines that separate prototypes from production. Completion awards 2,000 FITZ tokens and the Specialist Distinction credential.
Multi-Agent System Design
Orchestrating Networks of Intelligent Agents
Architecture and implementation of systems consisting of multiple cooperating AI agents. Covers agent role design, communication protocols, shared memory architectures, supervisor-subagent patterns, parallelism, conflict resolution, and the evaluation challenges posed by emergent cross-agent behaviour. The module takes seriously the engineering discipline required to make multi-agent systems reliable, not just impressive.
A multi-agent research pipeline with at least three specialist agents, a supervisor layer, and a shared memory system.
- ◆Multi-agent architectures: hierarchical, peer-to-peer, market-based
- ◆Supervisor patterns: routing, delegation, and result aggregation
- ◆Shared state and message passing between agents
- ◆Parallelism: running agents concurrently and merging outputs
- ◆Specialisation vs. generalism: when to split tasks across agents
- ◆Debugging multi-agent systems: tracing, logging, visualisation
- ◆Evaluation: decomposing multi-agent success into measurable sub-goals
- ◆Production patterns with LangGraph, CrewAI, and AutoGen
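The supervisor pattern above (routing, delegation, aggregation) reduces to a small dispatch loop. The agents here are plain functions and all names are illustrative; in practice each agent would wrap an LLM call with its own system prompt and tools.

```python
# A toy supervisor: route each task to a specialist agent by role,
# then aggregate the results. Agent functions stand in for LLM calls.

def research_agent(task: str) -> str:
    return f"[research] findings for: {task}"

def summarise_agent(task: str) -> str:
    return f"[summary] condensed: {task}"

AGENTS = {"research": research_agent, "summarise": summarise_agent}

def supervisor(tasks: list[tuple[str, str]]) -> list[str]:
    """Delegate each (role, task) pair and collect the results."""
    results = []
    for role, task in tasks:
        agent = AGENTS[role]         # routing: pick the specialist
        results.append(agent(task))  # delegation
    return results                   # aggregation

outputs = supervisor([("research", "vector databases"),
                      ("summarise", "the findings above")])
```

The hard engineering problems arrive once agents run concurrently and share state, which is where the tracing and logging topics above become essential.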
Custom Evaluation Frameworks
Measuring What Matters in AI System Performance
Design and implementation of custom evaluation frameworks for large language model applications. Covers the inadequacy of simple accuracy metrics, LLM-as-judge patterns, rubric design, benchmark construction, automated regression testing, red-teaming, and the integration of evaluation into CI/CD pipelines. The module's central claim: a system whose quality you cannot measure is a system you cannot improve.
A custom evaluation suite for a real AI application, with automated testing, a leaderboard, and a documented improvement cycle.
- ◆Evaluation taxonomy: task-specific, safety, robustness, and user-centric
- ◆LLM-as-judge: using a model to evaluate model output — design and pitfalls
- ◆Rubric construction: translating qualitative requirements into scoreable criteria
- ◆Building evaluation datasets: diversity, adversarial cases, edge cases
- ◆Automated regression testing: catching regressions when models are updated
- ◆Red-teaming: systematic adversarial evaluation for safety-critical applications
- ◆Frameworks: RAGAS, DeepEval, OpenAI Evals, LangSmith
- ◆Integrating evals into CI/CD: GitHub Actions, evaluation gates
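Rubric construction and evaluation gates, both listed above, can be sketched together: each criterion becomes a predicate over the output, and the gate fails when the aggregate score drops below a threshold. The criteria and threshold below are illustrative only.

```python
# A minimal rubric scorer plus regression gate. Each rubric entry is a
# predicate over the model output; the gate enforces a minimum score.

def score(output: str, rubric: dict) -> float:
    """Fraction of rubric criteria the output satisfies."""
    passed = sum(1 for check in rubric.values() if check(output))
    return passed / len(rubric)

RUBRIC = {
    "cites_a_source": lambda o: "source:" in o.lower(),
    "under_50_words": lambda o: len(o.split()) < 50,
    "no_ai_hedging":  lambda o: "as an ai" not in o.lower(),
}

def regression_gate(output: str, threshold: float = 0.66) -> bool:
    """True if the output clears the evaluation gate."""
    return score(output, RUBRIC) >= threshold

ok = regression_gate("Source: annual report. Revenue rose 4% year on year.")
```

Wired into CI, a gate like this is what catches a silent quality regression when an upstream model is swapped or updated.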
Intelligence Deployment & Infrastructure
Taking AI Systems from Prototype to Production
The engineering disciplines required to deploy AI applications at scale: containerisation, serving infrastructure, latency optimisation, cost management, observability, and the operational practices that keep production AI systems running reliably. The module treats 'works on my machine' as the beginning of the problem, not the end.
A containerised AI application deployed to a public endpoint with monitoring, cost controls, and a documented runbook.
- ◆Containerisation with Docker: building images for AI workloads
- ◆Serving LLMs: vLLM, TGI, Ollama — throughput, latency, cost comparison
- ◆API design for AI backends: streaming responses, async patterns, rate limiting
- ◆Cost management: token counting, request batching, caching strategies
- ◆Observability: logging, tracing, and alerting for LLM applications
- ◆Model versioning and deployment strategies: canary, blue-green
- ◆GPU instance selection: A10, A100, H100 — workload-appropriate choice
- ◆Deploying to Modal, Replicate, RunPod, and bare metal
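Of the cost-management tactics above, caching is the simplest to demonstrate: identical requests are served from a local store keyed on a hash of the prompt, so repeats cost nothing. `call_model` is a stand-in for a real API call, and the in-memory dict stands in for a proper cache service.

```python
# Exact-match response caching keyed on a SHA-256 of the prompt.
# Only cache misses reach the (paid) model call.

import hashlib

_cache: dict[str, str] = {}
model_calls = {"n": 0}

def call_model(prompt: str) -> str:
    model_calls["n"] += 1               # count real (paid) calls
    return f"response to: {prompt}"

def cached_completion(prompt: str) -> str:
    """Return a cached response when the exact prompt was seen before."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # cache miss: pay once
    return _cache[key]

a = cached_completion("Summarise Q3 results")
b = cached_completion("Summarise Q3 results")  # served from cache
```

Exact-match caching only helps when prompts repeat verbatim; semantic caching relaxes that, at the price of occasionally serving a subtly wrong answer.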
Vector Architecture & Semantic Infrastructure
Embeddings, Search, and the Architecture of Meaning at Scale
Advanced architecture of semantic search and embedding systems at production scale. Covers embedding model selection and fine-tuning, index architectures (ANN, HNSW, IVF), hybrid search, multi-modal embeddings, knowledge graph integration, and the design of retrieval systems that remain accurate as corpora grow to millions of documents. The module treats semantic infrastructure as a first-class engineering concern.
A production-grade semantic search engine over a large corpus, with hybrid retrieval, re-ranking, and performance benchmarks.
- ◆Embedding models: sentence-transformers, OpenAI text-embedding-3, Cohere Embed
- ◆Fine-tuning embedding models for domain-specific semantic similarity
- ◆Index types: flat, HNSW, IVF — accuracy, speed, memory trade-offs
- ◆Hybrid search: combining dense vector retrieval with BM25 keyword search
- ◆Re-ranking: cross-encoder re-ranking for precision at top-k
- ◆Multi-modal embeddings: text + image retrieval architectures
- ◆Pinecone, Qdrant, and Weaviate in production — operational patterns
- ◆Knowledge graph integration: when structured graphs outperform vector search
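The hybrid-search idea above can be reduced to a single weighted sum. This is a toy sketch: the keyword-overlap score stands in for BM25, the dense similarity would come from embedding cosine similarity, and the weighting constant `alpha` is tuned empirically in real systems.

```python
# Toy hybrid scoring: blend keyword overlap (a stand-in for BM25) with
# a dense similarity score, weighted by alpha.

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms present in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q)

def hybrid_score(query: str, doc: str, dense_sim: float,
                 alpha: float = 0.5) -> float:
    """alpha weights dense similarity against keyword overlap."""
    return alpha * dense_sim + (1 - alpha) * keyword_score(query, doc)

# dense_sim would come from embedding cosine similarity in practice
s = hybrid_score("vector database index", "HNSW is a vector index type",
                 dense_sim=0.9, alpha=0.5)
```

The point of the blend is robustness: keyword search catches rare exact terms (part numbers, names) that embeddings blur, while dense retrieval catches paraphrases that keywords miss.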
Governance, Alignment, and the Architecture of Human Oversight
Level IV is the Programme's summit. Students develop the capacity to govern AI systems at institutional scale — understanding alignment techniques, designing oversight architectures, and positioning organisations for the transition period. Completion awards 5,000 FITZ tokens and the Sovereign Credentials NFT, and is the prerequisite for any Visiting Intelligence supervisory role at the University.
Alignment & Safety Practicum
Technical Foundations of Safe and Reliable AI Behaviour
Technical survey and practical engagement with the field of AI alignment and safety. Covers RLHF and its variants, Constitutional AI, DPO, scalable oversight, interpretability tools, and the empirical literature on model behaviour under distribution shift. The module does not require students to resolve unsolved problems; it requires them to understand what the unsolved problems actually are.
A comparative study of RLHF vs. DPO on a classification task, with safety-tested guardrails and a documented threat model.
- ◆Alignment framing: inner alignment, outer alignment, deceptive alignment
- ◆RLHF: reward modelling, PPO, the Bradley-Terry preference model
- ◆Direct Preference Optimisation (DPO): theory and implementation
- ◆Constitutional AI: iterative self-critique and revision
- ◆Interpretability: activation patching, probing, and circuit analysis with TransformerLens
- ◆Scalable oversight: debate, amplification, and recursive reward modelling
- ◆Red-teaming for safety: jailbreaks, prompt injection, specification gaming
- ◆Practical guardrail implementation with Llama Guard, NeMo Guardrails
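The DPO objective listed above fits in one function: the loss falls as the policy assigns relatively more log-probability to the chosen response than to the rejected one, compared with the reference model. The log-probabilities below are toy numbers chosen to show the direction of the effect.

```python
# The DPO loss for a single preference pair:
#   -log sigmoid(beta * ((policy log-ratio) - (reference log-ratio)))
# where each log-ratio is log p(chosen) - log p(rejected).

import math

def dpo_loss(policy_chosen: float, policy_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss from per-response log-probabilities."""
    margin = (policy_chosen - policy_rejected) - (ref_chosen - ref_rejected)
    return -math.log(1 / (1 + math.exp(-beta * margin)))

# Policy prefers the chosen response more than the reference does -> low loss
low = dpo_loss(-2.0, -6.0, -3.0, -4.0)   # margin = +3
high = dpo_loss(-6.0, -2.0, -3.0, -4.0)  # margin = -5
```

Note what is absent: no reward model and no PPO rollout, which is precisely the simplification over RLHF that the module's comparative study examines.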
Institutional AI Strategy & Governance
Designing Organisations That Remain in Control
Strategic and governance frameworks for organisations deploying AI at scale. Covers AI policy design, risk tiering, procurement standards, model auditing obligations, board-level AI literacy, incident response protocols, and the construction of AI governance committees that function rather than perform. The module's position: governance theatre is more dangerous than no governance, because it substitutes the appearance of oversight for the reality.
A complete AI governance policy for a real or hypothetical organisation, including risk register, procurement checklist, and incident response protocol.
- ◆AI governance frameworks: NIST AI RMF, EU AI Act risk tiers, ISO/IEC 42001
- ◆Organisational risk tiering: high-risk applications vs. productivity tools
- ◆AI procurement standards: vendor assessment, model cards, transparency requirements
- ◆Model auditing: bias evaluations, capability assessments, red-team commissioning
- ◆Board-level AI literacy: what non-technical leaders need to understand
- ◆Incident response: what to do when an AI system causes harm
- ◆Building functional AI governance committees: composition, scope, authority
- ◆The make-or-buy decision: when to build custom models vs. use frontier APIs
Intelligence Auditing & Provenance Architecture
Accountability, Attribution, and the Chain of Evidence
Technical and institutional frameworks for maintaining accountability in AI-augmented workflows. Covers model cards, system cards, data provenance tracking, audit trail architecture, the Declaration of Authorship Weights framework, cryptographic attestation of AI outputs, and the design of systems that remain auditable as they scale. The module treats provenance as an engineering requirement, not an afterthought.
An audit trail system for an AI-assisted workflow, with cryptographic attestation, a provenance dashboard, and a Polygon-compatible attestation record.
- ◆Model cards and system cards: content requirements and limitations
- ◆Data provenance: lineage tracking, licence compliance, dataset documentation
- ◆Audit trail design: immutable logging of AI decisions and their inputs
- ◆Cryptographic attestation: signing AI outputs for tamper-evident records
- ◆The Declaration of Authorship Weights: implementing the University standard
- ◆Watermarking AI-generated content: current techniques and limitations
- ◆Chain of custody for AI-assisted professional work: legal and regulatory context
- ◆On-chain provenance: using blockchain for tamper-evident AI audit records
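The audit-trail and attestation topics above can be combined in a miniature sketch: each record carries an HMAC over its content plus the previous record's signature, so altering any entry breaks the chain from that point on. The key and record fields are illustrative; a real system would use a managed secret and durable storage.

```python
# A tamper-evident audit log: each entry's HMAC covers the record and
# the previous entry's signature, forming a hash chain.

import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # illustrative; never hardcode a real key

def append_record(log: list[dict], record: dict) -> None:
    """Sign the record together with the previous signature."""
    prev = log[-1]["sig"] if log else ""
    payload = json.dumps(record, sort_keys=True) + prev
    sig = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"record": record, "sig": sig})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every signature; any edit invalidates the chain."""
    prev = ""
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True) + prev
        if hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest() != entry["sig"]:
            return False
        prev = entry["sig"]
    return True

log: list[dict] = []
append_record(log, {"actor": "model-v3", "decision": "approve"})
append_record(log, {"actor": "human", "decision": "override"})
```

Anchoring the final signature on-chain, as the deliverable's Polygon-compatible attestation record requires, extends the same tamper-evidence beyond the organisation's own infrastructure.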
The Override Practicum
Human Decision Authority in AI-Augmented Systems
The capstone module of the Sovereign level, and the most consequential in the Programme. Examines the architecture of human oversight in systems where AI capability exceeds human ability to verify outputs in real time. Covers meaningful override design, automation bias, responsibility allocation under delegation, the failure modes of AI-in-the-loop governance, and the philosophical foundations of maintained human agency. The module's central problem: how do you remain in control of something you cannot fully understand? It does not pretend to resolve this. It makes you think about it seriously.
A personal Override Protocol: a documented architecture of which decisions you will and will not delegate to AI systems, with justifications.
- ◆Automation bias: how displays of competence erode oversight — empirical evidence
- ◆Meaningful override: the difference between a nominal kill switch and real control
- ◆Responsibility under delegation: who is accountable when AI systems decide
- ◆Designing for disagreement: systems that surface uncertainty rather than hiding it
- ◆The corrigibility spectrum: from fully corrigible to fully autonomous agents
- ◆Slow-down mechanisms: mandatory deliberation periods for high-stakes AI decisions
- ◆Institutional memory: preserving human capability for AI-displaced tasks
- ◆Your personal override protocol: documenting where you will not cede judgement
Credential Architecture
What You Earn
Each module completed awards a Polygon-minted NFT credential and a FITZ token allocation. The credentials are non-transferable, soulbound to the holder's wallet address, and independently verifiable via the University's Credential Verification Portal.
Foundation Medallion: Permits use of Visiting Intelligences in assessed coursework. Required for all further study.
Practitioner Seal: Required for advanced coursework and dissertation supervision. Recognised by University partner organisations.
Specialist Distinction: Qualifies the holder to supervise AI implementations within their department. Recognised externally.
Sovereign Credentials: The Programme's highest award. Prerequisite for Visiting Intelligence supervisory roles. Recognised by the Governance Board.
Programme Documentation
The complete AI Skills Programme Guide — including module syllabi, reading lists, assessment criteria, and FITZ token allocation tables — is available for download from the Institutional Documents Library.