Clinical AI Safety Curriculum

30 modules, written for the Clinical Safety Officers (CSOs) doing the work.

The curriculum is structured across five tracks: foundations, hazard logs, safety cases, AI-specific failure modes, and post-deployment assurance.

Launching first

The three launch modules.

Module 05 · Launch module

Writing a Hazard Log That Actually Works

Beyond the template. How to identify, articulate and quantify clinical hazards so your DCB0129 hazard log withstands audit — and actually prevents harm on the ward.

DCB0129 · ISO 14971

Module 07 · Launch module

Evaluating a Supplier Safety Case

Reading between the lines of a DCB0129 Safety Case Report. What questions to ask a vendor. What absence of evidence actually means when you sign the Clinical Safety Case.

DCB0129 · DTAC

Module 11 · Launch module

Why AI Breaks DCB Standards

DCB0129 assumes deterministic software behaviour. AI systems don't deliver it. What changes when you are deploying a model — drift, distributional shift, opacity — and how to adapt your safety case.

DCB0129 · BS/AAMI 34971 · MHRA AIaMD

Curriculum structure

Five tracks across 30 modules.

Track 1 — Foundations

The standards, in context, as a practising CSO encounters them.

  • DCB0129 and DCB0160 in practice
  • DTAC: what really gets scrutinised
  • The CSO role in an NHS Trust
  • Clinical risk management lifecycle

Track 2 — Hazard identification and the log

From clinical workflow to auditable hazard entry.

  • Writing a Hazard Log That Actually Works
  • Quantifying severity and likelihood honestly
  • Controls that actually control
  • The hazard log as a living document

Track 3 — Safety case authorship and review

Writing, reading, and challenging the Clinical Safety Case.

  • Evaluating a Supplier Safety Case
  • Writing a deployment-side safety case
  • What 'sufficient evidence' looks like
  • Signing the Clinical Safety Case Report

Track 4 — AI-specific failure modes

Where DCB assumptions break and what to do about it.

  • Why AI Breaks DCB Standards
  • Distributional shift and drift in deployment
  • Opacity, explainability and the informed consent question
  • Human factors when the model is 'usually right'

Track 5 — Post-deployment

Monitoring, incident response, and continuous assurance.

  • Post-market surveillance for AI
  • PSIRF response to AI incidents
  • When to pause, when to withdraw
  • Re-assessing after model updates

The module list is indicative and will evolve with deployment experience. Full titles, learning outcomes and framework mappings are published as each module goes live.

Disclaimer. Independent educational resource — not affiliated with NHS England or any regulatory body. For educational use only.