January 9, 2026

AI scribes are rapidly moving from a “future technology” to a mainstream expectation in the NHS. In fact, the NHS Medium-Term Plan (1) explicitly commits to ambient voice technology (AVT) being available across all primary care. Yet unlike previous large-scale digital programmes such as online consultation rollouts, practices are currently expected to procure, deploy, and assure AI scribes themselves.
This has left many GP practices unsure about what is actually required from a regulatory and clinical safety perspective, and often overwhelmed by conflicting advice from suppliers, commentators, and internal teams.
As AI scribes become embedded in real clinical workflows, the stakes are rising. It is essential that deployment is safe, well-governed, and aligned with NHS clinical risk-management standards. As a Clinical Safety Officer, I have seen first-hand how ensuring safe adoption isn’t just a regulatory formality; it is fundamental to protecting patients, supporting clinicians, and securing trust in AI-enabled care.
This piece aims to cut through the noise by busting the most persistent myths around clinical safety assurance for AI scribes in primary care, and to help define best practice.
Myth 1: "DCB0160 only applies to AI"
This is a very common misconception. In fact, DCB0160 has been a legal requirement for healthcare providers deploying any digital product since 2012.
It is not an “AI regulation”. It is a clinical risk management standard that applies equally to EHRs, booking systems, messaging platforms, and now AI scribes.
A recent national cross-sectional study analysing almost 15,000 digital deployments across NHS organisations found that only 25% were fully assured against DCB0129/0160. More than 10,000 live tools lacked documented assurance (2).
So we have a paradox: the standard is widely perceived as a new, AI-specific hurdle, yet thousands of long-deployed, non-AI systems that have always been in scope remain unassured.
AI absolutely heightens the importance of good safety governance. But the idea that DCB0160 only applies when introducing AI is simply incorrect. The standard has always existed to protect patients, and its importance has never been more critical than now, as we embed AI into real clinical workflows.
Myth 2: "AI scribes are too new and untested to deploy safely"
AI scribes are new, but no longer untested. We now have enough real-world evidence to draw early and meaningful conclusions about their safety and performance (3-7).
However, all digital clinical systems, AI or otherwise, have the capacity to cause harm if deployed without proper governance, oversight, or monitoring.
That is why robust incident reporting and proactive monitoring remain absolutely essential. The NHS must continue to embed a culture where practices feel confident and supported to report any issues, however small, both to vendors and through NHS frameworks and the MHRA Yellow Card scheme. This data is critical. It helps suppliers refine their models, helps practices understand emerging risks, and strengthens national learning. The safest systems are not those with "no reported incidents". They are those where issues are surfaced early, shared transparently, and used to drive continuous improvement.
Myth 3: "Explicit GDPR consent is required before using an AI scribe"
It is absolutely best practice to inform patients and seek their consent for the use of an AI scribe. This is because patient consent is required for the delivery of care and to meet the expectations of the common law duty of confidentiality. Patients should understand what is happening, how their information will be used, and have the opportunity to say no. This consent can be verbal or implied, depending on context.
However, this is different from consent under GDPR for the purposes of data processing. For GDPR specifically, consent is not required in order to use an AI scribe. This is because other, more appropriate lawful bases for processing health data apply: typically Article 6(1)(e) (public task) as the lawful basis, together with Article 9(2)(h) (provision of health or social care) as the condition for processing special category health data.
In other words, an AI scribe is simply another tool within the clinical care stack, much like a cloud-based electronic health record. We do not obtain separate GDPR consent for each individual system used in care delivery.
So while AI scribes are relatively new and it remains best practice to explain their use and record patient agreement, it is likely that explicit consent will not be a routine requirement in the future.
Myth 4: "Every hallucination must be reported via the Yellow Card scheme"
I have heard senior NHS leaders encourage reporting of hallucinations via the Yellow Card scheme. However, the Yellow Card scheme is for adverse incidents: events that caused or almost caused injury, or affected diagnosis or treatment.
That means a hallucination that is caught and corrected during routine note review is not a reportable adverse incident, whereas a fabricated finding that reaches the signed record and affects, or nearly affects, diagnosis or treatment is exactly what the scheme exists to capture.
Over-reporting inconsequential hallucinations buries the real signals in noise and risks desensitising people to true safety events.
Myth 5: "Using an AI scribe puts your indemnity at risk"
This is another area of concern I hear frequently from clinicians, and it has often been a blocker to adopting AI innovation. However, the two largest indemnity providers in the UK, the MDU and MPS, both state that they will support clinicians using AI tools, including ambient scribes, as long as the clinician remains the human decision-maker and actively checks, edits, and signs off all AI-generated notes (10, 11). In other words, your defence organisation will support you, but you must retain full clinical oversight.
Taken together, both organisations reassure clinicians that AI scribes are acceptable to use when deployed safely within NHS governance, and that maintaining professional oversight of the final record is key to keeping indemnity intact.
Myth 6: "Every practice must complete the full safety assurance process alone"
In theory, the model is simple: the supplier assures the product against DCB0129 as the manufacturer, and each deploying organisation assures its own implementation against DCB0160.
In practice, this creates a disproportionate burden for small GP practices with limited digital expertise. It also risks widening digital inequality, where large organisations can assure products safely, while smaller ones struggle.
Every practice does, of course, need to understand the risks of any technology they introduce. But that does not mean they must undertake the entire assurance process alone; nor should they.
Rather than every practice reinventing DCB0160 from scratch, we should be moving towards a model where clinical safety is coordinated, consistent, and supported. Good assurance is not about creating more paperwork. It is about creating the right infrastructure around practices so they can deploy technology safely and confidently. So instead of each practice trying to build its own miniature safety function, expert CSO teams should operate across multiple organisations. These teams run structured hazard workshops, produce high-quality safety cases, monitor incident trends, and provide rapid escalation pathways.
This creates far higher safety maturity than individual practices could reasonably achieve alone.
This is where suppliers, ICBs, and specialist organisations can meaningfully support practices. Effective models include shared CSO services working across PCNs and federations, supplier-provided hazard logs and template safety cases that practices adapt to their local context, and ICB-coordinated assurance and incident-monitoring programmes.
These approaches not only improve safety but also dramatically reduce the workload on practices, freeing clinicians to focus on care.
Taken together, these six myths reveal a clear pattern: the real challenge with AI scribes is not that they are inherently unsafe or inadequately regulated, but that misunderstandings about standards, consent, incident reporting, indemnity, and assurance are creating unnecessary friction and uncertainty for practices. By dispelling these misconceptions, we can refocus attention on what genuinely underpins safe and effective deployment.
AI scribes represent one of the most significant digital transformations in primary care since electronic health records. They offer the potential for faster documentation, reduced clinician burnout, and more meaningful time with patients. But, as with any technology deeply embedded in clinical workflows, their benefits can only be realised when supported by strong governance, well-defined roles, and mature clinical safety processes that operate consistently across the system.
The message is simple: safe adoption is entirely achievable, and it is a shared responsibility. When suppliers, ICBs, and practices work together to embed scalable, high-quality clinical safety assurance, AI scribes can become a trusted and almost invisible part of everyday practice, supporting clinicians, strengthening documentation, and ultimately enhancing patient care.
If we get this right, AI scribes will not just be a technological upgrade; they will help shape a safer, more sustainable future for primary care.
1) NHS England. NHS Medium-Term Plan.
2) Oskrochi Y, Roy-Highley E, Grimes K, Shah S. Digital health technology compliance with clinical safety standards in the National Health Service in England: national cross-sectional study. J Med Internet Res; 2025.
3) Tailor et al. Evaluation of AI Summaries on Interdisciplinary Understanding of Ophthalmology Notes. JAMA Ophthalmology; 2025.
4) Devine et al. Medical Scribe Impact on Provider Efficiency in Outpatient Radiation Oncology Clinics Before and During the COVID-19 Pandemic. Telemedicine Reports; 2022.
5) Bell S et al. Frequency and Types of Patient-Reported Errors in Electronic Health Record Ambulatory Care Notes. JAMA Network Open; 2020.
6) Hata R, Jones C, Seitz D, Tummala A, Moore M, Cooper D. Evaluating the accuracy and provider wellness impact of an ambient artificial intelligence scribe in a complex simulated emergency department environment. Annals of Emergency Medicine; 2024.
7) Evans K, Papinniemi A, Ploderer B, Nicholson V, Hindhaugh T, et al. Impact of using an AI scribe on clinical documentation and clinician–patient interactions in allied health private practice: perspectives of clinicians and patients. Musculoskeletal Science and Practice; 2025.
10) Medical Protection Society. New policy: Artificial Intelligence (AI) and robotics in healthcare. Medical Protection; 2025.
11) Medical Defence Union. Using AI safely and responsibly in primary care. The MDU; 2025.
Lizzie is a GP with extensive experience leading large-scale digital transformation across UK healthcare. She is passionate about patient safety and the systematic embedding of clinical safety into digital workflows at scale. She has held senior Clinical Governance and Clinical Safety Officer roles across the NHS and the private sector, working on both provider and supplier sides.