January 9, 2026

The Clinician's Perspective – Myth Busting: Regulation for AI Scribes in Primary Care

Introduction

AI scribes are rapidly moving from a “future technology” to a mainstream expectation in the NHS. In fact, the NHS Medium Term Planning Framework (1) explicitly commits to making ambient voice technology (AVT) available across all primary care. Yet unlike previous large-scale digital programmes such as online consultation rollouts, practices are currently expected to procure, deploy, and assure AI scribes themselves.

This has left many GP practices unsure about what is actually required from a regulatory and clinical safety perspective, and often overwhelmed by conflicting advice from suppliers, commentators, and internal teams.

As AI scribes become embedded in real clinical workflows, the stakes are rising. It is essential that deployment is safe, well-governed, and aligned with NHS clinical risk-management standards. As a Clinical Safety Officer, I have seen first-hand how ensuring safe adoption isn’t just a regulatory formality; it is fundamental to protecting patients, supporting clinicians, and securing trust in AI-enabled care.

This piece aims to cut through the noise by busting the most persistent myths around clinical safety assurance for AI scribes in primary care, and to help define best practice.

Myth 1: DCB0160 only applies because this is AI

This is a very common misconception. But in fact, DCB0160 has been a legal requirement for healthcare providers deploying any digital product since 2012.

It is not an “AI regulation”. It is a clinical risk management standard that applies equally to EHRs, booking systems, messaging platforms, and now AI scribes.

A recent national cross-sectional study analysing almost 15,000 digital deployments across NHS organisations found that only 25% were fully assured against DCB0129/0160. More than 10,000 live tools lacked documented assurance (2).

So we have a paradox:

  • Practices are tying themselves in knots about AI scribe compliance
  • Meanwhile the majority of digital tools in the NHS aren’t meeting the basic DCB0160 bar

AI absolutely heightens the importance of good safety governance. But the idea that DCB0160 only applies when introducing AI is simply incorrect. The standard has always existed to protect patients, and it has never mattered more than now, as we embed AI into real clinical workflows.

Myth 2: AI scribes are still unproven

AI scribes are new, but no longer untested. We now have enough real-world evidence to draw early and meaningful conclusions about their safety and performance:

  • Multiple studies now show low risk of harm from AI-generated summaries (3) and even improvements in EHR safety when scribes are used appropriately (4).
  • At Tandem, we audit our accuracy monthly, and it has consistently been ≥96%. When viewed against human documentation performance, this is significant: a patient-survey study found that 21% of patients reported spotting a possible error in their own clinical notes, highlighting how common human-generated documentation issues can be (5).
  • Early user studies also suggest that AI scribes significantly reduce time spent on admin (6) and have a positive impact on cognitive and physical workload (7), an important contributor to safety in its own right.

However, all digital clinical systems, AI or otherwise, have the capacity to cause harm if deployed without proper governance, oversight, or monitoring.

That is why robust incident reporting and proactive monitoring remain absolutely essential. The NHS must continue to embed a culture where practices feel confident and supported to report any issues, however small, both to vendors and through NHS frameworks and the MHRA Yellow Card scheme. This data is critical. It helps suppliers refine their models, helps practices understand emerging risks, and strengthens national learning. The safest systems are not those with “no reported incidents”. They are those where issues are surfaced early, shared transparently, and used to drive continuous improvement.

Myth 3: You must get explicit GDPR consent from a patient to use an AI scribe

It is absolutely best practice to inform patients and seek their consent for the use of an AI scribe. This is because patient consent is required for the delivery of care and to meet the expectations of the common law duty of confidentiality. Patients should understand what is happening, how their information will be used, and have the opportunity to say no. This consent can be verbal or implied, depending on context.

However, this is different from consent under GDPR for the purposes of data processing. For GDPR specifically, consent is not required in order to use an AI scribe. This is because other, more appropriate lawful bases for processing health data apply:

  1. Article 6 (personal data): Public task (8)
  2. Article 9 (special category data, including health data): Provision of health or social care (9)

In other words, an AI scribe is simply another tool within the clinical care stack, much like a cloud-based electronic health record. We do not obtain separate GDPR consent for each individual system used in care delivery.

So while AI scribes are relatively new and it remains best practice to explain their use and record patient agreement, it is likely that explicit consent will not be a routine requirement in the future.

Myth 4: Every error/inaccuracy must be reported to the Yellow Card scheme

I have heard senior NHS leaders encourage reporting of hallucinations via the Yellow Card scheme. However, the Yellow Card scheme is for adverse incidents: events that caused or almost caused injury, or that affected diagnosis or treatment.

That means:

  • A clinically significant error made by the AI scribe that makes it into the clinical record, is communicated to a patient, or changes care in a way that could cause harm is incident territory and would warrant Yellow Card reporting, alongside local reporting routes.
  • A minor hallucination or wording error that you spot and correct before saving (the AI equivalent of a typo you notice when dictating) is not an incident.

Over-reporting inconsequential hallucinations buries the real signals in noise and risks desensitising people to true safety events.

What should you do?

  • Locally: Log incidents and near-misses within your organisation.
  • Nationally: Use the Yellow Card scheme for serious AVT incidents where harm occurred or could reasonably have occurred.
  • With suppliers: Feed issues back through in-product mechanisms so they can trigger their incident management response.

Myth 5: My indemnity won’t cover me for the use of an AI scribe

This is another area of concern I hear frequently from clinicians, and it has often been a blocker to adopting AI innovation. However, the two largest indemnity providers in the UK (the MDU and MPS) both state that they will support clinicians using AI tools, including ambient scribes, as long as the clinician remains the human decision-maker and actively checks, edits, and signs off all AI-generated notes. In other words, your defence organisation will support you, but you must retain full clinical oversight.

  • MDU guidance emphasises that clinicians remain responsible for the accuracy of the clinical record, even when an AI tool drafts it. Clinicians must ensure patient consent, follow local data-protection processes, and only use AI tools that are implemented through proper organisational governance. (10)
  • MPS policy confirms that support is available for matters arising from the clinician’s own judgement or actions when using AI systems that are not fully autonomous and still rely on a human-in-the-loop for oversight. They make clear that assistance relates to your clinical practice, not failures of the AI product itself. (11)

Taken together, both organisations reassure clinicians that AI scribes are acceptable to use when deployed safely within NHS governance, and that maintaining professional oversight of the final record is key to keeping indemnity intact.

Myth 6: Every practice must do full clinical safety assurance on its own

In theory, the model is simple:

  • Vendors complete DCB0129 for the product
  • Deploying organisations complete DCB0160 for local use

In practice, this creates a disproportionate burden for small GP practices with limited digital expertise. It also risks widening digital inequality, where large organisations can assure products safely, while smaller ones struggle.

Every practice does, of course, need to understand the risks of any technology they introduce. But that does not mean they must undertake the entire assurance process alone; nor should they.

What Good Looks Like: A Better Model for Assurance

Rather than every practice reinventing DCB0160 from scratch, we should be moving towards a model where clinical safety is coordinated, consistent, and supported. Good assurance is not about creating more paperwork. It is about creating the right infrastructure around practices so they can deploy technology safely and confidently. So instead of each practice trying to build its own miniature safety function, expert CSO teams should operate across multiple organisations. These teams run structured hazard workshops, produce high-quality safety cases, monitor incident trends, and provide rapid escalation pathways.

This creates far higher safety maturity than individual practices could reasonably achieve alone.

This is where suppliers, ICBs, and specialist organisations can meaningfully support practices. Effective models include:

  • ICB-level or regional CSO teams delivering AI scribe assurance and monitoring
  • Cross-practice collaboration to share hazard logs, mitigations, and lessons learned
  • Specialist external partners (e.g. Assuric) managing clinical safety across multiple deployments

These approaches not only improve safety but also dramatically reduce the workload on practices, freeing clinicians to focus on care.

Conclusion: Safe Adoption Is a Shared Responsibility and Entirely Achievable

Taken together, these six myths reveal a clear pattern: the real challenge with AI scribes is not that they are inherently unsafe or inadequately regulated, but that misunderstandings about standards, consent, incident reporting, indemnity, and assurance are creating unnecessary friction and uncertainty for practices. By dispelling these misconceptions, we can refocus attention on what genuinely underpins safe and effective deployment.

AI scribes represent one of the most significant digital transformations in primary care since electronic health records. They offer the potential for faster documentation, reduced clinician burnout, and more meaningful time with patients. But, as with any technology deeply embedded in clinical workflows, their benefits can only be realised when supported by strong governance, well-defined roles, and mature clinical safety processes that operate consistently across the system.

The message is simple: safe adoption is entirely achievable, and it is a shared responsibility. When suppliers, ICBs, and practices work together to embed scalable, high-quality clinical safety assurance, AI scribes can become a trusted and almost invisible part of everyday practice, supporting clinicians, strengthening documentation, and ultimately enhancing patient care.

If we get this right, AI scribes will not just be a technological upgrade; they will help shape a safer, more sustainable future for primary care.

References

1) NHS England. Medium Term Planning Framework: Delivering Change Together 2026/27 to 2028/29. London: NHS England; 2025.

2) Oskrochi Y, Roy-Highley E, Grimes K, Shah S. Digital health technology compliance with clinical safety standards in the National Health Service in England: national cross-sectional study. J Med Internet Res; 2025.

3) Tailor et al. Evaluation of AI Summaries on Interdisciplinary Understanding of Ophthalmology Notes. JAMA Ophthalmology; 2025.

4) Devine et al. Medical Scribe Impact on Provider Efficiency in Outpatient Radiation Oncology Clinics Before and During the COVID-19 Pandemic. Telemedicine Reports; 2022.

5) Bell SK et al. Frequency and Types of Patient-Reported Errors in Electronic Health Record Ambulatory Care Notes. JAMA Network Open; 2020.

6) Hata R, Jones C, Seitz D, Tummala A, Moore M, Cooper D. Evaluating the accuracy and provider wellness impact of an ambient artificial intelligence scribe in a complex simulated emergency department environment. Annals of Emergency Medicine; 2024

7) Evans K, Papinniemi A, Ploderer B, Nicholson V, Hindhaugh T, et al. Impact of using an AI scribe on clinical documentation and clinician–patient interactions in allied health private practice: perspectives of clinicians and patients. Musculoskeletal Science and Practice; 2025.

8) European Parliament and Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (United Kingdom General Data Protection Regulation), Article 6(1)(e); 2016.

9) European Parliament and Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (United Kingdom General Data Protection Regulation), Article 9(2)(h); 2016.

10) Medical Defence Union. Using AI safely and responsibly in primary care. The MDU; 2025.

11) Medical Protection Society. New policy: Artificial Intelligence (AI) and robotics in healthcare. Medical Protection; 2025.

About Dr Lizzie Marston

Lizzie is a GP with extensive experience leading large-scale digital transformation across UK healthcare. She is passionate about patient safety and the systematic embedding of clinical safety into digital workflows at scale. She has held senior Clinical Governance and Clinical Safety Officer roles across the NHS and the private sector, working on both provider and supplier sides.
