BOPA E&T Webinar: Using ChatGPT & LLMs in Healthcare: A Short, Practical Introduction

February 23, 2026 | 12:00 PM GMT | Past event

In early January 2026, OpenAI launched ChatGPT Health, enabling users to connect personal medical records from millions of providers, followed days later by Anthropic's Claude for Healthcare. These moves mark the first major consumer-facing integrations of leading large language models (LLMs) directly with electronic health records, shifting AI from general chat tools to personalised health companions.

More than 40 million people were already querying ChatGPT on health topics each week before these launches, accounting for over 5% of global messages on the platform. The new features formalise this trend, allowing users to upload records, ask about symptoms or results, and receive explanations, while both companies stress these tools support, rather than replace, professional care.

The timing reflects a broader acceleration in healthcare AI adoption. Ambient scribes and administrative tools generated hundreds of millions in revenue in 2025, with major deployments such as Kaiser Permanente's rollout across hundreds of sites. Clinical evidence continues to mount: studies show LLMs can summarise records to save clinician time, aid diagnostics, and even outperform physicians on complex cases in controlled settings.

In the UK, the NHS faces intense pressure from workforce shortages and rising demand. The government's 10-Year Health Plan positions the NHS to become the world's most AI-enabled care system, backed by billions in digital investment. Around 20-28% of GPs already report using generative AI, and public support stands near 54% when safeguards are in place. Oncology pharmacy, like other specialties, confronts growing administrative loads and complex data; LLMs offer potential relief in summarisation, decision support, and workflow efficiency amid these constraints.

Regulatory landscapes are evolving in parallel. The EU AI Act's high-risk rules phase in through 2026-2027, while the UK advances a principles-based approach with a new National Commission reviewing healthcare AI rules, aiming for recommendations in 2026. These developments create urgency for professionals to understand practical LLM applications safely, as tools move from pilots to everyday use.

The real-world stakes are high: better-informed patients could reduce miscommunication and non-adherence, while overburdened clinicians gain time for direct care. Yet risks remain—misleading outputs, inconsistent advice, and potential widening of inequities if access or accuracy varies. In oncology, where decisions hinge on precise, up-to-date information, these shifts demand attention now.
