Charting the progress of the NHS AI Lab's work, and key activities from the regulatory bodies we support, to ensure the development and adoption of safe, effective and ethical AI in health and care.
What we're working on
Investigating the potential of AI
We are exploring the opportunities and implications of AI adoption, and evaluating its outcomes.
Available now
- Awarded more than £100 million of AI Award funding, plus development and evaluation support, to 79 innovations
- Delivered 5 proof-of-concept AI tools with NHS trusts as part of investigative projects through the NHS AI Lab Skunkworks programme
- Launched NHS AI Lab Imaging programme to support imaging data quality and access
Coming soon
- Award the next cohort of AI Award technologies with funding and support (round 3)
- Evaluation reports on the success of AI technologies in the AI Award programme
- Develop further Skunkworks proof-of-concept AI tools in collaboration with NHS and public sector
On the horizon
- Continued development of the open source code library on GitHub for use by wider NHS AI projects
- Results from the AI Award evaluation, which will help inform better commissioning of AI
- ARIAS project - evidence to support the commissioning and deployment of Automated Retinal Image Analysis Systems (ARIAS) for widespread use within the NHS Diabetic Screening Programme
Building confidence and demonstrating trustworthiness
We are supporting the adoption of AI by providing learning opportunities to increase confidence and by researching ways to minimise the potential risks.
Available now
- Published the first part of a study with Health Education England about developing healthcare workers' confidence in AI technologies
- Developed a National COVID-19 Chest Imaging Database to support pandemic response and provide data for research
- Launched the NHS AI Lab Ethics Initiative, driving policy on ethical assurance of AI
- Created an online resource library of AI information for health and care
- Delivered the first draft of a National AI Strategy for Health and Social Care for review by the AI community
- Developed an AI Buyer's Guide to help people commissioning AI
- Built an AI community platform with around 2,000 members: the AI Virtual Hub
- Published an AI Dictionary of terms to support learning
- Created a scalable validation process by using the NCCID to test model performance
- Provided "Deep Dive" Skunkworks workshops, supporting real-world AI projects and upskilling
- Provided AI learning and development tools via open source code released on GitHub for Skunkworks proof-of-concept projects
- Delivered a report on Algorithmic Impact Assessment with the Ada Lovelace Institute
- Explored synthetic data generation and developed a repeatable workflow process to share
- Provided an education and training resource library on the AI Virtual Hub
- Published the first part of a study with Health Education England about levels of trust and engagement in health workers
- Awarded funding for 4 research projects looking at optimising AI for minority ethnic communities
- Published peer-reviewed NCCID research papers to support learning and development for AI in imaging
- Published insights from a survey about public perceptions of AI for use in our development of a national AI strategy
- Held research workshops and focus groups to build the evidence base for our AI Strategy planning
Coming soon
- Guidance on external validation of AI models, in continuation of G7 work - with MHRA
- Trial of Algorithmic Impact Assessments (AIAs) based on a model designed with the Ada Lovelace Institute
- Continued support for AI adoption and upskilling through Skunkworks Deep Dive workshops and capability initiatives
- A second report identifying educational and training needs for developing workforce confidence in AI
- Further research papers on synthetic data with MHRA
On the horizon
- Education and development opportunities from a 3-year knowledge base of Skunkworks AI investigations
- I-SIRch project - prototype for identifying factors that contribute to adverse maternity incidents involving black mothers and families
- I-SIRch project - safety recommendations to improve maternity care
- STANDING Together project - published standards for inclusive and diverse datasets underpinning AI
- ARIAS project - platform for ongoing evaluation of algorithms for AI Retinal Image Analysis Systems
- AUDITED project - chatbot optimised to provide advice on sexually transmitted infections to minoritised ethnic populations
- AUDITED project - guidance and framework for implementing AI chatbots in health and care
- Meaningful human control - research into the level of human control needed for AI in healthcare to be ethical and safe
- A “meta-toolkit” for trustworthy and accountable AI (part of trustworthiness auditing for AI)
Clarifying who does what
Helping AI developers, adopters, commissioners and the public to navigate the regulations, guidelines and incident reporting around AI for health and care.
Available now
- Launched the NHS AI Lab Regulation programme, exploring requirements for safe, ethical and robust use of AI in health and care
- Launched the private beta phase of the Multi-Agency Advisory Service (MAAS) to provide regulatory support
- Published white paper on behalf of the Global Digital Health Partnership: AI for healthcare - creating an international approach together
- Published G7 papers about international principles for evaluation, development and deployment of AI
- Provided guidance on how to put data-driven technology policy into practice: "AI: How to get it right" report
- Delivered patient and public engagement workshops for NCCID regulatory approval
- Published learnings about the challenges and successes of the evaluation approach for AI Award technologies
- Resource collection provided to help developers understand AI regulation
- Streamlined the application methodology for data access - led by the Health Research Authority
Coming soon
- Complete beta phase pilot for multi-agency advisory service (MAAS)
- Refreshed NICE Evidence Standards Framework for the commissioning of digital health technologies
- Further improvements to MHRA’s Yellow Card technology to deliver data-driven, smart reporting on adverse incidents
- Launch of a liability and accountability project with NHS Resolution
- Future World project (horizon scanning about potential risks around technology and transformation)
- Development of new MHRA guidance setting out standards for the development of machine learning technologies and data hygiene
On the horizon
- Multi-agency advisory service (MAAS) available as a live service to help users navigate technology regulation
- Position paper on liability and accountability for AI as a medical device (using results from research with NHS Resolution)
- Definition of the regulatory position on the acceptability of using synthetic data as training data for AI as a medical device - led by MHRA