Charting the progress of the NHS AI Lab's work, and key activities from the regulatory bodies we support, to ensure the development and adoption of safe, effective and ethical AI in health and care.
What we're working on
Investigating the potential of AI
We are exploring the opportunities and implications of AI adoption, and evaluating its outcomes.
Available now
- Awarded £113 million in AI Award funding, plus support for development and evaluation, to 86 innovations
- Delivered proof-of-concept AI tools with NHS trusts as part of investigative projects through the NHS AI Lab Skunkworks programme
- Launched the NHS AI Lab Imaging programme to support imaging data quality and access
Coming soon
- Pilot of the AI Deployment Platform, testing whether a cloud-based platform can help accelerate the deployment of AI technologies
- Delivery of the AI Diagnostic Fund, which is providing £21 million for the NHS to purchase AI diagnostic tools
- Introduction of a virtual data decision tool to support developers in identifying which research consent approvals they require
- A secure data environment, established by the HRA, to enable developers to access training data where the assessed risk is classified as low
- Evaluation reports on the success of AI technologies in the AI Award programme
On the horizon
- Results from the AI Award evaluation, which will help inform better commissioning of AI
- ARIAS project - evidence to support the commissioning and deployment of Automated Retinal Image Analysis Systems (ARIAS) for widespread use within the NHS Diabetic Screening Programme
Building confidence and demonstrating trustworthiness
We are supporting the adoption of AI by providing learning opportunities to increase confidence and by researching ways to minimise the potential risks.
Available now
- Published the first part of a study with Health Education England about developing healthcare workers' confidence in AI technologies
- Published second report with Health Education England identifying educational and training needs for developing workforce confidence in AI
- Developed the National COVID-19 Chest Imaging Database (NCCID) to support the pandemic response and provide data for research
- Launched the NHS AI Lab Ethics Initiative, driving policy on ethical assurance of AI
- Created an online resource library of AI information for health and care
- Developed an AI Buyer's Guide to help people commissioning AI
- Built an AI community platform with around 2,000 members: the AI Virtual Hub
- Published an AI Dictionary of terms to support learning
- Created a scalable validation process by using the NCCID to test model performance
- Provided "Deep Dive" Skunkworks workshops - supporting real world AI projects and upskilling
- AI learning and development tools provided via Open source code released on Github for Skunkworks proof-of-concept projects
- Delivered a report on Algorithmic Impact Assessment with the Ada Lovelace Institute
- Made available synthetic datasets representing COVID-19 symptoms and the collective populations of three GP surgeries
- Provided an education and training resource library on the AI Virtual Hub
- Awarded funding for four research projects looking at optimising AI for minority ethnic communities
- Published peer-reviewed NCCID research papers to support learning and development for AI in imaging
- Published insights from a survey about public perceptions of AI to inform our development of a national AI strategy
- Published research findings from a dialogue with patients and the public exploring approaches to the collection, management and use of data
Coming soon
- Guidance on external validation of AI models, in continuation of the G7 work, with the MHRA
- Trial of Algorithmic Impact Assessments (AIAs) based on a model designed with the Ada Lovelace Institute
- Synthetic datasets linked to geolocational data to be made available
On the horizon
- I-SIRch project - prototype for identifying factors that contribute to adverse maternity incidents involving black mothers and families
- I-SIRch project - safety recommendations to improve maternity care
- STANDING Together project - published standards for inclusive and diverse datasets underpinning AI
- ARIAS project - platform for ongoing evaluation of Automated Retinal Image Analysis Systems algorithms
- AUDITED project - chatbot optimised to provide advice on sexually transmitted infections to minoritised ethnic populations
- AUDITED project - guidance and framework for implementing AI chatbots in health and care
- A “meta-toolkit” for trustworthy and accountable AI (part of trustworthiness auditing for AI)
- Project Glass Box (AI Interpretability) - a work package of the MHRA's Software and AI as a Medical Device Change Programme, funded by the Lab, which will develop new guidance
Clarifying who does what
Helping AI developers, adopters, commissioners and the public to navigate the regulations, guidelines and incident reporting around AI for health and care.
Available now
- Launched the NHS AI Lab Regulation programme, exploring requirements for the safe, ethical and robust use of AI in health and care
- Launched the beta phase of the AI and digital regulations service
- Published white paper on behalf of the Global Digital Health Partnership: AI for healthcare - creating an international approach together
- Published G7 papers about international principles for evaluation, development and deployment of AI
- Provided guidance on how to put data-driven technology policy into practice: "AI: How to get it right" report
- Delivered patient and public engagement workshops for NCCID regulatory approval
- Published learnings about the challenges and successes of the AI evaluation approach for the AI Award technologies
- Provided a resource collection to help developers understand AI regulation
- Refreshed NICE Evidence Standards Framework for the commissioning of digital health technologies
Coming soon
- Further improvements to MHRA’s Yellow Card technology to deliver data-driven, smart reporting on adverse incidents
- Future World project (horizon scanning of potential risks around technology and transformation)
- Development of new MHRA guidance setting out standards for the development of machine learning technologies and data hygiene
- New HRA Confidentiality Advisory Group practices to be introduced to drive greater collaboration between developers and assessment panels
On the horizon
- The AI and digital regulations service available as a full live service to help users navigate technology regulation
- Definition of the regulatory position on the acceptability of using synthetic data as training data for AI as a medical device - led by MHRA