Artificial Intelligence
Artificial Intelligence (AI) is the use of digital technology to create systems capable of performing tasks commonly thought to require human intelligence. This guidance focuses on the information governance (IG) implications of using AI in health and care settings, and should support the lawful and safe use of data for AI innovations.
- I'm a patient/service user - what do I need to know?
- I work in a health and care organisation - what do I need to know?
- I'm an IG professional - what do I need to know?
Guidance for patients and service users
AI is likely to be something you have already encountered in your everyday life. For example, face recognition and voice assistants on mobile phones commonly use machine learning and AI. AI is being used by some health and care organisations and may become more commonplace in the future.
Examples of AI which are currently being used to benefit people in health and care include:
- Analysing X-ray images, for example mammograms, to support radiologists in making assessments. This frees up radiologists to spend more time with patients, or to screen greater numbers of people more quickly.
- Supporting people in virtual wards, who would otherwise be in hospital, to receive the care and treatment they need in their own home or usual place of residence. Remote monitoring technology, such as apps and medical devices, can assess patients’ health while they are being cared for at home.
- Helping clinicians read brain scans more quickly. This shortens the time it takes for patients to be treated, giving them a better quality of care.
A health and care organisation may use your personal information in AI systems to provide you with individual care. AI can help a health and care professional reach a decision about your care, for example, diagnosing a condition you have or helping you choose a treatment option. In these cases, your consent to the use of your data is implied. Decisions will not be made by the AI system. Health and care professionals will always provide advice and allow you to make the final decision on the care and treatment you receive.
AI-based technologies may use algorithms which allow the technology to learn using data. This is a purpose beyond your individual care, because the data is not being used directly for your own care, but instead for the care of many others in the future. For example, lots of images of skin growths (moles), together with information about whether or not they are cancerous, can be used to train a system to assist in the detection of cancer. For purposes such as these, data may be taken from your health and care records and accessed by developers working in collaboration with the NHS. Only the specific information needed for training algorithms will be used, and wherever possible, identifying information such as your name and address will be removed or replaced with a code to protect your confidentiality.
If there is a chance that your data could enable the staff developing or training the algorithms to identify you, the organisation would have to submit an application to use confidential patient information to the Health Research Authority’s Confidentiality Advisory Group (CAG). The CAG carefully considers the proposed use of the data against a number of factors: what is in the public interest, how necessary each piece of information is, and how it will improve people’s care. The organisation must provide strong justification for using the data.
Your confidential patient information will not be used for marketing or insurance purposes (unless you request this). You can read more about who decides how information is accessed in the Understanding Patient Data guidance.
Health and care organisations will inform you of the ways in which your confidential patient information is used, for example, via information on their website or on noticeboards. If you have concerns about your information being used for these purposes, you should speak to the professionals who are caring for you. You have the right to ask for your information not to be used.
Guidance for health and care workers
Data can lawfully be used to support AI developments. Your IG lead, Data Protection Officer (DPO) and Caldicott Guardian should be involved in any decision to implement AI technology or to share data to develop it. Additionally, you can contact the NHS IG Policy Team if you require further IG support for individual projects.
If you are using AI-based technology and you have any concerns or questions about the results, you should raise these within your organisation. For example, you may see false outputs or inconsistent results. Your concerns should usually be raised via your clinical management route. This is important not only from a clinical perspective, but also to ensure that data is being used fairly and appropriately. For example, irregular results may indicate bias or inaccuracy in the data which has been used to train the system.
Although AI-based technology is a useful tool to support you in your role, for example by aiding clinical decision making, the final decision about the care that people receive should be made in consultation with the patient or service user, using your professional judgement.
People may have questions about how their information is used by AI products or processes. You should discuss any concerns with them or refer them to your IG lead, DPO or Caldicott Guardian. Your organisation’s privacy notice should also provide details about how information is being used and shared and the choices people have.
Guidance for IG professionals
The following information highlights some areas to be particularly mindful of to ensure you are meeting the requirements of data protection legislation when implementing AI-based technologies or sharing data for AI-based research in a health and care setting. The Health Research Authority has also published broader guidance for those developing, deploying or monitoring data-driven technologies.
Data Protection Impact Assessment (DPIA)
A DPIA must be completed before implementing AI-based technologies; this is a legal requirement, and it helps you manage and mitigate the likelihood and severity of any potential harm to individuals. The DPIA will also support you to consider how you are meeting the accountability principle of UK GDPR, as well as data protection by design and default. If your organisation is the controller for the data, you will need to demonstrate and document that you have analysed, identified and minimised the data protection risks.
The ICO has produced detailed AI guidance which should be read alongside this guidance for a more overarching view of the topic, including DPIAs and taking a risk-based approach. There is also an AI toolkit to support organisations in auditing the compliance of their AI-based technologies.
Purpose and legal basis
It is important that the purpose for which the data is used is clearly defined and agreed before any data is processed. This purpose will affect the legal basis you rely on. It is for the controller to identify which legal basis is most suitable for the purpose. The ICO guidance includes information on selecting the appropriate legal basis for AI-based technologies.
Where the data is being processed to inform a health and care professional’s decision about an individual’s care, the UK GDPR Article 9(2)(h) condition (provision of health or social care) will generally apply. Consent may be implied under the common law duty of confidentiality.
Where possible, the information being used for research or planning should be anonymous, in which case a legal basis for processing will not be required. However, it is important that you carefully assess whether individuals can be identified from the contents of the information, even once identifiers such as name, address and phone number are removed. For example, the combined details of a local area, a rare disease and a very young age may enable a patient to be identified, particularly if they have been featured on the news. You would therefore need to treat this as personal data, which would require a legal basis for processing, as well as meeting the requirements of the common law duty of confidentiality. You should also consider whether other information held by the receiving organisation would enable them to identify individuals, for example, if they are in possession of other data sets which could be linked to the one which you are sending them. If this is the case, you should apply technical and contractual controls to restrict re-identification.
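One practical way to assess this risk is to count how many records share each combination of indirect identifiers, since a combination shared by very few people may single someone out. The following sketch is illustrative only: the field names and the threshold of five are hypothetical, and a real assessment would also consider data sets the recipient could link in.

```python
# A minimal re-identification risk check, assuming records held as a list of
# dicts. All field names and the k=5 threshold are hypothetical examples.
from collections import Counter

QUASI_IDENTIFIERS = ("postcode_district", "age_band", "diagnosis")

records = [
    {"postcode_district": "LS1", "age_band": "0-4", "diagnosis": "rare_condition_x"},
    {"postcode_district": "LS1", "age_band": "40-44", "diagnosis": "asthma"},
    {"postcode_district": "LS1", "age_band": "40-44", "diagnosis": "asthma"},
]

def risky_groups(rows, k=5):
    """Return quasi-identifier combinations shared by fewer than k records."""
    counts = Counter(tuple(row[q] for q in QUASI_IDENTIFIERS) for row in rows)
    return {combo: n for combo, n in counts.items() if n < k}

for combo, n in risky_groups(records).items():
    print(f"{combo}: only {n} record(s) - consider generalising or suppressing")
```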
Where AI is being used solely for research, the applicable condition may be UK GDPR Article 9(2)(j) (research purposes); for public health purposes, it may be Article 9(2)(i) (public health). To satisfy the common law duty of confidentiality, you would need explicit consent. However, it is likely to be impractical to obtain explicit consent because of the number of people whose data is involved in developing an AI system. You would therefore need to submit an application to the Health Research Authority’s Confidentiality Advisory Group for support to process the data without explicit consent from the data subjects, known as Section 251 support.
When AI is used for research, projects may not always have a well-defined question at the outset, or may ask multiple questions at once. However, the purpose of your data processing should be considered very carefully, so that the individual rights which apply can be clearly established and any research exemptions to those rights under data protection law can be identified and applied. If the purpose for using data changes, for example, data used for research subsequently being used in the delivery of care, individuals must be made aware before the new processing begins, and where appropriate ethical approval should be obtained. A legal basis must also be established for the new data processing purpose, and appropriate checks made that the common law and ethical requirements regarding confidentiality are still addressed. The DPIA and privacy notice must also be updated. The ICO has published further information about UK GDPR research provisions.
Where the data you wish to use is about deceased individuals, the Data Protection Act and UK GDPR no longer apply and therefore there would be no controller. However, the common law duty of confidentiality extends beyond death so you would need to take the same steps under common law as you would with data for living individuals.
Controllers and processors
AI usually involves processing personal data and may involve several different organisations. It is therefore important to establish who the controllers and processors are, as an organisation’s obligations under the UK GDPR will vary depending on whether they are the sole data controller, joint controller or processor. Additionally, appropriate contracts and data processing agreements must be put in place between controllers and processors, clearly stating how the data can be used and any restrictions in place for further processing.
Health and care organisations should ensure that they are the controller or joint controller when entering into agreements with technology providers, as the health and care organisation should be determining the purpose of the data processing.
Statistical accuracy in artificial intelligence
Statistical accuracy in AI refers to how often the AI system’s predictions match the ground truth established from high quality test data. For example, an AI system might predict a patient's length of stay at a hospital based on their previous admissions. An AI system does not need to be 100% statistically accurate to comply with data protection legislation. However, to avoid statistical predictions being misinterpreted as fact, health and care professionals should document in the patient or service user’s record that these are predictions informed by statistical analysis rather than factual information. Where staff have concerns about the accuracy of an AI system, they should also ensure its outputs are reviewed by a staff member.
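As an illustration of what statistical accuracy means in practice, the sketch below compares a system’s predictions with what actually happened for a held-out test set, and checks the result against an agreed threshold. The length-of-stay bands, figures and threshold are all hypothetical.

```python
# A minimal sketch of measuring statistical accuracy, assuming you hold
# predicted and true values for a high quality test set (all values and
# the threshold below are illustrative, not real figures).

predicted_stay_band = ["0-2 days", "3-7 days", "3-7 days", "8+ days"]
actual_stay_band    = ["0-2 days", "3-7 days", "8+ days", "8+ days"]

correct = sum(p == a for p, a in zip(predicted_stay_band, actual_stay_band))
accuracy = correct / len(actual_stay_band)

CLINICALLY_ACCEPTABLE = 0.90  # hypothetical threshold agreed with clinicians
print(f"Accuracy on test data: {accuracy:.0%}")
if accuracy < CLINICALLY_ACCEPTABLE:
    print("Below the agreed threshold - flag for clinical review.")
```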
To mitigate the consequences of an AI system producing an incorrect output, you should ensure before implementing it that:
● the system has undergone a rigorous testing regime
● a clinically acceptable level of accuracy has been determined
● statistical accuracy claims made by third parties have been examined and tested as part of the procurement process
● everyone involved in developing and using the system understands what is required in terms of statistical accuracy
● you have identified whether or not the system qualifies as a medical device, and if so, you have followed the Medicines and Healthcare products Regulatory Agency (MHRA) requirements for the certification of a medical device
● outputs can be reviewed by a staff member if required
This buyer’s guide to AI in health and care may help you assess potential products.
Fairness
It is important that you ensure that any processing is fair. This means you need to ensure that:
● the system is sufficiently statistically accurate
● the system avoids discrimination, as evidenced by the system’s equality impact assessment; for example, it does not lead to or encourage disparities in outcomes between groups (a minimal check is sketched below)
● you understand how the system uses data, for example whether the system shares information with the developers automatically, and if so, whether this is addressed by your DPIA and assessment of the legal basis for processing
● you consider how individuals would reasonably expect their data to be used
● any processing you do is consistent with any explicit consent you have obtained from individuals, or with the Section 251 support you have received, and with your transparency information
● people are informed where a decision has been made by an algorithm, as required by Article 22 of UK GDPR (see section below on automated decision making)
These principles of good practice lay out what should be built into the strategy and product development ‘by design’. Any concerns should be flagged to the AI developers so that improvements can be made to the algorithm.
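As a simple illustration of checking for disparities in outcomes between groups, the sketch below compares the rate of a favourable output across a protected characteristic. The field names, data and tolerance are hypothetical; a real equality impact assessment would use proper statistical testing on representative data.

```python
# A minimal disparity check, assuming the system's outputs are recorded
# alongside a protected characteristic (all names here are illustrative).
from collections import defaultdict

results = [
    {"group": "A", "referred_for_treatment": True},
    {"group": "A", "referred_for_treatment": False},
    {"group": "B", "referred_for_treatment": True},
    {"group": "B", "referred_for_treatment": True},
]

totals, positives = defaultdict(int), defaultdict(int)
for r in results:
    totals[r["group"]] += 1
    positives[r["group"]] += r["referred_for_treatment"]

rates = {g: positives[g] / totals[g] for g in totals}
print("Referral rate by group:", rates)
if max(rates.values()) - min(rates.values()) > 0.1:  # hypothetical tolerance
    print("Gap exceeds tolerance - investigate for possible bias.")
```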
Transparency
If your organisation is implementing or sharing data to develop AI technology, you need to ensure that you inform individuals how their data will be used. You should provide transparency information and privacy notices, which people can read on your organisation’s website or in a waiting area. These materials may also be directly provided to them if a treatment they are having involves the use of AI.
You should:
● be open and honest and explain the purposes for using AI
● be clear about what you are going to do with their data
● inform people of any new uses of personal data before you start processing
● explain the logic involved in a clear and simple way. For example: “thousands of anonymous images are analysed against known cases of skin cancer so that the machine learns and can then predict whether a scan shows potential skin cancer. A clinician will make the final decision and explain the results to the patient”.
The ICO and Alan Turing Institute have co-produced practical guidance on explaining decisions made with AI to individuals.
Using the minimum amount of data for the purpose
AI-based technology has the ability to analyse large amounts of data. However, it is important that the minimum amount of data for the required purpose is used, and that de-identified data is used where possible. For example, the national COVID-19 Chest Imaging Database collects a small amount of clinical data and imaging scans, which are clearly set out in the data sharing agreement. Clinical data is de-identified before it is uploaded to the Chest Imaging Database, so that only pseudonymised data is available to those who access the database.
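As an illustration of de-identification before upload, the sketch below replaces a direct identifier with a keyed hash, so the same patient always maps to the same code but the recipient never sees the real value. The record fields are hypothetical, and in practice the key must be generated securely and held only by the controller.

```python
# A minimal pseudonymisation sketch (illustrative field names throughout).
# The secret key stays with the controller; the recipient only ever sees
# the derived patient code, never the real NHS number.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-held-only-by-the-controller"

def pseudonymise(nhs_number: str) -> str:
    """Replace an identifier with a stable keyed-hash code."""
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()[:16]

record = {"nhs_number": "9434765919", "finding": "opacity, lower left lobe"}
upload = {"patient_code": pseudonymise(record["nhs_number"]),
          "finding": record["finding"]}
print(upload)
```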
In the initial stages of training algorithms, synthetic data may offer an alternative to using personal data, acting as a privacy enhancing technology (PET) which helps to satisfy the data minimisation principle. Synthetic data is data which has been created artificially, and it can improve prediction success whilst minimising the personal data being used. It may be particularly useful where the requirement for privacy limits the data that is available: conducting clinical trials with only a few patients can lead to inaccurate results, and synthetic data has the potential to create control groups for trials of rare or recently discovered diseases that lack sufficient existing data. However, the use of synthetic data in the NHS is in its infancy.
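The sketch below illustrates the idea in its simplest form: fitting per-column distributions to a small real sample and drawing artificial records from them. It is a toy example with hypothetical fields; real synthetic data tools also model the relationships between columns, which independent sampling like this loses.

```python
# A minimal synthetic-data sketch (illustrative fields and values only).
# Each column is modelled independently and new records are sampled,
# so no real individual's record appears in the output.
import random
import statistics

real = [{"age": 34, "sex": "F"}, {"age": 51, "sex": "M"}, {"age": 47, "sex": "F"}]

ages = [r["age"] for r in real]
mu, sigma = statistics.mean(ages), statistics.stdev(ages)
sexes = [r["sex"] for r in real]

def synthetic_record():
    """Draw one artificial record from the fitted per-column distributions."""
    return {"age": max(0, round(random.gauss(mu, sigma))),
            "sex": random.choice(sexes)}

print([synthetic_record() for _ in range(5)])
```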
Security
It is important that AI is implemented securely, particularly in view of the large amount of data being processed. You must therefore:
● satisfy yourself that appropriate security measures are in place
● record and document all movement and storage of personal data, for example in an operational log
● review and update governance and security policies to ensure they are fit for purpose following the adoption of AI
You may wish to implement some of the following organisational and technical measures to ensure that only those who are authorised can access data. These should be set out in your data protection impact assessment and should include:
● passwords and two factor authentication so only authorised individuals have access
● role-based access controls to ensure that only relevant information can be accessed (a minimal sketch follows this list)
● audit logs
● encryption
● restrictions on data being downloaded or exported
● legally binding contracts or data processing agreements with restrictions on the use of data
● confidentiality policies or clauses within contracts
● clear retention and deletion clauses within contracts for what will happen to the data at the end of the contract
● IG and cyber training for staff
● effective starters and leavers processes
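As an illustration of how role-based access controls and audit logs work together, the sketch below refuses actions outside a role’s permitted set and records every attempt, allowed or not. The roles, permissions and user names are hypothetical.

```python
# A minimal sketch of role-based access control with an audit log
# (all roles, permissions and users are illustrative examples).
import logging

logging.basicConfig(filename="access_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

ROLE_PERMISSIONS = {
    "radiologist": {"view_images", "annotate_images"},
    "developer":   {"view_pseudonymised_data"},
}

def access(user: str, role: str, action: str) -> bool:
    """Allow the action only if the role permits it; log every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    logging.info("user=%s role=%s action=%s allowed=%s", user, role, action, allowed)
    return allowed

access("jsmith", "developer", "view_images")    # refused and logged
access("apatel", "radiologist", "view_images")  # allowed and logged
```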
Automated decision making
The use of AI in health and care is at an early stage. Currently AI-based technologies are used for augmented decision making for health and care treatment decisions. This means that health and care professionals make the final decision on what care or treatment a person should receive, taking into account outputs from AI solutions. There are some current uses of AI for automated decision making, for example for automated rostering of staff, which uses staff information. There are also instances of AI that use automated decision making to improve efficiency, which do not use personal data.
However, you should be aware that people have the right under UK GDPR Article 22 not to be subject to automated decision making where the outcome produces a legal or similarly significant effect on them. It is important that any review by a human is substantial and not just a token gesture. If, in the future, you are implementing a system that relies on automated decision making that produces such an effect, you must ensure that there is always an option available to have a human take the decision.
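As an illustration of keeping a human meaningfully in the loop, the sketch below treats the system’s output as a recommendation that only takes effect once a reviewer has substantively considered it. All names and fields are hypothetical.

```python
# A minimal human-in-the-loop sketch (all names are illustrative). The AI
# output is a recommendation with its rationale surfaced, so the reviewer
# can substantively assess it; it never becomes final on its own.
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject: str
    outcome: str
    rationale: str  # shown to the reviewer to enable a substantial review

def finalise(rec: Recommendation, reviewer: str, approved: bool) -> str:
    """The reviewer's decision is authoritative, not the AI output."""
    decision = rec.outcome if approved else "escalate for clinician decision"
    return f"{rec.subject}: {decision} (reviewed by {reviewer})"

rec = Recommendation("patient-123", "discharge to virtual ward",
                     "stable observations over 48 hours")
print(finalise(rec, reviewer="Dr Example", approved=False))
```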