Transformation Directorate

Applications of Optical Character Recognition

How can machine vision technology help to accurately digitise data displayed on non-connected devices? We've been carrying out ongoing discovery work into the applications of this technology.

Prototype information

Carried out November 2021 - May 2023

What was the problem?

We began by looking at cardiovascular diseases (CVDs). CVDs affect 6.8 million people in England and account for a quarter of all deaths. They cost the NHS £7 billion a year, so tackling them is a priority for the NHS Long Term Plan.

High blood pressure is one of the top risk factors for CVD. Measurements can be taken at home with a blood pressure monitor, bought cheaply or distributed free by NHS schemes such as BP@Home (Blood Pressure at Home). Measurements may be taken for diagnostic purposes, or as part of a preventative/wellness routine. However, most devices aren’t smart or connected, so data from home readings doesn’t reach clinical records unless it is written down, sent to a GP and entered onto the patient’s record.

Blood pressure monitors are not the only devices which rely on humans to transfer readings from a digital display into digital systems. Other home devices, such as pulse oximeters, thermometers and glucose monitors, as well as much of the equipment used in clinical settings, work on the same principle.

User research with participants on the BP@Home trial found that people often didn’t complete the programme: blood pressure measurement involves recording several numbers, and some patients worry about “getting it wrong”, so would rather not take a reading than risk making a mistake.

When patients did take readings, these tended to be written on a piece of paper and handed to GPs, who then relied on manual processes and re-keying of data, creating staff burden, a risk of errors, and a long delay between a reading being taken and being seen by a clinician.

What could be the benefit of improving this?

  • More people taking their blood pressure more regularly, because they find it easier to record the readings, resulting in earlier detection and treatment of CVDs
  • Less burden for GP surgeries, by avoiding the need to manually enter readings to the clinical records
  • Faster reactions to changes or concerning readings, as readings could be analysed algorithmically in real time, alerting clinicians to potentially harmful deterioration and allowing earlier interventions
  • Increased accuracy of readings by avoiding input errors, improving the quality of data held
  • Cost savings, as cheaper non-connected blood pressure monitoring devices can be used

Our hypothesis

We believed that if we made digitally capturing a blood pressure reading as simple as snapping and sending a photo, we would increase patient participation and get coded data onto patients’ digital records more quickly and accurately.

What did we do?

Based on user research we built a prototype which allowed a patient to take a photo of a blood pressure monitor screen. Using the AWS Rekognition service, we extracted the reading from the image and encoded it as a FHIR message suitable for sending to clinical systems. We tested this with a range of users, on a range of monitors and devices, and in a variety of settings and lighting conditions.
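
As a rough sketch of how such a pipeline can fit together (illustrative assumptions on our part, not the prototype’s actual code), the snippet below uses the boto3 Rekognition client to pull candidate numbers out of a photo of the screen and wraps the top two in a LOINC-coded FHIR Observation; the file name, the number-matching heuristic and the helper functions are all hypothetical.

import json
import re

import boto3


def extract_candidate_numbers(image_path):
    """Detect text in the photo and return plausible reading values, top to bottom."""
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        response = client.detect_text(Image={"Bytes": f.read()})
    lines = [
        d for d in response["TextDetections"]
        if d["Type"] == "LINE" and re.fullmatch(r"\d{2,3}", d["DetectedText"])
    ]
    # Most monitors show systolic above diastolic above pulse, so order by vertical position.
    lines.sort(key=lambda d: d["Geometry"]["BoundingBox"]["Top"])
    return [int(d["DetectedText"]) for d in lines]


def blood_pressure_observation(systolic, diastolic):
    """Encode a reading as a FHIR Observation using the standard LOINC blood pressure codes."""
    def coding(code, display):
        return {"coding": [{"system": "http://loinc.org", "code": code, "display": display}]}

    def mmhg(value):
        return {"value": value, "unit": "mmHg", "system": "http://unitsofmeasure.org", "code": "mm[Hg]"}

    return {
        "resourceType": "Observation",
        "status": "final",
        "code": coding("85354-9", "Blood pressure panel with all children optional"),
        "component": [
            {"code": coding("8480-6", "Systolic blood pressure"), "valueQuantity": mmhg(systolic)},
            {"code": coding("8462-4", "Diastolic blood pressure"), "valueQuantity": mmhg(diastolic)},
        ],
    }


systolic, diastolic = extract_candidate_numbers("monitor_photo.jpg")[:2]
print(json.dumps(blood_pressure_observation(systolic, diastolic), indent=2))

A real implementation would also need to separate out the pulse-rate figure and reject low-confidence detections before anything was sent onwards.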

We found that off-the-shelf OCR models sometimes struggled with the seven-segment LCD displays found on many monitors, and reflections from shiny screens made it tricky in some circumstances to get an accurate reading. However, the prototype often produced an accurate result, was quicker than writing numbers down and typing them into a digital form, and was dramatically quicker than recording readings on paper for later entry into digital systems.

Blood pressure OCR prototype

The system could be improved by further machine learning model training, and by learning what an individual’s typical values look like, both to increase confidence in the accuracy of the output and to flag outliers. While we prototyped this as a feature within the NHS App, the functionality could be abstracted so that patients could send photos of their monitor’s readings via WhatsApp or other messaging platforms.
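
As a minimal sketch of the outlier-flagging idea, assuming a patient’s previous readings are available as a simple list: the five-reading minimum and the z-score threshold below are arbitrary illustrative choices, not clinically validated values.

from statistics import mean, stdev


def is_outlier(new_value, history, threshold=2.5):
    """Flag a reading that sits unusually far from this person's recent values."""
    if len(history) < 5:
        return False  # not enough data yet to establish a personal baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold


# A systolic reading of 185 against a history in the 120s would be flagged for review.
print(is_outlier(185, [124, 131, 118, 127, 122, 129]))  # True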

We further explored machine vision when we created a prototype called the Magic Scanner to allow us to test Show My Patient ID.

This was a bespoke scanning device which looked like a barcode scanner but used a Raspberry Pi and a custom machine learning model to scan the homepage of the NHS App, recognise a patient’s name and NHS number, and feed them as keyboard input into a hospital’s admissions computer. A full write-up of this build can be found on GitHub.
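
One step a pipeline like this typically needs (an assumption of ours, not something quoted from the build) is checking that the recognised digits really form an NHS number. NHS numbers carry a Modulus 11 check digit, so a simple validation might look like this:

def is_valid_nhs_number(candidate: str) -> bool:
    """Validate a 10-digit NHS number using its Modulus 11 check digit."""
    digits = [int(c) for c in candidate if c.isdigit()]
    if len(digits) != 10:
        return False
    # Weight the first nine digits by 10 down to 2, then derive the expected check digit.
    total = sum(d * w for d, w in zip(digits[:9], range(10, 1, -1)))
    check = 11 - (total % 11)
    if check == 11:
        check = 0
    return check != 10 and check == digits[9]


print(is_valid_nhs_number("943 476 5919"))  # True - a widely used dummy example number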

More information

If you would like to talk to us about this project, you can contact us at england.innovation.lab@nhs.net

About the NHS Innovation Lab

The NHS Innovation Lab was established to develop and test novel solutions to challenges facing the health and social care systems. Using innovative thinking and user-centred design processes, between 2020 and 2023 it explored dozens of problems across many different areas which, if solved, had the potential for substantial impact for patients, staff and organisations.

More about the NHS Innovation Lab