NHS First to Trial Ethical AI Assessments
The Ada Lovelace Institute has published a new proposal for the use of algorithmic impact assessments (AIAs) to maximise the benefits and mitigate the harms of AI technologies in healthcare. By trialling this detailed process, the NHS in England will become the first health system in the world to adopt this approach to the ethical use of AI.
The NHS is set to trial the use of this assessment within the context of the NHS AI Lab. The framework will be used in a pilot to support researchers and developers in assessing the possible risks of an algorithmic system before they are granted access to NHS patient data. It will be trialled across a number of initiatives and used as part of the data access process for the National Covid-19 Chest Imaging Database (NCCID) and the proposed National Medical Imaging Platform (NMIP).
Octavia Reeve, Interim Lead, Ada Lovelace Institute, said: “Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can, in turn, build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit. We hope that this research will generate further considerations for the use of AIAs in other public and private-sector contexts”.
The NCCID is a central database of medical images from hospital patients across the country that helps researchers better understand COVID-19 and develop technology enabling the best care. The proposed NMIP will expand on the NCCID and enable the training and testing of a wider range of AI systems that use medical imaging for screening and diagnostics.
Data-driven technologies (including AI) are increasingly being used in healthcare to help with detection, diagnosis and prognosis. However, there are legitimate concerns that AI could exacerbate health inequalities and entrench social biases (for example, training data biases have resulted in AI systems for diagnosing skin cancer that are less accurate for people of colour).
AIAs are an emerging approach for holding the people and institutions that design and deploy AI systems accountable. They are one way to help pre-empt and identify the potential impact of algorithms on people, society and the environment.
Brhmie Balaram, Head of AI Research and Ethics at the NHS AI Lab, said: “Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.
“The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good”.
You can find more information about the trial on the Ada Lovelace Institute’s website.