Introduction

Artificial intelligence (AI) offers significant potential to enhance eye care in the UK by improving diagnostic accuracy, streamlining workflows, and supporting better patient outcomes. However, its safe and effective integration into clinical practice requires an informed, evidence-based approach to protect patient safety, comply with regulations, and ensure equitable and effective care.

The UK optical sector is committed to adopting AI technologies that adhere to ethical principles, are grounded in robust evidence, and comply with relevant legal frameworks. Core principles for AI implementation include evidence-based validation, data transparency, compliance with GDPR and other regulations, and clearly defined lines of clinical accountability.

The development and deployment of AI systems should be mindful of their impact on health inequalities and strive to reduce rather than exacerbate them. This requires diverse datasets, regular bias audits, and a commitment to inclusive care delivery. The environmental impact and resource demands of these technologies should also be carefully understood and managed. The UK optical sector is committed to responsibly harnessing AI’s potential to support clinicians and improve patient outcomes.

Workforce training is essential to equip clinicians with the skills needed to choose and use AI tools safely and effectively. Healthcare professionals should develop critical AI literacy so that they feel confident making informed decisions about when and how to use AI tools. Clinicians should recognise when and how software and devices that they are using in their practice may use AI, and have some understanding of the benefits, risks and limitations of these uses.

Integrating AI into eye care may be a transformative opportunity to improve patient care, increase efficiency, and expand access to specialist services. Currently, AI innovation is outpacing the regulatory regime, and there is an ongoing state of regulatory flux. The UK optical sector acknowledges the availability of AI-driven clinical tools, administrative systems, and generally available open Large Language Models (LLMs), Vision Language Models (VLMs), and Reasoning Language Models (RLMs), and recommends that clinicians consider the following before using them.

AI-driven clinical tools

  • Regulatory compliance: Ensure the tool is registered with the UK Medicines and Healthcare products Regulatory Agency (MHRA) as a medical device, indicating it has met required safety standards. This is referred to as Artificial Intelligence as a Medical Device (AIaMD). Even where these standards have been met, wherever possible, clinicians should look for independent research that has evaluated the tool and its use in the context in which they intend to use it.
  • Dataset evaluation: Assess the accuracy of the outputs. Ask how the tool was validated. Where possible, understand the size, composition, and quality of the training and testing datasets. Consider potential biases and limitations and whether the tool could inadvertently lead to discrimination or exacerbate health inequalities. 
  • Data privacy and security: Evaluate data privacy protocols and confirm compliance with relevant legislation, including GDPR for the nation where you practise.
  • Patient transparency: Inform patients when AI tools are being used and how they are being used. Where possible, offer them the option to opt out of AI-assisted aspects of their care.
  • Clinical judgment: Be aware of cognitive biases that might arise when using AI to assist clinical reasoning and decision-making. For example, rapidly made decisions may be inaccurate and lead to biased judgements, or there may be inappropriate levels of confidence or scepticism in the information. An individual may prefer to agree with AI when it agrees with them and distrust it when it disagrees. Failure to identify and mitigate such biases could lead to unnecessary clinical risk.
  • Clinical accountability: You remain responsible for the decisions you make about patient care. Clinicians must retain oversight and the ability to override AI-generated diagnoses or treatment recommendations. Treatment recommendations should continue to take into account clinical expertise on the treatment options, evidence, risks and benefits, and the patient’s preferences, personal circumstances, goals, values and beliefs. Clinicians should ensure that accurate records are kept, which show what information was provided by the AI platform, how this was used to inform clinical decision making, and when/why other clinical decision-making processes were preferred to AI guidance (a minimal illustrative record format is sketched at the end of these considerations).
  • Incident reporting: Ensure mechanisms are in place to audit results, report safety incidents, and to respond to such incidents. Clinicians should retain accurate records of all incidents in an independent format.
  • Training requirements: Ensure all team members have appropriate training, which enables them to understand how to use the tool and its performance metrics, including its sensitivity, specificity, and potential biases (a short worked example of auditing these metrics follows this list). Training should also support clinicians to recognise any inherent biases, and understand how these affect their use of AI-derived information in clinical reasoning and decision-making. Locums should be able to access independent sources of training.
  • Long-term monitoring and feedback: Establish a framework for long-term monitoring, audit and continuous improvement of AI tools to ensure ongoing safety and effectiveness. This should include mechanisms for tracking and addressing erroneous results.
  • National Health Service compliance: Ensure AI deployment for NHS care meets the requirements of your local NHS health system, such as data governance, security, privacy and safety standards. 
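
The performance metrics referred to above can be checked locally by comparing a tool's outputs against confirmed clinical findings during routine audit. The following is a minimal sketch in Python, assuming a simple list of audit records with hypothetical field names (ai_positive, condition_present, group); it is illustrative only, not a validated audit tool.

```python
from collections import defaultdict

def performance_by_group(records):
    """Compute sensitivity and specificity of an AI tool's outputs,
    overall and per patient subgroup, from audited case records.

    Each record is a dict with hypothetical keys:
      ai_positive       - bool, the AI tool flagged the condition
      condition_present - bool, the condition was confirmed clinically
      group             - str, a subgroup label used for bias auditing
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for r in records:
        for key in ("overall", r["group"]):
            c = counts[key]
            if r["condition_present"]:
                c["tp" if r["ai_positive"] else "fn"] += 1
            else:
                c["fp" if r["ai_positive"] else "tn"] += 1

    results = {}
    for key, c in counts.items():
        sensitivity = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else None
        specificity = c["tn"] / (c["tn"] + c["fp"]) if (c["tn"] + c["fp"]) else None
        results[key] = {"sensitivity": sensitivity,
                        "specificity": specificity,
                        "n": sum(c.values())}
    return results

# Fictitious audit of four anonymised cases:
audit = [
    {"ai_positive": True,  "condition_present": True,  "group": "under-60"},
    {"ai_positive": False, "condition_present": True,  "group": "60-and-over"},
    {"ai_positive": False, "condition_present": False, "group": "under-60"},
    {"ai_positive": True,  "condition_present": False, "group": "60-and-over"},
]
print(performance_by_group(audit))
```

Marked differences in sensitivity or specificity between subgroups can be an early signal of the bias and inequity concerns described above, and should prompt further investigation.
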
AI-driven administrative systems

  • Check the product's intended use, MHRA registration and licence: Ensure the manufacturer has appropriately classified the AI tool as an AIaMD or a non-medical device, and determine whether it contains functionality that would require MHRA approval. The MHRA flowchart helps identify whether a tool has a medical purpose.
  • Regulatory compliance: Ensure the tool holds a valid UKCA or CE mark. Be aware of the pace of innovation and limitations of the regulatory regime. 
  • Data privacy and security: Confirm that data handling practices comply with UK GDPR and other applicable national and local regulations. Understand whether patient data is accessible to the AI tool as a means of refining the tool in real time, either locally or remotely, and confirm that such data is being legally processed (including disposal).
  • Training requirements: Ensure all team members are adequately trained and understand how to use the tool appropriately, including its limitations and when it should not be used.
  • Patient transparency: Inform patients when AI tools are being used, how they are being used, and whether you or the practice has any research or commercial interest in their use. Offer them the option to opt out of AI-assisted aspects of administration.

Generally available LLMs, VLMs, and RLMs

  • Regulation: LLMs, VLMs, and RLMs are generally not registered as AIaMD and are not intended for medical use, despite their extensive capabilities.
  • Patient data protection: Do not input patient-identifiable or other sensitive information into open-access LLMs (a minimal illustrative screening check is sketched after this list).
  • Awareness of limitations: Recognise the potential for bias, hallucinations (fabricated content), and errors with these models. LLM-generated information may be incomplete, inaccurate, or outdated. Erroneous, and potentially harmful, information could be presented by the LLM with very high confidence. Outputs from these models should be critically evaluated using clinical experience and judgement, while guarding against the tendency to agree with AI when it agrees with you and distrust it when it disagrees.
  • Verification responsibility: Users are responsible for validating the accuracy of AI-generated information before applying it in clinical or other contexts.
  • Model variability: Understand that LLMs may be fixed or evolve over time, making their outputs potentially inconsistent and non-replicable.
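
As a purely illustrative safeguard, text intended for a general-purpose LLM could first be screened for obvious identifier-like patterns before it leaves the practice system. The sketch below uses simple, assumed regular expressions (for NHS-number-like digit strings, email addresses and dates); it will miss many identifiers, so a clean result does not make text safe to share, and it is not a substitute for the rule above.

```python
import re

# Hypothetical, non-exhaustive patterns for obvious identifiers.
IDENTIFIER_PATTERNS = {
    "possible NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date (possible DOB)": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def flag_possible_identifiers(text):
    """Return warnings for identifier-like patterns found in `text`.

    This is a toy screen: it will miss names, addresses and many other
    identifiers, so an empty result does NOT mean the text is safe to share.
    """
    return [label for label, pattern in IDENTIFIER_PATTERNS.items()
            if pattern.search(text)]

# Fictitious example text:
draft = "Summarise this referral for a patient, DOB 02/03/1961, NHS no 943 476 5919."
warnings = flag_possible_identifiers(draft)
if warnings:
    print("Do not send - possible identifiers found:", ", ".join(warnings))
```
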
Workforce training

  • AI literacy: Encourage all team members to include AI literacy in their professional development plans.
  • Critical thinking: Emphasise the importance of critical thinking skills for assessing AI-generated insights. This should include recognising inherent biases in the datasets used to train the AI, as well as user biases, and understanding how these affect the use of AI-derived information in clinical reasoning and decision-making.
  • Skill retention: When AI tools automate tasks previously performed by clinicians or other colleagues, ensure strategies are in place to maintain these skills within the workforce.
  • Academic integrity: Ensure there is transparency when AI is used to support learning and professional development. Clinicians should disclose any use of AI when producing reports, educational materials, or coursework submissions. 
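
To support the record-keeping and accountability points above, the following is a minimal sketch of the kind of structured entry a practice might keep whenever AI output informs a clinical decision. The log_ai_assisted_decision helper, its field names, and the example tool name are illustrative assumptions rather than a prescribed format.

```python
import json
from datetime import datetime, timezone

def log_ai_assisted_decision(path, *, tool_name, tool_version, ai_output,
                             clinician_decision, rationale):
    """Append one AI-assisted decision record to a JSON-lines audit log.

    Recording the tool name and version matters because generally available
    models can change over time, so the same prompt may not be reproducible.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool_name": tool_name,
        "tool_version": tool_version,
        "ai_output": ai_output,
        "clinician_decision": clinician_decision,
        "rationale": rationale,  # why the AI output was followed or overridden
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Fictitious example entry (no patient-identifiable information):
log_ai_assisted_decision(
    "ai_audit_log.jsonl",
    tool_name="example-retinal-triage-tool",   # hypothetical tool name
    tool_version="2.1.0",
    ai_output="Referral suggested: suspected referable diabetic retinopathy.",
    clinician_decision="Routine referral made after clinical review.",
    rationale="AI output consistent with fundus examination findings.",
)
```

Recording the tool version alongside each entry also addresses the model variability point above: if a generally available model changes, the record shows which version informed the decision.
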

AI is poised to transform eye care, but its successful deployment will depend on responsible, transparent, and well-regulated implementation and evaluation. 

The College of Optometrists is committed to supporting clinicians with the guidance, training, and resources needed to integrate AI into practice safely and effectively. We have established an AI Expert Advisory Group to work with us to identify the key issues relating to AI and eye health, focusing on optometry and primary eye care. This brings together eye health professionals and researchers, experts from the AI field, including its legal and ethical aspects, patient representatives and optical sector bodies. The AI Expert Advisory Group will identify the key priorities in relation to the opportunities, issues, risks and benefits related to AI within eye health and eye health research. Following the AI summit, the advisory group will help define these key priority areas.

The AI Expert Advisory Group is exploring these key areas:

  • evidence-based AI development
  • ethical and equitable implementation
  • regulatory compliance
  • workforce, education, training, and sustainability
  • clinical accountability
  • interoperability and integration
  • environmental impact of AI systems
  • resource constraints and funding allocation

Once the AI Expert Advisory Group's work is concluded, the College will produce further guidance and other resources on AI to support primary eye care professionals. These will be published in late 2025.

This interim statement is supported by:

  • Association of British Dispensing Opticians (ABDO)
  • Association of Optometrists (AOP)
  • Federation of Ophthalmic and Dispensing Opticians (FODO)
  • Local Optical Committee Support Unit (LOCSU)
  • Optometry Ireland
  • Optometry Northern Ireland
  • Optometry Scotland
  • Optometry Wales