Family doctors are risking patient safety by relying on AI to help with diagnoses, a study has warned.
One in five GPs admitted using programmes such as ChatGPT and Bing AI during clinical practice, despite no official guidance on how to work with them.
Experts warned problems such as ‘algorithm biases’ could lead to misdiagnoses and that patient data could also be in danger of being compromised. They said doctors must be made aware of the risks and called for legislation to cover their use in healthcare settings.
Researchers sent the survey to a thousand GPs using the largest professional network for UK doctors currently registered with the General Medical Council.
The medics were asked if they had ever used any of the following in any aspect of their clinical practice: ChatGPT; Bing AI; Google’s Bard; or ‘Other’. More than half of the respondents (54 per cent) were aged 46 or older.
One in five (20 per cent) reported using generative AI tools in their clinical practice.
Of these, almost one in three (29 per cent) reported using these tools to generate documentation after patient appointments.
A similar number (28 per cent) said they used them to suggest a differential diagnosis, according to the findings published in the BMJ.
One in four (25 per cent) said they used the tools to suggest treatment options, such as potential medicines or referrals.
The researchers – who included scientists from Uppsala University in Sweden and the University of Zurich – said that while AI can be useful for assisting with documentation, such tools are ‘prone to creating erroneous information’.
They write: ‘We caution that these tools have limitations since they can embed subtle errors and biases.
‘They may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather.’
While chatbots are increasingly the target of regulatory efforts, it ‘remains unclear’ how that legislation will apply in practice to the use of these tools in clinical settings, they added.
Doctors and medical trainees need to be fully informed about the pros and cons of AI, especially because of the ‘inherent risks’ it poses, they conclude.
Professor Kamila Hawthorne, Chair of the Royal College of GPs, agreed the use of AI ‘is not without potential risks’ and called for its implementation in general practice to be closely regulated to guarantee patient safety and the security of their data.
She said: ‘Technology will always need to work alongside and complement the work of doctors and other healthcare professionals, and it can never be seen as a replacement for the expertise of a qualified medical professional.
‘Clearly there is potential for the use of generative AI in general practice but it’s vital that it is carefully implemented and closely regulated in the interest of patient safety.’