LLM-chatbots do not meet key principles for AI in healthcare, and experts warn that they may provide harmful medical responses.
OpenAI's ChatGPT, Google's Med-PaLM and Meta's LLaMA are large language models (LLMs) with many compelling use cases. These chatbots also hold great medical potential, but their unregulated use in healthcare can be dangerous. How to regulate LLMs in healthcare is one of the most pressing global questions today. Let's delve into the potential benefits and risks of using AI chatbots as medical devices.
Large language models can generate highly convincing, human-like responses and engage in interactive conversations, but they often produce incorrect or inappropriate statements. Wrong answers to medical questions can have dangerous consequences, and this is the biggest fear experts have about using AI chatbots as medical devices.
Prof. Stephen Gilbert, Professor for Medical Device Regulatory Science at the Else Kröner Fresenius Center for Digital Health at Technische Universität Dresden (TU Dresden), is not in favour of using current LLM-chatbots in healthcare.
Writing in an article, Prof. Gilbert stated that these chatbots are unsafe tools when used for medical advice and stressed the need to develop new frameworks that ensure patient safety.
The dangers of using AI chatbots in healthcare
Do you research your symptoms on the internet before seeking medical advice? You're not alone. Today, search engines play a key role in people's health decision-making.
LLM-chatbots are known for their remarkable conversational skills and highly convincing responses. Experts fear that integrating them into search engines may increase users' confidence in, and dependency on, the information a chatbot provides.
In the article, Prof. Gilbert noted that LLMs can provide extremely dangerous information in response to medical questions.
The article further mentioned that chat-interfaced LLMs have already been used unethically in 'experiments' on patients without consent, highlighting the need for regulatory control of medical LLM use.
How chatbots could find application in healthcare
According to Prof. Gilbert, LLM-chatbots developed today do not meet key principles for AI in healthcare, such as bias control, explainability, systems of oversight, validation and transparency.
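What a "system of oversight" might look like in practice is easier to see with a concrete sketch. The Python snippet below is purely illustrative and is not from Prof. Gilbert's article: the keyword list, the answer_with_oversight function and its logging behaviour are all hypothetical assumptions, standing in for the trained classifiers and human-review pipelines a real medical product would need.

```python
import logging

# Hypothetical keyword list; a real system would use a trained
# classifier rather than hand-picked terms.
MEDICAL_KEYWORDS = {"dose", "symptom", "diagnosis", "medication", "treatment"}

def answer_with_oversight(query: str, llm_answer: str) -> str:
    """Wrap a raw LLM answer in a minimal oversight layer.

    Medical queries are logged for human review, and the answer is
    returned with a provenance notice so an auditor can trace what
    the model told the user.
    """
    if any(word in query.lower() for word in MEDICAL_KEYWORDS):
        # Audit trail: every flagged exchange is recorded for review.
        logging.warning("Medical query flagged for review: %r", query)
        return (
            llm_answer
            + "\n\n[AI-generated content; not a substitute for "
            "professional medical advice.]"
        )
    return llm_answer
```

This addresses only one of the principles listed above; bias control, explainability and validation would each need their own machinery.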
The article also discussed how developers could design LLM-based tools that might be approved as medical devices, and how new frameworks that preserve patient safety could be created.
For medical use, Prof. Gilbert added, the accuracy of chatbots must be improved, and their safety and clinical efficacy must be demonstrated and approved by regulators.
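Demonstrating accuracy in the way regulators would require goes far beyond automated checks, but a toy sketch shows where such an evaluation might start. Everything here is assumed: the REFERENCE_QA pairs and the evaluate_accuracy helper are hypothetical, and the substring match stands in for the blinded expert grading a real clinical validation would use.

```python
# Hypothetical reference set: (question, clinician-approved answer) pairs.
REFERENCE_QA = [
    ("Can I take ibuprofen together with warfarin?", "no"),
    ("Does a mild headache require emergency care?", "no"),
]

def evaluate_accuracy(model_answer_fn) -> float:
    """Score a chatbot against a vetted question set.

    `model_answer_fn` is any callable mapping a question string to an
    answer string. Accuracy is the fraction of answers containing the
    clinician-approved reference; a real study would replace this
    substring check with blinded expert grading.
    """
    correct = sum(
        reference.lower() in model_answer_fn(question).lower()
        for question, reference in REFERENCE_QA
    )
    return correct / len(REFERENCE_QA)
```

In practice, clinical efficacy would be assessed through expert review and trials rather than string matching; the point of the sketch is only the shape of a reproducible benchmark.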